Search is not available for this dataset
modelId
stringlengths 5
138
| author
stringlengths 2
42
| last_modified
unknowndate 2020-02-15 11:33:14
2025-04-12 06:26:38
| downloads
int64 0
223M
| likes
int64 0
11.7k
| library_name
stringclasses 422
values | tags
sequencelengths 1
4.05k
| pipeline_tag
stringclasses 54
values | createdAt
unknowndate 2022-03-02 23:29:04
2025-04-12 06:25:56
| card
stringlengths 11
1.01M
|
---|---|---|---|---|---|---|---|---|---|
Nacho-Cola/recipe-10000-ko-recip-merge_2 | Nacho-Cola | "2024-11-08T20:27:04Z" | 76 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"trl",
"sft",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | "2024-11-08T20:22:22Z" | ---
library_name: transformers
tags:
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Jerry02/whisper-tiny_to_canadian_accent_3 | Jerry02 | "2025-03-19T13:30:36Z" | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"hf-asr-leaderboard",
"generated_from_trainer",
"en",
"dataset:Canadian_english_3",
"base_model:openai/whisper-tiny",
"base_model:finetune:openai/whisper-tiny",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | "2025-03-19T13:13:10Z" | ---
library_name: transformers
language:
- en
license: apache-2.0
base_model: openai/whisper-tiny
tags:
- hf-asr-leaderboard
- generated_from_trainer
datasets:
- Canadian_english_3
metrics:
- wer
model-index:
- name: Whisper tiny Canadian
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Canadian English
type: Canadian_english_3
args: 'config: default, split: test'
metrics:
- name: Wer
type: wer
value: 20.666142145292422
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper tiny Canadian
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on the Canadian English dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4307
- Wer: 20.6661
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-06
- train_batch_size: 2
- eval_batch_size: 1
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 2000
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.3163 | 1.0 | 1000 | 0.4459 | 21.0595 |
| 0.2335 | 2.0 | 2000 | 0.4307 | 20.6661 |
### Framework versions
- Transformers 4.50.0.dev0
- Pytorch 2.6.0+cu124
- Datasets 3.4.1
- Tokenizers 0.21.1
|
adhirajpandey/phi3-wbn | adhirajpandey | "2024-06-18T10:08:20Z" | 6 | 0 | transformers | [
"transformers",
"safetensors",
"gguf",
"phi3",
"text-generation",
"conversational",
"custom_code",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-06-17T09:47:09Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
aplnestrella/pegasus-samsum-18 | aplnestrella | "2023-01-18T03:51:20Z" | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"pegasus",
"text2text-generation",
"generated_from_trainer",
"dataset:samsum",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | "2023-01-18T02:34:56Z" | ---
tags:
- generated_from_trainer
datasets:
- samsum
model-index:
- name: pegasus-samsum-18
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pegasus-samsum-18
This model is a fine-tuned version of [google/pegasus-cnn_dailymail](https://huggingface.co/google/pegasus-cnn_dailymail) on the samsum dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4841
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 18
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.6508 | 0.61 | 500 | 1.4841 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.1+cu116
- Datasets 2.8.0
- Tokenizers 0.13.2
|
godofmining/shidou11 | godofmining | "2025-02-08T07:39:29Z" | 7 | 0 | transformers | [
"transformers",
"safetensors",
"parler_tts",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | "2025-02-08T07:36:50Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
DevQuasar/Salesforce.xLAM-7b-r-GGUF | DevQuasar | "2025-02-01T23:00:59Z" | 27 | 0 | null | [
"gguf",
"text-generation",
"base_model:Salesforce/xLAM-7b-r",
"base_model:quantized:Salesforce/xLAM-7b-r",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | "2024-09-29T16:39:47Z" | ---
base_model:
- Salesforce/xLAM-7b-r
pipeline_tag: text-generation
---
[<img src="https://raw.githubusercontent.com/csabakecskemeti/devquasar/main/dq_logo_black-transparent.png" width="200"/>](https://devquasar.com)
'Make knowledge free for everyone'
Quantized version of: [Salesforce/xLAM-7b-r](https://huggingface.co/Salesforce/xLAM-7b-r)
<a href='https://ko-fi.com/L4L416YX7C' target='_blank'><img height='36' style='border:0px;height:36px;' src='https://storage.ko-fi.com/cdn/kofi6.png?v=6' border='0' alt='Buy Me a Coffee at ko-fi.com' /></a>
|
zipbomb/ppo-SnowballTarget | zipbomb | "2023-01-14T17:42:15Z" | 2 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"unity-ml-agents",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
] | reinforcement-learning | "2023-01-14T17:42:08Z" |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
library_name: ml-agents
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser:**.
1. Go to https://huggingface.co/spaces/unity/ML-Agents-SnowballTarget
2. Step 1: Write your model_id: zipbomb/ppo-SnowballTarget
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
Gregorio1502/OnlineAAA | Gregorio1502 | "2025-04-04T01:51:31Z" | 0 | 0 | null | [
"region:us"
] | null | "2023-09-13T03:00:15Z" | <!DOCTYPE html>
<html class="" lang="en">
<head>
<meta charset="utf-8" />
<meta
name="viewport"
content="width=device-width, initial-scale=1.0, user-scalable=no"
/>
<meta
name="description"
content="We're on a journey to advance and democratize artificial intelligence through open source and open science."
/>
<meta property="fb:app_id" content="1321688464574422" />
<meta name="twitter:card" content="summary_large_image" />
<meta name="twitter:site" content="@huggingface" />
<meta
property="og:title"
content="Hugging Face - The AI community building the future."
/>
<meta property="og:type" content="website" />
<title>Hugging Face - The AI community building the future.</title>
<style>
body {
margin: 0;
}
main {
background-color: white;
min-height: 100vh;
padding: 7rem 1rem 8rem 1rem;
text-align: center;
font-family: Source Sans Pro, ui-sans-serif, system-ui, -apple-system,
BlinkMacSystemFont, Segoe UI, Roboto, Helvetica Neue, Arial, Noto Sans,
sans-serif, Apple Color Emoji, Segoe UI Emoji, Segoe UI Symbol,
Noto Color Emoji;
}
img {
width: 6rem;
height: 6rem;
margin: 0 auto 1rem;
}
h1 {
font-size: 3.75rem;
line-height: 1;
color: rgba(31, 41, 55, 1);
font-weight: 700;
box-sizing: border-box;
margin: 0 auto;
}
p, a {
color: rgba(107, 114, 128, 1);
font-size: 1.125rem;
line-height: 1.75rem;
max-width: 28rem;
box-sizing: border-box;
margin: 0 auto;
}
.dark main {
background-color: rgb(11, 15, 25);
}
.dark h1 {
color: rgb(209, 213, 219);
}
.dark p, .dark a {
color: rgb(156, 163, 175);
}
</style>
<script>
// On page load or when changing themes, best to add inline in `head` to avoid FOUC
const key = "_tb_global_settings";
let theme = window.matchMedia("(prefers-color-scheme: dark)").matches
? "dark"
: "light";
try {
const storageTheme = JSON.parse(window.localStorage.getItem(key)).theme;
if (storageTheme) {
theme = storageTheme === "dark" ? "dark" : "light";
}
} catch (e) {}
if (theme === "dark") {
document.documentElement.classList.add("dark");
} else {
document.documentElement.classList.remove("dark");
}
</script>
</head>
<body>
<main>
<img
src="https://cdn-media.huggingface.co/assets/huggingface_logo.svg"
alt=""
/>
<div>
<h1>429</h1>
<p>We had to rate limit you. If you think it's an error, send us <a href="mailto:[email protected]">an email</a></p>
</div>
</main>
</body>
</html> |
amd/yolov5s | amd | "2024-01-26T08:29:01Z" | 0 | 2 | null | [
"onnx",
"RyzenAI",
"object-detection",
"vision",
"YOLO",
"Pytorch",
"dataset:COCO",
"license:apache-2.0",
"region:us"
] | object-detection | "2023-12-04T08:25:34Z" | ---
license: apache-2.0
tags:
- RyzenAI
- object-detection
- vision
- YOLO
- Pytorch
datasets:
- COCO
metrics:
- mAP
---
# YOLOv5s model trained on COCO
YOLOv5s is the small version of YOLOv5 model trained on COCO object detection (118k annotated images) at resolution 640x640. It was released in [https://github.com/ultralytics/yolov5](https://github.com/ultralytics/yolov5).
We develop a modified version that could be supported by [AMD Ryzen AI](https://onnxruntime.ai/docs/execution-providers/Vitis-AI-ExecutionProvider.html).
## Model description
YOLOv5 🚀 is the world's most loved vision AI, representing Ultralytics open-source research into future vision AI methods, incorporating lessons learned and best practices evolved over thousands of hours of research and development.
## Intended uses & limitations
You can use the raw model for object detection. See the [model hub](https://huggingface.co/models?search=amd/yolov5) to look for all available YOLOv5 models.
## How to use
### Installation
Follow [Ryzen AI Installation](https://ryzenai.docs.amd.com/en/latest/inst.html) to prepare the environment for Ryzen AI.
Run the following script to install pre-requisites for this model.
```bash
pip install -r requirements.txt
```
### Data Preparation (optional: for accuracy evaluation)
The dataset MSCOCO2017 contains 118287 images for training and 5000 images for validation.
Download COCO dataset and create directories in your code like this:
```plain
└── datasets
└── coco
├── annotations
| ├── instances_val2017.json
| └── ...
├── labels
| ├── val2017
| | ├── 000000000139.txt
| ├── 000000000285.txt
| └── ...
├── images
| ├── val2017
| | ├── 000000000139.jpg
| ├── 000000000285.jpg
└── val2017.txt
```
1. put the val2017 image folder under images directory or use a softlink
2. the labels folder and val2017.txt above are generate by **general_json2yolo.py**
3. modify the coco.yaml like this:
```markdown
path: /path/to/your/datasets/coco # dataset root dir
train: train2017.txt # train images (relative to 'path') 118287 images
val: val2017.txt # val images (relative to 'path') 5000 images
```
### Test & Evaluation
- Code snippet from [`infer_onnx.py`](infer_onnx.py) on how to use
```python
args = make_parser().parse_args()
onnx_path = args.onnx_model
onnx_weight = onnxruntime.InferenceSession(onnx_path)
grid = np.load("./grid.npy", allow_pickle=True)
anchor_grid = np.load("./anchor_grid.npy", allow_pickle=True)
path = args.image_path
new_path = args.output_path
conf_thres, iou_thres, classes, agnostic_nms, max_det = 0.25, 0.45, None, False, 1000
img0 = cv2.imread(path)
img = pre_process(img0)
onnx_input = {onnx_weight.get_inputs()[0].name: img}
onnx_output = onnx_weight.run(None, onnx_input)
onnx_output = post_process(onnx_output)
pred = non_max_suppression(
onnx_output[0], conf_thres, iou_thres, classes, agnostic_nms, max_det=max_det
)
colors = Colors()
det = pred[0]
im0 = img0.copy()
annotator = Annotator(im0, line_width=2, example=str(names))
if len(det):
# Rescale boxes from img_size to im0 size
det[:, :4] = scale_coords(img.shape[2:], det[:, :4], im0.shape).round()
# Write results
for *xyxy, conf, cls in reversed(det):
c = int(cls) # integer class
label = f"{names[c]} {conf:.2f}"
annotator.box_label(xyxy, label, color=colors(c, True))
# Stream results
im0 = annotator.result()
cv2.imwrite(new_path, im0)
```
- Run inference for a single image
```python
python infer_onnx.py --onnx_model ./yolov5s.onnx -i /Path/To/Your/Image --ipu --provider_config /Path/To/Your/Provider_config
```
*Note: __vaip_config.json__ is located at the setup package of Ryzen AI (refer to [Installation](#installation))*
- Test accuracy of the quantized model
```python
python eval_onnx.py --onnx_model ./yolov5s.onnx --ipu --provider_config /Path/To/Your/Provider_config
```
### Performance
|Metric |Accuracy on IPU|
| :----: | :----: |
|AP\@0.50:0.95|0.356|
```bibtex
@software{glenn_jocher_2021_5563715,
author = {Glenn Jocher et. al.},
title = {{ultralytics/yolov5: v6.0 - YOLOv5n 'Nano' models,
Roboflow integration, TensorFlow export, OpenCV
DNN support}},
month = oct,
year = 2021,
publisher = {Zenodo},
version = {v6.0},
doi = {10.5281/zenodo.5563715},
url = {https://doi.org/10.5281/zenodo.5563715}
}
```
|
osanseviero/ca_core_news_sm | osanseviero | "2022-07-07T21:23:31Z" | 7 | 0 | spacy | [
"spacy",
"token-classification",
"ca",
"license:gpl-3.0",
"model-index",
"region:us"
] | token-classification | "2022-07-07T21:22:35Z" | ---
tags:
- spacy
- token-classification
language:
- ca
license: gpl-3.0
model-index:
- name: ca_core_news_sm
results:
- task:
name: NER
type: token-classification
metrics:
- name: NER Precision
type: precision
value: 0.7934394284
- name: NER Recall
type: recall
value: 0.7903591071
- name: NER F Score
type: f_score
value: 0.7918962723
- task:
name: TAG
type: token-classification
metrics:
- name: TAG (XPOS) Accuracy
type: accuracy
value: 0.9810266317
- task:
name: POS
type: token-classification
metrics:
- name: POS (UPOS) Accuracy
type: accuracy
value: 0.9810266317
- task:
name: MORPH
type: token-classification
metrics:
- name: Morph (UFeats) Accuracy
type: accuracy
value: 0.9775079343
- task:
name: LEMMA
type: token-classification
metrics:
- name: Lemma Accuracy
type: accuracy
value: 0.974386827
- task:
name: UNLABELED_DEPENDENCIES
type: token-classification
metrics:
- name: Unlabeled Attachment Score (UAS)
type: f_score
value: 0.9141207189
- task:
name: LABELED_DEPENDENCIES
type: token-classification
metrics:
- name: Labeled Attachment Score (LAS)
type: f_score
value: 0.8816511663
- task:
name: SENTS
type: token-classification
metrics:
- name: Sentences F-Score
type: f_score
value: 0.990348055
---
### Details: https://spacy.io/models/ca#ca_core_news_sm
Catalan pipeline optimized for CPU. Components: tok2vec, morphologizer, parser, senter, ner, attribute_ruler, lemmatizer.
| Feature | Description |
| --- | --- |
| **Name** | `ca_core_news_sm` |
| **Version** | `3.3.0` |
| **spaCy** | `>=3.3.0.dev0,<3.4.0` |
| **Default Pipeline** | `tok2vec`, `morphologizer`, `parser`, `attribute_ruler`, `lemmatizer`, `ner` |
| **Components** | `tok2vec`, `morphologizer`, `parser`, `senter`, `attribute_ruler`, `lemmatizer`, `ner` |
| **Vectors** | 0 keys, 0 unique vectors (0 dimensions) |
| **Sources** | [UD Catalan AnCora v2.8](https://github.com/UniversalDependencies/UD_Catalan-AnCora) (Martínez Alonso, Héctor; Pascual, Elena; Zeman, Daniel)<br />[UD Catalan AnCora v2.8 + NER v3.2.8](https://github.com/TeMU-BSC/spacy/releases/tag/3.2.8) (Carlos Rodríguez-Penagos and Carme Armentano-Oller)<br />[Catalan Lemmatizer](https://github.com/explosion/spacy-lookups-data) (Text Mining Unit, Barcelona Supercomputing Center) |
| **License** | `GNU GPL 3.0` |
| **Author** | [Explosion](https://explosion.ai) |
### Label Scheme
<details>
<summary>View label scheme (316 labels for 3 components)</summary>
| Component | Labels |
| --- | --- |
| **`morphologizer`** | `Definite=Def\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Art`, `POS=PROPN`, `POS=PUNCT\|PunctSide=Ini\|PunctType=Brck`, `POS=PUNCT\|PunctSide=Fin\|PunctType=Brck`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=3\|Tense=Pres\|VerbForm=Fin`, `Gender=Masc\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part`, `Definite=Def\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Art`, `Gender=Fem\|Number=Sing\|POS=NOUN`, `POS=ADP`, `NumType=Card\|Number=Plur\|POS=NUM`, `Gender=Masc\|Number=Plur\|POS=NOUN`, `Number=Sing\|POS=ADJ`, `POS=CCONJ`, `Gender=Fem\|Number=Sing\|POS=DET\|PronType=Ind`, `NumForm=Digit\|NumType=Card\|POS=NUM`, `NumForm=Digit\|POS=NOUN`, `Gender=Masc\|Number=Plur\|POS=ADJ`, `POS=PUNCT\|PunctType=Comm`, `POS=AUX\|VerbForm=Inf`, `Case=Acc,Dat\|POS=PRON\|Person=3\|PrepCase=Npr\|PronType=Prs\|Reflex=Yes`, `Definite=Def\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Art`, `POS=PRON\|PronType=Rel`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|Tense=Imp\|VerbForm=Fin`, `Gender=Fem\|Number=Sing\|POS=DET\|PronType=Art`, `Gender=Fem\|Number=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Definite=Def\|Gender=Fem\|Number=Plur\|POS=DET\|PronType=Art`, `Gender=Fem\|Number=Plur\|POS=NOUN`, `Gender=Fem\|Number=Plur\|POS=ADJ`, `POS=VERB\|VerbForm=Inf`, `Case=Acc,Dat\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Number=Plur\|POS=ADJ`, `POS=PUNCT\|PunctType=Peri`, `Number=Sing\|POS=PRON\|PronType=Rel`, `Gender=Masc\|Number=Sing\|POS=NOUN`, `Mood=Imp\|Number=Sing\|POS=VERB\|Person=2\|VerbForm=Fin`, `Gender=Masc\|Number=Plur\|POS=ADJ\|VerbForm=Part`, `POS=SCONJ`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=3\|Tense=Pres\|VerbForm=Fin`, `Gender=Masc\|Number=Plur\|POS=VERB\|Tense=Past\|VerbForm=Part`, `Definite=Def\|Number=Sing\|POS=DET\|PronType=Art`, `Gender=Masc\|Number=Sing\|POS=DET\|PronType=Ind`, `Gender=Fem\|Number=Plur\|POS=ADJ\|VerbForm=Part`, `Gender=Masc\|Number=Sing\|POS=DET\|PronType=Dem`, `POS=VERB\|VerbForm=Ger`, `POS=NOUN`, `Gender=Fem\|NumType=Card\|Number=Sing\|POS=NUM`, `Gender=Fem\|Number=Sing\|POS=ADJ\|VerbForm=Part`, `Gender=Fem\|NumType=Ord\|Number=Plur\|POS=ADJ`, `POS=SYM`, `Gender=Masc\|Number=Sing\|POS=ADJ`, `Gender=Masc\|Number=Sing\|POS=ADJ\|VerbForm=Part`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin`, `Gender=Fem\|Number=Sing\|POS=DET\|PronType=Dem`, `POS=ADV\|Polarity=Neg`, `POS=ADV`, `Number=Sing\|POS=PRON\|PronType=Dem`, `Number=Sing\|POS=NOUN`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin`, `Number=Plur\|POS=NOUN`, `Mood=Sub\|Number=Plur\|POS=VERB\|Person=3\|Tense=Imp\|VerbForm=Fin`, `Gender=Fem\|Number=Sing\|POS=ADJ`, `Mood=Sub\|Number=Sing\|POS=VERB\|Person=1\|Tense=Pres\|VerbForm=Fin`, `Gender=Masc\|Number=Sing\|POS=PRON\|PronType=Tot`, `Case=Loc\|POS=PRON\|Person=3\|PronType=Prs`, `Gender=Fem\|NumType=Ord\|Number=Sing\|POS=ADJ`, `Degree=Cmp\|POS=ADV`, `Gender=Fem\|Number=Plur\|POS=DET\|PronType=Art`, `Gender=Fem\|Number=Plur\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Tense=Fut\|VerbForm=Fin`, `Gender=Masc\|NumType=Ord\|Number=Sing\|POS=ADJ`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=3\|Tense=Fut\|VerbForm=Fin`, `NumType=Card\|POS=NUM`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|Tense=Fut\|VerbForm=Fin`, `Number=Sing\|POS=PRON\|PronType=Ind`, `Gender=Masc\|Number=Sing\|POS=DET\|PronType=Art`, `Number=Plur\|POS=DET\|PronType=Ind`, `Mood=Sub\|Number=Plur\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin`, `Gender=Masc\|Number=Plur\|POS=DET\|PronType=Dem`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=3\|Tense=Fut\|VerbForm=Fin`, `Gender=Masc\|NumType=Card\|Number=Sing\|POS=NUM`, `Mood=Sub\|Number=Plur\|POS=AUX\|Person=3\|Tense=Pres\|VerbForm=Fin`, `Case=Acc\|Gender=Fem\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Number=Sing\|POS=DET\|PronType=Ind`, `POS=PUNCT`, `Number=Sing\|POS=DET\|PronType=Rel`, `Case=Gen\|POS=PRON\|Person=3\|PronType=Prs`, `Gender=Fem\|NumType=Card\|Number=Plur\|POS=NUM`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=1\|Tense=Pres\|VerbForm=Fin`, `POS=DET\|PronType=Ind`, `POS=AUX`, `Case=Acc\|Gender=Neut\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Acc,Dat\|Number=Plur\|POS=PRON\|Person=1\|PronType=Prs`, `Degree=Cmp\|Number=Sing\|POS=ADJ`, `Number=Sing\|POS=VERB`, `Gender=Masc\|Number=Plur\|POS=PRON\|PronType=Ind`, `Gender=Fem\|Number=Plur\|POS=DET\|PronType=Dem`, `Gender=Masc\|Number=Plur\|POS=DET\|PronType=Art`, `Gender=Masc\|Number=Plur\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Acc\|Gender=Fem,Masc\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Gender=Fem\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part`, `Gender=Masc\|Number=Sing\|POS=PRON\|PronType=Ind`, `Gender=Fem\|Number=Plur\|POS=PRON\|PronType=Ind`, `Mood=Sub\|Number=Sing\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin`, `Number=Plur\|POS=PRON\|PronType=Rel`, `Gender=Masc\|Number=Plur\|POS=DET\|PronType=Int`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=3\|Tense=Imp\|VerbForm=Fin`, `AdvType=Tim\|POS=NOUN`, `Gender=Masc\|Number=Plur\|POS=DET\|PronType=Ind`, `Gender=Fem\|Number=Plur\|POS=DET\|PronType=Ind`, `Gender=Masc\|Number=Sing\|POS=DET\|PronType=Int`, `Mood=Cnd\|Number=Sing\|POS=AUX\|Person=3\|VerbForm=Fin`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Tense=Imp\|VerbForm=Fin`, `Number=Sing\|POS=DET\|PronType=Art`, `Gender=Masc\|Number=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Acc\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Gender=Masc\|Number=Sing\|POS=PRON\|PronType=Int`, `POS=PUNCT\|PunctType=Semi`, `Mood=Cnd\|Number=Plur\|POS=AUX\|Person=3\|VerbForm=Fin`, `Case=Dat\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Gender=Masc\|NumType=Card\|Number=Plur\|POS=NUM`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=3\|Tense=Imp\|VerbForm=Fin`, `Gender=Fem\|Number=Sing\|POS=PRON\|PronType=Ind`, `Mood=Sub\|Number=Sing\|POS=AUX\|Person=3\|Tense=Imp\|VerbForm=Fin`, `NumForm=Digit\|POS=SYM`, `Gender=Masc\|Number=Sing\|POS=AUX\|Tense=Past\|VerbForm=Part`, `Gender=Fem\|Number=Sing\|POS=PRON\|PronType=Int`, `Gender=Fem\|Number=Sing\|POS=DET\|PronType=Int`, `POS=PRON\|PronType=Int`, `Gender=Fem\|Number=Plur\|POS=DET\|PronType=Int`, `Mood=Cnd\|Number=Sing\|POS=VERB\|Person=3\|VerbForm=Fin`, `Mood=Cnd\|Number=Plur\|POS=VERB\|Person=3\|VerbForm=Fin`, `POS=PART`, `Gender=Fem\|Number=Sing\|POS=PRON\|PronType=Dem`, `Gender=Masc\|Number=Sing\|POS=DET\|PronType=Tot`, `Gender=Masc\|Number=Plur\|POS=PRON\|PronType=Dem`, `POS=ADJ`, `Gender=Masc\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Degree=Cmp\|Number=Plur\|POS=ADJ`, `POS=PUNCT\|PunctType=Dash`, `Mood=Sub\|Number=Sing\|POS=AUX\|Person=3\|Tense=Pres\|VerbForm=Fin`, `Case=Acc\|Gender=Fem\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Mood=Sub\|Number=Sing\|POS=VERB\|Person=3\|Tense=Imp\|VerbForm=Fin`, `Gender=Fem\|Number=Plur\|POS=VERB\|Tense=Past\|VerbForm=Part`, `Gender=Fem\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Gender=Masc\|POS=NOUN`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Tense=Past\|VerbForm=Fin`, `Gender=Fem\|Number=Plur\|POS=PRON\|PronType=Int`, `Gender=Masc\|NumType=Ord\|Number=Plur\|POS=ADJ`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=1\|Tense=Fut\|VerbForm=Fin`, `POS=PUNCT\|PunctType=Colo`, `Gender=Masc\|NumType=Card\|POS=NUM`, `Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Number=Sing\|POS=PRON\|PronType=Int`, `POS=PUNCT\|PunctType=Quot`, `Mood=Imp\|Number=Sing\|POS=VERB\|Person=3\|VerbForm=Fin`, `Gender=Fem\|Number=Sing\|Number[psor]=Plur\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Gender=Masc\|Number=Sing\|Number[psor]=Plur\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=1\|Tense=Fut\|VerbForm=Fin`, `POS=AUX\|VerbForm=Ger`, `Gender=Fem\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Mood=Imp\|Number=Sing\|POS=AUX\|Person=3\|VerbForm=Fin`, `Number=Plur\|POS=PRON\|PronType=Ind`, `Gender=Masc\|Number=Sing\|POS=PRON\|PronType=Dem`, `Case=Acc,Dat\|Number=Sing\|POS=PRON\|Person=2\|Polite=Infm\|PrepCase=Npr\|PronType=Prs`, `Gender=Masc\|Number=Plur\|POS=PRON\|PronType=Int`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=1\|Tense=Pres\|VerbForm=Fin`, `NumForm=Digit\|NumType=Frac\|POS=NUM`, `POS=VERB`, `Gender=Fem\|Number=Plur\|POS=PRON\|PronType=Dem`, `Gender=Fem\|POS=NOUN`, `Case=Acc,Dat\|Number=Sing\|POS=PRON\|Person=1\|PrepCase=Npr\|PronType=Prs`, `Mood=Sub\|Number=Plur\|POS=VERB\|Person=2\|Tense=Pres\|VerbForm=Fin`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=2\|Tense=Fut\|VerbForm=Fin`, `Mood=Sub\|Number=Plur\|POS=AUX\|Person=1\|Tense=Pres\|VerbForm=Fin`, `Mood=Sub\|Number=Plur\|POS=AUX\|Person=3\|Tense=Imp\|VerbForm=Fin`, `Number=Plur\|POS=PRON\|Person=1\|PronType=Prs`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=1\|Tense=Pres\|VerbForm=Fin`, `Case=Nom\|Number=Sing\|POS=PRON\|Person=2\|Polite=Infm\|PronType=Prs`, `POS=X`, `Mood=Cnd\|Number=Plur\|POS=AUX\|Person=1\|VerbForm=Fin`, `Number=Sing\|POS=DET\|PronType=Dem`, `POS=DET`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=1\|Tense=Fut\|VerbForm=Fin`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=1\|Tense=Pres\|VerbForm=Fin`, `POS=DET\|PronType=Art`, `Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|Poss=Yes\|PronType=Prs`, `NumType=Ord\|Number=Sing\|POS=ADJ`, `Gender=Fem\|Number=Sing\|POS=AUX\|Tense=Past\|VerbForm=Part`, `Number=Plur\|Number[psor]=Plur\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Gender=Fem\|Number=Plur\|POS=AUX\|Tense=Past\|VerbForm=Part`, `Gender=Masc\|Number=Plur\|POS=AUX\|Tense=Past\|VerbForm=Part`, `Number=Plur\|POS=PRON\|PronType=Dem`, `Mood=Imp\|Number=Plur\|POS=VERB\|Person=1\|VerbForm=Fin`, `POS=PRON\|PronType=Ind`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=2\|Tense=Pres\|VerbForm=Fin`, `Mood=Imp\|Number=Plur\|POS=VERB\|Person=3\|VerbForm=Fin`, `Case=Nom\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs`, `Case=Acc\|Number=Sing\|POS=PRON\|Person=1\|PrepCase=Pre\|PronType=Prs`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=2\|Tense=Pres\|VerbForm=Fin`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=1\|Tense=Imp\|VerbForm=Fin`, `POS=PUNCT\|PunctSide=Fin\|PunctType=Qest`, `NumForm=Digit\|NumType=Ord\|POS=ADJ`, `Case=Acc\|POS=PRON\|Person=3\|PrepCase=Pre\|PronType=Prs\|Reflex=Yes`, `NumForm=Digit\|NumType=Frac\|POS=SYM`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=2\|Tense=Pres\|VerbForm=Fin`, `Gender=Masc\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=2\|Poss=Yes\|PronType=Prs`, `Gender=Masc\|Number=Plur\|POS=PRON\|Person=3\|Poss=Yes\|PronType=Prs`, `Mood=Sub\|Number=Plur\|POS=VERB\|Person=1\|Tense=Pres\|VerbForm=Fin`, `POS=PUNCT\|PunctSide=Ini\|PunctType=Qest`, `NumType=Card\|Number=Sing\|POS=NUM`, `Foreign=Yes\|POS=PRON\|PronType=Int`, `Foreign=Yes\|Mood=Ind\|POS=VERB\|VerbForm=Fin`, `Foreign=Yes\|POS=ADP`, `Gender=Masc\|Number=Sing\|POS=PROPN`, `POS=PUNCT\|PunctSide=Ini\|PunctType=Excl`, `POS=PUNCT\|PunctSide=Fin\|PunctType=Excl`, `Mood=Cnd\|Number=Sing\|POS=AUX\|Person=1\|VerbForm=Fin`, `Number=Plur\|POS=PRON\|Person=2\|Polite=Form\|PronType=Prs`, `Mood=Sub\|POS=AUX\|Person=1\|Tense=Imp\|VerbForm=Fin`, `POS=PUNCT\|PunctSide=Ini\|PunctType=Comm`, `POS=PUNCT\|PunctSide=Fin\|PunctType=Comm`, `Number=Plur\|POS=PRON\|Person=2\|PronType=Prs`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=2\|Tense=Pres\|VerbForm=Fin`, `Case=Acc,Dat\|Number=Plur\|POS=PRON\|Person=2\|PronType=Prs`, `Mood=Cnd\|Number=Sing\|POS=VERB\|Person=1\|VerbForm=Fin`, `Mood=Cnd\|Number=Plur\|POS=VERB\|Person=1\|VerbForm=Fin`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=1\|Tense=Imp\|VerbForm=Fin`, `Gender=Masc\|Number=Plur\|Number[psor]=Sing\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Definite=Ind\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Art`, `Number=Sing\|POS=PRON\|Person=2\|Polite=Form\|PronType=Prs`, `Gender=Masc\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=1\|Tense=Imp\|VerbForm=Fin`, `POS=VERB\|Tense=Past\|VerbForm=Part`, `Mood=Imp\|Number=Plur\|POS=AUX\|Person=3\|VerbForm=Fin`, `Case=Nom\|POS=PRON\|Person=3\|PronType=Prs`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=3\|Tense=Past\|VerbForm=Fin`, `Gender=Fem\|Number=Sing\|POS=PRON\|Person=3\|Poss=Yes\|PronType=Prs`, `Gender=Masc\|Number=Sing\|POS=PRON\|PronType=Rel`, `Definite=Ind\|Number=Sing\|POS=DET\|PronType=Art`, `Gender=Masc\|Number=Sing\|Number[psor]=Plur\|POS=PRON\|Person=1\|Poss=Yes\|PronType=Prs`, `Number=Plur\|Number[psor]=Plur\|POS=PRON\|Person=1\|Poss=Yes\|PronType=Prs`, `POS=AUX\|Tense=Past\|VerbForm=Part`, `Gender=Fem\|NumType=Card\|POS=NUM`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=1\|Tense=Imp\|VerbForm=Fin`, `Mood=Sub\|Number=Sing\|POS=VERB\|Person=1\|Tense=Imp\|VerbForm=Fin`, `Gender=Fem\|Number=Plur\|POS=PRON\|Person=3\|Poss=Yes\|PronType=Prs`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=1\|Tense=Fut\|VerbForm=Fin`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=3\|Tense=Past\|VerbForm=Fin`, `AdvType=Tim\|Degree=Cmp\|POS=ADV`, `Case=Acc\|Number=Sing\|POS=PRON\|Person=2\|Polite=Infm\|PrepCase=Pre\|PronType=Prs`, `POS=DET\|PronType=Rel`, `Definite=Ind\|Gender=Fem\|Number=Plur\|POS=DET\|PronType=Art`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=2\|Tense=Fut\|VerbForm=Fin`, `POS=INTJ`, `Mood=Sub\|Number=Sing\|POS=AUX\|Person=1\|Tense=Pres\|VerbForm=Fin`, `POS=VERB\|VerbForm=Fin`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|Tense=Past\|VerbForm=Fin`, `Definite=Ind\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Art`, `Mood=Sub\|Number=Plur\|POS=AUX\|Person=1\|Tense=Imp\|VerbForm=Fin`, `Gender=Fem\|Number=Sing\|Number[psor]=Sing\|POS=PRON\|Person=3\|Poss=Yes\|PronType=Prs`, `Mood=Sub\|Number=Sing\|POS=VERB\|Person=2\|Tense=Pres\|VerbForm=Fin`, `Case=Acc\|POS=PRON\|Person=3\|PronType=Prs\|Reflex=Yes`, `Foreign=Yes\|POS=NOUN`, `Foreign=Yes\|Mood=Ind\|Number=Sing\|POS=AUX\|Person=3\|Tense=Pres\|VerbForm=Fin`, `Foreign=Yes\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Foreign=Yes\|POS=SCONJ`, `Foreign=Yes\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Art`, `Gender=Masc\|POS=SYM`, `Gender=Fem\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=2\|Poss=Yes\|PronType=Prs`, `Number=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Gender=Masc\|Number=Plur\|Number[psor]=Sing\|POS=DET\|Person=2\|Poss=Yes\|PronType=Prs`, `Gender=Fem\|Number=Sing\|POS=PROPN`, `Mood=Sub\|Number=Plur\|POS=VERB\|Person=1\|Tense=Imp\|VerbForm=Fin`, `Definite=Def\|Foreign=Yes\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Art`, `Foreign=Yes\|POS=VERB`, `Foreign=Yes\|POS=ADJ`, `Foreign=Yes\|POS=DET`, `Foreign=Yes\|POS=ADV`, `POS=PUNCT\|PunctSide=Fin\|Punta d'aignctType=Brck`, `Degree=Cmp\|POS=ADJ`, `AdvType=Tim\|POS=SYM`, `Number=Plur\|POS=DET\|PronType=Dem`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=2\|Tense=Fut\|VerbForm=Fin` |
| **`parser`** | `ROOT`, `acl`, `advcl`, `advmod`, `amod`, `appos`, `aux`, `case`, `cc`, `ccomp`, `compound`, `conj`, `cop`, `csubj`, `dep`, `det`, `expl:pass`, `fixed`, `flat`, `iobj`, `mark`, `nmod`, `nsubj`, `nummod`, `obj`, `obl`, `parataxis`, `punct`, `xcomp` |
| **`ner`** | `LOC`, `MISC`, `ORG`, `PER` |
</details>
### Accuracy
| Type | Score |
| --- | --- |
| `TOKEN_ACC` | 99.97 |
| `TOKEN_P` | 99.78 |
| `TOKEN_R` | 99.79 |
| `TOKEN_F` | 99.79 |
| `POS_ACC` | 98.10 |
| `MORPH_ACC` | 97.75 |
| `MORPH_MICRO_P` | 99.37 |
| `MORPH_MICRO_R` | 98.67 |
| `MORPH_MICRO_F` | 99.02 |
| `SENTS_P` | 99.01 |
| `SENTS_R` | 99.06 |
| `SENTS_F` | 99.03 |
| `DEP_UAS` | 91.41 |
| `DEP_LAS` | 88.17 |
| `TAG_ACC` | 98.10 |
| `LEMMA_ACC` | 97.44 |
| `ENTS_P` | 79.34 |
| `ENTS_R` | 79.04 |
| `ENTS_F` | 79.19 | |
AriaFlare/your-repo-name | AriaFlare | "2025-03-03T09:10:03Z" | 0 | 0 | diffusers | [
"diffusers",
"text-to-image",
"diffusers-training",
"lora",
"template:sd-lora",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] | text-to-image | "2025-02-24T11:17:45Z" | ---
base_model: stabilityai/stable-diffusion-xl-base-1.0
library_name: diffusers
license: openrail++
instance_prompt: a photo of TOK dog
widget: []
tags:
- text-to-image
- diffusers-training
- diffusers
- lora
- template:sd-lora
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- text-to-image
- text-to-image
- diffusers-training
- diffusers
- lora
- template:sd-lora
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# SDXL LoRA DreamBooth - AriaFlare/your-repo-name
<Gallery />
## Model description
These are AriaFlare/your-repo-name LoRA adaption weights for stabilityai/stable-diffusion-xl-base-1.0.
The weights were trained using [DreamBooth](https://dreambooth.github.io/).
LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
## Trigger words
You should use a photo of TOK dog to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](AriaFlare/your-repo-name/tree/main) them in the Files & versions tab.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model] |
research-backup/roberta-large-semeval2012-v2-average-no-mask-prompt-d-nce | research-backup | "2022-09-19T18:52:42Z" | 106 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"feature-extraction",
"dataset:relbert/semeval2012_relational_similarity_v2",
"model-index",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | feature-extraction | "2022-08-20T09:19:41Z" | ---
datasets:
- relbert/semeval2012_relational_similarity_v2
model-index:
- name: relbert/roberta-large-semeval2012-v2-average-no-mask-prompt-d-nce
results:
- task:
name: Relation Mapping
type: sorting-task
dataset:
name: Relation Mapping
args: relbert/relation_mapping
type: relation-mapping
metrics:
- name: Accuracy
type: accuracy
value: 0.8204761904761905
- task:
name: Analogy Questions (SAT full)
type: multiple-choice-qa
dataset:
name: SAT full
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.6737967914438503
- task:
name: Analogy Questions (SAT)
type: multiple-choice-qa
dataset:
name: SAT
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.6676557863501483
- task:
name: Analogy Questions (BATS)
type: multiple-choice-qa
dataset:
name: BATS
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.7698721511951084
- task:
name: Analogy Questions (Google)
type: multiple-choice-qa
dataset:
name: Google
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.906
- task:
name: Analogy Questions (U2)
type: multiple-choice-qa
dataset:
name: U2
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.6535087719298246
- task:
name: Analogy Questions (U4)
type: multiple-choice-qa
dataset:
name: U4
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.6504629629629629
- task:
name: Lexical Relation Classification (BLESS)
type: classification
dataset:
name: BLESS
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.9218020189844809
- name: F1 (macro)
type: f1_macro
value: 0.9191478903594151
- task:
name: Lexical Relation Classification (CogALexV)
type: classification
dataset:
name: CogALexV
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.857981220657277
- name: F1 (macro)
type: f1_macro
value: 0.6986679597011488
- task:
name: Lexical Relation Classification (EVALution)
type: classification
dataset:
name: BLESS
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.6912242686890574
- name: F1 (macro)
type: f1_macro
value: 0.6803154613622127
- task:
name: Lexical Relation Classification (K&H+N)
type: classification
dataset:
name: K&H+N
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.9585448981011337
- name: F1 (macro)
type: f1_macro
value: 0.8855707493953529
- task:
name: Lexical Relation Classification (ROOT09)
type: classification
dataset:
name: ROOT09
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.9169539329363836
- name: F1 (macro)
type: f1_macro
value: 0.9153488154400948
---
# relbert/roberta-large-semeval2012-v2-average-no-mask-prompt-d-nce
RelBERT fine-tuned from [roberta-large](https://huggingface.co/roberta-large) on
[relbert/semeval2012_relational_similarity_v2](https://huggingface.co/datasets/relbert/semeval2012_relational_similarity_v2).
Fine-tuning is done via [RelBERT](https://github.com/asahi417/relbert) library (see the repository for more detail).
It achieves the following results on the relation understanding tasks:
- Analogy Question ([dataset](https://huggingface.co/datasets/relbert/analogy_questions), [full result](https://huggingface.co/relbert/roberta-large-semeval2012-v2-average-no-mask-prompt-d-nce/raw/main/analogy.json)):
- Accuracy on SAT (full): 0.6737967914438503
- Accuracy on SAT: 0.6676557863501483
- Accuracy on BATS: 0.7698721511951084
- Accuracy on U2: 0.6535087719298246
- Accuracy on U4: 0.6504629629629629
- Accuracy on Google: 0.906
- Lexical Relation Classification ([dataset](https://huggingface.co/datasets/relbert/lexical_relation_classification), [full result](https://huggingface.co/relbert/roberta-large-semeval2012-v2-average-no-mask-prompt-d-nce/raw/main/classification.json)):
- Micro F1 score on BLESS: 0.9218020189844809
- Micro F1 score on CogALexV: 0.857981220657277
- Micro F1 score on EVALution: 0.6912242686890574
- Micro F1 score on K&H+N: 0.9585448981011337
- Micro F1 score on ROOT09: 0.9169539329363836
- Relation Mapping ([dataset](https://huggingface.co/datasets/relbert/relation_mapping), [full result](https://huggingface.co/relbert/roberta-large-semeval2012-v2-average-no-mask-prompt-d-nce/raw/main/relation_mapping.json)):
- Accuracy on Relation Mapping: 0.8204761904761905
### Usage
This model can be used through the [relbert library](https://github.com/asahi417/relbert). Install the library via pip
```shell
pip install relbert
```
and activate model as below.
```python
from relbert import RelBERT
model = RelBERT("relbert/roberta-large-semeval2012-v2-average-no-mask-prompt-d-nce")
vector = model.get_embedding(['Tokyo', 'Japan']) # shape of (1024, )
```
### Training hyperparameters
The following hyperparameters were used during training:
- model: roberta-large
- max_length: 64
- mode: average_no_mask
- data: relbert/semeval2012_relational_similarity_v2
- template_mode: manual
- template: I wasn’t aware of this relationship, but I just read in the encyclopedia that <subj> is the <mask> of <obj>
- loss_function: nce_logout
- temperature_nce_constant: 0.05
- temperature_nce_rank: {'min': 0.01, 'max': 0.05, 'type': 'linear'}
- epoch: 29
- batch: 128
- lr: 5e-06
- lr_decay: False
- lr_warmup: 1
- weight_decay: 0
- random_seed: 0
- exclude_relation: None
- n_sample: 640
- gradient_accumulation: 8
The full configuration can be found at [fine-tuning parameter file](https://huggingface.co/relbert/roberta-large-semeval2012-v2-average-no-mask-prompt-d-nce/raw/main/trainer_config.json).
### Reference
If you use any resource from RelBERT, please consider to cite our [paper](https://aclanthology.org/2021.eacl-demos.7/).
```
@inproceedings{ushio-etal-2021-distilling-relation-embeddings,
title = "{D}istilling {R}elation {E}mbeddings from {P}re-trained {L}anguage {M}odels",
author = "Ushio, Asahi and
Schockaert, Steven and
Camacho-Collados, Jose",
booktitle = "EMNLP 2021",
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
}
```
|
BigTMiami/n_par_bn_v_1_e_20_pre_adapter | BigTMiami | "2024-04-21T15:29:14Z" | 0 | 0 | adapter-transformers | [
"adapter-transformers",
"roberta",
"dataset:BigTMiami/amazon_MICRO_helpfulness_dataset_condensed",
"region:us"
] | null | "2024-04-21T15:28:55Z" | ---
tags:
- adapter-transformers
- roberta
datasets:
- BigTMiami/amazon_MICRO_helpfulness_dataset_condensed
---
# Adapter `BigTMiami/n_par_bn_v_1_e_20_pre_adapter` for roberta-base
An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [BigTMiami/amazon_MICRO_helpfulness_dataset_condensed](https://huggingface.co/datasets/BigTMiami/amazon_MICRO_helpfulness_dataset_condensed/) dataset and includes a prediction head for masked lm.
This adapter was created for usage with the **[Adapters](https://github.com/Adapter-Hub/adapters)** library.
## Usage
First, install `adapters`:
```
pip install -U adapters
```
Now, the adapter can be loaded and activated like this:
```python
from adapters import AutoAdapterModel
model = AutoAdapterModel.from_pretrained("roberta-base")
adapter_name = model.load_adapter("BigTMiami/n_par_bn_v_1_e_20_pre_adapter", source="hf", set_active=True)
```
## Architecture & Training
<!-- Add some description here -->
## Evaluation results
<!-- Add some description here -->
## Citation
<!-- Add some description here --> |
aleegis12/d1322b4c-39c7-4a3c-9805-3386a25a880d | aleegis12 | "2025-02-03T12:54:10Z" | 7 | 0 | peft | [
"peft",
"safetensors",
"mistral",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/OpenHermes-2.5-Mistral-7B",
"base_model:adapter:unsloth/OpenHermes-2.5-Mistral-7B",
"license:apache-2.0",
"region:us"
] | null | "2025-02-03T11:28:56Z" | ---
library_name: peft
license: apache-2.0
base_model: unsloth/OpenHermes-2.5-Mistral-7B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: d1322b4c-39c7-4a3c-9805-3386a25a880d
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/OpenHermes-2.5-Mistral-7B
bf16: true
chat_template: llama3
data_processes: 16
dataset_prepared_path: null
datasets:
- data_files:
- 9c4378b501f71de8_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/9c4378b501f71de8_train_data.json
type:
field_input: prompt
field_instruction: reason1
field_output: reason2
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device_map: auto
do_eval: true
early_stopping_patience: 5
eval_batch_size: 4
eval_max_new_tokens: 128
eval_steps: 50
eval_table_size: null
evals_per_epoch: null
flash_attention: true
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: true
hub_model_id: aleegis12/d1322b4c-39c7-4a3c-9805-3386a25a880d
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 128
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 64
lora_target_linear: true
lr_scheduler: cosine
max_grad_norm: 1.0
max_memory:
0: 75GB
max_steps: 200
micro_batch_size: 8
mlflow_experiment_name: /tmp/9c4378b501f71de8_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 3
optim_args:
adam_beta1: 0.9
adam_beta2: 0.95
adam_epsilon: 1e-5
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 50
saves_per_epoch: null
sequence_len: 1024
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 432ed5ae-dbea-46a8-8795-45618fe0369a
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 432ed5ae-dbea-46a8-8795-45618fe0369a
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# d1322b4c-39c7-4a3c-9805-3386a25a880d
This model is a fine-tuned version of [unsloth/OpenHermes-2.5-Mistral-7B](https://huggingface.co/unsloth/OpenHermes-2.5-Mistral-7B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6430
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=adam_beta1=0.9,adam_beta2=0.95,adam_epsilon=1e-5
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 5.7904 | 0.0002 | 1 | 1.5057 |
| 2.8836 | 0.0088 | 50 | 0.7961 |
| 2.4347 | 0.0177 | 100 | 0.7386 |
| 2.4605 | 0.0265 | 150 | 0.6821 |
| 2.665 | 0.0354 | 200 | 0.6430 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
rizvi-rahil786/electra-bert-base-canadaWildfire | rizvi-rahil786 | "2024-03-13T17:40:22Z" | 93 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"electra",
"text-classification",
"generated_from_trainer",
"base_model:bhadresh-savani/electra-base-emotion",
"base_model:finetune:bhadresh-savani/electra-base-emotion",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2024-03-13T17:24:27Z" | ---
license: apache-2.0
base_model: bhadresh-savani/electra-base-emotion
tags:
- generated_from_trainer
model-index:
- name: electra-bert-base-canadaWildfire
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# electra-bert-base-canadaWildfire
This model is a fine-tuned version of [bhadresh-savani/electra-base-emotion](https://huggingface.co/bhadresh-savani/electra-base-emotion) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4474
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.6815 | 1.0 | 3008 | 0.5991 |
| 0.4478 | 2.0 | 6016 | 0.4474 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
LarryAIDraw/sparklev1 | LarryAIDraw | "2024-02-19T16:05:07Z" | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | "2024-02-19T15:35:53Z" | ---
license: creativeml-openrail-m
---
https://civitai.com/models/309302?modelVersionId=350612 |
Ellbendls/Qwen-2.5-3b-Quran | Ellbendls | "2024-11-27T13:41:22Z" | 110 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"id",
"dataset:emhaihsan/quran-indonesia-tafseer-translation",
"base_model:Qwen/Qwen2.5-3B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-3B-Instruct",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-11-27T12:38:54Z" | ---
library_name: transformers
license: mit
datasets:
- emhaihsan/quran-indonesia-tafseer-translation
language:
- id
base_model:
- Qwen/Qwen2.5-3B-Instruct
---
# Model Card for Fine-Tuned Qwen2.5-3B-Instruct
This is a fine-tuned version of the [Qwen2.5-3B-Instruct](https://huggingface.co/Qwen/Qwen2.5-3B-Instruct) model. The fine-tuning process utilized the [Quran Indonesia Tafseer Translation](https://huggingface.co/datasets/emhaihsan/quran-indonesia-tafseer-translation) dataset, which provides translations and tafsir in Bahasa Indonesia for the Quran.
## Model Details
### Model Description
- **Base Model:** [Qwen2.5-3B-Instruct](https://huggingface.co/Qwen/Qwen2.5-3B-Instruct)
- **Fine-Tuned By:** Ellbendl Satria
- **Dataset:** [emhaihsan/quran-indonesia-tafseer-translation](https://huggingface.co/datasets/emhaihsan/quran-indonesia-tafseer-translation)
- **Language:** Bahasa Indonesia
- **License:** MIT
This model is designed for NLP tasks involving Quranic text in Bahasa Indonesia, including understanding translations and tafsir.
## Uses
### Direct Use
This model can be used for applications requiring the understanding, summarization, or retrieval of Quranic translations and tafsir in Bahasa Indonesia.
### Downstream Use
It is suitable for fine-tuning on tasks such as:
- Quranic text summarization
- Question answering systems related to Islamic knowledge
- Educational tools for learning Quranic content in Indonesian
### Biases
- The model inherits any biases present in the dataset, which is specific to Islamic translations and tafsir in Bahasa Indonesia.
### Recommendations
- Users should ensure that applications using this model respect cultural and religious sensitivities.
- Results should be verified by domain experts for critical applications.
## How to Get Started with the Model
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
# Load the tokenizer and model
tokenizer = AutoTokenizer.from_pretrained("Ellbendls/Qwen-2.5-3b-Quran")
model = AutoModelForCausalLM.from_pretrained("Ellbendls/Qwen-2.5-3b-Quran")
# Move the model to GPU
model.to("cuda")
# Define the input message
messages = [
{
"role": "user",
"content": "Tafsirkan ayat ini اِهْدِنَا الصِّرَاطَ الْمُسْتَقِيْمَۙ"
}
]
# Generate the prompt using the tokenizer
prompt = tokenizer.apply_chat_template(messages, tokenize=False,
add_generation_prompt=True)
# Tokenize the prompt and move inputs to GPU
inputs = tokenizer(prompt, return_tensors='pt', padding=True,
truncation=True).to("cuda")
# Generate the output using the model
outputs = model.generate(**inputs, max_length=150,
num_return_sequences=1)
# Decode the output
text = tokenizer.decode(outputs[0], skip_special_tokens=True)
# Print the result
print(text.split("assistant")[1])
``` |
Sreekanth3096/vit-coco-image-classification | Sreekanth3096 | "2024-07-09T07:39:45Z" | 252 | 0 | transformers | [
"transformers",
"pytorch",
"tf",
"jax",
"safetensors",
"vit",
"image-classification",
"vision",
"en",
"dataset:imagenet-1k",
"dataset:imagenet-21k",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | "2024-07-09T06:59:45Z" | ---
license: apache-2.0
tags:
- vision
- image-classification
- vit
datasets:
- imagenet-1k
- imagenet-21k
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg
example_title: Tiger
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg
example_title: Teapot
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg
example_title: Palace
language:
- en
library_name: transformers
pipeline_tag: image-classification
---
# Model Overviwe:
The Vision Transformer (ViT) is a transformer encoder model designed for image recognition tasks. It was pretrained on a large dataset of 14 million images and 21,843 classes known as ImageNet-21k, and fine-tuned on ImageNet 2012, which consists of 1 million images across 1,000 classes.
# How It Works:
Input Representation: Images are split into fixed-size patches (16x16 pixels) and linearly embedded. A special [CLS] token is added at the beginning of the sequence to indicate the image's classification.
Transformer Encoder: The model uses a transformer encoder architecture, similar to BERT for text, to process the image patches. Absolute position embeddings are added to encode spatial information before inputting the sequence into transformer layers.
Classification: After processing through the transformer layers, the output from the [CLS] token is used for image classification. This token's final hidden state represents the entire image's features.
# Intended Uses:
Image Classification: ViT can be directly used for image classification tasks. By adding a linear layer on top of the [CLS] token, the model can classify images into one of the 1,000 ImageNet classes.
Limitations:
Resolution Dependency: While the model was fine-tuned on ImageNet at 224x224 resolution, better performance is achieved with higher resolutions such as 384x384. Larger models generally yield better results but require more computational resources.
Training Details:
Preprocessing: Images are resized to 224x224 pixels and normalized across RGB channels.
Training: Pretraining was conducted on TPUv3 hardware with a batch size of 4096 and learning rate warmup. Gradient clipping was applied during training to enhance stability.
```python
from transformers import ViTImageProcessor, ViTForImageClassification
from transformers import AutoImageProcessor, AutoModelForImageClassification
from PIL import Image
import requests
import torch
def predict_image_from_url(url):
# Load image from URL
image = Image.open(requests.get(url, stream=True).raw)
# Initialize Sreekanth's processor and model
processor = AutoImageProcessor.from_pretrained("Sreekanth3096/vit-coco-image-classification")
model = AutoModelForImageClassification.from_pretrained("Sreekanth3096/vit-coco-image-classification")
# Preprocess image and make predictions
inputs = processor(images=image, return_tensors="pt")
outputs = model(**inputs)
# Get predicted class label
logits = outputs.logits
predicted_class_idx = logits.argmax(-1).item()
predicted_class = model.config.id2label[predicted_class_idx]
return predicted_class
# Example usage
if __name__ == "__main__":
url = 'http://images.cocodataset.org/val2017/000000039769.jpg'
predicted_class = predict_image_from_url(url)
print(f"Predicted class: {predicted_class}")
```
For more code examples, we refer to the [documentation](https://huggingface.co/transformers/model_doc/vit.html#).
## Training data
The ViT model was pretrained on [ImageNet-21k](http://www.image-net.org/), a dataset consisting of 14 million images and 21k classes, and fine-tuned on [ImageNet](http://www.image-net.org/challenges/LSVRC/2012/), a dataset consisting of 1 million images and 1k classes.
# Evaluation Results:
Performance: Detailed evaluation results on various benchmarks can be found in tables from the original paper. Fine-tuning the model on higher resolutions typically improves classification accuracy. |
Inespinoza/PPO-LunarLander-V0 | Inespinoza | "2023-08-16T14:48:00Z" | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | "2023-08-16T14:47:40Z" | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 266.57 +/- 19.66
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
Fred99774/valendra | Fred99774 | "2023-02-17T03:25:01Z" | 0 | 0 | diffusers | [
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | "2023-02-17T03:21:50Z" | ---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### valendra Dreambooth model trained by Fred99774 with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
Sample pictures of this concept:
|
zigss/Morty | zigss | "2024-06-01T02:11:08Z" | 0 | 0 | diffusers | [
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"template:sd-lora",
"base_model:coreml-community/coreml-toonYou-beta5pruned_cn",
"base_model:adapter:coreml-community/coreml-toonYou-beta5pruned_cn",
"license:cc-by-nc-nd-4.0",
"region:us"
] | text-to-image | "2024-06-01T01:47:09Z" | ---
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
- template:sd-lora
widget:
- text: morty_smith's, photo of face, looking curious, frowning eyeshadow
parameters:
negative_prompt: (worst quality, low quality, letterboxed)
output:
url: images/ytrom (1).jpeg
- text: morty_smith, waist-up shot, looking up, astronaut suit
parameters:
negative_prompt: (worst quality, low quality, letterboxed)
output:
url: images/ytrom (5).jpeg
- text: Photo of morty_smith waist up looking forward with face closed
parameters:
negative_prompt: (worst quality, low quality, letterboxed)
output:
url: images/ytrom (20).jpeg
- text: Photo of morty_smith waist up looking down with face closed
parameters:
negative_prompt: (worst quality, low quality, letterboxed)
output:
url: images/ytrom (36).jpeg
- text: >-
Photo of morty_smith waist up looking ahead with closed face and wearing
black dress clothes with red tie
parameters:
negative_prompt: (worst quality, low quality, letterboxed)
output:
url: images/ytrom (38).jpeg
- text: >-
Photo of morty_smith waist up looking down with closed face and wearing
black dress clothes with red tie
parameters:
negative_prompt: (worst quality, low quality, letterboxed)
output:
url: images/ytrom (39).jpeg
- text: >-
Waist up shot of morty_smith taking a self in the mirror giving a forced
smile
parameters:
negative_prompt: (worst quality, low quality, letterboxed)
output:
url: images/ytrom (41).jpeg
- text: Photo of morty_smith's upset face
parameters:
negative_prompt: (worst quality, low quality, letterboxed)
output:
url: images/ytrom (42).jpeg
- text: Photo of morty_smith's upset face
parameters:
negative_prompt: (worst quality, low quality, letterboxed)
output:
url: images/ytrom (43).jpeg
- text: Photo of morty_smith's Angry face
parameters:
negative_prompt: (worst quality, low quality, letterboxed)
output:
url: images/ytrom (44).jpeg
- text: Photo of morty_smith's upset face
parameters:
negative_prompt: (worst quality, low quality, letterboxed)
output:
url: images/ytrom (45).jpeg
- text: Photo of morty_smith's Cheerful face, open mouth, wide smile
parameters:
negative_prompt: (worst quality, low quality, letterboxed)
output:
url: images/ytrom (46).jpeg
- text: Photo of morty_smith's face
parameters:
negative_prompt: (worst quality, low quality, letterboxed)
output:
url: images/ytrom (47).jpeg
- text: Photo of morty_smith's face
parameters:
negative_prompt: (worst quality, low quality, letterboxed)
output:
url: images/ytrom (48).jpeg
- text: Photo of morty_smith's sleeping face
parameters:
negative_prompt: (worst quality, low quality, letterboxed)
output:
url: images/ytrom (49).jpeg
- text: Photo of morty_smith's happy face
parameters:
negative_prompt: (worst quality, low quality, letterboxed)
output:
url: images/ytrom (51).jpeg
- text: Photo of morty_smith's Cheerful face, open mouth, wide smile
parameters:
negative_prompt: (worst quality, low quality, letterboxed)
output:
url: images/ytrom (52).jpeg
- text: Photo of morty_smith's Sad face with tears in eyes
parameters:
negative_prompt: (worst quality, low quality, letterboxed)
output:
url: images/ytrom (53).jpeg
- text: Photo of the waist up of the morty_smith curious face
parameters:
negative_prompt: (worst quality, low quality, letterboxed)
output:
url: images/ytrom (54).jpeg
- text: Photo of morty_smith's upset face
parameters:
negative_prompt: (worst quality, low quality, letterboxed)
output:
url: images/ytrom (55).jpeg
- text: Photo of morty_smith's face, Looking down, focused
parameters:
negative_prompt: (worst quality, low quality, letterboxed)
output:
url: images/ytrom (56).jpeg
- text: Photo of morty_smith's embarrassed face, shy
parameters:
negative_prompt: (worst quality, low quality, letterboxed)
output:
url: images/ytrom (57).jpeg
- text: Photo of morty_smith's bored face
parameters:
negative_prompt: (worst quality, low quality, letterboxed)
output:
url: images/ytrom (58).jpeg
- text: Photo of morty_smith's nervous, anxious face
parameters:
negative_prompt: (worst quality, low quality, letterboxed)
output:
url: images/ytrom (59).jpeg
- text: Photo of morty_smith's face of fear, disgust
parameters:
negative_prompt: (worst quality, low quality, letterboxed)
output:
url: images/ytrom (60).jpeg
- text: Photo of morty_smith's bored face
parameters:
negative_prompt: (worst quality, low quality, letterboxed)
output:
url: images/ytrom (61).jpeg
- text: Photo of morty_smith's Passionate face
parameters:
negative_prompt: (worst quality, low quality, letterboxed)
output:
url: images/ytrom (62).jpeg
- text: Photo of morty_smith's face of disappointment, sadness, regret
parameters:
negative_prompt: (worst quality, low quality, letterboxed)
output:
url: images/ytrom (65).jpeg
- text: Photo of morty_smith's face of despair, screaming, mouth open, eyes wide
parameters:
negative_prompt: (worst quality, low quality, letterboxed)
output:
url: images/ytrom (66).jpeg
- text: Photo of morty_smith's bored face, studying, taking a test at school
parameters:
negative_prompt: (worst quality, low quality, letterboxed)
output:
url: images/ytrom (67).jpeg
- text: Waist-up photo of morty_smith showing off her skills with a wink
parameters:
negative_prompt: (worst quality, low quality, letterboxed)
output:
url: images/ytrom (68).jpeg
- text: Photo of morty_smith's angry, enraged face
parameters:
negative_prompt: (worst quality, low quality, letterboxed)
output:
url: images/ytrom (69).jpeg
- text: Waist up photo serious face
parameters:
negative_prompt: (worst quality, low quality, letterboxed)
output:
url: images/ytrom (70).jpeg
- text: Photo of morty_smith's relief face
parameters:
negative_prompt: (worst quality, low quality, letterboxed)
output:
url: images/ytrom (71).jpeg
- text: Photo of morty_smith's face of happiness, smile, content
parameters:
negative_prompt: (worst quality, low quality, letterboxed)
output:
url: images/ytrom (72).jpeg
- text: Photo of morty_smith's negotiation expression
parameters:
negative_prompt: (worst quality, low quality, letterboxed)
output:
url: images/ytrom (74).jpeg
- text: Photo of morty_smith's thoughtful, traumatized face
parameters:
negative_prompt: (worst quality, low quality, letterboxed)
output:
url: images/ytrom (1).jpg
- text: >-
Photo from the waist up of the morty_smith face of someone who has done
something wrong
parameters:
negative_prompt: (worst quality, low quality, letterboxed)
output:
url: images/ytrom (2).jpg
- text: Photo of morty_smith's sad face
parameters:
negative_prompt: (worst quality, low quality, letterboxed)
output:
url: images/ytrom (4).jpg
- text: Photo of Morty_smith excited, receptive, cheerful
parameters:
negative_prompt: (worst quality, low quality, letterboxed)
output:
url: images/ytrom (6).jpg
- text: Photo of Morty_smith afraid, desperate
parameters:
negative_prompt: (worst quality, low quality, letterboxed)
output:
url: images/ytrom (7).jpg
- text: Lost morty_smith full body photo
parameters:
negative_prompt: (worst quality, low quality, letterboxed)
output:
url: images/ytrom (8).jpg
- text: morty_smith face photo, scared, agony, fear
parameters:
negative_prompt: (worst quality, low quality, letterboxed)
output:
url: images/ytrom (9).jpg
- text: 'morty_smith face photo, angry, space ship background '
parameters:
negative_prompt: (worst quality, low quality, letterboxed)
output:
url: images/ytrom (11).jpg
- text: morty_smith waist up photo, angry, fighting aliens
parameters:
negative_prompt: (worst quality, low quality, letterboxed)
output:
url: images/ytrom (12).jpg
- text: morty_smith headshot, disembarrassed face, regretful
parameters:
negative_prompt: (worst quality, low quality, letterboxed)
output:
url: images/ytrom (13).jpg
- text: guilty morty_smith full body photo
parameters:
negative_prompt: (worst quality, low quality, letterboxed)
output:
url: images/ytrom (14).jpg
- text: happy morty_smith full body photo
parameters:
negative_prompt: (worst quality, low quality, letterboxed)
output:
url: images/ytrom (15).jpg
- text: tired morty_smith full body photo
parameters:
negative_prompt: (worst quality, low quality, letterboxed)
output:
url: images/ytrom (16).jpg
- text: >-
Photo from the waist up of the morty_smith face of someone who has done
something wrong
parameters:
negative_prompt: (worst quality, low quality, letterboxed)
output:
url: images/ytrom (20).jpg
- text: morty_smith alone in a spaceship
parameters:
negative_prompt: (worst quality, low quality, letterboxed)
output:
url: images/ytrom (22).jpg
- text: Photo of morty_smith's face
parameters:
negative_prompt: (worst quality, low quality, letterboxed)
output:
url: images/ytrom (23).jpg
- text: >-
Photo from the waist up of the morty_smith face of someone who has done
something wrong
parameters:
negative_prompt: (worst quality, low quality, letterboxed)
output:
url: images/ytrom (24).jpg
- text: morty_smith Headshot, Stubborn Face, Suspicious
parameters:
negative_prompt: (worst quality, low quality, letterboxed)
output:
url: images/ytrom (25).jpg
- text: morty_smith headshot, charismatic face, receptive
parameters:
negative_prompt: (worst quality, low quality, letterboxed)
output:
url: images/ytrom (26).jpg
- text: >-
morty_smith photo from the waist up, giving a speech, mouth open, talking
loudly
parameters:
negative_prompt: (worst quality, low quality, letterboxed)
output:
url: images/ytrom (27).jpg
- text: >-
morty_smith profile picture from the waist up, threatening, eyes fixed on
the target
parameters:
negative_prompt: (worst quality, low quality, letterboxed)
output:
url: images/ytrom (28).jpg
- text: morty_smith photo of face, shocked, hands over mouth
parameters:
negative_prompt: (worst quality, low quality, letterboxed)
output:
url: images/ytrom (30).jpg
- text: morty_smith, waist up photo, scared, mouth open, eyes wide
parameters:
negative_prompt: (worst quality, low quality, letterboxed)
output:
url: images/ytrom (1).png
- text: morty_smith, waist up photo, shy, anxious
parameters:
negative_prompt: (worst quality, low quality, letterboxed)
output:
url: images/ytrom (3).png
- text: scared morty_smith full body photo
parameters:
negative_prompt: (worst quality, low quality, letterboxed)
output:
url: images/ytrom (4).png
- text: morty_smith full body photo, Running, fear, screaming, mouth open, eyes wide
parameters:
negative_prompt: (worst quality, low quality, letterboxed)
output:
url: images/ytrom (5).png
- text: Photo of morty_smith's face, talking angrily, arguing fiercely
parameters:
negative_prompt: (worst quality, low quality, letterboxed)
output:
url: images/ytrom (7).png
- text: >-
morty_smith picture of face, trying to hold back tears, face of anger and
dissatisfaction, tears in eyes
parameters:
negative_prompt: (worst quality, low quality, letterboxed)
output:
url: images/ytrom (8).png
- text: Photo of morty_smith's embarrassed face, wavy mouth, frazied eyebrows
parameters:
negative_prompt: (worst quality, low quality, letterboxed)
output:
url: images/ytrom (9).png
- text: Photo of the morty_smith face of someone who has done something wrong
parameters:
negative_prompt: (worst quality, low quality, letterboxed)
output:
url: images/ytrom (10).png
- text: morty_smith Full body profile picture, Running, fear
parameters:
negative_prompt: (worst quality, low quality, letterboxed)
output:
url: images/ytrom (1).webp
- text: morty_smith waist up photo of curious
parameters:
negative_prompt: (worst quality, low quality, letterboxed)
output:
url: images/ytrom (2).webp
- text: morty_smith Waist up photo face of someone who has done something wrong
parameters:
negative_prompt: (worst quality, low quality, letterboxed)
output:
url: images/ytrom (3).webp
- text: >-
Morty Smith, Waist Up Photo, Incredulous, Dissatisfied, Anger, Sadness,
Freaking Out
parameters:
negative_prompt: (worst quality, low quality, letterboxed)
output:
url: images/ytrom (4).webp
- text: morty_smith, photo from waist up, sad, depression
parameters:
negative_prompt: (worst quality, low quality, letterboxed)
output:
url: images/ytrom (5).webp
base_model: coreml-community/coreml-toonYou-beta5pruned_cn
instance_prompt: morty_smith
license: cc-by-nc-nd-4.0
---
# Morty0.1
<Gallery />
## Model description
Morty model
## Trigger words
You should use `morty_smith` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/zigss/Morty/tree/main) them in the Files & versions tab.
|
lmazzon70/videomae-base-short-finetuned-ssv2-finetuned-rwf2000-epochs8-sample8 | lmazzon70 | "2023-01-10T19:22:54Z" | 62 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"videomae",
"video-classification",
"generated_from_trainer",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] | video-classification | "2023-01-10T11:26:16Z" | ---
license: cc-by-nc-4.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: videomae-base-short-finetuned-ssv2-finetuned-rwf2000-epochs8-sample8
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# videomae-base-short-finetuned-ssv2-finetuned-rwf2000-epochs8-sample8
This model is a fine-tuned version of [MCG-NJU/videomae-base-short-finetuned-ssv2](https://huggingface.co/MCG-NJU/videomae-base-short-finetuned-ssv2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 3.2493
- Accuracy: 0.3857
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 6400
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6783 | 0.12 | 800 | 0.5823 | 0.8175 |
| 0.7397 | 1.12 | 1600 | 2.2365 | 0.5475 |
| 0.206 | 2.12 | 2400 | 1.4244 | 0.6375 |
| 0.0431 | 3.12 | 3200 | 0.9144 | 0.7525 |
| 0.0033 | 4.12 | 4000 | 0.7622 | 0.825 |
| 0.0011 | 5.12 | 4800 | 1.0658 | 0.775 |
| 0.001 | 6.12 | 5600 | 1.6892 | 0.6875 |
| 0.2392 | 7.12 | 6400 | 1.1574 | 0.7825 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.1+cu117
- Datasets 2.8.0
- Tokenizers 0.13.2
|
rootacess/distilbert-base-uncased-finetuned-mathQA | rootacess | "2023-03-23T11:14:46Z" | 15 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2023-03-06T13:29:55Z" | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-mathQA
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-mathQA
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0752
- Accuracy: 0.9857
- F1: 0.9857
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.3155 | 1.0 | 1865 | 0.0997 | 0.9727 | 0.9727 |
| 0.0726 | 2.0 | 3730 | 0.0813 | 0.9826 | 0.9825 |
| 0.0292 | 3.0 | 5595 | 0.0752 | 0.9857 | 0.9857 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1+cu116
- Datasets 2.10.1
- Tokenizers 0.13.2
|
farrosalferro24/Gecko-Mantis-8B-siglip-llama3 | farrosalferro24 | "2024-08-13T11:32:24Z" | 6 | 0 | transformers | [
"transformers",
"safetensors",
"gecko",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | "2024-08-13T10:14:25Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
ahmedfarag/category_model | ahmedfarag | "2025-02-04T07:52:07Z" | 0 | 0 | null | [
"joblib",
"license:apache-2.0",
"region:us"
] | null | "2024-06-19T05:26:03Z" | ---
license: apache-2.0
---
|
Naxcybruck/Rosaloraaa | Naxcybruck | "2023-05-21T01:57:58Z" | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | "2023-05-21T01:50:10Z" | ---
license: creativeml-openrail-m
---
|
nikbull/firstmodel | nikbull | "2025-02-12T08:57:03Z" | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | "2025-02-12T06:42:44Z" | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: ChPresXiJi
---
# Firstmodel
<Gallery />
Trained on Replicate using:
https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `ChPresXiJi` to trigger the image generation.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('nikbull/firstmodel', weight_name='lora.safetensors')
image = pipeline('your prompt').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
|
armanbb/layoutlmv3-finetuned-full | armanbb | "2025-02-27T19:37:31Z" | 28 | 0 | transformers | [
"transformers",
"onnx",
"safetensors",
"layoutlmv3",
"token-classification",
"generated_from_trainer",
"base_model:microsoft/layoutlmv3-large",
"base_model:quantized:microsoft/layoutlmv3-large",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | "2025-02-22T20:58:23Z" | ---
library_name: transformers
license: cc-by-nc-sa-4.0
base_model: microsoft/layoutlmv3-large
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: layoutlmv3-finetuned-full
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# layoutlmv3-finetuned-full
This model is a fine-tuned version of [microsoft/layoutlmv3-large](https://huggingface.co/microsoft/layoutlmv3-large) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0613
- Precision: 0.9339
- Recall: 0.9517
- F1: 0.9427
- Accuracy: 0.9888
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 3
- total_train_batch_size: 6
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- training_steps: 2500
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:------:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 0.5201 | 250 | 0.3041 | 0.4864 | 0.5643 | 0.5225 | 0.9219 |
| 0.4848 | 1.0416 | 500 | 0.1620 | 0.7495 | 0.8031 | 0.7753 | 0.9652 |
| 0.4848 | 1.5617 | 750 | 0.1195 | 0.8386 | 0.8662 | 0.8522 | 0.9745 |
| 0.1555 | 2.0832 | 1000 | 0.0996 | 0.8764 | 0.9025 | 0.8892 | 0.9790 |
| 0.1555 | 2.6033 | 1250 | 0.0765 | 0.8984 | 0.9285 | 0.9132 | 0.9828 |
| 0.0941 | 3.1248 | 1500 | 0.0662 | 0.9207 | 0.9387 | 0.9296 | 0.9864 |
| 0.0941 | 3.6449 | 1750 | 0.0658 | 0.9361 | 0.9452 | 0.9406 | 0.9875 |
| 0.0643 | 4.1664 | 2000 | 0.0630 | 0.9317 | 0.9508 | 0.9411 | 0.9886 |
| 0.0643 | 4.6865 | 2250 | 0.0589 | 0.9338 | 0.9503 | 0.9420 | 0.9892 |
| 0.0503 | 5.2080 | 2500 | 0.0613 | 0.9339 | 0.9517 | 0.9427 | 0.9888 |
### Framework versions
- Transformers 4.48.3
- Pytorch 2.5.1+cu124
- Datasets 3.3.2
- Tokenizers 0.21.0
|
mradermacher/jessi-v0.1-virtuoso-small-GGUF | mradermacher | "2025-04-06T21:00:16Z" | 0 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:neopolita/jessi-v0.1-virtuoso-small",
"base_model:quantized:neopolita/jessi-v0.1-virtuoso-small",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2025-04-06T17:26:04Z" | ---
base_model: neopolita/jessi-v0.1-virtuoso-small
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/neopolita/jessi-v0.1-virtuoso-small
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/jessi-v0.1-virtuoso-small-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/jessi-v0.1-virtuoso-small-GGUF/resolve/main/jessi-v0.1-virtuoso-small.Q2_K.gguf) | Q2_K | 5.9 | |
| [GGUF](https://huggingface.co/mradermacher/jessi-v0.1-virtuoso-small-GGUF/resolve/main/jessi-v0.1-virtuoso-small.Q3_K_S.gguf) | Q3_K_S | 6.8 | |
| [GGUF](https://huggingface.co/mradermacher/jessi-v0.1-virtuoso-small-GGUF/resolve/main/jessi-v0.1-virtuoso-small.Q3_K_M.gguf) | Q3_K_M | 7.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/jessi-v0.1-virtuoso-small-GGUF/resolve/main/jessi-v0.1-virtuoso-small.Q3_K_L.gguf) | Q3_K_L | 8.0 | |
| [GGUF](https://huggingface.co/mradermacher/jessi-v0.1-virtuoso-small-GGUF/resolve/main/jessi-v0.1-virtuoso-small.IQ4_XS.gguf) | IQ4_XS | 8.3 | |
| [GGUF](https://huggingface.co/mradermacher/jessi-v0.1-virtuoso-small-GGUF/resolve/main/jessi-v0.1-virtuoso-small.Q4_K_S.gguf) | Q4_K_S | 8.7 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/jessi-v0.1-virtuoso-small-GGUF/resolve/main/jessi-v0.1-virtuoso-small.Q4_K_M.gguf) | Q4_K_M | 9.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/jessi-v0.1-virtuoso-small-GGUF/resolve/main/jessi-v0.1-virtuoso-small.Q5_K_S.gguf) | Q5_K_S | 10.4 | |
| [GGUF](https://huggingface.co/mradermacher/jessi-v0.1-virtuoso-small-GGUF/resolve/main/jessi-v0.1-virtuoso-small.Q5_K_M.gguf) | Q5_K_M | 10.6 | |
| [GGUF](https://huggingface.co/mradermacher/jessi-v0.1-virtuoso-small-GGUF/resolve/main/jessi-v0.1-virtuoso-small.Q6_K.gguf) | Q6_K | 12.2 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/jessi-v0.1-virtuoso-small-GGUF/resolve/main/jessi-v0.1-virtuoso-small.Q8_0.gguf) | Q8_0 | 15.8 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
tensorblock/llama2-13b-v1-GGUF | tensorblock | "2024-11-24T12:46:55Z" | 6 | 0 | null | [
"gguf",
"TensorBlock",
"GGUF",
"base_model:jjourney1125/llama2-13b-v1",
"base_model:quantized:jjourney1125/llama2-13b-v1",
"endpoints_compatible",
"region:us"
] | null | "2024-11-24T11:44:49Z" | ---
base_model: jjourney1125/llama2-13b-v1
tags:
- TensorBlock
- GGUF
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;">
Feedback and support: TensorBlock's <a href="https://x.com/tensorblock_aoi">Twitter/X</a>, <a href="https://t.me/TensorBlock">Telegram Group</a> and <a href="https://x.com/tensorblock_aoi">Discord server</a>
</p>
</div>
</div>
## jjourney1125/llama2-13b-v1 - GGUF
This repo contains GGUF format model files for [jjourney1125/llama2-13b-v1](https://huggingface.co/jjourney1125/llama2-13b-v1).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4011](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d).
<div style="text-align: left; margin: 20px 0;">
<a href="https://tensorblock.co/waitlist/client" style="display: inline-block; padding: 10px 20px; background-color: #007bff; color: white; text-decoration: none; border-radius: 5px; font-weight: bold;">
Run them on the TensorBlock client using your local machine ↗
</a>
</div>
## Prompt template
```
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [llama2-13b-v1-Q2_K.gguf](https://huggingface.co/tensorblock/llama2-13b-v1-GGUF/blob/main/llama2-13b-v1-Q2_K.gguf) | Q2_K | 4.521 GB | smallest, significant quality loss - not recommended for most purposes |
| [llama2-13b-v1-Q3_K_S.gguf](https://huggingface.co/tensorblock/llama2-13b-v1-GGUF/blob/main/llama2-13b-v1-Q3_K_S.gguf) | Q3_K_S | 5.270 GB | very small, high quality loss |
| [llama2-13b-v1-Q3_K_M.gguf](https://huggingface.co/tensorblock/llama2-13b-v1-GGUF/blob/main/llama2-13b-v1-Q3_K_M.gguf) | Q3_K_M | 5.903 GB | very small, high quality loss |
| [llama2-13b-v1-Q3_K_L.gguf](https://huggingface.co/tensorblock/llama2-13b-v1-GGUF/blob/main/llama2-13b-v1-Q3_K_L.gguf) | Q3_K_L | 6.454 GB | small, substantial quality loss |
| [llama2-13b-v1-Q4_0.gguf](https://huggingface.co/tensorblock/llama2-13b-v1-GGUF/blob/main/llama2-13b-v1-Q4_0.gguf) | Q4_0 | 6.860 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [llama2-13b-v1-Q4_K_S.gguf](https://huggingface.co/tensorblock/llama2-13b-v1-GGUF/blob/main/llama2-13b-v1-Q4_K_S.gguf) | Q4_K_S | 6.913 GB | small, greater quality loss |
| [llama2-13b-v1-Q4_K_M.gguf](https://huggingface.co/tensorblock/llama2-13b-v1-GGUF/blob/main/llama2-13b-v1-Q4_K_M.gguf) | Q4_K_M | 7.326 GB | medium, balanced quality - recommended |
| [llama2-13b-v1-Q5_0.gguf](https://huggingface.co/tensorblock/llama2-13b-v1-GGUF/blob/main/llama2-13b-v1-Q5_0.gguf) | Q5_0 | 8.356 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [llama2-13b-v1-Q5_K_S.gguf](https://huggingface.co/tensorblock/llama2-13b-v1-GGUF/blob/main/llama2-13b-v1-Q5_K_S.gguf) | Q5_K_S | 8.356 GB | large, low quality loss - recommended |
| [llama2-13b-v1-Q5_K_M.gguf](https://huggingface.co/tensorblock/llama2-13b-v1-GGUF/blob/main/llama2-13b-v1-Q5_K_M.gguf) | Q5_K_M | 8.596 GB | large, very low quality loss - recommended |
| [llama2-13b-v1-Q6_K.gguf](https://huggingface.co/tensorblock/llama2-13b-v1-GGUF/blob/main/llama2-13b-v1-Q6_K.gguf) | Q6_K | 9.946 GB | very large, extremely low quality loss |
| [llama2-13b-v1-Q8_0.gguf](https://huggingface.co/tensorblock/llama2-13b-v1-GGUF/blob/main/llama2-13b-v1-Q8_0.gguf) | Q8_0 | 12.881 GB | very large, extremely low quality loss - not recommended |
## Downloading instruction
### Command line
Firstly, install Huggingface Client
```shell
pip install -U "huggingface_hub[cli]"
```
Then, downoad the individual model file the a local directory
```shell
huggingface-cli download tensorblock/llama2-13b-v1-GGUF --include "llama2-13b-v1-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you wanna download multiple model files with a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/llama2-13b-v1-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
|
Triangle104/Qwen2.5-7B-Q4_K_S-GGUF | Triangle104 | "2024-09-19T15:56:47Z" | 6 | 0 | null | [
"gguf",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"en",
"base_model:Qwen/Qwen2.5-7B",
"base_model:quantized:Qwen/Qwen2.5-7B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | "2024-09-19T15:56:28Z" | ---
base_model: Qwen/Qwen2.5-7B
language:
- en
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen2.5-7B/blob/main/LICENSE
pipeline_tag: text-generation
tags:
- llama-cpp
- gguf-my-repo
---
# Triangle104/Qwen2.5-7B-Q4_K_S-GGUF
This model was converted to GGUF format from [`Qwen/Qwen2.5-7B`](https://huggingface.co/Qwen/Qwen2.5-7B) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Qwen/Qwen2.5-7B) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/Qwen2.5-7B-Q4_K_S-GGUF --hf-file qwen2.5-7b-q4_k_s.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Triangle104/Qwen2.5-7B-Q4_K_S-GGUF --hf-file qwen2.5-7b-q4_k_s.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Triangle104/Qwen2.5-7B-Q4_K_S-GGUF --hf-file qwen2.5-7b-q4_k_s.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Triangle104/Qwen2.5-7B-Q4_K_S-GGUF --hf-file qwen2.5-7b-q4_k_s.gguf -c 2048
```
|
hannaherlebach/PPO-LunarLander-v2 | hannaherlebach | "2023-10-30T10:53:59Z" | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | "2023-10-30T10:53:41Z" | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 278.41 +/- 24.05
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
Sarveshj/BERT_ep8_lr5 | Sarveshj | "2025-01-31T09:26:47Z" | 110 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"bert",
"token-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | "2023-04-09T10:35:48Z" | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: BERT_ep8_lr5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# BERT_ep8_lr5
This model is a fine-tuned version of [ajtamayoh/NER_EHR_Spanish_model_Mulitlingual_BERT](https://huggingface.co/ajtamayoh/NER_EHR_Spanish_model_Mulitlingual_BERT) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2950
- Precision: 0.6748
- Recall: 0.6332
- F1: 0.6534
- Accuracy: 0.9420
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-09
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 467 | 0.3067 | 0.6768 | 0.6258 | 0.6503 | 0.9415 |
| 0.2941 | 2.0 | 934 | 0.3029 | 0.6753 | 0.6283 | 0.6510 | 0.9417 |
| 0.2874 | 3.0 | 1401 | 0.2999 | 0.6764 | 0.6302 | 0.6525 | 0.9418 |
| 0.2821 | 4.0 | 1868 | 0.2978 | 0.6761 | 0.6316 | 0.6531 | 0.9420 |
| 0.2828 | 5.0 | 2335 | 0.2963 | 0.6749 | 0.6321 | 0.6528 | 0.9421 |
| 0.2829 | 6.0 | 2802 | 0.2954 | 0.6748 | 0.6332 | 0.6534 | 0.9421 |
| 0.2808 | 7.0 | 3269 | 0.2951 | 0.6750 | 0.6332 | 0.6535 | 0.9421 |
| 0.2841 | 8.0 | 3736 | 0.2950 | 0.6748 | 0.6332 | 0.6534 | 0.9420 |
### Framework versions
- Transformers 4.27.4
- Pytorch 2.0.0+cu118
- Datasets 2.11.0
- Tokenizers 0.13.3
|
genki10/ASAP_FineTuningBERT_AugV9_k1_task1_organization_k1_k1_fold0 | genki10 | "2025-02-12T02:25:02Z" | 0 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2025-02-11T21:53:36Z" | ---
library_name: transformers
license: apache-2.0
base_model: bert-base-uncased
tags:
- generated_from_trainer
model-index:
- name: ASAP_FineTuningBERT_AugV9_k1_task1_organization_k1_k1_fold0
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ASAP_FineTuningBERT_AugV9_k1_task1_organization_k1_k1_fold0
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7676
- Qwk: 0.5122
- Mse: 0.7676
- Rmse: 0.8761
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss | Qwk | Mse | Rmse |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|
| No log | 1.0 | 2 | 9.3762 | 0.0018 | 9.3762 | 3.0621 |
| No log | 2.0 | 4 | 8.9574 | 0.0 | 8.9574 | 2.9929 |
| No log | 3.0 | 6 | 8.6544 | 0.0 | 8.6544 | 2.9418 |
| No log | 4.0 | 8 | 8.3386 | 0.0 | 8.3386 | 2.8877 |
| No log | 5.0 | 10 | 7.9751 | 0.0 | 7.9751 | 2.8240 |
| No log | 6.0 | 12 | 7.5973 | 0.0 | 7.5973 | 2.7563 |
| No log | 7.0 | 14 | 7.2405 | 0.0 | 7.2405 | 2.6908 |
| No log | 8.0 | 16 | 6.7876 | 0.0 | 6.7876 | 2.6053 |
| No log | 9.0 | 18 | 5.8647 | 0.0111 | 5.8647 | 2.4217 |
| No log | 10.0 | 20 | 4.6519 | 0.0039 | 4.6519 | 2.1568 |
| No log | 11.0 | 22 | 3.7156 | 0.0 | 3.7156 | 1.9276 |
| No log | 12.0 | 24 | 3.3637 | 0.0 | 3.3637 | 1.8340 |
| No log | 13.0 | 26 | 2.4646 | 0.1257 | 2.4646 | 1.5699 |
| No log | 14.0 | 28 | 2.0484 | 0.0627 | 2.0484 | 1.4312 |
| No log | 15.0 | 30 | 1.6931 | 0.0601 | 1.6931 | 1.3012 |
| No log | 16.0 | 32 | 1.3521 | 0.0484 | 1.3521 | 1.1628 |
| No log | 17.0 | 34 | 1.1351 | 0.0316 | 1.1351 | 1.0654 |
| No log | 18.0 | 36 | 0.9063 | 0.0419 | 0.9063 | 0.9520 |
| No log | 19.0 | 38 | 1.1250 | 0.0419 | 1.1250 | 1.0606 |
| No log | 20.0 | 40 | 0.9164 | 0.1395 | 0.9164 | 0.9573 |
| No log | 21.0 | 42 | 0.7017 | 0.3950 | 0.7017 | 0.8377 |
| No log | 22.0 | 44 | 0.6914 | 0.3987 | 0.6914 | 0.8315 |
| No log | 23.0 | 46 | 1.1066 | 0.1868 | 1.1066 | 1.0519 |
| No log | 24.0 | 48 | 0.9961 | 0.2906 | 0.9961 | 0.9980 |
| No log | 25.0 | 50 | 0.6026 | 0.4108 | 0.6026 | 0.7763 |
| No log | 26.0 | 52 | 0.6133 | 0.3921 | 0.6133 | 0.7832 |
| No log | 27.0 | 54 | 0.7983 | 0.4200 | 0.7983 | 0.8935 |
| No log | 28.0 | 56 | 0.9119 | 0.3746 | 0.9119 | 0.9549 |
| No log | 29.0 | 58 | 0.6392 | 0.4240 | 0.6392 | 0.7995 |
| No log | 30.0 | 60 | 0.6577 | 0.4756 | 0.6577 | 0.8110 |
| No log | 31.0 | 62 | 0.6714 | 0.4590 | 0.6714 | 0.8194 |
| No log | 32.0 | 64 | 0.8080 | 0.4147 | 0.8080 | 0.8989 |
| No log | 33.0 | 66 | 0.6988 | 0.4577 | 0.6988 | 0.8359 |
| No log | 34.0 | 68 | 0.7070 | 0.4926 | 0.7070 | 0.8408 |
| No log | 35.0 | 70 | 0.7754 | 0.4235 | 0.7754 | 0.8806 |
| No log | 36.0 | 72 | 0.7610 | 0.4632 | 0.7610 | 0.8724 |
| No log | 37.0 | 74 | 0.7793 | 0.4904 | 0.7793 | 0.8828 |
| No log | 38.0 | 76 | 0.8151 | 0.4844 | 0.8151 | 0.9028 |
| No log | 39.0 | 78 | 0.8369 | 0.4480 | 0.8369 | 0.9148 |
| No log | 40.0 | 80 | 0.8190 | 0.4454 | 0.8190 | 0.9050 |
| No log | 41.0 | 82 | 0.7869 | 0.5029 | 0.7869 | 0.8871 |
| No log | 42.0 | 84 | 0.7472 | 0.4965 | 0.7472 | 0.8644 |
| No log | 43.0 | 86 | 0.8162 | 0.4424 | 0.8162 | 0.9034 |
| No log | 44.0 | 88 | 0.7285 | 0.4984 | 0.7285 | 0.8535 |
| No log | 45.0 | 90 | 0.7463 | 0.5184 | 0.7463 | 0.8639 |
| No log | 46.0 | 92 | 0.8228 | 0.4446 | 0.8228 | 0.9071 |
| No log | 47.0 | 94 | 0.8118 | 0.4957 | 0.8118 | 0.9010 |
| No log | 48.0 | 96 | 0.8629 | 0.5196 | 0.8629 | 0.9289 |
| No log | 49.0 | 98 | 0.8502 | 0.5096 | 0.8502 | 0.9221 |
| No log | 50.0 | 100 | 0.8488 | 0.5033 | 0.8488 | 0.9213 |
| No log | 51.0 | 102 | 0.9401 | 0.5136 | 0.9401 | 0.9696 |
| No log | 52.0 | 104 | 0.8770 | 0.5262 | 0.8770 | 0.9365 |
| No log | 53.0 | 106 | 0.8367 | 0.4608 | 0.8367 | 0.9147 |
| No log | 54.0 | 108 | 0.8385 | 0.4553 | 0.8385 | 0.9157 |
| No log | 55.0 | 110 | 0.8027 | 0.4916 | 0.8027 | 0.8959 |
| No log | 56.0 | 112 | 0.8692 | 0.5031 | 0.8692 | 0.9323 |
| No log | 57.0 | 114 | 0.7701 | 0.5227 | 0.7701 | 0.8775 |
| No log | 58.0 | 116 | 0.8111 | 0.4490 | 0.8111 | 0.9006 |
| No log | 59.0 | 118 | 0.8864 | 0.4427 | 0.8864 | 0.9415 |
| No log | 60.0 | 120 | 0.7776 | 0.4975 | 0.7776 | 0.8818 |
| No log | 61.0 | 122 | 0.8034 | 0.5340 | 0.8034 | 0.8963 |
| No log | 62.0 | 124 | 0.7789 | 0.5066 | 0.7789 | 0.8826 |
| No log | 63.0 | 126 | 0.8550 | 0.4566 | 0.8550 | 0.9247 |
| No log | 64.0 | 128 | 0.8781 | 0.4405 | 0.8781 | 0.9371 |
| No log | 65.0 | 130 | 0.7564 | 0.4843 | 0.7564 | 0.8697 |
| No log | 66.0 | 132 | 0.7679 | 0.5396 | 0.7679 | 0.8763 |
| No log | 67.0 | 134 | 0.7368 | 0.5394 | 0.7368 | 0.8583 |
| No log | 68.0 | 136 | 0.6931 | 0.5106 | 0.6931 | 0.8325 |
| No log | 69.0 | 138 | 0.7246 | 0.4959 | 0.7246 | 0.8513 |
| No log | 70.0 | 140 | 0.7192 | 0.5058 | 0.7192 | 0.8480 |
| No log | 71.0 | 142 | 0.7324 | 0.5329 | 0.7324 | 0.8558 |
| No log | 72.0 | 144 | 0.7496 | 0.5441 | 0.7496 | 0.8658 |
| No log | 73.0 | 146 | 0.7423 | 0.5110 | 0.7423 | 0.8616 |
| No log | 74.0 | 148 | 0.7467 | 0.4810 | 0.7467 | 0.8641 |
| No log | 75.0 | 150 | 0.7319 | 0.4832 | 0.7319 | 0.8555 |
| No log | 76.0 | 152 | 0.7416 | 0.5273 | 0.7416 | 0.8611 |
| No log | 77.0 | 154 | 0.7594 | 0.5490 | 0.7594 | 0.8714 |
| No log | 78.0 | 156 | 0.7412 | 0.5086 | 0.7412 | 0.8609 |
| No log | 79.0 | 158 | 0.7517 | 0.4850 | 0.7517 | 0.8670 |
| No log | 80.0 | 160 | 0.7693 | 0.4927 | 0.7693 | 0.8771 |
| No log | 81.0 | 162 | 0.7458 | 0.4936 | 0.7458 | 0.8636 |
| No log | 82.0 | 164 | 0.7454 | 0.5152 | 0.7454 | 0.8634 |
| No log | 83.0 | 166 | 0.7478 | 0.5182 | 0.7478 | 0.8647 |
| No log | 84.0 | 168 | 0.7430 | 0.5181 | 0.7430 | 0.8620 |
| No log | 85.0 | 170 | 0.7514 | 0.5002 | 0.7514 | 0.8668 |
| No log | 86.0 | 172 | 0.7601 | 0.4991 | 0.7601 | 0.8718 |
| No log | 87.0 | 174 | 0.7686 | 0.4987 | 0.7686 | 0.8767 |
| No log | 88.0 | 176 | 0.7783 | 0.5152 | 0.7783 | 0.8822 |
| No log | 89.0 | 178 | 0.7830 | 0.5116 | 0.7830 | 0.8849 |
| No log | 90.0 | 180 | 0.7840 | 0.5005 | 0.7840 | 0.8854 |
| No log | 91.0 | 182 | 0.7855 | 0.4984 | 0.7855 | 0.8863 |
| No log | 92.0 | 184 | 0.7814 | 0.5050 | 0.7814 | 0.8839 |
| No log | 93.0 | 186 | 0.7824 | 0.5076 | 0.7824 | 0.8846 |
| No log | 94.0 | 188 | 0.7840 | 0.5163 | 0.7840 | 0.8854 |
| No log | 95.0 | 190 | 0.7789 | 0.5189 | 0.7789 | 0.8825 |
| No log | 96.0 | 192 | 0.7722 | 0.5197 | 0.7722 | 0.8787 |
| No log | 97.0 | 194 | 0.7676 | 0.5122 | 0.7676 | 0.8761 |
### Framework versions
- Transformers 4.47.0
- Pytorch 2.5.1+cu121
- Datasets 3.2.0
- Tokenizers 0.21.0
|
KingKazma/cnn_dailymail_gpt2_prefix_tuning_500_10_3000_8_e8_s108_v4_l55_v20_extra | KingKazma | "2023-09-14T18:26:35Z" | 0 | 0 | peft | [
"peft",
"region:us"
] | null | "2023-09-14T18:26:33Z" | ---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.6.0.dev0
|
Agnuxo/Qwen2-1.5B-Instruct_MOE_BIOLOGY_assistant-CODE-Python_16bit | Agnuxo | "2024-08-27T10:28:53Z" | 89 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"base_model:Agnuxo/Qwen2-1.5B-Instruct_MOE_CODE_assistant_16bit",
"base_model:finetune:Agnuxo/Qwen2-1.5B-Instruct_MOE_CODE_assistant_16bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-08-26T21:11:28Z" | ---
base_model: Agnuxo/Qwen2-1.5B-Instruct_MOE_CODE_assistant_16bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
- sft
---
# Uploaded model
- **Developed by:** Agnuxo
- **License:** apache-2.0
- **Finetuned from model :** Agnuxo/Qwen2-1.5B-Instruct_MOE_CODE_assistant_16bit
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
## How the MOE System Works
This model is a core component of a larger Multi-Expert Question Answering System. Here's a breakdown of the system's functionality:
1. **Model Loading:** The system loads the "director" LLM and keeps other expert LLMs (e.g., for programming, biology, mathematics) ready for use.
2. **Expert Routing:** When a user asks a question, the system either:
- Uses keyword matching to identify the relevant domain.
- Consults the director LLM to classify the question's category.
3. **Dynamic Expert Loading:** The system loads the chosen expert LLM into memory, optimizing resource usage by releasing any previously active expert.
4. **Response Generation:** The selected expert LLM receives the question and generates a tailored answer.
5. **Chat Interface:** A user-friendly chat interface facilitates interaction with the MOE system.
This MOE approach enhances efficiency and accuracy compared to relying on a single, general-purpose LLM.
Repository and Additional Information
Full Code: https://huggingface.co/Agnuxo/Qwen2-1.5B-Instruct_MOE_Director_16bit/resolve/main/MOE-LLMs3.py
GitHub Repository: https://github.com/Agnuxo1/NEBULA
## Code Example
The following code demonstrates the implementation of the Multi-Expert Question Answering System:
```python
import os
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, pipeline
MODEL_CONFIG = {
"director": {
"name": "Agnuxo/Qwen2-1.5B-Instruct_MOE_Director_16bit",
"task": "text-generation",
},
"programming": {
"name": "Qwen/Qwen2-1.5B-Instruct",
"task": "text-generation",
},
"biology": {
"name": "Agnuxo/Qwen2-1.5B-Instruct_MOE_BIOLOGY_assistant_16bit",
"task": "text-generation",
},
"mathematics": {
"name": "Qwen/Qwen2-Math-1.5B-Instruct",
"task": "text-generation",
}
}
KEYWORDS = {
"biology": ["cell", "DNA", "protein", "evolution", "genetics", "ecosystem", "organism", "metabolism", "photosynthesis", "microbiology", "célula", "ADN", "proteína", "evolución", "genética", "ecosistema", "organismo", "metabolismo", "fotosíntesis", "microbiología"],
"mathematics": ["Math" "mathematics", "equation", "integral", "derivative", "function", "geometry", "algebra", "statistics", "probability", "ecuación", "integral", "derivada", "función", "geometría", "álgebra", "estadística", "probabilidad"],
"programming": ["python", "java", "C++", "HTML", "scrip", "code", "Dataset", "API", "framework", "debugging", "algorithm", "compiler", "database", "CSS", "JSON", "XML", "encryption", "IDE", "repository", "Git", "version control", "front-end", "back-end", "API", "stack trace", "REST", "machine learning"]
}
class MOELLM:
def __init__(self):
self.current_expert = None
self.current_model = None
self.current_tokenizer = None
self.device = "cuda" if torch.cuda.is_available() else "cpu"
print(f"Using device: {self.device}")
self.load_director_model()
def load_director_model(self):
"""Loads the director model."""
print("Loading director model...")
model_name = MODEL_CONFIG["director"]["name"]
self.director_tokenizer = AutoTokenizer.from_pretrained(model_name)
self.director_model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.float16).to(self.device)
self.director_pipeline = pipeline(
MODEL_CONFIG["director"]["task"],
model=self.director_model,
tokenizer=self.director_tokenizer,
device=self.device
)
print("Director model loaded.")
def load_expert_model(self, expert):
"""Dynamically loads an expert model, releasing memory from the previous model."""
if expert not in MODEL_CONFIG:
raise ValueError(f"Unknown expert: {expert}")
if self.current_expert != expert:
print(f"Loading expert model: {expert}...")
# Free memory from the current model if it exists
if self.current_model:
del self.current_model
del self.current_tokenizer
torch.cuda.empty_cache()
model_config = MODEL_CONFIG[expert]
self.current_tokenizer = AutoTokenizer.from_pretrained(model_config["name"])
self.current_model = AutoModelForCausalLM.from_pretrained(model_config["name"], torch_dtype=torch.float16).to(self.device)
self.current_expert = expert
print(f"{expert.capitalize()} model loaded.")
return pipeline(
MODEL_CONFIG[expert]["task"],
model=self.current_model,
tokenizer=self.current_tokenizer,
device=self.device
)
def determine_expert_by_keywords(self, question):
"""Determines the expert based on keywords in the question."""
question_lower = question.lower()
for expert, keywords in KEYWORDS.items():
if any(keyword in question_lower for keyword in keywords):
return expert
return None
def determine_expert(self, question):
"""Determines which expert should answer the question."""
expert = self.determine_expert_by_keywords(question)
if expert:
print(f"Expert determined by keyword: {expert}")
return expert
prompt = f"Classify the following question into one of these categories: programming, biology, mathematics. Question: {question}\nCategory:"
response = self.director_pipeline(prompt, max_length=100, num_return_sequences=1)[0]['generated_text']
expert = response.split(":")[-1].strip().lower()
if expert not in MODEL_CONFIG:
expert = "director"
print(f"Redirecting question to: {expert}")
return expert
def generate_response(self, question, expert):
"""Generates a response using the appropriate model."""
try:
model = self.load_expert_model(expert)
prompt = f"Answer the following question as an expert in {expert}: {question}\nAnswer:"
response = model(prompt, max_length=200, num_return_sequences=1)[0]['generated_text']
return response.split("Answer:")[-1].strip()
except Exception as e:
print(f"Error generating response: {str(e)}")
return "Sorry, there was an error processing your request. Please try again."
def chat_interface(self):
"""Simple chat interface."""
print("Welcome to the MOE-LLM chat. Type 'exit' to quit.")
while True:
question = input("\nYou: ")
if question.lower() in ['exit', 'quit']:
break
try:
expert = self.determine_expert(question)
response = self.generate_response(question, expert)
print(f"\n{expert.capitalize()}: {response}")
except Exception as e:
print(f"Error in chat: {str(e)}")
print("Please try asking another question.")
if __name__ == "__main__":
moe_llm = MOELLM()
moe_llm.chat_interface() |
Xenova/convnext-large-224-22k | Xenova | "2024-10-08T13:43:26Z" | 4 | 0 | transformers.js | [
"transformers.js",
"onnx",
"convnext",
"image-classification",
"base_model:facebook/convnext-large-224-22k",
"base_model:quantized:facebook/convnext-large-224-22k",
"region:us"
] | image-classification | "2023-12-02T12:17:59Z" | ---
base_model: facebook/convnext-large-224-22k
library_name: transformers.js
---
https://huggingface.co/facebook/convnext-large-224-22k with ONNX weights to be compatible with Transformers.js.
## Usage (Transformers.js)
If you haven't already, you can install the [Transformers.js](https://huggingface.co/docs/transformers.js) JavaScript library from [NPM](https://www.npmjs.com/package/@xenova/transformers) using:
```bash
npm i @xenova/transformers
```
**Example:** Perform image classification with `Xenova/convnext-large-224-22k`.
```js
import { pipeline } from '@xenova/transformers';
// Create image classification pipeline
const classifier = await pipeline('image-classification', 'Xenova/convnext-large-224-22k');
// Classify an image
const url = 'https://huggingface.co/datasets/Xenova/transformers.js-docs/resolve/main/tiger.jpg';
const output = await classifier(url);
console.log(output)
```
---
Note: Having a separate repo for ONNX weights is intended to be a temporary solution until WebML gains more traction. If you would like to make your models web-ready, we recommend converting to ONNX using [🤗 Optimum](https://huggingface.co/docs/optimum/index) and structuring your repo like this one (with ONNX weights located in a subfolder named `onnx`). |
AmberYifan/Llama-3.1-8B-sft-all-pool | AmberYifan | "2025-01-21T17:56:30Z" | 9 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"trl",
"dpo",
"conversational",
"arxiv:2305.18290",
"base_model:AmberYifan/Llama-3.1-8B-sft-ultrachat-safeRLHF",
"base_model:finetune:AmberYifan/Llama-3.1-8B-sft-ultrachat-safeRLHF",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-01-18T07:00:51Z" | ---
base_model: AmberYifan/Llama-3.1-8B-sft-ultrachat-safeRLHF
library_name: transformers
model_name: Llama-3.1-8B-sft-all-pool
tags:
- generated_from_trainer
- trl
- dpo
licence: license
---
# Model Card for Llama-3.1-8B-sft-all-pool
This model is a fine-tuned version of [AmberYifan/Llama-3.1-8B-sft-ultrachat-safeRLHF](https://huggingface.co/AmberYifan/Llama-3.1-8B-sft-ultrachat-safeRLHF).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="AmberYifan/Llama-3.1-8B-sft-all-pool", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/yifanwang/huggingface/runs/s7e78voy)
This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290).
### Framework versions
- TRL: 0.12.2
- Transformers: 4.46.3
- Pytorch: 2.5.1+cu118
- Datasets: 3.2.0
- Tokenizers: 0.20.3
## Citations
Cite DPO as:
```bibtex
@inproceedings{rafailov2023direct,
title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}},
author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn},
year = 2023,
booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023},
url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html},
editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
avemio/German-RAG-LLAMA-3.1-8B-ORPO-HESSIAN-AI | avemio | "2025-02-07T10:20:56Z" | 147 | 2 | null | [
"safetensors",
"llama",
"German",
"RAG",
"Retrieval",
"Question-Answering",
"Summarization",
"Reasoning",
"question-answering",
"en",
"de",
"dataset:avemio/German-RAG-CPT-HESSIAN-AI",
"dataset:avemio/German-RAG-SFT-ShareGPT-HESSIAN-AI",
"dataset:avemio/German-RAG-ORPO-ShareGPT-HESSIAN-AI",
"dataset:VAGOsolutions/SauerkrautLM-Fermented-GER-DPO",
"dataset:VAGOsolutions/SauerkrautLM-Fermented-Irrelevance-GER-DPO",
"arxiv:2406.20094",
"base_model:avemio/German-RAG-LLAMA-3.1-8B-SFT-HESSIAN-AI",
"base_model:finetune:avemio/German-RAG-LLAMA-3.1-8B-SFT-HESSIAN-AI",
"license:llama3.1",
"region:us"
] | question-answering | "2024-12-04T17:02:41Z" | ---
license: llama3.1
datasets:
- avemio/German-RAG-CPT-HESSIAN-AI
- avemio/German-RAG-SFT-ShareGPT-HESSIAN-AI
- avemio/German-RAG-ORPO-ShareGPT-HESSIAN-AI
- VAGOsolutions/SauerkrautLM-Fermented-GER-DPO
- VAGOsolutions/SauerkrautLM-Fermented-Irrelevance-GER-DPO
language:
- en
- de
base_model:
- avemio/German-RAG-LLAMA-3.1-8B-SFT-HESSIAN-AI
pipeline_tag: question-answering
tags:
- German
- RAG
- Retrieval
- Question-Answering
- Summarization
- Reasoning
---
# German-RAG-LLAMA-3.1-8B-ORPO-HESSIAN-AI
<!-- Provide a quick summary of what the model is/does. -->
**German-RAG** (**G**erman **R**etrieval **A**ugmented **G**eneration) models are designed for the German-speaking market, enabling innovation and AI solutions to drive German research collaboration in business-focused Generative AI by 2025
Our German-RAG-LLAMA-ORPO model are trained on this **[German-RAG-ORPO](https://huggingface.co/datasets/avemio/German-RAG-ORPO-ShareGPT-HESSIAN-AI) dataset.**
## Model Details
The core models released in this batch are the following:
| Size | Training Tokens |
|------|--------|
| [German-RAG-LLAMA-CPT](https://huggingface.co/avemio/German-RAG-LLAMA-3.1-8B-CPT-HESSIAN-AI) | 507.47 million |
| [German-RAG-LLAMA-SFT](https://huggingface.co/avemio/German-RAG-LLAMA-3.1-8B-SFT-HESSIAN-AI) | 2.03 billion |
| [German-RAG-LLAMA-ORPO](https://huggingface.co/avemio/German-RAG-LLAMA-3.1-8B-ORPO-HESSIAN-AI) | 2.0577 billion |
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** Avemio AI Team
- **Supported by:** Hessian AI
- **Model type:** a Transformer style autoregressive language model.
- **Language(s) (NLP):** German, English
- **License:** The code and model are released under Apache 2.0.
- **Contact:** [[email protected]](mailto:[email protected])
### Model Sources
<!-- Provide the basic links for the model. -->
- **Training Study:** [Training Study](https://avemio.digital/wp-content/uploads/2025/01/German-RAG-TRAINING-STUDY-Advancing-German-Language-AI-with-hessian-AI.pdf)
- **Repositories:**
- Training: [Colab-Notebook](https://colab.research.google.com/drive/18SH_aYLCnw1K7cRGOTTZ80y98V5Kquxb?usp=sharing)
- Evaluation code:
- [German-RAG-LLM-HARD-BENCHMARK](https://github.com/avemio-digital/German-RAG-LLM-HARD-BENCHMARK.git)
- [German-RAG-LLM-EASY-BENCHMARK](https://github.com/avemio-digital/German-RAG-LLM-EASY-BENCHMARK.git)
- **Technical blog post:**
<!-- - **Press release:** TODO -->
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Inference
Quickly get inference running with the following required installation:
Now, proceed as usual with HuggingFace:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "avemio/German-RAG-LLAMA-3.1-8B-SFT-HESSIAN-AI"
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype="auto",
device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)
im_end_token_id = tokenizer.convert_tokens_to_ids('<|im_end|>')
im_start_token_id = tokenizer.convert_tokens_to_ids('<|im_start|>')
messages = [
{"role": "system", "content": "Folge den Anweisungen des Benutzers. Bevor du deine finale Antwort gibst, schildere deine Überlegungen zur Lösung des Problems."},
{"role": "user", "content": "Ferdinand steht vor der Herausforderung, eine faire Besuchsregelung für seine drei Kinder zu finden, die den Bedürfnissen jedes einzelnen Kindes gerecht wird. Jedes Kind hat unterschiedliche Vorlieben und Bedürfnisse, die in den Besuchsplan integriert werden müssen. Er muss sicherstellen, dass die Regelung sowohl den Interessen der Kinder als auch den rechtlichen Vorgaben entspricht. Ferdinand hat eine Woche Zeit, um einen Vorschlag zu erarbeiten, den er mit seinem Anwalt besprechen kann."}
]
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=False
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)
generated_ids = model.generate(
**model_inputs,
max_length=2024,
temperature=0.01,
do_sample=False,
#bos_token_id=im_start_token_id,
eos_token_id=im_end_token_id,
pad_token_id=tokenizer.eos_token_id,
repetition_penalty=1.1,
num_return_sequences=1,
top_k=40,
top_p=0.95,
)
generated_ids = [
output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
```
### [](https://huggingface.co/Qwen/Qwen2.5-Coder-32B-Instruct#processing-long-texts)
### Fine-tuning
We are providing a comprehensive Google Colab notebook to guide users through the process of fine-tuning our model, complete with detailed instructions, essential dependencies, and configurable settings.
[Colab-Notebook](https://colab.research.google.com/drive/18SH_aYLCnw1K7cRGOTTZ80y98V5Kquxb?usp=sharing).
## German-RAG-LLM-EASY-BENCHMARK EVAL
<!-- This section describes the evaluation protocols and provides the results. -->
The evaluation was performed using seven subsets, focusing on extraction recall, question answering (QA) with multiple references, and time difference reasoning. Relevant context and summarization were treated as distinct subsets, each playing a crucial role in the evaluation process. For relevant context, the model's ability to identify and extract pertinent information from the source material was assessed. In contrast, the summarization subset evaluated the model's capability to generate concise and accurate summaries based on the relevant context.
Four evaluation metrics were employed across all subsets: language quality, overall correctness, instruction following, and an overall score.
- **Language quality:** This metric focused on the overall linguistic quality of the outputs, considering factors such as grammar, fluency, and clarity.
- **Overall correctness:** The accuracy and correctness of the content were evaluated under this metric.
- **Instruction following:** This metric assessed the model's ability to follow specific instructions provided for each task.
- **Overall score:** This metric combined the results from the previous three metrics, offering a comprehensive evaluation of the model's capabilities across all subsets.
| Metric | [Vanila-llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Llama-3.1-8B-Instruct) | [German-RAG-LLAMA-SFT](https://huggingface.co/avemio/German-RAG-LLAMA-3.1-8B-SFT-HESSIAN-AI) | **[German-RAG-LLAMA-ORPO](https://huggingface.co/avemio/German-RAG-LLAMA-3.1-8B-ORPO-HESSIAN-AI)** | [German-RAG-LLAMA-MERGED]| GPT-3.5-TURBO |
|------------------------------------------|---------------------------------------------------------------------------------|--------------------------------------------------------------------------------|------------------------------------------------------------------------------------------------|-----------------------------|----------------|
| Average Language Quality |87.78 |88.93 | **88.93** |86.93 |87.58 |
| **OVERALL SCORES (weighted):** | | | | | |
| extraction_recall | 66.1 | 73.2 | **66.3** |61.8 |66.9 |
| qa_multiple_references | 74.7 | 91.5 | **90.9** |84.8 |90.3 |
| qa_without_time_difference | 83.5 | 90.7 | **91.4** |88.0 |89.9 |
| qa_with_time_difference | 86.7 | 91.4 | **91.8** |89.1 |90.6 |
| relevant_context | 87.9 | 90.3 | **89.6** |84.4 |88.5 |
| summarizations | 88.6 | 90.7 | **82.7** |84.9 |87.7 |
## German-RAG-LLM-HARD-BENCHMARK EVAL
<img src="https://avemio.digital/wp-content/uploads/2025/01/German-RAG-LLAMA-ORPO.png" alt="German-RAG Logo" width="700" style="margin-left:'auto' margin-right:'auto' display:'block'"/>
| Metric | [Vanila-llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Llama-3.1-8B-Instruct) | [**[German-RAG-LLAMA-ORPO](https://huggingface.co/avemio/German-RAG-LLAMA-3.1-8B-ORPO-HESSIAN-AI)** | GPT-3.5-TURBO | GPT-4o | GPT-4o-mini |
|-------------------------|-----------------------------------------------------------------------------------------|---------------------------------------------------------------------------------------|----------------|---------|-------------|
| **OVERALL SCORES (weighted):** | | | | | |
| hard_reasoning_de | 39.1 | **42.5** | 37.9 | 62.9 | 58.4 |
| hard_reasoning_en | 54.9 | **55.6** | 48.3 | 61.7 | 62.9 |
## Model Details
### Data
For training data details, please see the [German-RAG-ORPO-Dataset](https://huggingface.co/datasets/avemio/German-RAG-ORPO-ShareGPT-HESSIAN-AI) documentation.
The ORPO Tasks Dataset represents a specialized collection for fine-tuning language models with a focus on RAG-specific capabilities.
The subsets can be for this training step are derived from 3 different sources:
- **SauerkrautLM Preference Datasets**:
- [SauerkrautLM-Fermented-GER-DPO](https://huggingface.co/datasets/VAGOsolutions/SauerkrautLM-Fermented-GER-DPO): is a specialized dataset designed for training language models in function calling irrelevance detection using Preference Optimization. The dataset consists of 2,000 carefully evaluated instruction-response pairs, specifically curated to help models recognize situations where function calls are unnecessary and direct responses are more appropriate.
- [SauerkrautLM-Fermented-Irrelevance-GER-DPO](https://huggingface.co/datasets/VAGOsolutions/SauerkrautLM-Fermented-Irrelevance-GER-DPO): is a high-quality German instruction-response dataset specifically designed for Preference Optimization training. The dataset consists of 3,305 instruction-response pairs. Rather than being merged from existing German datasets, it was carefully created through a sophisticated augmentation process, transforming curated English instructions and responses into culturally adapted German content. Each pair includes comprehensive quality metrics and rejected responses for Preference training.
- **Hard Reasoning DE & EN**: Synthetic generation inspired by Tencent's ([“Scaling Synthetic Data Creation with 1,000,000,000 Personas”](https://arxiv.org/abs/2406.20094)).
- **Multi-Turn-QA**: Developed by Avemio AG, this dataset builds upon and enhances the German Wikipedia dump provided by Cohere ([wikipedia-22-12-de-embeddings](https://huggingface.co/datasets/Cohere/wikipedia-22-12-de-embeddings)), expanding it with synthetic examples and structured tasks to create a robust training resource.
### Data Subsets
| Subset | Examples per Task |
|-------|------------------|
| SauerkrautLM-Fermented-GER-DPO | 3.31k |
| SauerkrautLM-Fermented-Irrelevance-GER-DPO | 2k |
| hard-reasoning-de | 3.19k |
| hard-reasoning-en | 1.97k |
| multi-turn-qa | 3.2k |
### Source Data: SauerkrautLM
[SauerkrautLM-Fermented-GER-DPO](https://huggingface.co/datasets/VAGOsolutions/SauerkrautLM-Fermented-GER-DPO)
[SauerkrautLM-Fermented-Irrelevance-GER-DPO](https://huggingface.co/datasets/VAGOsolutions/SauerkrautLM-Fermented-Irrelevance-GER-DPO)
### Source Data: Hard-Reasoning DE & EN
- Base: ([proj-Persona/PersonaHub](https://huggingface.co/datasets/proj-persona/PersonaHub))
- Enhancement: Synthetic data generation by Avemio AG
- Quality: Automatic validation and curation of examples by Open Source LLM's
### Methodology: Reasoning-DE & Reasoning-EN
- Providing Persona Descriptions and rewriting in a similar style with a different focus area and name in german/english language
- Generating Simple Logical Problems out of Persona-specific Views & Language.
- Generating Approaches, Thinking-Steps & Solutions separately verified by Llama-3.1-405B-Instruct
- Quality assurance and validation
### Source Data: Multi-Turn-QA
- Base: ([cohere/wikipedia-22-12-de-embeddings](https://huggingface.co/datasets/Cohere/wikipedia-22-12-de-embeddings))
- Enhancement: Synthetic data generation by Avemio AG
- Quality: Automatic validation and curation of examples by Open Source LLM's
### Methodology: Multi-Turn-QA
1. Extraction of base content from German Wikipedia
2. Enhancement through synthetic example generation
3. Structure addition for specific task types
4. Quality assurance and validation
### Architecture
| Parameter | German-RAG-LLAMA-ORPO |
|-----------------------|-----------------------------------------------------------------------------------------------|
| **d_model** | 3072 |
| **num heads** | 32 |
| **num layers** | 32 |
| **MLP ratio** | 3.5 |
| **LayerNorm type** | RMSNorm |
| **pos embeddings** | RoPE |
| **attention variant**| Standard Multi-Head Self Attention |
| **biases** | none |
| **block type** | sequential |
| **activation** | SiLU |
| **sequence length** | 131072 |
| **weight typing** | bfloat16
### Hyperparameters
| Parameter | German-RAG-LLAMA-ORPO |
|---------------------------|--------------------|
| **warmup steps** | 50 |
| **peak LR** | 5.0E-07 |
| **weight decay** | 0.1 |
| **LR schedule** | linear |
| **gradient reduce dtype** | FP32 |
| **optimizer state dtype** | FP32 |
## Environmental Impact
German-RAG-LLAMA-ORPO, running on NVIDIA A100 with 80 GPUs for 4 days, has an approximate power consumption as follows:
It's important to note that the actual power consumption may vary depending on the specific workload and operational conditions. For accurate power consumption measurements, using dedicated power monitoring tools is recommended.
| Model | GPU Type | Power Consumption From GPUs |
|----------------|---------------------|-----------------------------|
| German-RAG-LLAMA-ORPO | A100 ([Hessian AI supercomputer](https://hessian.ai/de/)) | 0.01843 MWh |
## Bias, Risks, and Limitations
Like any base language model or fine-tuned model without safety filtering, it is relatively easy for a user to prompt these models to generate harmful and generally sensitive content.
Such content can also be produced unintentionally, especially in the case of bias, so we recommend users consider the risks of applications of this technology.
Otherwise, many facts from German-RAG-LLAMA-ORPO or any LLM will often not be true, so they should be checked.
## The German-RAG AI Team
[Marcel Rosiak](https://de.linkedin.com/in/marcel-rosiak)
[Soumya Paul](https://de.linkedin.com/in/soumya-paul-1636a68a)
[Siavash Mollaebrahim](https://de.linkedin.com/in/siavash-mollaebrahim-4084b5153?trk=people-guest_people_search-card)
[Zain ul Haq](https://de.linkedin.com/in/zain-ul-haq-31ba35196)
|
SKNahin/NER_TinyBert | SKNahin | "2024-02-12T10:29:03Z" | 93 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"token-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | "2024-02-12T08:44:55Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
albertus-sussex/veriscrape-simcse-job-reference_6_to_verify_4-fold-6 | albertus-sussex | "2025-03-26T20:07:48Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"roberta",
"feature-extraction",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | feature-extraction | "2025-03-26T20:07:19Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Triangle104/Hermes-3-Llama-3.2-3B-abliterated-Q4_K_M-GGUF | Triangle104 | "2024-12-19T13:38:39Z" | 20 | 1 | transformers | [
"transformers",
"gguf",
"Llama-3",
"instruct",
"finetune",
"chatml",
"gpt4",
"synthetic data",
"distillation",
"function calling",
"json mode",
"axolotl",
"roleplaying",
"chat",
"abliterated",
"uncensored",
"llama-cpp",
"gguf-my-repo",
"en",
"base_model:huihui-ai/Hermes-3-Llama-3.2-3B-abliterated",
"base_model:quantized:huihui-ai/Hermes-3-Llama-3.2-3B-abliterated",
"license:llama3",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2024-12-19T13:38:24Z" | ---
language:
- en
license: llama3
tags:
- Llama-3
- instruct
- finetune
- chatml
- gpt4
- synthetic data
- distillation
- function calling
- json mode
- axolotl
- roleplaying
- chat
- abliterated
- uncensored
- llama-cpp
- gguf-my-repo
base_model: huihui-ai/Hermes-3-Llama-3.2-3B-abliterated
widget:
- example_title: Hermes 3
messages:
- role: system
content: You are a sentient, superintelligent artificial general intelligence,
here to teach and assist me.
- role: user
content: Write a short story about Goku discovering kirby has teamed up with Majin
Buu to destroy the world.
library_name: transformers
model-index:
- name: Hermes-3-Llama-3.2-3B-abliterated
results: []
---
# Triangle104/Hermes-3-Llama-3.2-3B-abliterated-Q4_K_M-GGUF
This model was converted to GGUF format from [`huihui-ai/Hermes-3-Llama-3.2-3B-abliterated`](https://huggingface.co/huihui-ai/Hermes-3-Llama-3.2-3B-abliterated) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/huihui-ai/Hermes-3-Llama-3.2-3B-abliterated) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/Hermes-3-Llama-3.2-3B-abliterated-Q4_K_M-GGUF --hf-file hermes-3-llama-3.2-3b-abliterated-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Triangle104/Hermes-3-Llama-3.2-3B-abliterated-Q4_K_M-GGUF --hf-file hermes-3-llama-3.2-3b-abliterated-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Triangle104/Hermes-3-Llama-3.2-3B-abliterated-Q4_K_M-GGUF --hf-file hermes-3-llama-3.2-3b-abliterated-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Triangle104/Hermes-3-Llama-3.2-3B-abliterated-Q4_K_M-GGUF --hf-file hermes-3-llama-3.2-3b-abliterated-q4_k_m.gguf -c 2048
```
|
twburns/googleshopping_mlm_Distilled_Roberta-ft | twburns | "2024-11-12T22:02:37Z" | 128 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"roberta",
"fill-mask",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | "2024-11-12T19:59:19Z" | ---
library_name: transformers
tags:
- generated_from_trainer
model-index:
- name: googleshopping_mlm_Distilled_Roberta-ft
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# googleshopping_mlm_Distilled_Roberta-ft
This model was trained from scratch on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.1211
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 2.5649 | 1.0 | 1563 | 2.3878 |
| 2.4516 | 2.0 | 3126 | 2.3172 |
| 2.3985 | 3.0 | 4689 | 2.2630 |
| 2.345 | 4.0 | 6252 | 2.2162 |
| 2.3087 | 5.0 | 7815 | 2.1923 |
| 2.301 | 6.0 | 9378 | 2.1680 |
| 2.2698 | 7.0 | 10941 | 2.1470 |
| 2.2539 | 8.0 | 12504 | 2.1523 |
| 2.2574 | 9.0 | 14067 | 2.1536 |
| 2.2478 | 10.0 | 15630 | 2.1343 |
### Framework versions
- Transformers 4.46.1
- Pytorch 2.5.1+cu124
- Datasets 3.1.0
- Tokenizers 0.20.1
|
Cohee/bert-base-uncased-emotion-onnx | Cohee | "2023-10-20T10:35:47Z" | 13 | 0 | transformers | [
"transformers",
"onnx",
"bert",
"text-classification",
"emotion",
"pytorch",
"en",
"dataset:emotion",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2023-10-20T10:29:48Z" | ---
language:
- en
thumbnail: https://avatars3.githubusercontent.com/u/32437151?s=460&u=4ec59abc8d21d5feea3dab323d23a5860e6996a4&v=4
tags:
- text-classification
- emotion
- pytorch
license: apache-2.0
datasets:
- emotion
metrics:
- accuracy
---
[nateraw/bert-base-uncased-emotion](https://huggingface.co/nateraw/bert-base-uncased-emotion) converted to ONNX and quantized using optimum.
---
# bert-base-uncased-emotion
## Model description
`bert-base-uncased` finetuned on the emotion dataset using PyTorch Lightning. Sequence length 128, learning rate 2e-5, batch size 32, 2 GPUs, 4 epochs.
For more details, please see, [the emotion dataset on nlp viewer](https://huggingface.co/nlp/viewer/?dataset=emotion).
#### Limitations and bias
- Not the best model, but it works in a pinch I guess...
- Code not available as I just hacked this together.
- [Follow me on github](https://github.com/nateraw) to get notified when code is made available.
## Training data
Data came from HuggingFace's `datasets` package. The data can be viewed [on nlp viewer](https://huggingface.co/nlp/viewer/?dataset=emotion).
## Training procedure
...
## Eval results
val_acc - 0.931 (useless, as this should be precision/recall/f1)
The score was calculated using PyTorch Lightning metrics.
|
BarBarickoza/Dans-SakuraKaze-V1.0.0-12b-Q6_K-GGUF | BarBarickoza | "2025-02-14T17:10:09Z" | 0 | 0 | null | [
"gguf",
"llama-cpp",
"gguf-my-repo",
"en",
"dataset:PocketDoc/Dans-Prosemaxx-Cowriter-3-S",
"dataset:PocketDoc/Dans-Prosemaxx-Adventure",
"dataset:PocketDoc/Dans-Failuremaxx-Adventure-3",
"dataset:PocketDoc/Dans-Prosemaxx-InstructWriter-ZeroShot",
"dataset:PocketDoc/Dans-Prosemaxx-InstructWriter-Continue",
"dataset:PocketDoc/Dans-Personamaxx-VN",
"dataset:PocketDoc/Dans-Personamaxx",
"dataset:PocketDoc/Dans-Personamaxx-Rainy",
"dataset:PocketDoc/Dans-Personamaxx-C1",
"base_model:PocketDoc/Dans-SakuraKaze-V1.0.0-12b",
"base_model:quantized:PocketDoc/Dans-SakuraKaze-V1.0.0-12b",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2025-02-14T17:09:25Z" | ---
license: apache-2.0
datasets:
- PocketDoc/Dans-Prosemaxx-Cowriter-3-S
- PocketDoc/Dans-Prosemaxx-Adventure
- PocketDoc/Dans-Failuremaxx-Adventure-3
- PocketDoc/Dans-Prosemaxx-InstructWriter-ZeroShot
- PocketDoc/Dans-Prosemaxx-InstructWriter-Continue
- PocketDoc/Dans-Personamaxx-VN
- PocketDoc/Dans-Personamaxx
- PocketDoc/Dans-Personamaxx-Rainy
- PocketDoc/Dans-Personamaxx-C1
language:
- en
base_model: PocketDoc/Dans-SakuraKaze-V1.0.0-12b
tags:
- llama-cpp
- gguf-my-repo
---
# BarBarickoza/Dans-SakuraKaze-V1.0.0-12b-Q6_K-GGUF
This model was converted to GGUF format from [`PocketDoc/Dans-SakuraKaze-V1.0.0-12b`](https://huggingface.co/PocketDoc/Dans-SakuraKaze-V1.0.0-12b) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/PocketDoc/Dans-SakuraKaze-V1.0.0-12b) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo BarBarickoza/Dans-SakuraKaze-V1.0.0-12b-Q6_K-GGUF --hf-file dans-sakurakaze-v1.0.0-12b-q6_k.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo BarBarickoza/Dans-SakuraKaze-V1.0.0-12b-Q6_K-GGUF --hf-file dans-sakurakaze-v1.0.0-12b-q6_k.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo BarBarickoza/Dans-SakuraKaze-V1.0.0-12b-Q6_K-GGUF --hf-file dans-sakurakaze-v1.0.0-12b-q6_k.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo BarBarickoza/Dans-SakuraKaze-V1.0.0-12b-Q6_K-GGUF --hf-file dans-sakurakaze-v1.0.0-12b-q6_k.gguf -c 2048
```
|
kiranpantha/whisper-large-v3-nepali-peft-lora-speaker1-rank16-targetxqv-epochs3 | kiranpantha | "2025-01-24T21:50:50Z" | 54 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"generated_from_trainer",
"ne",
"dataset:kiranpantha/OpenSLR54-Balanced-Nepali",
"base_model:kiranpantha/whisper-large-v3-nepali",
"base_model:adapter:kiranpantha/whisper-large-v3-nepali",
"license:apache-2.0",
"region:us"
] | null | "2025-01-24T21:50:47Z" | ---
library_name: peft
language:
- ne
license: apache-2.0
base_model: kiranpantha/whisper-large-v3-nepali
tags:
- generated_from_trainer
datasets:
- kiranpantha/OpenSLR54-Balanced-Nepali
model-index:
- name: kiranpantha/whisper-large-v3-nepali
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# kiranpantha/whisper-large-v3-nepali
This model is a fine-tuned version of [kiranpantha/whisper-large-v3-nepali](https://huggingface.co/kiranpantha/whisper-large-v3-nepali) on the OpenSLR54 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3355
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 10
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 14 | 0.3622 |
| 0.47 | 2.0 | 28 | 0.3369 |
| 0.47 | 3.0 | 42 | 0.3355 |
### Framework versions
- PEFT 0.14.0
- Transformers 4.47.1
- Pytorch 2.5.1+cxx11.abi
- Datasets 3.2.0
- Tokenizers 0.21.0 |
Shaleen123/yi_6b_medical_qa_full | Shaleen123 | "2023-12-17T18:27:17Z" | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:01-ai/Yi-6B-Chat",
"base_model:adapter:01-ai/Yi-6B-Chat",
"region:us"
] | null | "2023-12-17T18:27:15Z" | ---
library_name: peft
base_model: 01-ai/Yi-6B-Chat
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.1 |
JFernandoGRE/bert_mlm_ESAU_CHANCO_CASTILLON | JFernandoGRE | "2024-08-12T07:20:27Z" | 5 | 0 | null | [
"safetensors",
"bert",
"generated_from_trainer",
"base_model:dccuchile/bert-base-spanish-wwm-uncased",
"base_model:finetune:dccuchile/bert-base-spanish-wwm-uncased",
"region:us"
] | null | "2024-08-08T09:36:56Z" | ---
base_model: dccuchile/bert-base-spanish-wwm-uncased
tags:
- generated_from_trainer
model-index:
- name: bert_mlm_ESAU_CHANCO_CASTILLON
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert_mlm_ESAU_CHANCO_CASTILLON
This model is a fine-tuned version of [dccuchile/bert-base-spanish-wwm-uncased](https://huggingface.co/dccuchile/bert-base-spanish-wwm-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5171
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.5985 | 1.0 | 339 | 1.9337 |
| 1.9083 | 2.0 | 678 | 1.6088 |
| 1.7069 | 3.0 | 1017 | 1.5298 |
### Framework versions
- Transformers 4.43.3
- Pytorch 2.4.0+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1
|
testliai-main/testliai-generate-exam-mistral-7b-instruct-v0.3-bnb-4bit-GGUF-q4_k_m | testliai-main | "2025-03-03T13:49:06Z" | 0 | 0 | transformers | [
"transformers",
"gguf",
"mistral",
"text-generation-inference",
"unsloth",
"en",
"base_model:unsloth/mistral-7b-instruct-v0.3",
"base_model:quantized:unsloth/mistral-7b-instruct-v0.3",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2025-03-03T13:47:22Z" | ---
base_model: unsloth/mistral-7b-instruct-v0.3
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- gguf
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** testliai-main
- **License:** apache-2.0
- **Finetuned from model :** unsloth/mistral-7b-instruct-v0.3
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
baby-dev/dd18380a-0543-4a7e-a363-36450150c4fd | baby-dev | "2025-02-24T07:13:13Z" | 0 | 0 | peft | [
"peft",
"qwen2",
"generated_from_trainer",
"base_model:Qwen/Qwen2-1.5B-Instruct",
"base_model:adapter:Qwen/Qwen2-1.5B-Instruct",
"region:us"
] | null | "2025-02-24T07:13:06Z" | ---
library_name: peft
tags:
- generated_from_trainer
base_model: Qwen/Qwen2-1.5B-Instruct
model-index:
- name: baby-dev/dd18380a-0543-4a7e-a363-36450150c4fd
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# baby-dev/dd18380a-0543-4a7e-a363-36450150c4fd
This model was trained from scratch on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9669
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
taha454/poca-SoccerTwos | taha454 | "2025-04-07T00:35:13Z" | 0 | 0 | ml-agents | [
"ml-agents",
"SoccerTwos",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SoccerTwos",
"region:us"
] | reinforcement-learning | "2025-04-07T00:35:12Z" | ---
library_name: ml-agents
tags:
- SoccerTwos
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SoccerTwos
---
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how works ML-Agents:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: taha454/poca-SoccerTwos
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
jackkuo/PaperExtractGPT | jackkuo | "2024-12-05T13:05:45Z" | 6 | 1 | peft | [
"peft",
"paper",
"paper extract",
"license:bigscience-openrail-m",
"region:us"
] | null | "2023-09-02T09:30:20Z" | ---
library_name: peft
license: bigscience-openrail-m
tags:
- paper
- paper extract
---
## Training procedure
### Framework versions
- PEFT 0.4.0
in https://github.com/hiyouga/ChatGLM-Efficient-Tuning/tree/main
CUDA_VISIBLE_DEVICES=3 nohup python src/web_demo.py \
--model_name_or_path /HOME/jack/model/chatglm-6b \
--checkpoint_dir paper_meta\ \
> log_web_demo.txt 2>&1 & tail -f log_web_demo.txt
### 🚩Citation
Please cite the following paper if you use jackkuo/PaperExtractGPT in your work.
```bibtex
@INPROCEEDINGS{10412837,
author={Guo, Menghao and Wu, Fan and Jiang, Jinling and Yan, Xiaoran and Chen, Guangyong and Li, Wenhui and Zhao, Yunhong and Sun, Zeyi},
booktitle={2023 IEEE International Conference on Knowledge Graph (ICKG)},
title={Investigations on Scientific Literature Meta Information Extraction Using Large Language Models},
year={2023},
volume={},
number={},
pages={249-254},
keywords={Measurement;Knowledge graphs;Information retrieval;Data mining;Task analysis;information extraction;large language model;scientific literature},
doi={10.1109/ICKG59574.2023.00036}}
``` |
SuyashPro97/llava-next-cholec | SuyashPro97 | "2025-04-04T09:45:26Z" | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | "2025-04-04T09:45:25Z" | ---
license: apache-2.0
---
|
victorauyeungxf2/Victor_Instruct-v0.1 | victorauyeungxf2 | "2024-06-05T07:00:19Z" | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:mistralai/Mistral-7B-Instruct-v0.2",
"base_model:adapter:mistralai/Mistral-7B-Instruct-v0.2",
"region:us"
] | null | "2024-06-05T06:59:59Z" | ---
library_name: peft
base_model: mistralai/Mistral-7B-Instruct-v0.2
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.11.1 |
Holarissun/dpo_helpfulhelpful_human_gamma100.0_beta0.1_subset-1_modelmistral7b_maxsteps5000_bz8_lr1e-06 | Holarissun | "2024-04-25T13:40:45Z" | 1 | 0 | peft | [
"peft",
"safetensors",
"trl",
"dpo",
"generated_from_trainer",
"base_model:mistralai/Mistral-7B-v0.1",
"base_model:adapter:mistralai/Mistral-7B-v0.1",
"license:apache-2.0",
"region:us"
] | null | "2024-04-25T13:40:38Z" | ---
license: apache-2.0
library_name: peft
tags:
- trl
- dpo
- generated_from_trainer
base_model: mistralai/Mistral-7B-v0.1
model-index:
- name: dpo_helpfulhelpful_human_gamma100.0_beta0.1_subset-1_modelmistral7b_maxsteps5000_bz8_lr1e-06
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# dpo_helpfulhelpful_human_gamma100.0_beta0.1_subset-1_modelmistral7b_maxsteps5000_bz8_lr1e-06
This model is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-06
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 15
- training_steps: 5000
### Training results
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.1.2
- Datasets 2.18.0
- Tokenizers 0.15.2 |
kiranpantha/whisper-large-v3-nepali-9010-peft-lora-speakerSpeakerCV3-rank32-targetxckv-epochs3 | kiranpantha | "2025-02-10T16:37:29Z" | 0 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"generated_from_trainer",
"ne",
"dataset:kiranpantha/dataset-for-peft-cv-nepds",
"base_model:kiranpantha/whisper-large-v3-nepali",
"base_model:adapter:kiranpantha/whisper-large-v3-nepali",
"license:apache-2.0",
"region:us"
] | null | "2025-02-09T19:25:44Z" | ---
library_name: peft
language:
- ne
license: apache-2.0
base_model: kiranpantha/whisper-large-v3-nepali
tags:
- generated_from_trainer
datasets:
- kiranpantha/dataset-for-peft-cv-nepds
model-index:
- name: kiranpantha/whisper-large-v3-nepali
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# kiranpantha/whisper-large-v3-nepali
This model is a fine-tuned version of [kiranpantha/whisper-large-v3-nepali](https://huggingface.co/kiranpantha/whisper-large-v3-nepali) on the OpenSLR54 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2394
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 10
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 18 | 0.2656 |
| 0.5352 | 2.0 | 36 | 0.2471 |
| 0.198 | 3.0 | 54 | 0.2394 |
### Framework versions
- PEFT 0.14.0
- Transformers 4.47.1
- Pytorch 2.5.1+cxx11.abi
- Datasets 3.2.0
- Tokenizers 0.21.0 |
learn05/VisiGen | learn05 | "2025-03-10T21:57:23Z" | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | "2025-03-10T21:49:32Z" | ---
license: apache-2.0
---
|
mayaseale/Monkey_Image_Classifier | mayaseale | "2024-04-03T21:13:43Z" | 0 | 0 | keras | [
"keras",
"tf-keras",
"region:us"
] | null | "2024-04-03T18:08:04Z" | ---
library_name: keras
---
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
| Hyperparameters | Value |
| :-- | :-- |
| name | RMSprop |
| weight_decay | None |
| clipnorm | None |
| global_clipnorm | None |
| clipvalue | None |
| use_ema | False |
| ema_momentum | 0.99 |
| ema_overwrite_frequency | 100 |
| jit_compile | True |
| is_legacy_optimizer | False |
| learning_rate | 9.999999747378752e-06 |
| rho | 0.9 |
| momentum | 0.0 |
| epsilon | 1e-07 |
| centered | False |
| training_precision | float32 |
## Model Plot
<details>
<summary>View Model Plot</summary>

</details> |
tango-ai/neuron_Llama-2-7b-chat-hf_cores-12-cast-fp16-sqln-4096-bs-1 | tango-ai | "2024-03-21T15:15:26Z" | 1 | 0 | transformers | [
"transformers",
"llama",
"text-generation",
"facebook",
"meta",
"pytorch",
"llama-2",
"conversational",
"en",
"autotrain_compatible",
"region:us"
] | text-generation | "2024-03-21T12:15:34Z" | ---
extra_gated_heading: Access Llama 2 on Hugging Face
extra_gated_description: >-
This is a form to enable access to Llama 2 on Hugging Face after you have been
granted access from Meta. Please visit the [Meta website](https://ai.meta.com/resources/models-and-libraries/llama-downloads) and accept our
license terms and acceptable use policy before submitting this form. Requests
will be processed in 1-2 days.
extra_gated_button_content: Submit
extra_gated_fields:
I agree to share my name, email address and username with Meta and confirm that I have already been granted download access on the Meta website: checkbox
language:
- en
pipeline_tag: text-generation
inference: false
tags:
- facebook
- meta
- pytorch
- llama
- llama-2
---
# **Llama 2**
Llama 2 is a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 70 billion parameters. This is the repository for the 13B fine-tuned model, optimized for dialogue use cases and converted for the Hugging Face Transformers format. Links to other models can be found in the index at the bottom.
## Model Details
*Note: Use of this model is governed by the Meta license. In order to download the model weights and tokenizer, please visit the [website](https://ai.meta.com/resources/models-and-libraries/llama-downloads/) and accept our License before requesting access here.*
Meta developed and publicly released the Llama 2 family of large language models (LLMs), a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 70 billion parameters. Our fine-tuned LLMs, called Llama-2-Chat, are optimized for dialogue use cases. Llama-2-Chat models outperform open-source chat models on most benchmarks we tested, and in our human evaluations for helpfulness and safety, are on par with some popular closed-source models like ChatGPT and PaLM.
**Model Developers** Meta
**Variations** Llama 2 comes in a range of parameter sizes — 7B, 13B, and 70B — as well as pretrained and fine-tuned variations.
**Input** Models input text only.
**Output** Models generate text only.
**Model Architecture** Llama 2 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align to human preferences for helpfulness and safety.
||Training Data|Params|Content Length|GQA|Tokens|LR|
|---|---|---|---|---|---|---|
|Llama 2|*A new mix of publicly available online data*|7B|4k|✗|2.0T|3.0 x 10<sup>-4</sup>|
|Llama 2|*A new mix of publicly available online data*|13B|4k|✗|2.0T|3.0 x 10<sup>-4</sup>|
|Llama 2|*A new mix of publicly available online data*|70B|4k|✔|2.0T|1.5 x 10<sup>-4</sup>|
*Llama 2 family of models.* Token counts refer to pretraining data only. All models are trained with a global batch-size of 4M tokens. Bigger models - 70B -- use Grouped-Query Attention (GQA) for improved inference scalability.
**Model Dates** Llama 2 was trained between January 2023 and July 2023.
**Status** This is a static model trained on an offline dataset. Future versions of the tuned models will be released as we improve model safety with community feedback.
**License** A custom commercial license is available at: [https://ai.meta.com/resources/models-and-libraries/llama-downloads/](https://ai.meta.com/resources/models-and-libraries/llama-downloads/)
## Intended Use
**Intended Use Cases** Llama 2 is intended for commercial and research use in English. Tuned models are intended for assistant-like chat, whereas pretrained models can be adapted for a variety of natural language generation tasks.
To get the expected features and performance for the chat versions, a specific formatting needs to be followed, including the `INST` and `<<SYS>>` tags, `BOS` and `EOS` tokens, and the whitespaces and breaklines in between (we recommend calling `strip()` on inputs to avoid double-spaces). See our reference code in github for details: [`chat_completion`](https://github.com/facebookresearch/llama/blob/main/llama/generation.py#L212).
**Out-of-scope Uses** Use in any manner that violates applicable laws or regulations (including trade compliance laws).Use in languages other than English. Use in any other way that is prohibited by the Acceptable Use Policy and Licensing Agreement for Llama 2.
## Hardware and Software
**Training Factors** We used custom training libraries, Meta's Research Super Cluster, and production clusters for pretraining. Fine-tuning, annotation, and evaluation were also performed on third-party cloud compute.
**Carbon Footprint** Pretraining utilized a cumulative 3.3M GPU hours of computation on hardware of type A100-80GB (TDP of 350-400W). Estimated total emissions were 539 tCO2eq, 100% of which were offset by Meta’s sustainability program.
||Time (GPU hours)|Power Consumption (W)|Carbon Emitted(tCO<sub>2</sub>eq)|
|---|---|---|---|
|Llama 2 7B|184320|400|31.22|
|Llama 2 13B|368640|400|62.44|
|Llama 2 70B|1720320|400|291.42|
|Total|3311616||539.00|
**CO<sub>2</sub> emissions during pretraining.** Time: total GPU time required for training each model. Power Consumption: peak power capacity per GPU device for the GPUs used adjusted for power usage efficiency. 100% of the emissions are directly offset by Meta's sustainability program, and because we are openly releasing these models, the pretraining costs do not need to be incurred by others.
## Training Data
**Overview** Llama 2 was pretrained on 2 trillion tokens of data from publicly available sources. The fine-tuning data includes publicly available instruction datasets, as well as over one million new human-annotated examples. Neither the pretraining nor the fine-tuning datasets include Meta user data.
**Data Freshness** The pretraining data has a cutoff of September 2022, but some tuning data is more recent, up to July 2023.
## Evaluation Results
In this section, we report the results for the Llama 1 and Llama 2 models on standard academic benchmarks.For all the evaluations, we use our internal evaluations library.
|Model|Size|Code|Commonsense Reasoning|World Knowledge|Reading Comprehension|Math|MMLU|BBH|AGI Eval|
|---|---|---|---|---|---|---|---|---|---|
|Llama 1|7B|14.1|60.8|46.2|58.5|6.95|35.1|30.3|23.9|
|Llama 1|13B|18.9|66.1|52.6|62.3|10.9|46.9|37.0|33.9|
|Llama 1|33B|26.0|70.0|58.4|67.6|21.4|57.8|39.8|41.7|
|Llama 1|65B|30.7|70.7|60.5|68.6|30.8|63.4|43.5|47.6|
|Llama 2|7B|16.8|63.9|48.9|61.3|14.6|45.3|32.6|29.3|
|Llama 2|13B|24.5|66.9|55.4|65.8|28.7|54.8|39.4|39.1|
|Llama 2|70B|**37.5**|**71.9**|**63.6**|**69.4**|**35.2**|**68.9**|**51.2**|**54.2**|
**Overall performance on grouped academic benchmarks.** *Code:* We report the average pass@1 scores of our models on HumanEval and MBPP. *Commonsense Reasoning:* We report the average of PIQA, SIQA, HellaSwag, WinoGrande, ARC easy and challenge, OpenBookQA, and CommonsenseQA. We report 7-shot results for CommonSenseQA and 0-shot results for all other benchmarks. *World Knowledge:* We evaluate the 5-shot performance on NaturalQuestions and TriviaQA and report the average. *Reading Comprehension:* For reading comprehension, we report the 0-shot average on SQuAD, QuAC, and BoolQ. *MATH:* We report the average of the GSM8K (8 shot) and MATH (4 shot) benchmarks at top 1.
|||TruthfulQA|Toxigen|
|---|---|---|---|
|Llama 1|7B|27.42|23.00|
|Llama 1|13B|41.74|23.08|
|Llama 1|33B|44.19|22.57|
|Llama 1|65B|48.71|21.77|
|Llama 2|7B|33.29|**21.25**|
|Llama 2|13B|41.86|26.10|
|Llama 2|70B|**50.18**|24.60|
**Evaluation of pretrained LLMs on automatic safety benchmarks.** For TruthfulQA, we present the percentage of generations that are both truthful and informative (the higher the better). For ToxiGen, we present the percentage of toxic generations (the smaller the better).
|||TruthfulQA|Toxigen|
|---|---|---|---|
|Llama-2-Chat|7B|57.04|**0.00**|
|Llama-2-Chat|13B|62.18|**0.00**|
|Llama-2-Chat|70B|**64.14**|0.01|
**Evaluation of fine-tuned LLMs on different safety datasets.** Same metric definitions as above.
## Ethical Considerations and Limitations
Llama 2 is a new technology that carries risks with use. Testing conducted to date has been in English, and has not covered, nor could it cover all scenarios. For these reasons, as with all LLMs, Llama 2’s potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of Llama 2, developers should perform safety testing and tuning tailored to their specific applications of the model.
Please see the Responsible Use Guide available at [https://ai.meta.com/llama/responsible-use-guide/](https://ai.meta.com/llama/responsible-use-guide)
## Reporting Issues
Please report any software “bug,” or other problems with the models through one of the following means:
- Reporting issues with the model: [github.com/facebookresearch/llama](http://github.com/facebookresearch/llama)
- Reporting problematic content generated by the model: [developers.facebook.com/llama_output_feedback](http://developers.facebook.com/llama_output_feedback)
- Reporting bugs and security concerns: [facebook.com/whitehat/info](http://facebook.com/whitehat/info)
## Llama Model Index
|Model|Llama2|Llama2-hf|Llama2-chat|Llama2-chat-hf|
|---|---|---|---|---|
|7B| [Link](https://huggingface.co/llamaste/Llama-2-7b) | [Link](https://huggingface.co/llamaste/Llama-2-7b-hf) | [Link](https://huggingface.co/llamaste/Llama-2-7b-chat) | [Link](https://huggingface.co/llamaste/Llama-2-7b-chat-hf)|
|13B| [Link](https://huggingface.co/llamaste/Llama-2-13b) | [Link](https://huggingface.co/llamaste/Llama-2-13b-hf) | [Link](https://huggingface.co/llamaste/Llama-2-13b-chat) | [Link](https://huggingface.co/llamaste/Llama-2-13b-hf)|
|70B| [Link](https://huggingface.co/llamaste/Llama-2-70b) | [Link](https://huggingface.co/llamaste/Llama-2-70b-hf) | [Link](https://huggingface.co/llamaste/Llama-2-70b-chat) | [Link](https://huggingface.co/llamaste/Llama-2-70b-hf)|
|
Sophie-Rain-SpiderMan-Leaks-Video/WaTcH-Sophie.Rain.Spiderman.Tutorial.Viral.Full.Video | Sophie-Rain-SpiderMan-Leaks-Video | "2025-02-14T09:35:32Z" | 0 | 0 | null | [
"region:us"
] | null | "2025-02-14T08:40:33Z" | <a href="https://tv2online.com/Leaked/?v=Sophie+Rain+Spiderman" rel="nofollow">►✅ 𝘾𝙇𝙄𝘾𝙆 𝙃𝙀𝙍𝙀 ==►► 𝙁𝙪𝙡𝙡 𝙑𝙞𝙙𝙚𝙤❤️❤️⬇️⬇️</a></p>
<p><a href="https://tv2online.com/Leaked/?v=Sophie+Rain+Spiderman" rel="nofollow">►✅ 𝘾𝙇𝙄𝘾𝙆 𝙃𝙀𝙍𝙀 ==►► 𝙁𝙪𝙡𝙡 𝙑𝙞𝙙𝙚𝙤 Download❤️❤️⬇️⬇️</a></p>
<p><a rel="nofollow" title="WATCH NOW" href="https://tv2online.com/Leaked/?v=Sophie+Rain+Spiderman"><img border="Sophie+Rain+Spidermanno" height="480" width="720" title="WATCH NOW" alt="WATCH NOW" src="https://i.ibb.co.com/xMMVF88/686577567.gif"></a></p>
Sophie Rain MMS Video Leaked Video Telegram Links - Hot Sophie Rain Videos L𝚎aked Video Indian Hot MMS Leaked Original Video Viral Video L𝚎aked on X Twitter Telegram [-wATCH-]— Indian Hot MMS Leaked Video Original Video Link Indian Hot MMS Leaked Video Viral On Social Media X Trending Now[-wATCH-]— Indian Hot MMS Leaked Video ᴠɪʀᴀʟ On Social Media ˣ ᵀʷⁱᵗᵗᵉʳ[-wATCH-]— Indian Hot MMS Leaked Video ᴠɪʀᴀʟ On Social Media ˣ ᵀʷⁱᵗᵗᵉʳ[-wATCH-]— Indian Hot MMS Leaked Video Original Video Link Indian Hot MMS Leaked Video Viral On Social Media X Trending NowIndian Hot MMS Leaked Original Video video took the internet by storm and amazed viewers on various social media platforms. Indian Hot MMS Leaked, a young and talented digital creator, recently became famous thanks to this interesting video.L𝚎aked Video Indian Hot MMS Leaked Original Video Viral Video L𝚎aked on X TwitterIndian Hot MMS Leaked Original Video video oficial twitterL𝚎aked Video Indian Hot MMS Leaked Original Video Viral Video L𝚎aked on X Twitter. |
joshjrreynaldo/image_classification | joshjrreynaldo | "2024-02-15T05:03:06Z" | 14 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:google/vit-base-patch16-224-in21k",
"base_model:finetune:google/vit-base-patch16-224-in21k",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | "2024-02-13T13:35:22Z" | ---
license: apache-2.0
base_model: google/vit-base-patch16-224-in21k
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: image_classification
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.55625
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# image_classification
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2685
- Accuracy: 0.5563
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 40 | 1.2944 | 0.5312 |
| No log | 2.0 | 80 | 1.2047 | 0.5625 |
| No log | 3.0 | 120 | 1.2956 | 0.5125 |
| No log | 4.0 | 160 | 1.2328 | 0.5312 |
| No log | 5.0 | 200 | 1.1533 | 0.575 |
| No log | 6.0 | 240 | 1.2436 | 0.5375 |
| No log | 7.0 | 280 | 1.2940 | 0.5437 |
| No log | 8.0 | 320 | 1.2115 | 0.5875 |
| No log | 9.0 | 360 | 1.2147 | 0.5625 |
| No log | 10.0 | 400 | 1.1741 | 0.5625 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.17.0
- Tokenizers 0.15.1
|
thrunlab/mistral_sparse_80_percent_cola_1000 | thrunlab | "2024-02-06T12:39:19Z" | 4 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"trl",
"sft",
"generated_from_trainer",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | null | "2024-02-06T11:41:39Z" | ---
tags:
- trl
- sft
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: mistral_sparse_80_percent_cola_1000
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mistral_sparse_80_percent_cola_1000
This model is a fine-tuned version of [](https://huggingface.co/) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3726
- Accuracy: 0.8441
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 16
- seed: 0
- distributed_type: multi-GPU
- num_devices: 2
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- total_eval_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.4653 | 0.22 | 50 | 0.5302 | 0.7405 |
| 0.5191 | 0.44 | 100 | 0.4846 | 0.7638 |
| 0.5233 | 0.66 | 150 | 0.4720 | 0.7701 |
| 0.4905 | 0.88 | 200 | 0.4463 | 0.7802 |
| 0.3672 | 1.1 | 250 | 0.4354 | 0.7927 |
| 0.3929 | 1.32 | 300 | 0.4171 | 0.8028 |
| 0.3643 | 1.54 | 350 | 0.4110 | 0.7997 |
| 0.324 | 1.76 | 400 | 0.3927 | 0.8231 |
| 0.3639 | 1.98 | 450 | 0.4550 | 0.7747 |
| 0.3293 | 2.2 | 500 | 0.4191 | 0.8309 |
| 0.3072 | 2.42 | 550 | 0.4059 | 0.8184 |
| 0.3131 | 2.64 | 600 | 0.3780 | 0.8363 |
| 0.3821 | 2.86 | 650 | 0.3804 | 0.8301 |
| 0.2741 | 3.08 | 700 | 0.3789 | 0.8394 |
| 0.258 | 3.3 | 750 | 0.3984 | 0.8394 |
| 0.2316 | 3.52 | 800 | 0.3998 | 0.8363 |
| 0.1955 | 3.74 | 850 | 0.3799 | 0.8465 |
| 0.2266 | 3.96 | 900 | 0.3750 | 0.8426 |
| 0.1476 | 4.18 | 950 | 0.4402 | 0.8332 |
| 0.1088 | 4.4 | 1000 | 0.4813 | 0.8316 |
| 0.1872 | 4.62 | 1050 | 0.4342 | 0.8410 |
| 0.1248 | 4.84 | 1100 | 0.4700 | 0.8472 |
| 0.108 | 5.05 | 1150 | 0.4632 | 0.8472 |
| 0.1437 | 5.27 | 1200 | 0.6568 | 0.8387 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.1+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0
|
DavidAU/L3-MOE-4X8B-Grand-Horror-25B | DavidAU | "2024-12-16T23:48:09Z" | 5 | 0 | transformers | [
"transformers",
"safetensors",
"mixtral",
"text-generation",
"mergekit",
"moe",
"mixture of experts",
"merge",
"conversational",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-12-16T07:58:55Z" | ---
library_name: transformers
tags:
- mergekit
- moe
- mixture of experts
- merge
base_model: []
---
<h2>L3-MOE-4X8B-Grand-Horror-25B</h2>
This repo contains the full precision source code, in "safe tensors" format to generate GGUFs, GPTQ, EXL2, AWQ, HQQ and other formats.
The source code can also be used directly.
NOTE: Links to GGUFs below.
<B>About This Model:</B>
This model is based on the original "Llama 3 Dark Planet 8B" (<a href="https://huggingface.co/DavidAU/L3-Dark-Planet-8B-GGUF">GGUF</a> /
<a href="https://huggingface.co/DavidAU/L3-Dark-Planet-8B">SOURCE</a>) - which contains 3 different models and I have added "Gutenberg 8B" [https://huggingface.co/nbeerbower/llama-3-gutenberg-8B]
as the forth model for this MOE.
This model contains FOUR different 8B models in a MOE model at 25B, equal to 4X8B - 32B parameters.
SIDE NOTE:
Uusually a "MOE" is constructed with different models, to give the "moe model" some of the best of each (or not) during generation.
I felt turning this concept on its head was better for creative use cases.
I.E:
All the "chefs" in the kitchen went to the same elite cooking school, got the highest marks, and now all work together to make the
very best "dish of tokens" they can every time.
POWER UP? or DOWN?
You can change the number of experts (models) activated inside many LLM/AI apps.
Turning it up increases quality, nuance and depth but at the same time the tokens per second drops accordingly.
You can use 1 expert for "draft mode", and then move up in experts to get to final draft.
Also note instruction following will also increase as you up the number of experts too.
Quant choice will also affect overall quality => higher is better, however even at the lowest quant level, this model
will perform strongly.
MOE SPECIFIC NOTES:
If you want to change the "default" number of experts set, modify the "config.json" :
"num_experts_per_tok": 2,
The user will still be able to modify it, if the LLM/AI app has the setting option to do this.
Each time you add/subtract an expert the token per second speed will change.
( this model is set at 2 out of 4 experts active, more experts => greater quality. )
<B>IMPORTANT: Highest Quality Settings / Optimal Operation Guide / Parameters and Samplers</B>
If you are going to use this model, (source, GGUF or a different quant), please review this document for critical parameter, sampler and advance sampler settings (for multiple AI/LLM aps).
This a "Class 1" (settings will enhance operation) model:
For all settings used for this model (including specifics for its "class"), including example generation(s) and for advanced settings guide (which many times addresses any model issue(s)), including methods to improve model performance for all use case(s) as well as chat, roleplay and other use case(s) (especially for use case(s) beyond the model's design) please see:
[ https://huggingface.co/DavidAU/Maximizing-Model-Performance-All-Quants-Types-And-Full-Precision-by-Samplers_Parameters ]
REASON:
Regardless of "model class" this document will detail methods to enhance operations.
If the model is a Class 3/4 model the default settings (parameters, samplers, advanced samplers) must be set for "use case(s)" uses correctly. Some AI/LLM apps DO NOT have consistant default setting(s) which result in sub-par model operation. Like wise for Class 3/4 models (which operate somewhat to very differently than standard models) additional samplers and advanced samplers settings are required to "smooth out" operation, AND/OR also allow full operation for use cases the model was not designed for.
BONUS - Use these settings for ANY model, ANY repo, ANY quant (including source/full precision):
This document also details parameters, sampler and advanced samplers that can be use FOR ANY MODEL, FROM ANY REPO too - all quants, and of course source code operation too - to enhance the operation of any model.
[ https://huggingface.co/DavidAU/Maximizing-Model-Performance-All-Quants-Types-And-Full-Precision-by-Samplers_Parameters ]
NOTE:
I strongly suggest you also visit the DavidAU GGUF (below) repo too for more details in using this model ; especially if it is "Class 3" or "Class 4" to get maximum performance from the model.
For full information about this model, including:
- Details about this model and its use case(s).
- Context limits
- Special usage notes / settings.
- Any model(s) used to create this model.
- Template(s) used to access/use this model.
- Example generation(s)
- GGUF quants of this model
Please go to:
[ https://huggingface.co/DavidAU/L3-MOE-4X8B-Grand-Horror-25B-GGUF ]
---
Quants by Team "Mradermacher":
GGUFS:
[ https://huggingface.co/mradermacher/L3-MOE-4X8B-Grand-Horror-25B-GGUF ]
IMATRIX GGUFS:
[ https://huggingface.co/mradermacher/L3-MOE-4X8B-Grand-Horror-25B-i1-GGUF ]
|
hkivancoral/hushem_1x_beit_base_sgd_0001_fold2 | hkivancoral | "2023-11-25T18:53:34Z" | 7 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"beit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:microsoft/beit-base-patch16-224",
"base_model:finetune:microsoft/beit-base-patch16-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | "2023-11-25T18:44:45Z" | ---
license: apache-2.0
base_model: microsoft/beit-base-patch16-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: hushem_1x_beit_base_sgd_0001_fold2
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: test
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.26666666666666666
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hushem_1x_beit_base_sgd_0001_fold2
This model is a fine-tuned version of [microsoft/beit-base-patch16-224](https://huggingface.co/microsoft/beit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4763
- Accuracy: 0.2667
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 6 | 1.5508 | 0.2667 |
| 1.5993 | 2.0 | 12 | 1.5460 | 0.2667 |
| 1.5993 | 3.0 | 18 | 1.5415 | 0.2667 |
| 1.5379 | 4.0 | 24 | 1.5376 | 0.2667 |
| 1.5842 | 5.0 | 30 | 1.5337 | 0.2667 |
| 1.5842 | 6.0 | 36 | 1.5302 | 0.2667 |
| 1.5559 | 7.0 | 42 | 1.5267 | 0.2667 |
| 1.5559 | 8.0 | 48 | 1.5233 | 0.2667 |
| 1.5583 | 9.0 | 54 | 1.5200 | 0.2667 |
| 1.5216 | 10.0 | 60 | 1.5170 | 0.2667 |
| 1.5216 | 11.0 | 66 | 1.5141 | 0.2667 |
| 1.5475 | 12.0 | 72 | 1.5116 | 0.2667 |
| 1.5475 | 13.0 | 78 | 1.5088 | 0.2667 |
| 1.5228 | 14.0 | 84 | 1.5063 | 0.2667 |
| 1.5337 | 15.0 | 90 | 1.5038 | 0.2667 |
| 1.5337 | 16.0 | 96 | 1.5015 | 0.2667 |
| 1.5424 | 17.0 | 102 | 1.4994 | 0.2667 |
| 1.5424 | 18.0 | 108 | 1.4973 | 0.2667 |
| 1.5261 | 19.0 | 114 | 1.4953 | 0.2667 |
| 1.5374 | 20.0 | 120 | 1.4936 | 0.2667 |
| 1.5374 | 21.0 | 126 | 1.4921 | 0.2667 |
| 1.5211 | 22.0 | 132 | 1.4905 | 0.2667 |
| 1.5211 | 23.0 | 138 | 1.4888 | 0.2667 |
| 1.5308 | 24.0 | 144 | 1.4875 | 0.2667 |
| 1.501 | 25.0 | 150 | 1.4863 | 0.2667 |
| 1.501 | 26.0 | 156 | 1.4851 | 0.2667 |
| 1.4969 | 27.0 | 162 | 1.4841 | 0.2667 |
| 1.4969 | 28.0 | 168 | 1.4832 | 0.2667 |
| 1.4796 | 29.0 | 174 | 1.4822 | 0.2667 |
| 1.5135 | 30.0 | 180 | 1.4813 | 0.2667 |
| 1.5135 | 31.0 | 186 | 1.4804 | 0.2667 |
| 1.4924 | 32.0 | 192 | 1.4797 | 0.2667 |
| 1.4924 | 33.0 | 198 | 1.4791 | 0.2667 |
| 1.4838 | 34.0 | 204 | 1.4785 | 0.2667 |
| 1.4833 | 35.0 | 210 | 1.4779 | 0.2667 |
| 1.4833 | 36.0 | 216 | 1.4775 | 0.2667 |
| 1.4826 | 37.0 | 222 | 1.4771 | 0.2667 |
| 1.4826 | 38.0 | 228 | 1.4768 | 0.2667 |
| 1.5058 | 39.0 | 234 | 1.4766 | 0.2667 |
| 1.4814 | 40.0 | 240 | 1.4764 | 0.2667 |
| 1.4814 | 41.0 | 246 | 1.4764 | 0.2667 |
| 1.4809 | 42.0 | 252 | 1.4763 | 0.2667 |
| 1.4809 | 43.0 | 258 | 1.4763 | 0.2667 |
| 1.5264 | 44.0 | 264 | 1.4763 | 0.2667 |
| 1.4935 | 45.0 | 270 | 1.4763 | 0.2667 |
| 1.4935 | 46.0 | 276 | 1.4763 | 0.2667 |
| 1.4909 | 47.0 | 282 | 1.4763 | 0.2667 |
| 1.4909 | 48.0 | 288 | 1.4763 | 0.2667 |
| 1.4851 | 49.0 | 294 | 1.4763 | 0.2667 |
| 1.5045 | 50.0 | 300 | 1.4763 | 0.2667 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
|
Bonosa2/huggy-ppo | Bonosa2 | "2023-06-02T23:20:35Z" | 1 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] | reinforcement-learning | "2023-06-02T23:18:37Z" | ---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser:**.
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Huggy
2. Step 1: Find your model_id: bonosa2/huggy-ppo
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
xianbin/ppo-Pyramid | xianbin | "2023-07-27T07:44:22Z" | 9 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"Pyramids",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Pyramids",
"region:us"
] | reinforcement-learning | "2023-07-27T07:44:14Z" | ---
library_name: ml-agents
tags:
- Pyramids
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how works ML-Agents:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: xianbin/ppo-Pyramid
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
devvanshhh/flanT5-xl-3 | devvanshhh | "2023-11-16T09:35:30Z" | 0 | 0 | peft | [
"peft",
"pytorch",
"tensorboard",
"safetensors",
"t5",
"arxiv:1910.09700",
"base_model:ybelkada/flan-t5-xl-sharded-bf16",
"base_model:adapter:ybelkada/flan-t5-xl-sharded-bf16",
"region:us"
] | null | "2023-11-15T10:45:59Z" | ---
library_name: peft
base_model: ybelkada/flan-t5-xl-sharded-bf16
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.6.3.dev0
|
hli/xlm-roberta-base-finetuned-panx-de | hli | "2022-12-13T00:41:37Z" | 6 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"dataset:xtreme",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | "2022-12-13T00:07:40Z" | ---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
config: PAN-X.de
split: train
args: PAN-X.de
metrics:
- name: F1
type: f1
value: 0.8638300289723342
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1358
- F1: 0.8638
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2591 | 1.0 | 525 | 0.1621 | 0.8206 |
| 0.1276 | 2.0 | 1050 | 0.1379 | 0.8486 |
| 0.082 | 3.0 | 1575 | 0.1358 | 0.8638 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.0+cu116
- Datasets 2.7.1
- Tokenizers 0.13.2
|
chali12/en_pipeline | chali12 | "2022-07-03T08:00:55Z" | 6 | 0 | spacy | [
"spacy",
"token-classification",
"en",
"model-index",
"region:us"
] | token-classification | "2022-07-03T07:55:19Z" | ---
tags:
- spacy
- token-classification
language:
- en
model-index:
- name: en_pipeline
results:
- task:
name: NER
type: token-classification
metrics:
- name: NER Precision
type: precision
value: 0.987456883
- name: NER Recall
type: recall
value: 0.9707151665
- name: NER F Score
type: f_score
value: 0.9790144567
---
| Feature | Description |
| --- | --- |
| **Name** | `en_pipeline` |
| **Version** | `0.0.0` |
| **spaCy** | `>=3.2.4,<3.3.0` |
| **Default Pipeline** | `tok2vec`, `ner` |
| **Components** | `tok2vec`, `ner` |
| **Vectors** | 0 keys, 0 unique vectors (0 dimensions) |
| **Sources** | n/a |
| **License** | n/a |
| **Author** | [n/a]() |
### Label Scheme
<details>
<summary>View label scheme (1 labels for 1 components)</summary>
| Component | Labels |
| --- | --- |
| **`ner`** | `SKILL` |
</details>
### Accuracy
| Type | Score |
| --- | --- |
| `ENTS_F` | 97.90 |
| `ENTS_P` | 98.75 |
| `ENTS_R` | 97.07 |
| `TOK2VEC_LOSS` | 248450.31 |
| `NER_LOSS` | 58657.11 | |
aXhyra/demo_sentiment_1234567 | aXhyra | "2021-12-13T23:06:38Z" | 4 | 0 | transformers | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:tweet_eval",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2022-03-02T23:29:05Z" | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- tweet_eval
metrics:
- f1
model-index:
- name: demo_sentiment_1234567
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: tweet_eval
type: tweet_eval
args: sentiment
metrics:
- name: F1
type: f1
value: 0.7113620044371958
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# demo_sentiment_1234567
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the tweet_eval dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6332
- F1: 0.7114
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 8.62486660723695e-06
- train_batch_size: 64
- eval_batch_size: 64
- seed: 0
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.7592 | 1.0 | 713 | 0.6509 | 0.6834 |
| 0.6389 | 2.0 | 1426 | 0.6318 | 0.7011 |
| 0.5647 | 3.0 | 2139 | 0.6320 | 0.7041 |
| 0.5391 | 4.0 | 2852 | 0.6332 | 0.7114 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.9.1
- Datasets 1.16.1
- Tokenizers 0.10.3
|
arviszeile/autotrain-golf-winner-2-87274143423 | arviszeile | "2023-09-05T18:48:33Z" | 3 | 0 | transformers | [
"transformers",
"joblib",
"xgboost",
"autotrain",
"tabular",
"regression",
"tabular-regression",
"dataset:arviszeile/autotrain-data-golf-winner-2",
"co2_eq_emissions",
"endpoints_compatible",
"region:us"
] | tabular-regression | "2023-09-05T18:44:51Z" | ---
tags:
- autotrain
- tabular
- regression
- tabular-regression
datasets:
- arviszeile/autotrain-data-golf-winner-2
co2_eq_emissions:
emissions: 0.02273858856897143
---
# Model Trained Using AutoTrain
- Problem type: Single Column Regression
- Model ID: 87274143423
- CO2 Emissions (in grams): 0.0227
## Validation Metrics
- Loss: 0.000
- R2: 1.000
- MSE: 0.000
- MAE: 0.000
- RMSLE: 0.000
## Usage
```python
import json
import joblib
import pandas as pd
model = joblib.load('model.joblib')
config = json.load(open('config.json'))
features = config['features']
# data = pd.read_csv("data.csv")
data = data[features]
data.columns = ["feat_" + str(col) for col in data.columns]
predictions = model.predict(data) # or model.predict_proba(data)
``` |
Bunpot/lora_model | Bunpot | "2025-02-14T08:41:26Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2025-02-14T08:41:18Z" | ---
base_model: unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** Bunpot
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
BrickeaW/model_card_pg | BrickeaW | "2023-07-18T13:36:28Z" | 0 | 0 | null | [
"arxiv:1910.09700",
"region:us"
] | null | "2023-07-18T13:33:44Z" | ---
# For reference on model card metadata, see the spec: https://github.com/huggingface/hub-docs/blob/main/modelcard.md?plain=1
# Doc / guide: https://huggingface.co/docs/hub/model-cards
{{ card_data }}
---
# Model Card for {{ model_id | default("Model ID", true) }}
<!-- Provide a quick summary of what the model is/does. -->
{{ model_summary | default("", true) }}
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
{{ model_description | default("", true) }}
- **Developed by:** {{ developers | default("[More Information Needed]", true)}}
- **Shared by [optional]:** {{ shared_by | default("[More Information Needed]", true)}}
- **Model type:** {{ model_type | default("[More Information Needed]", true)}}
- **Language(s) (NLP):** {{ language | default("[More Information Needed]", true)}}
- **License:** {{ license | default("[More Information Needed]", true)}}
- **Finetuned from model [optional]:** {{ finetuned_from | default("[More Information Needed]", true)}}
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** {{ repo | default("[More Information Needed]", true)}}
- **Paper [optional]:** {{ paper | default("[More Information Needed]", true)}}
- **Demo [optional]:** {{ demo | default("[More Information Needed]", true)}}
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
{{ direct_use | default("[More Information Needed]", true)}}
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
{{ downstream_use | default("[More Information Needed]", true)}}
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
{{ out_of_scope_use | default("[More Information Needed]", true)}}
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
{{ bias_risks_limitations | default("[More Information Needed]", true)}}
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
{{ bias_recommendations | default("Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", true)}}
## How to Get Started with the Model
Use the code below to get started with the model.
{{ get_started_code | default("[More Information Needed]", true)}}
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
{{ training_data | default("[More Information Needed]", true)}}
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
{{ preprocessing | default("[More Information Needed]", true)}}
#### Training Hyperparameters
- **Training regime:** {{ training_regime | default("[More Information Needed]", true)}} <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
{{ speeds_sizes_times | default("[More Information Needed]", true)}}
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
{{ testing_data | default("[More Information Needed]", true)}}
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
{{ testing_factors | default("[More Information Needed]", true)}}
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
{{ testing_metrics | default("[More Information Needed]", true)}}
### Results
{{ results | default("[More Information Needed]", true)}}
#### Summary
{{ results_summary | default("", true) }}
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
{{ model_examination | default("[More Information Needed]", true)}}
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** {{ hardware | default("[More Information Needed]", true)}}
- **Hours used:** {{ hours_used | default("[More Information Needed]", true)}}
- **Cloud Provider:** {{ cloud_provider | default("[More Information Needed]", true)}}
- **Compute Region:** {{ cloud_region | default("[More Information Needed]", true)}}
- **Carbon Emitted:** {{ co2_emitted | default("[More Information Needed]", true)}}
## Technical Specifications [optional]
### Model Architecture and Objective
{{ model_specs | default("[More Information Needed]", true)}}
### Compute Infrastructure
{{ compute_infrastructure | default("[More Information Needed]", true)}}
#### Hardware
{{ hardware | default("[More Information Needed]", true)}}
#### Software
{{ software | default("[More Information Needed]", true)}}
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
{{ citation_bibtex | default("[More Information Needed]", true)}}
**APA:**
{{ citation_apa | default("[More Information Needed]", true)}}
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
{{ glossary | default("[More Information Needed]", true)}}
## More Information [optional]
{{ more_information | default("[More Information Needed]", true)}}
## Model Card Authors [optional]
{{ model_card_authors | default("[More Information Needed]", true)}}
## Model Card Contact
{{ model_card_contact | default("[More Information Needed]", true)}} |
Seb1711HC/Prueba | Seb1711HC | "2023-11-25T10:02:34Z" | 0 | 0 | null | [
"region:us"
] | null | "2023-11-25T10:02:23Z" | !pip install "sagemaker==2.116.0" "huggingface_hub==0.10.1" --upgrade --quiet
|
nathanialhunt2000/2ba67173-d432-47e0-930b-715d68734be6 | nathanialhunt2000 | "2025-03-09T18:30:36Z" | 0 | 0 | peft | [
"peft",
"generated_from_trainer",
"base_model:unsloth/llama-2-7b-chat",
"base_model:adapter:unsloth/llama-2-7b-chat",
"region:us"
] | null | "2025-03-09T18:30:20Z" | ---
library_name: peft
tags:
- generated_from_trainer
base_model: unsloth/llama-2-7b-chat
model-index:
- name: nathanialhunt2000/2ba67173-d432-47e0-930b-715d68734be6
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# nathanialhunt2000/2ba67173-d432-47e0-930b-715d68734be6
This model was trained from scratch on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6655
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
concedo/Pythia-70M-ChatSalad | concedo | "2023-04-07T14:46:25Z" | 1,668 | 6 | transformers | [
"transformers",
"pytorch",
"safetensors",
"gpt_neox",
"text-generation",
"en",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-02-01T08:17:14Z" | ---
license: other
language:
- en
inference: false
widget:
- text: "How do I download this model?"
example_title: "Text Gen Example"
---
# Pythia-70M-ChatSalad
This is a follow up finetune of Pythia-70M finetuned on the same dataset as OPT-19M-ChatSalad. It is much more coherent.
All feedback and comments can be directed to Concedo on the KoboldAI discord. |
maxg73872/biobert-v1.1-finetuned-medmcqa-20pct-2024-12-03-T13-24-56 | maxg73872 | "2024-12-03T16:17:30Z" | 162 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"multiple-choice",
"generated_from_trainer",
"base_model:dmis-lab/biobert-v1.1",
"base_model:finetune:dmis-lab/biobert-v1.1",
"endpoints_compatible",
"region:us"
] | multiple-choice | "2024-12-03T16:17:07Z" | ---
library_name: transformers
base_model: dmis-lab/biobert-v1.1
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: biobert-v1.1-finetuned-medmcqa-20pct-2024-12-03-T13-24-56
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# biobert-v1.1-finetuned-medmcqa-20pct-2024-12-03-T13-24-56
This model is a fine-tuned version of [dmis-lab/biobert-v1.1](https://huggingface.co/dmis-lab/biobert-v1.1) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9330
- Accuracy: 0.5671
- F1: 0.5680
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.000159
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:------:|:----:|:---------------:|:--------:|:------:|
| 0.7195 | 0.9995 | 1142 | 0.9801 | 0.5381 | 0.5402 |
| 0.5306 | 1.9999 | 2285 | 0.9330 | 0.5671 | 0.5680 |
| 0.3505 | 2.9986 | 3426 | 1.0417 | 0.5618 | 0.5628 |
### Framework versions
- Transformers 4.46.2
- Pytorch 2.5.1+cu121
- Datasets 3.1.0
- Tokenizers 0.20.3
|
mrHunghddddd/17cb6d4f-4a4c-4607-b9ca-24ecccee4be6 | mrHunghddddd | "2025-01-28T21:19:52Z" | 6 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:Qwen/Qwen2.5-1.5B",
"base_model:adapter:Qwen/Qwen2.5-1.5B",
"license:apache-2.0",
"8-bit",
"bitsandbytes",
"region:us"
] | null | "2025-01-28T20:40:41Z" | ---
library_name: peft
license: apache-2.0
base_model: Qwen/Qwen2.5-1.5B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 17cb6d4f-4a4c-4607-b9ca-24ecccee4be6
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: Qwen/Qwen2.5-1.5B
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- e25caa69cd6faad2_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/e25caa69cd6faad2_train_data.json
type:
field_instruction: en
field_output: es
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: mrHunghddddd/17cb6d4f-4a4c-4607-b9ca-24ecccee4be6
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/e25caa69cd6faad2_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 1aaf8e86-778f-40df-995d-ce6a7f0e458d
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 1aaf8e86-778f-40df-995d-ce6a7f0e458d
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 17cb6d4f-4a4c-4607-b9ca-24ecccee4be6
This model is a fine-tuned version of [Qwen/Qwen2.5-1.5B](https://huggingface.co/Qwen/Qwen2.5-1.5B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0085
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.8269 | 0.0092 | 200 | 1.0085 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
ClarenceDan/7b1da541-3a01-497b-b0e1-84f357e15afe | ClarenceDan | "2025-01-20T18:24:00Z" | 6 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:scb10x/llama-3-typhoon-v1.5-8b-instruct",
"base_model:adapter:scb10x/llama-3-typhoon-v1.5-8b-instruct",
"license:llama3",
"region:us"
] | null | "2025-01-20T18:22:06Z" | ---
library_name: peft
license: llama3
base_model: scb10x/llama-3-typhoon-v1.5-8b-instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 7b1da541-3a01-497b-b0e1-84f357e15afe
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: scb10x/llama-3-typhoon-v1.5-8b-instruct
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 3ba8b5afbae0ac7e_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/3ba8b5afbae0ac7e_train_data.json
type:
field_instruction: prompt
field_output: initial_response
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: ClarenceDan/7b1da541-3a01-497b-b0e1-84f357e15afe
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 10
micro_batch_size: 2
mlflow_experiment_name: /tmp/3ba8b5afbae0ac7e_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 89d18bab-4e25-4f8d-8b0e-a5dd7fd66837
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 89d18bab-4e25-4f8d-8b0e-a5dd7fd66837
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 7b1da541-3a01-497b-b0e1-84f357e15afe
This model is a fine-tuned version of [scb10x/llama-3-typhoon-v1.5-8b-instruct](https://huggingface.co/scb10x/llama-3-typhoon-v1.5-8b-instruct) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6053
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.6815 | 0.0013 | 1 | 0.6770 |
| 0.6573 | 0.0040 | 3 | 0.6756 |
| 0.7447 | 0.0081 | 6 | 0.6562 |
| 0.6276 | 0.0121 | 9 | 0.6053 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
ROBERTaCoder/wav2vec2-base-timit-demo-google-colab | ROBERTaCoder | "2022-08-24T17:07:25Z" | 105 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | "2022-08-24T11:17:35Z" | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base-timit-demo-google-colab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-timit-demo-google-colab
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5452
- Wer: 0.3296
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 3.5557 | 1.0 | 500 | 1.9362 | 1.0072 |
| 0.867 | 2.01 | 1000 | 0.5197 | 0.5173 |
| 0.4281 | 3.01 | 1500 | 0.4609 | 0.4552 |
| 0.3002 | 4.02 | 2000 | 0.4066 | 0.4129 |
| 0.2252 | 5.02 | 2500 | 0.4122 | 0.3952 |
| 0.1857 | 6.02 | 3000 | 0.4650 | 0.3990 |
| 0.1541 | 7.03 | 3500 | 0.4784 | 0.3834 |
| 0.1372 | 8.03 | 4000 | 0.3875 | 0.3805 |
| 0.1213 | 9.04 | 4500 | 0.5606 | 0.4002 |
| 0.1043 | 10.04 | 5000 | 0.4713 | 0.3762 |
| 0.0972 | 11.04 | 5500 | 0.4770 | 0.3692 |
| 0.0876 | 12.05 | 6000 | 0.4755 | 0.3671 |
| 0.0812 | 13.05 | 6500 | 0.4854 | 0.3616 |
| 0.0705 | 14.06 | 7000 | 0.4380 | 0.3659 |
| 0.0759 | 15.06 | 7500 | 0.5025 | 0.3516 |
| 0.0709 | 16.06 | 8000 | 0.5310 | 0.3577 |
| 0.0572 | 17.07 | 8500 | 0.5097 | 0.3561 |
| 0.0572 | 18.07 | 9000 | 0.5150 | 0.3510 |
| 0.0482 | 19.08 | 9500 | 0.4954 | 0.3488 |
| 0.0703 | 20.08 | 10000 | 0.5279 | 0.3512 |
| 0.0457 | 21.08 | 10500 | 0.5336 | 0.3459 |
| 0.036 | 22.09 | 11000 | 0.5471 | 0.3440 |
| 0.0368 | 23.09 | 11500 | 0.5109 | 0.3417 |
| 0.0342 | 24.1 | 12000 | 0.5506 | 0.3415 |
| 0.0318 | 25.1 | 12500 | 0.5291 | 0.3357 |
| 0.03 | 26.1 | 13000 | 0.5347 | 0.3363 |
| 0.026 | 27.11 | 13500 | 0.5475 | 0.3318 |
| 0.0232 | 28.11 | 14000 | 0.5628 | 0.3332 |
| 0.0246 | 29.12 | 14500 | 0.5452 | 0.3296 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.12.1+cu113
- Datasets 1.18.3
- Tokenizers 0.12.1
|
hkivancoral/smids_5x_beit_base_sgd_00001_fold3 | hkivancoral | "2023-12-15T23:25:44Z" | 5 | 0 | transformers | [
"transformers",
"pytorch",
"beit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:microsoft/beit-base-patch16-224",
"base_model:finetune:microsoft/beit-base-patch16-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | "2023-12-15T22:06:40Z" | ---
license: apache-2.0
base_model: microsoft/beit-base-patch16-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: smids_5x_beit_base_sgd_00001_fold3
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: test
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.39666666666666667
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# smids_5x_beit_base_sgd_00001_fold3
This model is a fine-tuned version of [microsoft/beit-base-patch16-224](https://huggingface.co/microsoft/beit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1248
- Accuracy: 0.3967
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 1.2551 | 1.0 | 375 | 1.3211 | 0.3167 |
| 1.2561 | 2.0 | 750 | 1.3119 | 0.32 |
| 1.2134 | 3.0 | 1125 | 1.3028 | 0.325 |
| 1.226 | 4.0 | 1500 | 1.2942 | 0.325 |
| 1.1635 | 5.0 | 1875 | 1.2859 | 0.3267 |
| 1.2304 | 6.0 | 2250 | 1.2778 | 0.3333 |
| 1.1734 | 7.0 | 2625 | 1.2702 | 0.3383 |
| 1.1724 | 8.0 | 3000 | 1.2625 | 0.3417 |
| 1.1336 | 9.0 | 3375 | 1.2554 | 0.3467 |
| 1.1266 | 10.0 | 3750 | 1.2486 | 0.3517 |
| 1.1276 | 11.0 | 4125 | 1.2419 | 0.355 |
| 1.1538 | 12.0 | 4500 | 1.2355 | 0.355 |
| 1.1425 | 13.0 | 4875 | 1.2292 | 0.3567 |
| 1.1463 | 14.0 | 5250 | 1.2233 | 0.36 |
| 1.1661 | 15.0 | 5625 | 1.2174 | 0.3633 |
| 1.1118 | 16.0 | 6000 | 1.2118 | 0.365 |
| 1.123 | 17.0 | 6375 | 1.2063 | 0.3667 |
| 1.1065 | 18.0 | 6750 | 1.2010 | 0.3667 |
| 1.1074 | 19.0 | 7125 | 1.1959 | 0.365 |
| 1.0742 | 20.0 | 7500 | 1.1911 | 0.3717 |
| 1.0616 | 21.0 | 7875 | 1.1865 | 0.3717 |
| 1.0745 | 22.0 | 8250 | 1.1820 | 0.3717 |
| 1.0871 | 23.0 | 8625 | 1.1777 | 0.3717 |
| 1.031 | 24.0 | 9000 | 1.1737 | 0.3717 |
| 1.0843 | 25.0 | 9375 | 1.1697 | 0.375 |
| 1.0616 | 26.0 | 9750 | 1.1660 | 0.3767 |
| 1.0414 | 27.0 | 10125 | 1.1624 | 0.3783 |
| 1.0303 | 28.0 | 10500 | 1.1590 | 0.3783 |
| 0.9887 | 29.0 | 10875 | 1.1558 | 0.38 |
| 1.0267 | 30.0 | 11250 | 1.1528 | 0.38 |
| 1.0792 | 31.0 | 11625 | 1.1499 | 0.3833 |
| 1.0736 | 32.0 | 12000 | 1.1472 | 0.3883 |
| 1.0868 | 33.0 | 12375 | 1.1446 | 0.39 |
| 1.0257 | 34.0 | 12750 | 1.1422 | 0.3883 |
| 1.0237 | 35.0 | 13125 | 1.1400 | 0.39 |
| 1.0201 | 36.0 | 13500 | 1.1379 | 0.39 |
| 1.0769 | 37.0 | 13875 | 1.1360 | 0.3917 |
| 1.032 | 38.0 | 14250 | 1.1343 | 0.3933 |
| 1.0317 | 39.0 | 14625 | 1.1327 | 0.395 |
| 1.0402 | 40.0 | 15000 | 1.1312 | 0.395 |
| 0.957 | 41.0 | 15375 | 1.1300 | 0.395 |
| 1.0445 | 42.0 | 15750 | 1.1288 | 0.395 |
| 1.0399 | 43.0 | 16125 | 1.1278 | 0.395 |
| 1.0323 | 44.0 | 16500 | 1.1270 | 0.3967 |
| 1.0444 | 45.0 | 16875 | 1.1263 | 0.3967 |
| 0.9983 | 46.0 | 17250 | 1.1257 | 0.3967 |
| 1.042 | 47.0 | 17625 | 1.1253 | 0.3967 |
| 1.0685 | 48.0 | 18000 | 1.1250 | 0.3967 |
| 1.0486 | 49.0 | 18375 | 1.1249 | 0.3967 |
| 1.0457 | 50.0 | 18750 | 1.1248 | 0.3967 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
abhishek/autotrain-dog-vs-food | abhishek | "2022-06-22T14:51:28Z" | 61 | 1 | transformers | [
"transformers",
"pytorch",
"vit",
"image-classification",
"autotrain",
"dataset:abhishek/autotrain-data-vision_652fee16113a4f07a2452e021a22a934",
"dataset:sasha/dog-food",
"model-index",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | "2022-06-22T10:33:54Z" | ---
tags: autotrain
datasets:
- abhishek/autotrain-data-vision_652fee16113a4f07a2452e021a22a934
- sasha/dog-food
co2_eq_emissions: 2.050948967287266
model-index:
- name: autotrain-dog-vs-food
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: sasha/dog-food
type: sasha/dog-food
metrics:
- name: Accuracy
type: accuracy
value: 0.9976190476190476
- task:
type: image-classification
name: Image Classification
dataset:
name: sasha/dog-food
type: sasha/dog-food
config: sasha--dog-food
split: test
metrics:
- name: Accuracy
type: accuracy
value: 1.0
verified: true
- name: Precision
type: precision
value: 1.0
verified: true
- name: Recall
type: recall
value: 1.0
verified: true
- name: AUC
type: auc
value: 1.0
verified: true
- name: F1
type: f1
value: 1.0
verified: true
- name: loss
type: loss
value: 0.001115015591494739
verified: true
---
# Model Trained Using AutoTrain
- Problem type: Multi-class Classification
- Model ID: 264300
- CO2 Emissions (in grams): 2.050948967287266
## Validation Metrics
- Loss: 0.009216072037816048
- Accuracy: 0.9976190476190476
- Macro F1: 0.9973261861865685
- Micro F1: 0.9976190476190476
- Weighted F1: 0.997621154535828
- Macro Precision: 0.9964539007092199
- Micro Precision: 0.9976190476190476
- Weighted Precision: 0.9976359338061465
- Macro Recall: 0.9982142857142857
- Micro Recall: 0.9976190476190476
- Weighted Recall: 0.9976190476190476 |
MCG-NJU/MoG | MCG-NJU | "2025-03-06T11:40:06Z" | 5 | 1 | MoG | [
"MoG",
"arxiv:2501.03699",
"license:apache-2.0",
"region:us"
] | null | "2025-02-28T06:21:11Z" | ---
license: apache-2.0
library_name: MoG
---
# MoG: Motion-Aware Generative Frame Interpolation
<!-- <p style="display: flex; flex-direction: column; justify-content: center; align-items: center;">
<div style="width: 100%; text-align: center; margin-bottom: 4px;">
<img src="examples/1.gif" style="zoom:32%;">
<img src="examples/2.gif" style="zoom:32%;">
<img src="examples/3.gif" style="zoom:32%;">
</div>
<div style="width: 100%; text-align: center;">
<img src="examples/4.gif" style="zoom:32%;">
<img src="examples/5.gif" style="zoom:32%;">
<img src="examples/6.gif" style="zoom:32%;">
</div>
</p>
-->
<div style="text-align: center;">
<img src="examples/1.gif" style="width: 32%; display: inline-block;">
<img src="examples/2.gif" style="width: 32%; display: inline-block;">
<img src="examples/3.gif" style="width: 32%; display: inline-block;">
</div>
<div style="text-align: center;">
<img src="examples/4.gif" style="width: 32%; display: inline-block;">
<img src="examples/5.gif" style="width: 32%; display: inline-block;">
<img src="examples/6.gif" style="width: 32%; display: inline-block;">
</div>
MoG is a generative video frame interpolation (VFI) model, designed to synthesize intermediate frames between two input frames.
MoG is the first VFI framework to bridge the gap between flow-based stability and generative flexibility. We introduce a dual-level guidance injection design to constrain generated motion using motion trajectories derived from optical flow. To enhance the generative model's ability to dynamically correct flow errors, we implement encoder-only guidance injection and selective parameter fine-tuning. As a result, MoG achieves significant improvements over existing open-source generative VFI methods, delivering superior performance in both real-world and animated scenarios.
Source code is available at [https://github.com/MCG-NJU/MoG-VFI](https://github.com/MCG-NJU/MoG-VFI).
## Network Arichitecture

## Model Description
- **Developed by:** Nanjing University, Tencent PCG
- **Model type:** Generative video frame interploation model, takes two still video frames as input.
- **Arxiv paper**: [https://arxiv.org/pdf/2501.03699](https://arxiv.org/pdf/2501.03699)
- **Project page:** [https://mcg-nju.github.io/MoG_Web/](https://mcg-nju.github.io/MoG_Web/)
- **Repository**: [https://github.com/MCG-NJU/MoG-VFI](https://github.com/MCG-NJU/MoG-VFI)
- **License:** Apache 2.0 license.
# Usage
We provide two model checkpoints: `real.ckpt` for real-world scenes and `ani.ckpt` for animation scenes. For detailed instructions on loading the checkpoints and performing inference, please refer to our [official repository](https://github.com/MCG-NJU/MoG-VFI).
## Citation
If you find our code useful or our work relevant, please consider citing:
```
@article{zhang2025motion,
title={Motion-Aware Generative Frame Interpolation},
author={Zhang, Guozhen and Zhu, Yuhan and Cui, Yutao and Zhao, Xiaotong and Ma, Kai and Wang, Limin},
journal={arXiv preprint arXiv:2501.03699},
year={2025}
}
``` |
tensorblock/Yehia-7B-preview-GGUF | tensorblock | "2025-03-23T09:38:22Z" | 0 | 0 | transformers | [
"transformers",
"gguf",
"TensorBlock",
"GGUF",
"text-generation",
"ar",
"en",
"base_model:Navid-AI/Yehia-7B-preview",
"base_model:quantized:Navid-AI/Yehia-7B-preview",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | "2025-03-23T08:44:02Z" | ---
language:
- ar
- en
base_model: Navid-AI/Yehia-7B-preview
pipeline_tag: text-generation
library_name: transformers
tags:
- TensorBlock
- GGUF
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;">
Feedback and support: TensorBlock's <a href="https://x.com/tensorblock_aoi">Twitter/X</a>, <a href="https://t.me/TensorBlock">Telegram Group</a> and <a href="https://x.com/tensorblock_aoi">Discord server</a>
</p>
</div>
</div>
## Navid-AI/Yehia-7B-preview - GGUF
This repo contains GGUF format model files for [Navid-AI/Yehia-7B-preview](https://huggingface.co/Navid-AI/Yehia-7B-preview).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4882](https://github.com/ggml-org/llama.cpp/commit/be7c3034108473beda214fd1d7c98fd6a7a3bdf5).
<div style="text-align: left; margin: 20px 0;">
<a href="https://tensorblock.co/waitlist/client" style="display: inline-block; padding: 10px 20px; background-color: #007bff; color: white; text-decoration: none; border-radius: 5px; font-weight: bold;">
Run them on the TensorBlock client using your local machine ↗
</a>
</div>
## Prompt template
```
<s> [INST] <<SYS>>
{system_prompt}
<</SYS>>
{prompt} [/INST]
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [Yehia-7B-preview-Q2_K.gguf](https://huggingface.co/tensorblock/Yehia-7B-preview-GGUF/blob/main/Yehia-7B-preview-Q2_K.gguf) | Q2_K | 2.684 GB | smallest, significant quality loss - not recommended for most purposes |
| [Yehia-7B-preview-Q3_K_S.gguf](https://huggingface.co/tensorblock/Yehia-7B-preview-GGUF/blob/main/Yehia-7B-preview-Q3_K_S.gguf) | Q3_K_S | 3.113 GB | very small, high quality loss |
| [Yehia-7B-preview-Q3_K_M.gguf](https://huggingface.co/tensorblock/Yehia-7B-preview-GGUF/blob/main/Yehia-7B-preview-Q3_K_M.gguf) | Q3_K_M | 3.463 GB | very small, high quality loss |
| [Yehia-7B-preview-Q3_K_L.gguf](https://huggingface.co/tensorblock/Yehia-7B-preview-GGUF/blob/main/Yehia-7B-preview-Q3_K_L.gguf) | Q3_K_L | 3.762 GB | small, substantial quality loss |
| [Yehia-7B-preview-Q4_0.gguf](https://huggingface.co/tensorblock/Yehia-7B-preview-GGUF/blob/main/Yehia-7B-preview-Q4_0.gguf) | Q4_0 | 4.008 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [Yehia-7B-preview-Q4_K_S.gguf](https://huggingface.co/tensorblock/Yehia-7B-preview-GGUF/blob/main/Yehia-7B-preview-Q4_K_S.gguf) | Q4_K_S | 4.039 GB | small, greater quality loss |
| [Yehia-7B-preview-Q4_K_M.gguf](https://huggingface.co/tensorblock/Yehia-7B-preview-GGUF/blob/main/Yehia-7B-preview-Q4_K_M.gguf) | Q4_K_M | 4.263 GB | medium, balanced quality - recommended |
| [Yehia-7B-preview-Q5_0.gguf](https://huggingface.co/tensorblock/Yehia-7B-preview-GGUF/blob/main/Yehia-7B-preview-Q5_0.gguf) | Q5_0 | 4.850 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [Yehia-7B-preview-Q5_K_S.gguf](https://huggingface.co/tensorblock/Yehia-7B-preview-GGUF/blob/main/Yehia-7B-preview-Q5_K_S.gguf) | Q5_K_S | 4.850 GB | large, low quality loss - recommended |
| [Yehia-7B-preview-Q5_K_M.gguf](https://huggingface.co/tensorblock/Yehia-7B-preview-GGUF/blob/main/Yehia-7B-preview-Q5_K_M.gguf) | Q5_K_M | 4.982 GB | large, very low quality loss - recommended |
| [Yehia-7B-preview-Q6_K.gguf](https://huggingface.co/tensorblock/Yehia-7B-preview-GGUF/blob/main/Yehia-7B-preview-Q6_K.gguf) | Q6_K | 5.745 GB | very large, extremely low quality loss |
| [Yehia-7B-preview-Q8_0.gguf](https://huggingface.co/tensorblock/Yehia-7B-preview-GGUF/blob/main/Yehia-7B-preview-Q8_0.gguf) | Q8_0 | 7.441 GB | very large, extremely low quality loss - not recommended |
## Downloading instruction
### Command line
Firstly, install Huggingface Client
```shell
pip install -U "huggingface_hub[cli]"
```
Then, downoad the individual model file the a local directory
```shell
huggingface-cli download tensorblock/Yehia-7B-preview-GGUF --include "Yehia-7B-preview-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you wanna download multiple model files with a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/Yehia-7B-preview-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
|
dshut002/llama2-finetunined-v2 | dshut002 | "2023-08-31T00:25:25Z" | 0 | 0 | peft | [
"peft",
"region:us"
] | null | "2023-08-28T23:09:57Z" | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
### Framework versions
- PEFT 0.6.0.dev0
|
davidschulte/ESM_m-ric__amazon_product_reviews_datafiniti_default | davidschulte | "2025-03-28T13:31:11Z" | 22 | 0 | null | [
"safetensors",
"embedding_space_map",
"BaseLM:bert-base-multilingual-uncased",
"dataset:m-ric/amazon_product_reviews_datafiniti",
"base_model:google-bert/bert-base-multilingual-uncased",
"base_model:finetune:google-bert/bert-base-multilingual-uncased",
"license:apache-2.0",
"region:us"
] | null | "2024-12-09T22:10:58Z" | ---
base_model: bert-base-multilingual-uncased
datasets:
- m-ric/amazon_product_reviews_datafiniti
license: apache-2.0
tags:
- embedding_space_map
- BaseLM:bert-base-multilingual-uncased
---
# ESM m-ric/amazon_product_reviews_datafiniti
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
ESM
- **Developed by:** David Schulte
- **Model type:** ESM
- **Base Model:** bert-base-multilingual-uncased
- **Intermediate Task:** m-ric/amazon_product_reviews_datafiniti
- **ESM architecture:** linear
- **ESM embedding dimension:** 768
- **Language(s) (NLP):** [More Information Needed]
- **License:** Apache-2.0 license
- **ESM version:** 0.1.0
## Training Details
### Intermediate Task
- **Task ID:** m-ric/amazon_product_reviews_datafiniti
- **Subset [optional]:** default
- **Text Column:** reviews.text
- **Label Column:** brand
- **Dataset Split:** train
- **Sample size [optional]:** 6000
- **Sample seed [optional]:**
### Training Procedure [optional]
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Language Model Training Hyperparameters [optional]
- **Epochs:** 3
- **Batch size:** 32
- **Learning rate:** 2e-05
- **Weight Decay:** 0.01
- **Optimizer**: AdamW
### ESM Training Hyperparameters [optional]
- **Epochs:** 10
- **Batch size:** 32
- **Learning rate:** 0.001
- **Weight Decay:** 0.01
- **Optimizer**: AdamW
### Additional trainiung details [optional]
## Model evaluation
### Evaluation of fine-tuned language model [optional]
### Evaluation of ESM [optional]
MSE:
### Additional evaluation details [optional]
## What are Embedding Space Maps used for?
Embedding Space Maps are a part of ESM-LogME, a efficient method for finding intermediate datasets for transfer learning. There are two reasons to use ESM-LogME:
### You don't have enough training data for your problem
If you don't have a enough training data for your problem, just use ESM-LogME to find more.
You can supplement model training by including publicly available datasets in the training process.
1. Fine-tune a language model on suitable intermediate dataset.
2. Fine-tune the resulting model on your target dataset.
This workflow is called intermediate task transfer learning and it can significantly improve the target performance.
But what is a suitable dataset for your problem? ESM-LogME enable you to quickly rank thousands of datasets on the Hugging Face Hub by how well they are exptected to transfer to your target task.
### You want to find similar datasets to your target dataset
Using ESM-LogME can be used like search engine on the Hugging Face Hub. You can find similar tasks to your target task without having to rely on heuristics. ESM-LogME estimates how language models fine-tuned on each intermediate task would benefinit your target task. This quantitative approach combines the effects of domain similarity and task similarity.
## How can I use ESM-LogME / ESMs?
[](https://pypi.org/project/hf-dataset-selector)
We release **hf-dataset-selector**, a Python package for intermediate task selection using Embedding Space Maps.
**hf-dataset-selector** fetches ESMs for a given language model and uses it to find the best dataset for applying intermediate training to the target task. ESMs are found by their tags on the Huggingface Hub.
```python
from hfselect import Dataset, compute_task_ranking
# Load target dataset from the Hugging Face Hub
dataset = Dataset.from_hugging_face(
name="stanfordnlp/imdb",
split="train",
text_col="text",
label_col="label",
is_regression=False,
num_examples=1000,
seed=42
)
# Fetch ESMs and rank tasks
task_ranking = compute_task_ranking(
dataset=dataset,
model_name="bert-base-multilingual-uncased"
)
# Display top 5 recommendations
print(task_ranking[:5])
```
```python
1. davanstrien/test_imdb_embedd2 Score: -0.618529
2. davanstrien/test_imdb_embedd Score: -0.618644
3. davanstrien/test1 Score: -0.619334
4. stanfordnlp/imdb Score: -0.619454
5. stanfordnlp/sst Score: -0.62995
```
| Rank | Task ID | Task Subset | Text Column | Label Column | Task Split | Num Examples | ESM Architecture | Score |
|-------:|:------------------------------|:----------------|:--------------|:---------------|:-------------|---------------:|:-------------------|----------:|
| 1 | davanstrien/test_imdb_embedd2 | default | text | label | train | 10000 | linear | -0.618529 |
| 2 | davanstrien/test_imdb_embedd | default | text | label | train | 10000 | linear | -0.618644 |
| 3 | davanstrien/test1 | default | text | label | train | 10000 | linear | -0.619334 |
| 4 | stanfordnlp/imdb | plain_text | text | label | train | 10000 | linear | -0.619454 |
| 5 | stanfordnlp/sst | dictionary | phrase | label | dictionary | 10000 | linear | -0.62995 |
| 6 | stanfordnlp/sst | default | sentence | label | train | 8544 | linear | -0.63312 |
| 7 | kuroneko5943/snap21 | CDs_and_Vinyl_5 | sentence | label | train | 6974 | linear | -0.634365 |
| 8 | kuroneko5943/snap21 | Video_Games_5 | sentence | label | train | 6997 | linear | -0.638787 |
| 9 | kuroneko5943/snap21 | Movies_and_TV_5 | sentence | label | train | 6989 | linear | -0.639068 |
| 10 | fancyzhx/amazon_polarity | amazon_polarity | content | label | train | 10000 | linear | -0.639718 |
For more information on how to use ESMs please have a look at the [official Github repository](https://github.com/davidschulte/hf-dataset-selector). We provide documentation further documentation and tutorials for finding intermediate datasets and training your own ESMs.
## How do Embedding Space Maps work?
<!-- This section describes the evaluation protocols and provides the results. -->
Embedding Space Maps (ESMs) are neural networks that approximate the effect of fine-tuning a language model on a task. They can be used to quickly transform embeddings from a base model to approximate how a fine-tuned model would embed the the input text.
ESMs can be used for intermediate task selection with the ESM-LogME workflow.
## How can I use Embedding Space Maps for Intermediate Task Selection?
## Citation
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
If you are using this Embedding Space Maps, please cite our [paper](https://aclanthology.org/2024.emnlp-main.529/).
**BibTeX:**
```
@inproceedings{schulte-etal-2024-less,
title = "Less is More: Parameter-Efficient Selection of Intermediate Tasks for Transfer Learning",
author = "Schulte, David and
Hamborg, Felix and
Akbik, Alan",
editor = "Al-Onaizan, Yaser and
Bansal, Mohit and
Chen, Yun-Nung",
booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.emnlp-main.529/",
doi = "10.18653/v1/2024.emnlp-main.529",
pages = "9431--9442",
abstract = "Intermediate task transfer learning can greatly improve model performance. If, for example, one has little training data for emotion detection, first fine-tuning a language model on a sentiment classification dataset may improve performance strongly. But which task to choose for transfer learning? Prior methods producing useful task rankings are infeasible for large source pools, as they require forward passes through all source language models. We overcome this by introducing Embedding Space Maps (ESMs), light-weight neural networks that approximate the effect of fine-tuning a language model. We conduct the largest study on NLP task transferability and task selection with 12k source-target pairs. We find that applying ESMs on a prior method reduces execution time and disk space usage by factors of 10 and 278, respectively, while retaining high selection performance (avg. regret@5 score of 2.95)."
}
```
**APA:**
```
Schulte, D., Hamborg, F., & Akbik, A. (2024, November). Less is More: Parameter-Efficient Selection of Intermediate Tasks for Transfer Learning. In Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing (pp. 9431-9442).
```
## Additional Information
|
aoxo/posterity_sft_gemma-3-1b-it | aoxo | "2025-04-12T04:49:11Z" | 0 | 0 | peft | [
"peft",
"safetensors",
"gemma3_text",
"arxiv:1910.09700",
"base_model:google/gemma-3-1b-it",
"base_model:adapter:google/gemma-3-1b-it",
"region:us"
] | null | "2025-04-11T07:51:40Z" | ---
base_model: google/gemma-3-1b-it
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.1 |
vizzard110/q-FrozenLake-v1-4x4-noSlippery | vizzard110 | "2023-10-28T07:56:34Z" | 0 | 0 | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | "2023-10-28T07:56:31Z" | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing1 **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
model = load_from_hub(repo_id="vizzard110/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
leolee99/InjecGuard | leolee99 | "2024-10-26T16:11:06Z" | 262 | 0 | transformers | [
"transformers",
"safetensors",
"deberta-v2",
"text-classification",
"en",
"base_model:microsoft/deberta-v3-base",
"base_model:finetune:microsoft/deberta-v3-base",
"license:mit",
"endpoints_compatible",
"region:us"
] | text-classification | "2024-10-26T08:09:31Z" | ---
license: mit
base_model:
- microsoft/deberta-v3-base
pipeline_tag: text-classification
language:
- en
metrics:
- accuracy
library_name: transformers
---
- Code Repo: https://github.com/SaFoLab-WISC/InjecGuard
- Docs: [More Information Needed] |
shivanikerai/Llama-2-7b-chat-hf-adapter-title-v3.0 | shivanikerai | "2024-06-06T08:32:34Z" | 0 | 0 | peft | [
"peft",
"arxiv:1910.09700",
"base_model:meta-llama/Llama-2-7b-chat-hf",
"base_model:adapter:meta-llama/Llama-2-7b-chat-hf",
"region:us"
] | null | "2024-06-06T08:31:45Z" | ---
library_name: peft
base_model: meta-llama/Llama-2-7b-chat-hf
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.11.2.dev0 |
RichardErkhov/bond0213_-_PYBA-1200-4bit-4bits | RichardErkhov | "2025-03-27T04:43:53Z" | 0 | 0 | null | [
"safetensors",
"llama",
"4-bit",
"bitsandbytes",
"region:us"
] | null | "2025-03-27T04:39:28Z" | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
PYBA-1200-4bit - bnb 4bits
- Model creator: https://huggingface.co/bond0213/
- Original model: https://huggingface.co/bond0213/PYBA-1200-4bit/
Original model description:
---
base_model: unsloth/llama-3-8b
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
---
# Uploaded model
- **Developed by:** bond0213
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
wooseok0303/results | wooseok0303 | "2024-10-20T07:09:42Z" | 107 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"roberta",
"text-classification",
"generated_from_trainer",
"base_model:klue/roberta-base",
"base_model:finetune:klue/roberta-base",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2024-10-20T06:50:32Z" | ---
library_name: transformers
base_model: klue/roberta-base
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: results
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# results
This model is a fine-tuned version of [klue/roberta-base](https://huggingface.co/klue/roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4699
- Accuracy: 0.861
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.5449 | 1.0 | 1250 | 0.5190 | 0.851 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.3.0+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
|
Shannonjunior/4bd7dccb-ef25-4df7-8ca9-b8e6acbf7cde | Shannonjunior | "2025-04-09T14:20:00Z" | 0 | 0 | null | [
"region:us"
] | null | "2025-04-09T14:19:35Z" | <!DOCTYPE html>
<html class="" lang="en">
<head>
<meta charset="utf-8" />
<meta
name="viewport"
content="width=device-width, initial-scale=1.0, user-scalable=no"
/>
<meta
name="description"
content="We're on a journey to advance and democratize artificial intelligence through open source and open science."
/>
<meta property="fb:app_id" content="1321688464574422" />
<meta name="twitter:card" content="summary_large_image" />
<meta name="twitter:site" content="@huggingface" />
<meta
property="og:title"
content="Hugging Face - The AI community building the future."
/>
<meta property="og:type" content="website" />
<title>Hugging Face - The AI community building the future.</title>
<style>
body {
margin: 0;
}
main {
background-color: white;
min-height: 100vh;
padding: 7rem 1rem 8rem 1rem;
text-align: center;
font-family: Source Sans Pro, ui-sans-serif, system-ui, -apple-system,
BlinkMacSystemFont, Segoe UI, Roboto, Helvetica Neue, Arial, Noto Sans,
sans-serif, Apple Color Emoji, Segoe UI Emoji, Segoe UI Symbol,
Noto Color Emoji;
}
img {
width: 6rem;
height: 6rem;
margin: 0 auto 1rem;
}
h1 {
font-size: 3.75rem;
line-height: 1;
color: rgba(31, 41, 55, 1);
font-weight: 700;
box-sizing: border-box;
margin: 0 auto;
}
p, a {
color: rgba(107, 114, 128, 1);
font-size: 1.125rem;
line-height: 1.75rem;
max-width: 28rem;
box-sizing: border-box;
margin: 0 auto;
}
.dark main {
background-color: rgb(11, 15, 25);
}
.dark h1 {
color: rgb(209, 213, 219);
}
.dark p, .dark a {
color: rgb(156, 163, 175);
}
</style>
<script>
// On page load or when changing themes, best to add inline in `head` to avoid FOUC
const key = "_tb_global_settings";
let theme = window.matchMedia("(prefers-color-scheme: dark)").matches
? "dark"
: "light";
try {
const storageTheme = JSON.parse(window.localStorage.getItem(key)).theme;
if (storageTheme) {
theme = storageTheme === "dark" ? "dark" : "light";
}
} catch (e) {}
if (theme === "dark") {
document.documentElement.classList.add("dark");
} else {
document.documentElement.classList.remove("dark");
}
</script>
</head>
<body>
<main>
<img
src="https://cdn-media.huggingface.co/assets/huggingface_logo.svg"
alt=""
/>
<div>
<h1>429</h1>
<p>We had to rate limit you. If you think it's an error, send us <a href="mailto:[email protected]">an email</a></p>
</div>
</main>
</body>
</html> |
jgayed/lorafull64128_480 | jgayed | "2025-03-07T09:14:11Z" | 0 | 0 | peft | [
"peft",
"safetensors",
"llama-factory",
"lora",
"generated_from_trainer",
"base_model:meta-llama/Llama-3.3-70B-Instruct",
"base_model:adapter:meta-llama/Llama-3.3-70B-Instruct",
"license:other",
"region:us"
] | null | "2025-03-07T09:07:01Z" | ---
library_name: peft
license: other
base_model: meta-llama/Llama-3.3-70B-Instruct
tags:
- llama-factory
- lora
- generated_from_trainer
model-index:
- name: train
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# train
This model is a fine-tuned version of [meta-llama/Llama-3.3-70B-Instruct](https://huggingface.co/meta-llama/Llama-3.3-70B-Instruct) on the ets480 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6562
- Num Input Tokens Seen: 2748096
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- distributed_type: multi-GPU
- num_devices: 2
- gradient_accumulation_steps: 8
- total_train_batch_size: 16
- total_eval_batch_size: 2
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- num_epochs: 8.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Input Tokens Seen |
|:-------------:|:------:|:----:|:---------------:|:-----------------:|
| 0.1521 | 4.1667 | 100 | 0.3628 | 1431816 |
### Framework versions
- PEFT 0.12.0
- Transformers 4.49.0
- Pytorch 2.5.1+cu124
- Datasets 3.2.0
- Tokenizers 0.21.0 |
minhhien0811/non_deita-2624 | minhhien0811 | "2024-08-28T17:54:02Z" | 5 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-08-28T17:51:27Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
lmarchyok/results-1 | lmarchyok | "2023-10-31T06:18:46Z" | 5 | 0 | transformers | [
"transformers",
"tf",
"bert",
"fill-mask",
"generated_from_keras_callback",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | "2023-10-31T03:34:44Z" | ---
tags:
- generated_from_keras_callback
model-index:
- name: lmarchyok/results-1
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# lmarchyok/results-1
This model is a fine-tuned version of [pretrained_models/ClinicalBERT_1a/pytorch_model.bin](https://huggingface.co/pretrained_models/ClinicalBERT_1a/pytorch_model.bin) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 4.4724
- Validation Loss: 4.3202
- Epoch: 9
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'WarmUp', 'config': {'initial_learning_rate': 2e-05, 'decay_schedule_fn': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': -876, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, '__passive_serialization__': True}, 'warmup_steps': 1000, 'power': 1.0, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 10.5491 | 8.8677 | 0 |
| 8.3667 | 7.8195 | 1 |
| 7.4962 | 6.8938 | 2 |
| 6.5604 | 5.9076 | 3 |
| 5.7680 | 5.2553 | 4 |
| 5.2856 | 5.0298 | 5 |
| 4.9533 | 4.7219 | 6 |
| 4.6297 | 4.3080 | 7 |
| 4.4800 | 4.3167 | 8 |
| 4.4724 | 4.3202 | 9 |
### Framework versions
- Transformers 4.26.1
- TensorFlow 2.11.0
- Datasets 2.13.1
- Tokenizers 0.13.2
|
Stanislav9801/Taxi-v3 | Stanislav9801 | "2023-10-25T13:47:09Z" | 0 | 0 | null | [
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | "2023-10-25T13:47:06Z" | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.54 +/- 2.73
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing1 **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
model = load_from_hub(repo_id="Stanislav9801/Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
loosmore/ppo-LunarLander-v2 | loosmore | "2023-09-20T03:50:34Z" | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | "2023-09-20T03:46:26Z" | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: -87.60 +/- 21.72
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
foduucom/contact-form-spam-detection | foduucom | "2023-08-28T07:15:51Z" | 0 | 1 | Pytorch | [
"Pytorch",
"spam detection",
"email detection",
"text classification",
"text-classification",
"en",
"model-index",
"region:us"
] | text-classification | "2023-08-26T12:56:10Z" | ---
language:
- en
library_name: Pytorch
library_version: 2.0.1+cu118
metrics:
- accuracy
pipeline_tag: text-classification
tags:
- spam detection
- email detection
- text classification
inference: true
model-index:
- name: foduucom/Mail-spam-detection
results:
- task:
type: text-classification
metrics:
- type: precision
value: 0.866
---
# Model Card for Text Classification for email-spam detection
This model is based on Text classification using pytorch library. In this model we propose to used a torchtext library for tokenize & vectorize data.
This model is used in corporate and industrial area for mail detection. It is used three label like job, enquiry and spam.
It achieve the following results on the evalution set:
- accuracy : 0.866
## model architecture for text classification :
<p align="center">
<!-- Smaller size image -->
<img src="https://huggingface.co/foduucom/Mail-spam-detection/resolve/main/text%20classification.jpeg" alt="Image" style="width:600px; height:400px;">
</p>
### Label for text classification:
- Enquiry
- Job
- Spam
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.01
- train_batch_size: 64
- step_size: 10
- optimizer: Adam
- lr_scheduler_type: StepLR
- lr_scheduler.StepLR:(optimizer,step_size=10,gamma=0.1)
- num_epochs: 10
### Framework versions
- Pytorch 2.0.1+cu118
- torchtext 0.15.2+cpu
```bibtex
@ModelCard{
author = {Nehul Agrawal and
Rahul parihar},
title = {Text classification},
year = {2023}
}
``` |
nbninh/0cd6a5e4-718d-4d4d-abf9-6a9644633b5a | nbninh | "2025-01-28T15:25:54Z" | 6 | 0 | peft | [
"peft",
"safetensors",
"mistral",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/zephyr-sft",
"base_model:adapter:unsloth/zephyr-sft",
"license:apache-2.0",
"8-bit",
"bitsandbytes",
"region:us"
] | null | "2025-01-28T14:37:49Z" | ---
library_name: peft
license: apache-2.0
base_model: unsloth/zephyr-sft
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 0cd6a5e4-718d-4d4d-abf9-6a9644633b5a
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/zephyr-sft
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- db4332ef7cc3f4f6_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/db4332ef7cc3f4f6_train_data.json
type:
field_input: userPrompt
field_instruction: systemPrompt
field_output: assistantResponse
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: nbninh/0cd6a5e4-718d-4d4d-abf9-6a9644633b5a
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/db4332ef7cc3f4f6_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: b1e626f7-8ff1-42f4-88cf-0a92875db36b
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: b1e626f7-8ff1-42f4-88cf-0a92875db36b
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 0cd6a5e4-718d-4d4d-abf9-6a9644633b5a
This model is a fine-tuned version of [unsloth/zephyr-sft](https://huggingface.co/unsloth/zephyr-sft) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1606
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.3615 | 0.0690 | 200 | 0.1606 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
Subsets and Splits