modelId (string, length 5 to 139) | author (string, length 2 to 42) | last_modified (timestamp[us, tz=UTC], 2020-02-15 11:33:14 to 2025-06-27 12:29:05) | downloads (int64, 0 to 223M) | likes (int64, 0 to 11.7k) | library_name (string, 500 classes) | tags (sequence, length 1 to 4.05k) | pipeline_tag (string, 54 classes) | createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 to 2025-06-27 12:27:55) | card (string, length 11 to 1.01M)
---|---|---|---|---|---|---|---|---|---|
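Each row below pairs one model card with the metadata columns above. As a rough sketch of querying such a table with 🤗 `datasets` (the repository id `your-org/hf-model-cards` is a placeholder, not the real dataset path):

```python
from datasets import load_dataset  # pip install datasets

# Placeholder repository id -- substitute the actual dataset path.
ds = load_dataset("your-org/hf-model-cards", split="train")

# Example query over the columns described above: popular GGUF models.
popular = ds.filter(lambda row: row["downloads"] > 1_000 and "gguf" in row["tags"])
print(popular[0]["modelId"], popular[0]["likes"])
```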
RichardErkhov/wesley7137_-_Mol-Llama-3.2-3B-Instruct-Uncensored-gguf | RichardErkhov | 2024-10-26T22:58:44Z | 179 | 0 | null | [
"gguf",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-10-26T21:43:21Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Mol-Llama-3.2-3B-Instruct-Uncensored - GGUF
- Model creator: https://huggingface.co/wesley7137/
- Original model: https://huggingface.co/wesley7137/Mol-Llama-3.2-3B-Instruct-Uncensored/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [Mol-Llama-3.2-3B-Instruct-Uncensored.Q2_K.gguf](https://huggingface.co/RichardErkhov/wesley7137_-_Mol-Llama-3.2-3B-Instruct-Uncensored-gguf/blob/main/Mol-Llama-3.2-3B-Instruct-Uncensored.Q2_K.gguf) | Q2_K | 1.27GB |
| [Mol-Llama-3.2-3B-Instruct-Uncensored.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/wesley7137_-_Mol-Llama-3.2-3B-Instruct-Uncensored-gguf/blob/main/Mol-Llama-3.2-3B-Instruct-Uncensored.Q3_K_S.gguf) | Q3_K_S | 1.44GB |
| [Mol-Llama-3.2-3B-Instruct-Uncensored.Q3_K.gguf](https://huggingface.co/RichardErkhov/wesley7137_-_Mol-Llama-3.2-3B-Instruct-Uncensored-gguf/blob/main/Mol-Llama-3.2-3B-Instruct-Uncensored.Q3_K.gguf) | Q3_K | 1.57GB |
| [Mol-Llama-3.2-3B-Instruct-Uncensored.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/wesley7137_-_Mol-Llama-3.2-3B-Instruct-Uncensored-gguf/blob/main/Mol-Llama-3.2-3B-Instruct-Uncensored.Q3_K_M.gguf) | Q3_K_M | 1.57GB |
| [Mol-Llama-3.2-3B-Instruct-Uncensored.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/wesley7137_-_Mol-Llama-3.2-3B-Instruct-Uncensored-gguf/blob/main/Mol-Llama-3.2-3B-Instruct-Uncensored.Q3_K_L.gguf) | Q3_K_L | 1.69GB |
| [Mol-Llama-3.2-3B-Instruct-Uncensored.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/wesley7137_-_Mol-Llama-3.2-3B-Instruct-Uncensored-gguf/blob/main/Mol-Llama-3.2-3B-Instruct-Uncensored.IQ4_XS.gguf) | IQ4_XS | 1.71GB |
| [Mol-Llama-3.2-3B-Instruct-Uncensored.Q4_0.gguf](https://huggingface.co/RichardErkhov/wesley7137_-_Mol-Llama-3.2-3B-Instruct-Uncensored-gguf/blob/main/Mol-Llama-3.2-3B-Instruct-Uncensored.Q4_0.gguf) | Q4_0 | 1.79GB |
| [Mol-Llama-3.2-3B-Instruct-Uncensored.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/wesley7137_-_Mol-Llama-3.2-3B-Instruct-Uncensored-gguf/blob/main/Mol-Llama-3.2-3B-Instruct-Uncensored.IQ4_NL.gguf) | IQ4_NL | 1.79GB |
| [Mol-Llama-3.2-3B-Instruct-Uncensored.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/wesley7137_-_Mol-Llama-3.2-3B-Instruct-Uncensored-gguf/blob/main/Mol-Llama-3.2-3B-Instruct-Uncensored.Q4_K_S.gguf) | Q4_K_S | 1.8GB |
| [Mol-Llama-3.2-3B-Instruct-Uncensored.Q4_K.gguf](https://huggingface.co/RichardErkhov/wesley7137_-_Mol-Llama-3.2-3B-Instruct-Uncensored-gguf/blob/main/Mol-Llama-3.2-3B-Instruct-Uncensored.Q4_K.gguf) | Q4_K | 1.88GB |
| [Mol-Llama-3.2-3B-Instruct-Uncensored.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/wesley7137_-_Mol-Llama-3.2-3B-Instruct-Uncensored-gguf/blob/main/Mol-Llama-3.2-3B-Instruct-Uncensored.Q4_K_M.gguf) | Q4_K_M | 1.88GB |
| [Mol-Llama-3.2-3B-Instruct-Uncensored.Q4_1.gguf](https://huggingface.co/RichardErkhov/wesley7137_-_Mol-Llama-3.2-3B-Instruct-Uncensored-gguf/blob/main/Mol-Llama-3.2-3B-Instruct-Uncensored.Q4_1.gguf) | Q4_1 | 1.95GB |
| [Mol-Llama-3.2-3B-Instruct-Uncensored.Q5_0.gguf](https://huggingface.co/RichardErkhov/wesley7137_-_Mol-Llama-3.2-3B-Instruct-Uncensored-gguf/blob/main/Mol-Llama-3.2-3B-Instruct-Uncensored.Q5_0.gguf) | Q5_0 | 2.11GB |
| [Mol-Llama-3.2-3B-Instruct-Uncensored.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/wesley7137_-_Mol-Llama-3.2-3B-Instruct-Uncensored-gguf/blob/main/Mol-Llama-3.2-3B-Instruct-Uncensored.Q5_K_S.gguf) | Q5_K_S | 2.11GB |
| [Mol-Llama-3.2-3B-Instruct-Uncensored.Q5_K.gguf](https://huggingface.co/RichardErkhov/wesley7137_-_Mol-Llama-3.2-3B-Instruct-Uncensored-gguf/blob/main/Mol-Llama-3.2-3B-Instruct-Uncensored.Q5_K.gguf) | Q5_K | 2.16GB |
| [Mol-Llama-3.2-3B-Instruct-Uncensored.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/wesley7137_-_Mol-Llama-3.2-3B-Instruct-Uncensored-gguf/blob/main/Mol-Llama-3.2-3B-Instruct-Uncensored.Q5_K_M.gguf) | Q5_K_M | 2.16GB |
| [Mol-Llama-3.2-3B-Instruct-Uncensored.Q5_1.gguf](https://huggingface.co/RichardErkhov/wesley7137_-_Mol-Llama-3.2-3B-Instruct-Uncensored-gguf/blob/main/Mol-Llama-3.2-3B-Instruct-Uncensored.Q5_1.gguf) | Q5_1 | 2.28GB |
| [Mol-Llama-3.2-3B-Instruct-Uncensored.Q6_K.gguf](https://huggingface.co/RichardErkhov/wesley7137_-_Mol-Llama-3.2-3B-Instruct-Uncensored-gguf/blob/main/Mol-Llama-3.2-3B-Instruct-Uncensored.Q6_K.gguf) | Q6_K | 2.46GB |
| [Mol-Llama-3.2-3B-Instruct-Uncensored.Q8_0.gguf](https://huggingface.co/RichardErkhov/wesley7137_-_Mol-Llama-3.2-3B-Instruct-Uncensored-gguf/blob/main/Mol-Llama-3.2-3B-Instruct-Uncensored.Q8_0.gguf) | Q8_0 | 3.19GB |
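A minimal usage sketch for the quants above, using `huggingface_hub` and `llama-cpp-python`; the Q4_K_M pick, context size, and prompt here are arbitrary illustration choices, not recommendations from the quantizer:

```python
from huggingface_hub import hf_hub_download  # pip install huggingface_hub llama-cpp-python
from llama_cpp import Llama

# Fetch one quant from the table above; Q4_K_M is a common size/quality trade-off.
path = hf_hub_download(
    repo_id="RichardErkhov/wesley7137_-_Mol-Llama-3.2-3B-Instruct-Uncensored-gguf",
    filename="Mol-Llama-3.2-3B-Instruct-Uncensored.Q4_K_M.gguf",
)

llm = Llama(model_path=path, n_ctx=4096)  # context size is an arbitrary choice
out = llm("Q: What is a SMILES string? A:", max_tokens=64)
print(out["choices"][0]["text"])
```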
Original model description:
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
numz/wav2lip_studio-0.2 | numz | 2024-10-26T22:50:52Z | 0 | 27 | null | [
"onnx",
"region:us"
] | null | 2024-02-09T16:31:47Z | # Wav2Lip STUDIO
## <div align="center"><b><a href="README.md">English</a> | <a href="README_CN.md">简体中文</a></b></div>
<img src="https://user-images.githubusercontent.com/800903/258130805-26d9732f-4d33-4c7e-974e-7af2f1261768.gif" width="100%">
https://user-images.githubusercontent.com/800903/262435301-af205a91-30d7-43f2-afcc-05980d581fe0.mp4
## Description
This repository contains the Wav2Lip Studio standalone version.
It's an all-in-one solution: just choose a video and a speech file (wav or mp3), and the tool will generate a lip-sync video, with face swap, voice cloning, and video translation with voice cloning (HeyGen-like).
It improves the quality of the lip-sync videos generated by the [Wav2Lip tool](https://github.com/Rudrabha/Wav2Lip) by applying specific post-processing techniques.


## Quick Index
* [Updates](#updates)
* [Requirements](#requirements-windows-linux-macos)
* [Installation](#installation)
* [Tutorial](#tutorial)
* [Usage](#usage)
* [Keyframes Manager](#keyframes-manager)
* [Input Video](#input-video)
* [Examples](#examples)
* [Behind the scenes](#behind-the-scenes)
* [Quality tips](#quality-tips)
* [Noted Constraints](#noted-constraints)
* [To do](#to-do)
* [Contributing](#contributing)
* [Appreciation](#appreciation)
* [Citation](#citation)
* [License](#license)
* [Support Wav2lip Studio](#support-wav2lip-studio)
## Updates
**2024.10.13 Avatars for driving video**
- Added 10 new avatars for the driving video; you can now choose an avatar before generating the driving video.
- Added an option to close the mouth (or not) before generating the lip-sync video.
- Easy Docker installation; follow the instructions below.
- Better macOS integration; follow the instructions below.
- In the ComfyUI panel, you can now regenerate the mask and keyframes after modifying your video, allowing a better mouth mask.
**2024.09.03 ComfyUI integration in Lip Sync Studio**
- Manage and chain your ComfyUI workflows from end to end.
**2024.08.07 Major Update (Standalone version only)**
- "Driving Video" feature: generate a driving video to get better lip sync.
**2024.05.06 Major Update (Standalone version only)**
- "Data Structure": I had to restructure the files to allow for better quality in the video output. The previous version did everything in RAM at the expense of video quality; each pass degraded the videos. For example, if you did a face swap + Wav2Lip, quality degraded because a first pass was created for Wav2Lip and a second for the face swap. You will now find a "data" directory in each project containing all the files necessary for the tool's work, maintaining quality across the different passes (quality above all).
- "Wav2Lip Video Outputs": after generating Wav2Lip videos, the videos are numbered in the output directory. Clicking on "video quality" loads the last video of the specified quality.
- "Zero Mouth": this feature should allow closing a person's mouth before proceeding with lip-syncing. Sometimes it doesn't have much effect or can add some flickering to the image, but I have had good results in some cases. Technically, this takes two passes to close the mouth; you will find the frames used by the tool in "data\zero".
- "Clone Voice": the interface has been revised.
- "High Quality vs Best Quality": in this version, there is not much difference between High and Best. Best is to be used with videos where faces are large on the screen, like in a 4K video for example. The process behind it just uses different GFPGAN models and a different face alignment.
- "Show Frame Number": in Low Quality only, the frame number appears in the top left corner. This helps to identify the frame where you want to make modifications.
- "Show Wav2Lip Output": this feature allows you to see the Wav2Lip output taking the input audio into account.
- "New Face Alignment": the face alignment has been reworked.
- "Zoom In, Zoom Out, Move Right, ...": now you will understand why the results are sometimes strange and generate deformed lips, broken teeth, or other very strange things. I recommend the video tutorial here: https://www.patreon.com/posts/key-feature-103716855
**2024.02.09 Speed Up Update (Standalone version only)**
- Clone voice: added controls to manage the voice clone (see Usage section)
- Translate video: added features to the translate panel to manage translation (see Usage section)
- Trim feature: added a feature to trim the video.
- Automatic mask: added a feature to automatically calculate the mask parameters (padding, dilate, ...). You can change the parameters if needed.
- Sped up processes: all processes are now faster, including analysis, face swap, and high-quality generation.
- Less disk space used: temporary files are removed after generation and only necessary data is kept, which greatly reduces the disk space used.
**2024.01.20 Major Update (Standalone version only)**
- Manage projects: added a feature to manage multiple projects
- Introduced multiple face swap: can now swap multiple faces in one shot (see Usage section)
- Visible face restriction: the whole process now runs even if no face is detected in a frame!
- Video size: works with high-resolution video input (tested with 1920x1080; should work with 4K, but slowly)
- Keyframes manager: added a keyframes manager for better control of the video generation
- Coqui TTS integration: removed the bark integration, use Coqui TTS instead (see Usage section)
- Conversation: added a conversation feature with multiple people (see Usage section)
- Record your own voice: added a feature to record your own voice (see Usage section)
- Clone voice: added a feature to clone a voice from a video (see Usage section)
- Translate video: added a feature to translate a video with voice cloning (see Usage section)
- Volume amplifier for Wav2Lip: added a feature to amplify the volume of the Wav2Lip output (see Usage section)
- Added a delay before speech starts
- Sped up the process
**2023.09.13**
- Introduced face swap: facefusion integration (see Usage section). **This feature is experimental.**
**2023.08.22**
- Introduced [bark](https://github.com/suno-ai/bark/) (see Usage section). **This feature is experimental.**
**2023.08.20**
- Introduced the GFPGAN model as an option.
- Added the ability to resume generation.
- Optimized to release memory post-generation.
**2023.08.17**
- Fixed purple lips bug
**2023.08.16**
- Added Wav2Lip and enhanced video output, with the option to download the one that's best for you, likely the "generated video".
- Updated user interface: introduced control over CodeFormer fidelity.
- Removed image as input; [SadTalker](https://github.com/OpenTalker/SadTalker) is better suited for this.
- Fixed a bug where a discrepancy between the input and output video incorrectly positioned the mask.
- Refined the quality process for greater efficiency.
- Interrupting will still produce a video if the process has already created frames
**2023.08.13**
- Sped up computation
- Changed user interface: added controls for hidden parameters
- Only track the mouth if needed
- Debug controls
- Fixed resize factor bug
# Installation
## Requirements (Windows, Linux, macOS)
1. FFmpeg: download it from the [official FFmpeg site](https://ffmpeg.org/download.html). Follow the instructions for your operating system; note that ffmpeg has to be accessible from the command line.
- Make sure ffmpeg is in your PATH environment variable; if not, add it.
2. pyannote.audio: you need to agree to share your contact information to access the pyannote models.
To do so, go to both links:
- [pyannote diarization-3.1 huggingface repository](https://huggingface.co/pyannote/speaker-diarization-3.1)
- [pyannote segmentation-3.0 huggingface repository](https://huggingface.co/pyannote/segmentation-3.0)
Fill in each field and click "Agree and access repository".

3. Create an access token on Hugging Face:
1. Log in with your account
2. Go to [access tokens](https://huggingface.co/settings/token) in settings
3. Create a new token in read mode
4. Copy the token
5. Paste it into the file api_keys.json:
```json
{
"huggingface_token": "your token"
}
```
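Optionally, you can sanity-check the token before launching (the app itself reads api_keys.json); a minimal sketch:

```python
import json
from huggingface_hub import HfApi  # pip install huggingface_hub

with open("api_keys.json") as f:
    token = json.load(f)["huggingface_token"]

# whoami() raises an error if the token is invalid.
print(HfApi(token=token).whoami()["name"])
```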
4. Install [Python 3.10.11](https://www.python.org/downloads/release/python-31011/) (macOS users: follow the instructions below)
5. Install [git](https://git-scm.com/downloads)
6. Check the ffmpeg, Python, CUDA and git installations
```bash
python --version
git --version
ffmpeg -version
nvcc --version (only if you have an Nvidia GPU and are not on macOS)
```
This should return something like
```bash
Python 3.10.11
git version 2.35.1.windows.2
ffmpeg version N-110509-g722ff74055-20230506 Copyright (c) 2000-2023 the FFmpeg developers built with gcc 12.2.0 (crosstool-NG 1.25.0.152_89671bf) bla bla bla...
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2022 NVIDIA Corporation
Built on Wed_Sep_21_10:41:10_Pacific_Daylight_Time_2022
Cuda compilation tools, release 11.8, V11.8.89
Build cuda_11.8.r11.8/compiler.31833905_0
```
## Linux Users
1. Make sure git-lfs is installed
```bash
sudo apt-get install git-lfs
```
## Windows Users
1. Install [CUDA 11.8](https://developer.nvidia.com/cuda-11-8-0-download-archive) if not already done.

2. Install [Visual Studio](https://visualstudio.microsoft.com/fr/downloads/). During the install, make sure to include the Python and C++ packages in the Visual Studio installer.


3. If you have multiple Python versions on your computer, edit launch.py and change the following line:
```bash
REM set PYTHON="your python.exe path"
```
to:
```bash
set PYTHON="your python.exe path"
```
4. Double-click on wav2lip-studio.bat; this will install the requirements and download the models.
## macOS Users
1. Install Python 3.9
```
brew update
brew install python@3.9
brew install ffmpeg
brew install git-lfs
git-lfs install
xcode-select --install
```
2. unzip the wav2lip-studio.zip in a folder
```
unzip wav2lip-studio.zip
```
3. Install the environment and requirements
```
cd /YourWav2lipStudioFolder
/opt/homebrew/bin/python3.9 -m venv venv
./venv/bin/python3.9 -m pip install inaSpeechSegmenter
./venv/bin/python3.9 -m pip install tyro==0.8.5 pykalman==0.9.7
./venv/bin/python3.9 -m pip install transformers==4.33.2
./venv/bin/python3.9 -m pip install spacy==3.7.4
./venv/bin/python3.9 -m pip install TTS==0.21.2
./venv/bin/python3.9 -m pip install gradio==4.14.0 imutils==0.5.4 moviepy websocket-client requests_toolbelt filetype numpy opencv-python==4.8.0.76 scipy==1.11.2 requests==2.28.1 pillow==9.3.0 librosa==0.10.0 opencv-contrib-python==4.8.0.76 huggingface_hub==0.20.2 tqdm==4.66.1 cutlet==0.3.0 numba==0.57.1 imageio_ffmpeg==0.4.9 insightface==0.7.3 unidic==1.1.0 onnx==1.14.1 onnxruntime==1.16.0 psutil==5.9.5 lpips==0.1.4 GitPython==3.1.36 facexlib==0.3.0 gfpgan==1.3.8 gdown==4.7.1 pyannote.audio==3.1.1 openai-whisper==20231117 resampy==0.4.0 scenedetect==0.6.2 uvicorn==0.23.2 starlette==0.35.1 fastapi==0.109.0
./venv/bin/python3.9 -m pip install torch==2.1.2 torchvision==0.16.2 torchaudio==2.1.2
./venv/bin/python3.9 -m pip install numpy==1.24.4
```
3.1. For Apple Silicon, one more step is needed
```
./venv/bin/python3.9 -m pip uninstall torch torchvision torchaudio
./venv/bin/python3.9 -m pip install --pre torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/nightly/cpu
sed -i '' 's/from torchvision.transforms.functional_tensor import rgb_to_grayscale/from torchvision.transforms.functional import rgb_to_grayscale/' venv/lib/python3.9/site-packages/basicsr/data/degradations.py
```
4. Install models
```
git clone https://huggingface.co/numz/wav2lip_studio-0.2 models
git clone https://huggingface.co/KwaiVGI/LivePortrait models/pretrained_weights
```
5. Launch UI
```
mkdir projects
export PYTORCH_ENABLE_MPS_FALLBACK=1
./venv/bin/python3.9 wav2lip_studio.py
```
# Tutorial
- [FR version](https://youtu.be/43Q8YASkcUA)
- [EN Version](https://youtu.be/B84A5alpPDc)
# Usage
## PARAMETERS
1. Enter a project name and press Enter.
2. Choose a video (avi or mp4 format). Note: avi files will not appear in the Video input, but the process will still work.
3. Face Swap (takes time, so be patient):
- **Face Swap**: choose the image of the face(s) you want to swap with the face(s) in the video (multiple faces are now supported); the leftmost face is id 0.
4. **Resolution Divide Factor**: the resolution of the video will be divided by this factor. The higher the factor, the faster the process, but the lower the resolution of the output video.
5. **Min Face Width Detection**: the minimum width of a face to detect. Allows ignoring small faces in the video.
6. **Align Faces**: straightens the head before sending it for Wav2Lip processing.
7. **Keyframes On Speaker Change**: generates a keyframe when the speaker changes, giving you better control over the video generation.
8. **Keyframes On Scene Change**: generates a keyframe when the scene changes, giving you better control over the video generation.
9. When the parameters above are set, click **Generate Keyframes**. See the [Keyframes Manager](#keyframes-manager) section for more details.
10. Audio, 3 options:
1. Put an audio file in the "Speech" input, or record one with the "Record" button.
2. Generate audio with the [Coqui TTS](https://github.com/coqui-ai/TTS) text-to-speech integration.
1. Choose the language
2. Choose the voice
3. Write your speech in the "Prompt" text area, in text format or JSON format:
1. Text format:
```text
Hello, my name is John. I am 25 years old.
```
2. JSON format (you can ask ChatGPT to generate a discussion for you):
```json
[
{
"start": 0.0,
"end": 3.0,
"text": "Hello, my name is John. I am 25 years old.",
"speaker": "arnold"
},
{
"start": 3.0,
"end": 4.0,
"text": "Ho really ?",
"speaker": "female_01"
},
...
]
```
3. Input Video: allows using the audio from the input video, voice cloning, and translation. See the [Input Video](#input-video) section for more details.
11. **Driving Video**: choose an avatar to generate a driving video.
- **Avatars**: choose between 10 avatars to use for the driving video; each one gives a different driving result in the lip-sync output video.
- **Close Mouth**: close the mouth of the avatar before generating the driving video.
- **Generate Driving Video**: generate the driving video.
12. **Video Quality**:
- **Low**: original Wav2Lip quality, fast but not very good.
- **Medium**: better quality by applying post-processing on the mouth, slower.
- **High**: better quality by applying post-processing and upscaling the mouth, slower.
13. **Wav2lip Checkpoint**: choose between 2 Wav2Lip models:
- **Wav2lip**: original Wav2Lip model, fast but not very good.
- **Wav2lip GAN**: better mouth quality via post-processing, slower.
14. **Face Restoration Model**: choose between 2 face restoration models:
- **Code Former**:
- A value of 0 offers higher quality but may significantly alter the person's facial appearance and cause noticeable flickering between frames.
- A value of 1 provides lower quality but maintains the person's face more consistently and reduces frame flickering.
- Using a value below 0.5 is not advised. Adjust this setting to achieve optimal results. Starting with a value of 0.75 is recommended.
- **GFPGAN**: usually better quality.
15. **Volume Amplifier**: does not amplify the volume of the output audio, but amplifies the volume of the audio sent to Wav2Lip. This gives you better control over lip movement.
## KEYFRAMES MANAGER

### Global parameters:
1. **Only Show Speaker Face**: focuses only on the speaker's face; the other faces will be hidden.
2. **Frame Number**: a slider that allows you to move between the frames of the video.
3. **Add Keyframe**: adds a keyframe at the current frame number.
4. **Remove Keyframe**: removes the keyframe at the current frame number.
5. **Keyframes**: a list of all the keyframes.
### For each face on a keyframe:
1. **Face Id**: list of all the faces in the current keyframe.
2. **Translation info**: if a translation is associated with the project, it is shown here; seeing the speaker helps you select the right speaker for this keyframe.
3. **Speaker**: checkbox to set the speaker on the current Face Id of the current keyframe.
4. **Face Swap Id**: checkbox to set the face swap id of the current keyframe on the current Face Id.
5. **Automatic Mask**: defaults to True; if False, you can draw the mask manually.
6. **Mouth Mask Dilate**: dilates the mouth mask to cover more area around the mouth. Depends on the mouth size.
7. **Face Mask Erode**: erodes the face mask to remove some area around the face. Depends on the face size.
8. **Mask Blur**: blurs the mask to make it smoother; try to keep it less than or equal to **Mouth Mask Dilate**.
9. **Padding sliders**: add padding to the head to avoid cutting it off in the video.
When you configure a keyframe, its influence extends until the next keyframe, so intermediate frames are generated with the same configuration (illustrated below).
Note that this configuration cannot be seen in the UI for intermediate frames.
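A hedged illustration of that rule (the frame numbers and settings are made up, and this is not the project's actual code): each frame inherits the configuration of the nearest keyframe at or before it.

```python
import bisect

# Hypothetical keyframe table: frame number -> configuration for that span.
keyframes = {0: {"dilate": 15}, 120: {"dilate": 25}, 300: {"dilate": 10}}
starts = sorted(keyframes)

def config_for(frame: int) -> dict:
    # Pick the greatest keyframe start that is <= frame.
    i = bisect.bisect_right(starts, frame) - 1
    return keyframes[starts[max(i, 0)]]

print(config_for(150))  # {'dilate': 25}, inherited from the keyframe at 120
```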
## Input Video

If there is no sound in the translated audio, the audio from the input video is used. This can be useful if the input video has bad lip sync.
### Clone Voices:
1. **Number Of Speakers**: the number of speakers in the video. Helps the cloning process know how many voices to clone.
2. **Remove Background Sound Before Clone**: removes noise/music from the background sound before cloning.
3. **Clone Voices**: clones voices from the input video.
4. **Voices**: list of the cloned voices. You can rename a voice to identify it in the translation.
For each voice you can:
- **Play**: listen to the voice.
- **Regen sentence**: regenerate the sentence sample.
- **Save voice**: save the voice to your voices library.
5. **Voices Files**: list of voice files used by the models to create the cloned voices. You can modify the voice files to change the cloned voices. Make sure each file contains only one voice, with no background sound and no music.
You can listen to the voice files by clicking on the play button, and change the speaker name to identify the voice.

### Translation:
The translation panel is linked to the cloned voices panel, because the translation tries to identify the speaker in order to translate the voice.

1. **Language**: target language for translating the input video.
2. **Whisper Model**: list of the Whisper models to use for the translation. Choose between 5 models; the larger the model, the better the quality but the slower the process.
3. **Translate**: translate the input video into the selected language.
4. **Translation**: the translated text.
5. **Translated Audio**: the translated audio.
6. **Convert To Audio**: convert the translated text to translated audio.
For each segment of the translated text, you can:
- Modify the translated text
- Modify the start and end time of the segment.
- Change the speaker of the segment.
- Listen to the original audio by clicking on the play button.
- Listen to the translated audio by clicking on the red ideogram button.
- Generate the translation for this segment by clicking on the recycle button.
- Delete the segment by clicking on the trash button.
- Add a new segment under this one by clicking on the arrow down button.
# Examples
https://user-images.githubusercontent.com/800903/262439441-bb9d888a-d33e-4246-9f0a-1ddeac062d35.mp4
https://user-images.githubusercontent.com/800903/262442794-61b1e32f-3f87-4b36-98d6-f711822bdb1e.mp4
https://user-images.githubusercontent.com/800903/262449305-901086a3-22cb-42d2-b5be-a5f38db4549a.mp4
https://user-images.githubusercontent.com/800903/267808494-300f8cc3-9136-4810-86e2-92f2114a5f9a.mp4
# Behind the scenes
This extension operates in several stages to improve the quality of Wav2Lip-generated videos:
1. **Generate face swap video**: the script first generates the face swap video if an image is set in the "Face Swap" field; this operation takes time, so be patient.
2. **Generate a Wav2lip video**: the script then generates a low-quality Wav2Lip video using the input video and audio.
3. **Video quality enhancement**: a high-quality video is created from the low-quality one using the enhancer defined by the user.
4. **Mask creation**: the script creates a mask around the mouth and tries to keep other facial motions, like those of the cheeks and chin.
5. **Video generation**: the script then takes the high-quality mouth image and overlays it onto the original image, guided by the mouth mask (see the sketch below).
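A hedged sketch of what stages 4 and 5 amount to (not the project's actual implementation): dilate the mouth mask, blur its edges, then alpha-blend the enhanced mouth over the original frame.

```python
import cv2
import numpy as np

def composite_mouth(original, enhanced, mouth_mask, dilate_px=15, blur_px=25):
    """Overlay the enhanced mouth region onto the original frame.

    original, enhanced: HxWx3 uint8 frames; mouth_mask: HxW uint8 (255 = mouth).
    """
    kernel = np.ones((dilate_px, dilate_px), np.uint8)
    mask = cv2.dilate(mouth_mask, kernel)      # cover more area around the mouth
    k = blur_px | 1                            # GaussianBlur needs an odd kernel size
    mask = cv2.GaussianBlur(mask, (k, k), 0)   # soften the seam
    alpha = mask.astype(np.float32)[..., None] / 255.0
    return (alpha * enhanced + (1.0 - alpha) * original).astype(np.uint8)
```

This is also why the quality tips below pair Mask Blur with Mouth Mask Dilate: the blur spreads the mask edge outward, so the dilation has to give it room.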
# Quality tips
- Use a high-quality video as input
- Use a video with a consistent frame rate. Occasionally, videos may exhibit unusual playback frame rates (not the standard 24, 25, 30, 60), which can lead to issues with the face mask.
- Use a high-quality audio file as input, without background noise or music. Clean the audio with a tool like [https://podcast.adobe.com/enhance](https://podcast.adobe.com/enhance).
- Dilate the mouth mask. This will help the model retain some facial motion and hide the original mouth.
- Keep Mask Blur at most twice the value of Mouth Mask Dilate. If you want to increase the blur, increase Mouth Mask Dilate as well; otherwise the mouth will be blurred and the underlying mouth could become visible.
- Upscaling can improve the result, particularly around the mouth area, but it will extend the processing duration. Use this tutorial from Olivio Sarikas to upscale your video: [https://www.youtube.com/watch?v=3z4MKUqFEUk](https://www.youtube.com/watch?v=3z4MKUqFEUk). Ensure the denoising strength is set between 0.0 and 0.05, select the 'revAnimated' model, and use batch mode. I'll create a tutorial for this soon.
# Noted Constraints
- To speed up the process, try to keep the resolution under 1000x1000 px and upscale after processing.
- If the initial phase is excessively lengthy, consider using the "resize factor" to decrease the video's dimensions.
- While there's no strict size limit for videos, larger videos will require more processing time. It's advisable to employ the "resize factor" to minimize the video size and then upscale the video once processing is complete.
# Known issues
If you have issues installing insightface, follow these steps:
- Download the [precompiled insightface wheel](https://github.com/Gourieff/Assets/raw/main/Insightface/insightface-0.7.3-cp310-cp310-win_amd64.whl) and paste it into the root folder of Wav2lip-studio
- In a terminal, go to the wav2lip-studio folder and type the following commands:
```
.\venv\Scripts\activate
python -m pip install -U pip
python -m pip install insightface-0.7.3-cp310-cp310-win_amd64.whl
```
Enjoy
# To do
- [x] Standalone version
- [x] Add a way to use a face swap image
- [x] Add the possibility to use a video for audio input
- [x] Convert avi to mp4 (avi is not shown in the video input, but the process works fine)
- [ ] ComfyUI integration
# Contributing
We welcome contributions to this project. When submitting pull requests, please provide a detailed description of the changes. See [CONTRIBUTING](CONTRIBUTING.md) for more information.
# Appreciation
- [Wav2Lip](https://github.com/Rudrabha/Wav2Lip)
- [CodeFormer](https://github.com/sczhou/CodeFormer)
- [Coqui TTS](https://github.com/coqui-ai/TTS)
- [facefusion](https://github.com/facefusion/facefusion)
- [Vocal Remover](https://github.com/tsurumeso/vocal-remover)
# Support Wav2lip Studio
This project is an open-source effort that is free to use and modify. I rely on the support of users to keep this project going and to help improve it. If you'd like to support me, you can make a donation on my Patreon page. Any contribution, large or small, is greatly appreciated!
Your support helps me cover the costs of development and maintenance, and allows me to allocate more time and resources to enhancing this project. Thank you for your support!
[patreon page](https://www.patreon.com/Wav2LipStudio)
# Citation
If you use this project in your own work, in articles, tutorials, or presentations, we encourage you to cite this project to acknowledge the efforts put into it.
To cite this project, please use the following BibTeX format:
```
@misc{wav2lip_uhq,
author = {numz},
title = {Wav2Lip UHQ},
year = {2023},
howpublished = {GitHub repository},
publisher = {numz},
url = {https://github.com/numz/sd-wav2lip-uhq}
}
```
# License
* The code in this repository is released under the MIT license as found in the [LICENSE file](LICENSE).
|
paulahugging/ModelAspect_B_AtenGastro | paulahugging | 2024-10-26T22:49:21Z | 105 | 0 | transformers | [
"transformers",
"safetensors",
"roberta",
"feature-extraction",
"arxiv:1910.09700",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | feature-extraction | 2024-10-26T22:48:42Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
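The card leaves this section blank. As a hedged sketch based only on this repo's `roberta` / `feature-extraction` tags (not instructions from the authors), extracting embeddings might look like:

```python
import torch
from transformers import AutoModel, AutoTokenizer

# Repo id taken from this card's metadata; architecture per its "roberta" tag.
name = "paulahugging/ModelAspect_B_AtenGastro"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModel.from_pretrained(name)

inputs = tokenizer("Example sentence for embedding.", return_tensors="pt")
with torch.no_grad():
    embeddings = model(**inputs).last_hidden_state  # (batch, tokens, hidden)
print(embeddings.shape)
```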
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
GitBag/rloo_ultrainteract_pair_lr_1e-7_555134_1729806466 | GitBag | 2024-10-26T22:46:59Z | 6 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-10-26T22:41:39Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
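The card leaves this section blank. As a hedged sketch based only on this repo's `llama` / `text-generation` tags (not instructions from the authors), generation might look like:

```python
from transformers import pipeline

# Repo id taken from this card's metadata; task per its "text-generation" tag.
generator = pipeline(
    "text-generation",
    model="GitBag/rloo_ultrainteract_pair_lr_1e-7_555134_1729806466",
)
out = generator("Solve step by step: what is 17 * 24?", max_new_tokens=64)
print(out[0]["generated_text"])
```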
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
async0x42/EVA-UNIT-01_EVA-Qwen2.5-32B-v0.0-exl2_4.0bpw | async0x42 | 2024-10-26T22:42:04Z | 8 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"conversational",
"dataset:anthracite-org/kalo-opus-instruct-22k-no-refusal",
"dataset:Nopm/Opus_WritingStruct",
"dataset:Gryphe/Sonnet3.5-SlimOrcaDedupCleaned",
"dataset:Gryphe/Sonnet3.5-Charcard-Roleplay",
"dataset:Gryphe/ChatGPT-4o-Writing-Prompts",
"dataset:Epiculous/Synthstruct-Gens-v1.1-Filtered-n-Cleaned",
"dataset:Epiculous/SynthRP-Gens-v1.1-Filtered-n-Cleaned",
"dataset:nothingiisreal/Reddit-Dirty-And-WritingPrompts",
"dataset:allura-org/Celeste-1.x-data-mixture",
"base_model:Qwen/Qwen2.5-32B",
"base_model:quantized:Qwen/Qwen2.5-32B",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"exl2",
"region:us"
] | text-generation | 2024-10-26T21:22:39Z | ---
library_name: transformers
license: apache-2.0
datasets:
- anthracite-org/kalo-opus-instruct-22k-no-refusal
- Nopm/Opus_WritingStruct
- Gryphe/Sonnet3.5-SlimOrcaDedupCleaned
- Gryphe/Sonnet3.5-Charcard-Roleplay
- Gryphe/ChatGPT-4o-Writing-Prompts
- Epiculous/Synthstruct-Gens-v1.1-Filtered-n-Cleaned
- Epiculous/SynthRP-Gens-v1.1-Filtered-n-Cleaned
- nothingiisreal/Reddit-Dirty-And-WritingPrompts
- allura-org/Celeste-1.x-data-mixture
base_model: Qwen/Qwen2.5-32B
tags:
- generated_from_trainer
model-index:
- name: EVA-Qwen2.5-32B-SFFT-v0.0
results: []
---
# EVA Qwen2.5-32B v0.0
<p>
An RP/storywriting specialist model, a full-parameter finetune of Qwen2.5-32B on a mixture of synthetic and natural data.<br>
It uses the Celeste 70B 0.1 data mixture, greatly expanded to improve the versatility, creativity and "flavor" of the resulting model.<br>
</p>
<p>Note: using quantized KV cache with Qwen2.5 <b>is not recommended</b> and can lead to degraded output quality. On the other hand, Qwen's KV cache is already light enough, so using f16 for it shouldn't be problematic.</p>
<p>
<p>Prompt format is ChatML.</p><br>
<h3>Recommended sampler values:</h3>
<ul>
<li>Temperature: 1</li>
<li>Typical-P: 0.9</li>
<li>Min-P: 0.05</li>
<li>Top-A: 0.2</li>
<li>Repetition Penalty: 1.03</li>
</ul>
<h3>Recommended SillyTavern presets (via CalamitousFelicitousness):</h3>
- [Context](https://huggingface.co/EVA-UNIT-01/EVA-Yi-1.5-9B-32K-V1/blob/main/%5BChatML%5D%20Roleplay-v1.9%20Context.json)
- [Instruct and System Prompt](https://huggingface.co/EVA-UNIT-01/EVA-Yi-1.5-9B-32K-V1/blob/main/%5BChatML%5D%20Roleplay-v1.9%20Instruct.json)
</p>
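For orientation, a hedged sketch of the ChatML layout and of the recommended sampler values above packed into a plain settings dict; parameter names vary by backend, and not every backend exposes Top-A:

```python
# ChatML prompt layout (the format stated above), built by hand for illustration.
def chatml(system: str, user: str) -> str:
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{user}<|im_end|>\n"
        "<|im_start|>assistant\n"
    )

# Recommended sampler values from above, for whatever backend you use.
samplers = {
    "temperature": 1.0,
    "typical_p": 0.9,
    "min_p": 0.05,
    "top_a": 0.2,               # not exposed by every backend
    "repetition_penalty": 1.03,
}

print(chatml("You are a creative storyteller.", "Open a mystery story in one line."))
```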
<p>
<br>
<h3>
Training data:
</h3>
<ul>
<li>Celeste 70B 0.1 data mixture minus Opus Instruct subset. See that model's <a href=https://huggingface.co/nothingiisreal/L3.1-70B-Celeste-V0.1-BF16>card</a> for details.</li>
<li>Kalomaze's Opus_Instruct_25k dataset, filtered for refusals.</li>
<li>A subset (1k rows) of ChatGPT-4o-WritingPrompts by Gryphe</li>
<li>A subset (2k rows) of Sonnet3.5-Charcards-Roleplay by Gryphe</li>
<li>Synthstruct and SynthRP datasets by Epiculous</li>
</ul>
<h3>
Training time and hardware:
</h3>
<ul><li>7 hours on 8xH100 SXM, provided by <a href=https://featherless.ai/>FeatherlessAI</a></li></ul><br>
</p>
<p>Model was trained by Kearm and Auri.</p>
<h4>Special thanks:</h4><ul>
<li><b>to <a href=https://featherless.ai/>FeatherlessAI</a> for generously providing 8xH100 SXM node for training of this model</b></li>
<li>to Gryphe, Lemmy, Kalomaze, Nopm and Epiculous for the data</li>
<li>and to Allura-org for support and feedback on EVA models.</li></ul>
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
base_model: Qwen/Qwen2.5-32B
load_in_8bit: false
load_in_4bit: false
strict: false
plugins:
- axolotl.integrations.liger.LigerPlugin
liger_rope: true
liger_rms_norm: true
liger_swiglu: true
liger_fused_linear_cross_entropy: true
# plugins:
# - axolotl.integrations.spectrum.SpectrumPlugin
# spectrum_top_fraction: 0.5
# # Optional if using a pre-scanned model as your base_model. Useful if using a model mirror
# spectrum_model_name: Qwen/Qwen2.5-32B
datasets:
- path: datasets/deduped_Synthstruct-Gens_processed_sharegpt_converted_cleaned.jsonl
type: sharegpt
- path: datasets/opus-instruct-22k-no_refusals-filtered.jsonl
type: sharegpt
- path: datasets/Celeste_Filtered.jsonl
type: sharegpt
- path: datasets/Gryphe-S3-5-Charcards-names-2k.jsonl
type: sharegpt
- path: datasets/deduped_SynthRP-Gens_processed_09-25-2024-ShareGPT_converted_cleaned.jsonl
type: sharegpt
- path: datasets/deduped_Gryphe-4o-WP-1k.jsonl
type: sharegpt
- path: datasets/deduped_not_samantha_norefusals.jsonl
type: sharegpt
chat_template: chatml
shuffle_merged_datasets: true
val_set_size: 0.001
output_dir: ./EVA-Qwen2.5-32B-SFFT-v0.0
sequence_len: 8192
sample_packing: true
eval_sample_packing: false
pad_to_sequence_len: true
# adapter: qlora
# lora_model_dir:
# lora_r: 64
# lora_alpha: 64
# lora_dropout: 0.05
# lora_target_linear: true
# peft_use_dora: true
unfrozen_parameters:
- ^lm_head.weight$
- ^model.embed_tokens.weight$
# input_layernorm layers
- model.layers.0.input_layernorm
- model.layers.1.input_layernorm
- model.layers.2.input_layernorm
- model.layers.3.input_layernorm
- model.layers.4.input_layernorm
- model.layers.5.input_layernorm
- model.layers.6.input_layernorm
- model.layers.7.input_layernorm
- model.layers.8.input_layernorm
- model.layers.9.input_layernorm
- model.layers.10.input_layernorm
- model.layers.11.input_layernorm
- model.layers.12.input_layernorm
- model.layers.13.input_layernorm
- model.layers.14.input_layernorm
- model.layers.15.input_layernorm
- model.layers.16.input_layernorm
- model.layers.17.input_layernorm
- model.layers.18.input_layernorm
- model.layers.19.input_layernorm
- model.layers.20.input_layernorm
- model.layers.21.input_layernorm
- model.layers.22.input_layernorm
- model.layers.23.input_layernorm
- model.layers.24.input_layernorm
- model.layers.25.input_layernorm
- model.layers.26.input_layernorm
- model.layers.27.input_layernorm
- model.layers.28.input_layernorm
- model.layers.29.input_layernorm
- model.layers.30.input_layernorm
- model.layers.31.input_layernorm
# lm_head layers
# mlp.down_proj layers
- model.layers.63.mlp.down_proj
- model.layers.49.mlp.down_proj
- model.layers.48.mlp.down_proj
- model.layers.45.mlp.down_proj
- model.layers.44.mlp.down_proj
- model.layers.47.mlp.down_proj
- model.layers.46.mlp.down_proj
- model.layers.43.mlp.down_proj
- model.layers.8.mlp.down_proj
- model.layers.11.mlp.down_proj
- model.layers.19.mlp.down_proj
- model.layers.35.mlp.down_proj
- model.layers.20.mlp.down_proj
- model.layers.52.mlp.down_proj
- model.layers.39.mlp.down_proj
- model.layers.62.mlp.down_proj
- model.layers.50.mlp.down_proj
- model.layers.29.mlp.down_proj
- model.layers.16.mlp.down_proj
- model.layers.28.mlp.down_proj
- model.layers.53.mlp.down_proj
- model.layers.30.mlp.down_proj
- model.layers.31.mlp.down_proj
- model.layers.32.mlp.down_proj
- model.layers.7.mlp.down_proj
- model.layers.36.mlp.down_proj
- model.layers.12.mlp.down_proj
- model.layers.18.mlp.down_proj
- model.layers.37.mlp.down_proj
- model.layers.38.mlp.down_proj
- model.layers.14.mlp.down_proj
- model.layers.13.mlp.down_proj
# mlp.gate_proj layers
- model.layers.43.mlp.gate_proj
- model.layers.61.mlp.gate_proj
- model.layers.60.mlp.gate_proj
- model.layers.44.mlp.gate_proj
- model.layers.62.mlp.gate_proj
- model.layers.28.mlp.gate_proj
- model.layers.29.mlp.gate_proj
- model.layers.45.mlp.gate_proj
- model.layers.37.mlp.gate_proj
- model.layers.35.mlp.gate_proj
- model.layers.59.mlp.gate_proj
- model.layers.36.mlp.gate_proj
- model.layers.30.mlp.gate_proj
- model.layers.48.mlp.gate_proj
- model.layers.38.mlp.gate_proj
- model.layers.27.mlp.gate_proj
- model.layers.31.mlp.gate_proj
- model.layers.39.mlp.gate_proj
- model.layers.34.mlp.gate_proj
- model.layers.58.mlp.gate_proj
- model.layers.33.mlp.gate_proj
- model.layers.26.mlp.gate_proj
- model.layers.32.mlp.gate_proj
- model.layers.46.mlp.gate_proj
- model.layers.42.mlp.gate_proj
- model.layers.49.mlp.gate_proj
- model.layers.57.mlp.gate_proj
- model.layers.50.mlp.gate_proj
- model.layers.47.mlp.gate_proj
- model.layers.56.mlp.gate_proj
- model.layers.63.mlp.gate_proj
- model.layers.55.mlp.gate_proj
# mlp.up_proj layers
- model.layers.61.mlp.up_proj
- model.layers.60.mlp.up_proj
- model.layers.32.mlp.up_proj
- model.layers.59.mlp.up_proj
- model.layers.58.mlp.up_proj
- model.layers.57.mlp.up_proj
- model.layers.44.mlp.up_proj
- model.layers.28.mlp.up_proj
- model.layers.35.mlp.up_proj
- model.layers.36.mlp.up_proj
- model.layers.31.mlp.up_proj
- model.layers.34.mlp.up_proj
- model.layers.55.mlp.up_proj
- model.layers.29.mlp.up_proj
- model.layers.49.mlp.up_proj
- model.layers.30.mlp.up_proj
- model.layers.53.mlp.up_proj
- model.layers.43.mlp.up_proj
- model.layers.56.mlp.up_proj
- model.layers.33.mlp.up_proj
- model.layers.54.mlp.up_proj
- model.layers.62.mlp.up_proj
- model.layers.27.mlp.up_proj
- model.layers.51.mlp.up_proj
- model.layers.52.mlp.up_proj
- model.layers.37.mlp.up_proj
- model.layers.45.mlp.up_proj
- model.layers.26.mlp.up_proj
- model.layers.42.mlp.up_proj
- model.layers.50.mlp.up_proj
- model.layers.48.mlp.up_proj
- model.layers.39.mlp.up_proj
# model.embed_tokens layers
# model.norm layers
# post_attention_layernorm layers
- model.layers.0.post_attention_layernorm
- model.layers.1.post_attention_layernorm
- model.layers.2.post_attention_layernorm
- model.layers.3.post_attention_layernorm
- model.layers.4.post_attention_layernorm
- model.layers.5.post_attention_layernorm
- model.layers.6.post_attention_layernorm
- model.layers.7.post_attention_layernorm
- model.layers.8.post_attention_layernorm
- model.layers.9.post_attention_layernorm
- model.layers.10.post_attention_layernorm
- model.layers.11.post_attention_layernorm
- model.layers.12.post_attention_layernorm
- model.layers.13.post_attention_layernorm
- model.layers.14.post_attention_layernorm
- model.layers.15.post_attention_layernorm
- model.layers.16.post_attention_layernorm
- model.layers.17.post_attention_layernorm
- model.layers.18.post_attention_layernorm
- model.layers.19.post_attention_layernorm
- model.layers.20.post_attention_layernorm
- model.layers.21.post_attention_layernorm
- model.layers.22.post_attention_layernorm
- model.layers.23.post_attention_layernorm
- model.layers.24.post_attention_layernorm
- model.layers.25.post_attention_layernorm
- model.layers.26.post_attention_layernorm
- model.layers.27.post_attention_layernorm
- model.layers.28.post_attention_layernorm
- model.layers.29.post_attention_layernorm
- model.layers.30.post_attention_layernorm
- model.layers.31.post_attention_layernorm
# self_attn.k_proj layers
- model.layers.63.self_attn.k_proj
- model.layers.55.self_attn.k_proj
- model.layers.60.self_attn.k_proj
- model.layers.7.self_attn.k_proj
- model.layers.12.self_attn.k_proj
- model.layers.13.self_attn.k_proj
- model.layers.57.self_attn.k_proj
- model.layers.29.self_attn.k_proj
- model.layers.14.self_attn.k_proj
- model.layers.51.self_attn.k_proj
- model.layers.53.self_attn.k_proj
- model.layers.54.self_attn.k_proj
- model.layers.22.self_attn.k_proj
- model.layers.61.self_attn.k_proj
- model.layers.18.self_attn.k_proj
- model.layers.30.self_attn.k_proj
- model.layers.9.self_attn.k_proj
- model.layers.24.self_attn.k_proj
- model.layers.23.self_attn.k_proj
- model.layers.25.self_attn.k_proj
- model.layers.10.self_attn.k_proj
- model.layers.58.self_attn.k_proj
- model.layers.56.self_attn.k_proj
- model.layers.15.self_attn.k_proj
- model.layers.32.self_attn.k_proj
- model.layers.28.self_attn.k_proj
- model.layers.8.self_attn.k_proj
- model.layers.59.self_attn.k_proj
- model.layers.11.self_attn.k_proj
- model.layers.48.self_attn.k_proj
- model.layers.16.self_attn.k_proj
- model.layers.50.self_attn.k_proj
# self_attn.o_proj layers
- model.layers.15.self_attn.o_proj
- model.layers.23.self_attn.o_proj
- model.layers.31.self_attn.o_proj
- model.layers.30.self_attn.o_proj
- model.layers.18.self_attn.o_proj
- model.layers.24.self_attn.o_proj
- model.layers.17.self_attn.o_proj
- model.layers.28.self_attn.o_proj
- model.layers.34.self_attn.o_proj
- model.layers.33.self_attn.o_proj
- model.layers.25.self_attn.o_proj
- model.layers.12.self_attn.o_proj
- model.layers.14.self_attn.o_proj
- model.layers.29.self_attn.o_proj
- model.layers.16.self_attn.o_proj
- model.layers.26.self_attn.o_proj
- model.layers.22.self_attn.o_proj
- model.layers.27.self_attn.o_proj
- model.layers.35.self_attn.o_proj
- model.layers.20.self_attn.o_proj
- model.layers.13.self_attn.o_proj
- model.layers.36.self_attn.o_proj
- model.layers.19.self_attn.o_proj
- model.layers.37.self_attn.o_proj
- model.layers.21.self_attn.o_proj
- model.layers.11.self_attn.o_proj
- model.layers.54.self_attn.o_proj
- model.layers.5.self_attn.o_proj
- model.layers.38.self_attn.o_proj
- model.layers.6.self_attn.o_proj
- model.layers.8.self_attn.o_proj
- model.layers.9.self_attn.o_proj
# self_attn.q_proj layers
- model.layers.1.self_attn.q_proj
- model.layers.2.self_attn.q_proj
- model.layers.3.self_attn.q_proj
- model.layers.45.self_attn.q_proj
- model.layers.54.self_attn.q_proj
- model.layers.35.self_attn.q_proj
- model.layers.48.self_attn.q_proj
- model.layers.61.self_attn.q_proj
- model.layers.52.self_attn.q_proj
- model.layers.50.self_attn.q_proj
- model.layers.60.self_attn.q_proj
- model.layers.56.self_attn.q_proj
- model.layers.58.self_attn.q_proj
- model.layers.42.self_attn.q_proj
- model.layers.59.self_attn.q_proj
- model.layers.44.self_attn.q_proj
- model.layers.55.self_attn.q_proj
- model.layers.57.self_attn.q_proj
- model.layers.41.self_attn.q_proj
- model.layers.36.self_attn.q_proj
- model.layers.39.self_attn.q_proj
- model.layers.4.self_attn.q_proj
- model.layers.43.self_attn.q_proj
- model.layers.34.self_attn.q_proj
- model.layers.46.self_attn.q_proj
- model.layers.49.self_attn.q_proj
- model.layers.40.self_attn.q_proj
- model.layers.25.self_attn.q_proj
- model.layers.51.self_attn.q_proj
- model.layers.17.self_attn.q_proj
- model.layers.37.self_attn.q_proj
- model.layers.53.self_attn.q_proj
# self_attn.v_proj layers
- model.layers.55.self_attn.v_proj
- model.layers.31.self_attn.v_proj
- model.layers.47.self_attn.v_proj
- model.layers.45.self_attn.v_proj
- model.layers.49.self_attn.v_proj
- model.layers.48.self_attn.v_proj
- model.layers.15.self_attn.v_proj
- model.layers.30.self_attn.v_proj
- model.layers.7.self_attn.v_proj
- model.layers.44.self_attn.v_proj
- model.layers.29.self_attn.v_proj
- model.layers.51.self_attn.v_proj
- model.layers.50.self_attn.v_proj
- model.layers.14.self_attn.v_proj
- model.layers.54.self_attn.v_proj
- model.layers.32.self_attn.v_proj
- model.layers.43.self_attn.v_proj
- model.layers.10.self_attn.v_proj
- model.layers.46.self_attn.v_proj
- model.layers.38.self_attn.v_proj
- model.layers.57.self_attn.v_proj
- model.layers.22.self_attn.v_proj
- model.layers.39.self_attn.v_proj
- model.layers.6.self_attn.v_proj
- model.layers.23.self_attn.v_proj
- model.layers.58.self_attn.v_proj
- model.layers.53.self_attn.v_proj
- model.layers.40.self_attn.v_proj
- model.layers.24.self_attn.v_proj
- model.layers.9.self_attn.v_proj
- model.layers.25.self_attn.v_proj
- model.layers.5.self_attn.v_proj
wandb_project: EVA-Qwen2.5-32B-SFFT-v0.0
wandb_entity:
wandb_watch:
wandb_name: Unit-00
wandb_log_model:
gradient_accumulation_steps: 8
micro_batch_size: 1
num_epochs: 3
optimizer: paged_adamw_8bit
lr_scheduler: cosine
learning_rate: 0.00003
max_grad_norm: 3
train_on_inputs: false
group_by_length: false
bf16: auto
fp16:
tf32: true
gradient_checkpointing: "unsloth"
# gradient_checkpointing_kwargs:
# use_reentrant: true
early_stopping_patience:
resume_from_checkpoint:
local_rank:
logging_steps: 1
xformers_attention:
flash_attention: true
warmup_steps: 20
evals_per_epoch: 4
saves_per_epoch: 2
save_safetensors: true
hub_model_id:
hub_strategy:
debug:
deepspeed: deepspeed_configs/zero3_bf16.json
weight_decay: 0.1
# fsdp:
# - full_shard
# - auto_wrap
# fsdp_config:
# fsdp_limit_all_gathers: true
# fsdp_sync_module_states: true
# fsdp_offload_params: false # Changed from true
# fsdp_use_orig_params: true # Changed from false
# fsdp_cpu_ram_efficient_loading: true
# fsdp_auto_wrap_policy: TRANSFORMER_BASED_WRAP
# fsdp_transformer_layer_cls_to_wrap: Qwen2DecoderLayer
# fsdp_activation_checkpointing: true
# fsdp_state_dict_type: SHARDED_STATE_DICT # Changed from FULL_STATE_DICT
# fsdp_sharding_strategy: FULL_SHARD
# fsdp_forward_prefetch: true # Added
# fsdp_backward_prefetch: "BACKWARD_POST" # Added
# fsdp_backward_prefetch_limit: 1 # Added
# fsdp_mixed_precision: BF16 # Added
```
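For reference, the effective global batch size implied by this configuration is micro_batch_size × gradient_accumulation_steps × world size. A quick sanity check (the world size is an assumption, not stated in the config):

```python
# Sanity check of the effective global batch size implied by the config above.
# world_size is an assumption (not stated in the config); adjust to your setup.
micro_batch_size = 1
gradient_accumulation_steps = 8
world_size = 8  # assumed number of GPUs

effective_batch_size = micro_batch_size * gradient_accumulation_steps * world_size
print(effective_batch_size)  # 64 under these assumptions
```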
</details><br>
|
GitBag/rloo_ultrainteract_pair_lr_1e-6_555134_1729842073 | GitBag | 2024-10-26T22:41:36Z | 5 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-10-26T22:36:23Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
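Pending an official snippet, a minimal sketch assuming standard 🤗 transformers chat usage (the prompt, token budget, and CUDA device are illustrative assumptions, not documented by this card):

```python
from transformers import pipeline

# Hypothetical usage; mirrors common chat-model invocation, not an official example.
generator = pipeline(
    "text-generation",
    model="GitBag/rloo_ultrainteract_pair_lr_1e-6_555134_1729842073",
    device="cuda",  # assumes a CUDA device is available
)
messages = [{"role": "user", "content": "Explain RLOO in one sentence."}]
output = generator(messages, max_new_tokens=64, return_full_text=False)
print(output[0]["generated_text"])
```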
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
GitBag/rloo_6_lr_3e-7_555134_1729925694 | GitBag | 2024-10-26T22:36:21Z | 6 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-10-26T22:31:07Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
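No snippet is documented yet; the following is a minimal sketch under the assumption that this is a standard 🤗 transformers chat model (prompt and generation settings are illustrative):

```python
from transformers import pipeline

# Hypothetical usage, not an official example from the authors.
generator = pipeline(
    "text-generation",
    model="GitBag/rloo_6_lr_3e-7_555134_1729925694",
    device="cuda",  # assumes a CUDA device is available
)
messages = [{"role": "user", "content": "What is direct preference optimization?"}]
output = generator(messages, max_new_tokens=64, return_full_text=False)
print(output[0]["generated_text"])
```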
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
paulahugging/ModelAspect_A_AmbSlot | paulahugging | 2024-10-26T22:33:31Z | 105 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"feature-extraction",
"arxiv:1910.09700",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | feature-extraction | 2024-10-26T22:32:57Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
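No usage example is documented; a minimal feature-extraction sketch under the assumption that this is a standard BERT encoder (the input sentence and mean pooling are illustrative choices):

```python
import torch
from transformers import AutoModel, AutoTokenizer

repo = "paulahugging/ModelAspect_A_AmbSlot"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModel.from_pretrained(repo)

inputs = tokenizer("Example sentence to embed.", return_tensors="pt")
with torch.no_grad():
    hidden = model(**inputs).last_hidden_state  # (batch, tokens, dim)
embedding = hidden.mean(dim=1)  # mean-pooled sentence embedding (illustrative)
print(embedding.shape)
```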
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
marroyo777/pip-SQL-1B-Q4_K_M-GGUF | marroyo777 | 2024-10-26T22:30:29Z | 5 | 0 | null | [
"gguf",
"code",
"sql",
"text2sql",
"instruction_tuned",
"jax",
"pytorch",
"1b",
"expert",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"en",
"dataset:PipableAI/spider-bird",
"base_model:PipableAI/pip-SQL-1B",
"base_model:quantized:PipableAI/pip-SQL-1B",
"license:mit",
"endpoints_compatible",
"region:us",
"imatrix"
] | text-generation | 2024-10-26T22:30:23Z | ---
license: mit
language:
- en
metrics:
- accuracy
pipeline_tag: text-generation
widget:
- text: <schema>CREATE TABLE radio(age VARCHAR, radio_id VARCHAR, frequency VARCHAR,
wavelength VARCHAR); CREATE TABLE radio_faults(radio_id VARCHAR, fault_description
VARCHAR)</schema><question>Get the radio id and defect descriptions of radios
that have wavelength greater than 30 ?</question><sql>
example_title: example1
- text: '<schema>CREATE TABLE system(JobID: String,GID: String, UID: String, Start:Time(yyyy/mm/dd),
End: Time,ElapsedRaw: Time, CPUTimeRAW: Time,NCPUS: Number,NNodes: Number, NodeList:
List, State:String, Timelimit: Time);</schema><question>Get UID and job id for
Jobs that started on Jan 20 , 2023</question><sql>'
example_title: example2
- text: <schema>CREATE TABLE department (Department_ID number, Name text, Creation
text, Ranking number, Budget_in_Billions number, Num_Employees number) which has
Department_ID as primary key and CREATE TABLE head (head_ID number, name text,
born_state text, age number) which has head_ID as primary key and CREATE TABLE
management (department_ID number, head_ID number, temporary_acting text) which
has department_ID as primary key</schema><question>
example_title: example3
tags:
- code
- sql
- text2sql
- instruction_tuned
- jax
- pytorch
- 1b
- expert
- llama-cpp
- gguf-my-repo
datasets:
- PipableAI/spider-bird
base_model: PipableAI/pip-SQL-1B
---
# marroyo777/pip-SQL-1B-Q4_K_M-GGUF
This model was converted to GGUF format from [`PipableAI/pip-SQL-1B`](https://huggingface.co/PipableAI/pip-SQL-1B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/PipableAI/pip-SQL-1B) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo marroyo777/pip-SQL-1B-Q4_K_M-GGUF --hf-file pip-sql-1b-q4_k_m-imat.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo marroyo777/pip-SQL-1B-Q4_K_M-GGUF --hf-file pip-sql-1b-q4_k_m-imat.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (for example, `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo marroyo777/pip-SQL-1B-Q4_K_M-GGUF --hf-file pip-sql-1b-q4_k_m-imat.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo marroyo777/pip-SQL-1B-Q4_K_M-GGUF --hf-file pip-sql-1b-q4_k_m-imat.gguf -c 2048
```
|
RichardErkhov/RyanYr_-_self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter5_lr1e-7-gguf | RichardErkhov | 2024-10-26T22:29:50Z | 68 | 0 | null | [
"gguf",
"arxiv:2305.18290",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-10-26T21:08:16Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter5_lr1e-7 - GGUF
- Model creator: https://huggingface.co/RyanYr/
- Original model: https://huggingface.co/RyanYr/self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter5_lr1e-7/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter5_lr1e-7.Q2_K.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter5_lr1e-7-gguf/blob/main/self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter5_lr1e-7.Q2_K.gguf) | Q2_K | 1.39GB |
| [self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter5_lr1e-7.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter5_lr1e-7-gguf/blob/main/self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter5_lr1e-7.Q3_K_S.gguf) | Q3_K_S | 1.59GB |
| [self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter5_lr1e-7.Q3_K.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter5_lr1e-7-gguf/blob/main/self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter5_lr1e-7.Q3_K.gguf) | Q3_K | 1.73GB |
| [self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter5_lr1e-7.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter5_lr1e-7-gguf/blob/main/self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter5_lr1e-7.Q3_K_M.gguf) | Q3_K_M | 1.73GB |
| [self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter5_lr1e-7.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter5_lr1e-7-gguf/blob/main/self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter5_lr1e-7.Q3_K_L.gguf) | Q3_K_L | 1.85GB |
| [self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter5_lr1e-7.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter5_lr1e-7-gguf/blob/main/self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter5_lr1e-7.IQ4_XS.gguf) | IQ4_XS | 1.91GB |
| [self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter5_lr1e-7.Q4_0.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter5_lr1e-7-gguf/blob/main/self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter5_lr1e-7.Q4_0.gguf) | Q4_0 | 1.99GB |
| [self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter5_lr1e-7.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter5_lr1e-7-gguf/blob/main/self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter5_lr1e-7.IQ4_NL.gguf) | IQ4_NL | 2.0GB |
| [self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter5_lr1e-7.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter5_lr1e-7-gguf/blob/main/self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter5_lr1e-7.Q4_K_S.gguf) | Q4_K_S | 2.0GB |
| [self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter5_lr1e-7.Q4_K.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter5_lr1e-7-gguf/blob/main/self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter5_lr1e-7.Q4_K.gguf) | Q4_K | 2.09GB |
| [self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter5_lr1e-7.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter5_lr1e-7-gguf/blob/main/self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter5_lr1e-7.Q4_K_M.gguf) | Q4_K_M | 2.09GB |
| [self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter5_lr1e-7.Q4_1.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter5_lr1e-7-gguf/blob/main/self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter5_lr1e-7.Q4_1.gguf) | Q4_1 | 2.18GB |
| [self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter5_lr1e-7.Q5_0.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter5_lr1e-7-gguf/blob/main/self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter5_lr1e-7.Q5_0.gguf) | Q5_0 | 2.37GB |
| [self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter5_lr1e-7.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter5_lr1e-7-gguf/blob/main/self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter5_lr1e-7.Q5_K_S.gguf) | Q5_K_S | 2.37GB |
| [self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter5_lr1e-7.Q5_K.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter5_lr1e-7-gguf/blob/main/self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter5_lr1e-7.Q5_K.gguf) | Q5_K | 2.41GB |
| [self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter5_lr1e-7.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter5_lr1e-7-gguf/blob/main/self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter5_lr1e-7.Q5_K_M.gguf) | Q5_K_M | 2.41GB |
| [self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter5_lr1e-7.Q5_1.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter5_lr1e-7-gguf/blob/main/self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter5_lr1e-7.Q5_1.gguf) | Q5_1 | 2.55GB |
| [self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter5_lr1e-7.Q6_K.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter5_lr1e-7-gguf/blob/main/self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter5_lr1e-7.Q6_K.gguf) | Q6_K | 2.76GB |
| [self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter5_lr1e-7.Q8_0.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter5_lr1e-7-gguf/blob/main/self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter5_lr1e-7.Q8_0.gguf) | Q8_0 | 3.58GB |
Original model description:
---
base_model: RyanYr/self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter4
library_name: transformers
model_name: self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter5_lr1e-7
tags:
- generated_from_trainer
- trl
- dpo
licence: license
---
# Model Card for self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter5_lr1e-7
This model is a fine-tuned version of [RyanYr/self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter4](https://huggingface.co/RyanYr/self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter4).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="RyanYr/self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter5_lr1e-7", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/yyr/huggingface/runs/g691k3ec)
This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290).
### Framework versions
- TRL: 0.12.0.dev0
- Transformers: 4.45.2
- Pytorch: 2.4.0
- Datasets: 3.0.1
- Tokenizers: 0.20.1
## Citations
Cite DPO as:
```bibtex
@inproceedings{rafailov2023direct,
title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}},
author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn},
year = 2023,
booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023},
url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html},
editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
GitBag/rloo_6_lr_1e-7_555134_1729920298 | GitBag | 2024-10-26T22:25:45Z | 5 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-10-26T22:20:17Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
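The authors have not provided a snippet; a minimal sketch assuming standard 🤗 transformers chat usage (prompt and settings are illustrative assumptions):

```python
from transformers import pipeline

# Hypothetical usage, not an official example from the authors.
generator = pipeline(
    "text-generation",
    model="GitBag/rloo_6_lr_1e-7_555134_1729920298",
    device="cuda",  # assumes a CUDA device is available
)
messages = [{"role": "user", "content": "Give one tip for writing clear code."}]
output = generator(messages, max_new_tokens=64, return_full_text=False)
print(output[0]["generated_text"])
```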
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
paulahugging/ModelAspect_A_AmbGastro | paulahugging | 2024-10-26T22:23:40Z | 105 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"feature-extraction",
"arxiv:1910.09700",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | feature-extraction | 2024-10-26T22:23:04Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
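No example is documented; a minimal embedding sketch, assuming a standard BERT encoder (input text and pooling strategy are illustrative):

```python
import torch
from transformers import AutoModel, AutoTokenizer

repo = "paulahugging/ModelAspect_A_AmbGastro"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModel.from_pretrained(repo)

inputs = tokenizer("Another example sentence.", return_tensors="pt")
with torch.no_grad():
    hidden = model(**inputs).last_hidden_state  # (batch, tokens, dim)
embedding = hidden.mean(dim=1)  # mean-pooled sentence embedding (illustrative)
print(embedding.shape)
```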
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
ziadmostafa/EfficientNetB0-for-Plants-Diseases-Classification | ziadmostafa | 2024-10-26T22:18:05Z | 6 | 0 | keras | [
"keras",
"license:apache-2.0",
"region:us"
] | null | 2024-10-26T16:22:34Z | ---
license: apache-2.0
---
|
GitBag/rloo_5_lr_3e-6_555134_1729893297 | GitBag | 2024-10-26T22:08:38Z | 5 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-10-26T22:02:35Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
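Pending the authors' example, a minimal sketch assuming standard 🤗 transformers chat usage (all generation settings are illustrative):

```python
from transformers import pipeline

# Hypothetical usage, not an official example from the authors.
generator = pipeline(
    "text-generation",
    model="GitBag/rloo_5_lr_3e-6_555134_1729893297",
    device="cuda",  # assumes a CUDA device is available
)
messages = [{"role": "user", "content": "What does a learning-rate scheduler do?"}]
output = generator(messages, max_new_tokens=64, return_full_text=False)
print(output[0]["generated_text"])
```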
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
GitBag/rloo_5_lr_1e-6_555134_1729887894 | GitBag | 2024-10-26T21:57:13Z | 5 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-10-26T21:51:57Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
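No official snippet is provided; a minimal sketch under the assumption of standard 🤗 transformers chat usage (prompt and token budget are illustrative):

```python
from transformers import pipeline

# Hypothetical usage, not an official example from the authors.
generator = pipeline(
    "text-generation",
    model="GitBag/rloo_5_lr_1e-6_555134_1729887894",
    device="cuda",  # assumes a CUDA device is available
)
messages = [{"role": "user", "content": "Name one use of reinforcement learning."}]
output = generator(messages, max_new_tokens=64, return_full_text=False)
print(output[0]["generated_text"])
```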
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Whalejay/bert-sliding-window_epoch_6 | Whalejay | 2024-10-26T21:54:47Z | 7 | 0 | null | [
"safetensors",
"distilbert",
"question-answering",
"pytorch",
"bert",
"squad",
"en",
"dataset:squad",
"license:mit",
"model-index",
"region:us"
] | question-answering | 2024-10-26T21:54:18Z | ---
language: en
tags:
- question-answering
- pytorch
- bert
- squad
license: mit
datasets:
- squad
pipeline_tag: question-answering
model-index:
- name: bert-sliding-window_epoch_6
results:
- task:
type: question-answering
name: Question Answering
metrics:
- type: exact_match
value: N/A # You can update this with actual metrics if available
name: Exact Match
- type: f1
value: N/A # You can update this with actual metrics if available
name: F1
dataset:
name: SQuAD
type: squad
config: plain_text # Adding the config field
split: validation # Adding the split field
---
# bert-sliding-window_epoch_6
## Model description
This is a fine-tuned version of [DistilBERT](https://huggingface.co/distilbert-base-cased-distilled-squad) for question answering tasks. The model was trained on the SQuAD dataset.
## Training procedure
The model was trained with the following hyperparameters:
- Learning Rate: 1e-05
- Batch Size: 32
- Epochs: 10
- Weight Decay: 0.01
## Intended uses & limitations
This model is intended to be used for question answering tasks, particularly on SQuAD-like datasets. It performs best on factual questions where the answer can be found as a span of text within the given context.
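A minimal inference sketch with the Transformers `pipeline` API (the repo id is taken from this card's title; the question and context are illustrative):

```python
from transformers import pipeline

# Load the checkpoint as a standard extractive question-answering model.
qa = pipeline("question-answering", model="Whalejay/bert-sliding-window_epoch_6")

result = qa(
    question="Where is the Eiffel Tower located?",
    context="The Eiffel Tower is a wrought-iron lattice tower in Paris, France.",
)
# The pipeline returns the answer span plus a confidence score.
print(result["answer"], result["score"])
```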
## Training Details
### Training Data
The model was trained on the SQuAD dataset, which consists of questions posed by crowdworkers on a set of Wikipedia articles.
### Training Hyperparameters
The model was trained with the following hyperparameters:
* learning_rate: 1e-05
* batch_size: 32
* num_epochs: 10
* weight_decay: 0.01
## Uses
This model can be used for:
- Extracting answers from text passages given questions
- Question answering tasks
- Reading comprehension tasks
## Limitations
- The model can only extract answers that are directly present in the given context
- Performance may vary on out-of-domain texts
- The model may struggle with complex reasoning questions
## Additional Information
- Model type: DistilBERT
- Language: English
- License: MIT
- Framework: PyTorch |
paulahugging/ModelAspect_A_PrecioGastro | paulahugging | 2024-10-26T21:53:15Z | 106 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"feature-extraction",
"arxiv:1910.09700",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | feature-extraction | 2024-10-26T21:52:45Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
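Pending an official snippet from the authors, a generic sketch inferred from the repo tags (`bert`, `feature-extraction`) might look like the following; the example sentence and the mean-pooling step are assumptions, not documented behavior:

```python
import torch
from transformers import AutoTokenizer, AutoModel

# Repo id taken from this card's title.
repo_id = "paulahugging/ModelAspect_A_PrecioGastro"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModel.from_pretrained(repo_id)

inputs = tokenizer("Una reseña de ejemplo.", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Mean-pool the last hidden state into a single sentence embedding.
embedding = outputs.last_hidden_state.mean(dim=1)
print(embedding.shape)
```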
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
linkmarine007/pingu-v1 | linkmarine007 | 2024-10-26T21:52:47Z | 31 | 0 | diffusers | [
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2024-10-26T21:46:57Z | ---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: $noot, $noot_claymation, penguin
output:
url: images/1729975973994__000030000_255.png
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: $noot, $noot_claymation, penguin
license: other
license_name: black-forest-labs-non-commercial-license
license_link: >-
https://github.com/black-forest-labs/flux/blob/main/model_licenses/LICENSE-FLUX1-dev
---
# pingu-v1
<Gallery />
## Model description
Pingu claymation LoRA. Pingu is a Swiss-German animated children's claymation television series co-created by Otmar Gutmann and Erika Brueggemann that first aired in Switzerland from 1990 to 2000. Mattel Television Studios, the TV arm of toy conglomerate Mattel, acquired the rights to Pingu in 2011 when it purchased HIT Entertainment. This LoRA is provided purely for non-commercial Pingu fan art to relive the nostalgia of the original claymation Pingu <3
## Trigger words
You should use `$noot` to trigger the image generation.
You should use `$noot_claymation` to trigger the image generation.
You should use `penguin` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/linkmarine007/pingu-v1/tree/main) them in the Files & versions tab.
|
zixianma/mma_mantis_clip_293k-toolp-seq_length_8192-lr_1e-5-global_bs_128-epoch_1-fully_finetune-filtered | zixianma | 2024-10-26T21:45:17Z | 14 | 0 | transformers | [
"transformers",
"safetensors",
"llava",
"image-text-to-text",
"generated_from_trainer",
"conversational",
"base_model:TIGER-Lab/Mantis-8B-clip-llama3-pretraind",
"base_model:finetune:TIGER-Lab/Mantis-8B-clip-llama3-pretraind",
"license:llama3",
"endpoints_compatible",
"region:us"
] | image-text-to-text | 2024-10-25T22:41:19Z | ---
library_name: transformers
license: llama3
base_model: TIGER-Lab/Mantis-8B-clip-llama3-pretraind
tags:
- generated_from_trainer
model-index:
- name: mma_mantis_clip_293k-toolp-seq_length_8192-lr_1e-5-global_bs_128-epoch_1-fully_finetune-filtered
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mma_mantis_clip_293k-toolp-seq_length_8192-lr_1e-5-global_bs_128-epoch_1-fully_finetune-filtered
This model is a fine-tuned version of [TIGER-Lab/Mantis-8B-clip-llama3-pretraind](https://huggingface.co/TIGER-Lab/Mantis-8B-clip-llama3-pretraind) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 16
- total_train_batch_size: 128
- total_eval_batch_size: 8
- optimizer: adamw_torch with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 1.0
### Training results
### Framework versions
- Transformers 4.46.0
- Pytorch 2.4.0+cu121
- Datasets 2.18.0
- Tokenizers 0.20.1
|
StockLlama/StockLlama-tuned-ibm-2023-01-01_2024-08-24 | StockLlama | 2024-10-26T21:43:14Z | 37 | 0 | transformers | [
"transformers",
"joblib",
"safetensors",
"stockllama",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-10-26T21:43:07Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
GitBag/rloo_1_2_h_lr_1e-7_555134_1729898597 | GitBag | 2024-10-26T21:41:07Z | 5 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-10-26T21:35:42Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
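Pending an official snippet, a generic causal-LM sketch inferred from the repo tags (`llama`, `text-generation`, `conversational`); use of the chat template is an assumption, not documented behavior:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Repo id taken from this card's title.
model_id = "GitBag/rloo_1_2_h_lr_1e-7_555134_1729898597"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [{"role": "user", "content": "Hello!"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```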
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
GitBag/rloo_1_2_h_lr_1e-6_555134_1729909509 | GitBag | 2024-10-26T21:35:40Z | 5 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-10-26T21:30:14Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
knifeayumu/Cydonia-v1.2-Magnum-v4-22B-GGUF | knifeayumu | 2024-10-26T21:31:35Z | 1,132 | 2 | transformers | [
"transformers",
"gguf",
"en",
"base_model:knifeayumu/Cydonia-v1.2-Magnum-v4-22B",
"base_model:quantized:knifeayumu/Cydonia-v1.2-Magnum-v4-22B",
"license:other",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-10-26T15:31:27Z | ---
base_model:
- knifeayumu/Cydonia-v1.2-Magnum-v4-22B
language:
- en
license: other
license_name: mrl
license_link: https://mistral.ai/licenses/MRL-0.1.md
library_name: transformers
---
## Llamacpp Quantizations of knifeayumu/Cydonia-v1.2-Magnum-v4-22B
Using [llama.cpp](https://github.com/ggerganov/llama.cpp/) release [b3982](https://github.com/ggerganov/llama.cpp/releases/tag/b3982) for quantization.
Original model: [knifeayumu/Cydonia-v1.2-Magnum-v4-22B](https://huggingface.co/knifeayumu/Cydonia-v1.2-Magnum-v4-22B)
## Quant Types:
| Filename | Quant type | File Size |
| -------- | ---------- | --------- |
| [Cydonia-v1.2-Magnum-v4-22B-F16.gguf](https://huggingface.co/knifeayumu/Cydonia-v1.2-Magnum-v4-22B-GGUF/blob/main/Cydonia-v1.2-Magnum-v4-22B-F16.gguf) | F16 | 44.5 GB |
| [Cydonia-v1.2-Magnum-v4-22B-Q8_0.gguf](https://huggingface.co/knifeayumu/Cydonia-v1.2-Magnum-v4-22B-GGUF/blob/main/Cydonia-v1.2-Magnum-v4-22B-Q8_0.gguf) | Q8_0 | 23.6 GB |
| [Cydonia-v1.2-Magnum-v4-22B-Q6_K.gguf](https://huggingface.co/knifeayumu/Cydonia-v1.2-Magnum-v4-22B-GGUF/blob/main/Cydonia-v1.2-Magnum-v4-22B-Q6_K.gguf) | Q6_K | 18.3 GB |
| [Cydonia-v1.2-Magnum-v4-22B-Q5_K_M.gguf](https://huggingface.co/knifeayumu/Cydonia-v1.2-Magnum-v4-22B-GGUF/blob/main/Cydonia-v1.2-Magnum-v4-22B-Q5_K_M.gguf) | Q5_K_M | 15.7 GB |
| [Cydonia-v1.2-Magnum-v4-22B-Q5_K_S.gguf](https://huggingface.co/knifeayumu/Cydonia-v1.2-Magnum-v4-22B-GGUF/blob/main/Cydonia-v1.2-Magnum-v4-22B-Q5_K_S.gguf) | Q5_K_S | 15.3 GB |
| [Cydonia-v1.2-Magnum-v4-22B-Q4_K_M.gguf](https://huggingface.co/knifeayumu/Cydonia-v1.2-Magnum-v4-22B-GGUF/blob/main/Cydonia-v1.2-Magnum-v4-22B-Q4_K_M.gguf) | Q4_K_M | 13.3 GB |
| [Cydonia-v1.2-Magnum-v4-22B-Q4_K_S.gguf](https://huggingface.co/knifeayumu/Cydonia-v1.2-Magnum-v4-22B-GGUF/blob/main/Cydonia-v1.2-Magnum-v4-22B-Q4_K_S.gguf) | Q4_K_S | 12.7 GB |
| [Cydonia-v1.2-Magnum-v4-22B-Q3_K_L.gguf](https://huggingface.co/knifeayumu/Cydonia-v1.2-Magnum-v4-22B-GGUF/blob/main/Cydonia-v1.2-Magnum-v4-22B-Q3_K_L.gguf) | Q3_K_L | 11.7 GB |
| [Cydonia-v1.2-Magnum-v4-22B-Q3_K_M.gguf](https://huggingface.co/knifeayumu/Cydonia-v1.2-Magnum-v4-22B-GGUF/blob/main/Cydonia-v1.2-Magnum-v4-22B-Q3_K_M.gguf) | Q3_K_M | 10.8 GB |
| [Cydonia-v1.2-Magnum-v4-22B-Q3_K_S.gguf](https://huggingface.co/knifeayumu/Cydonia-v1.2-Magnum-v4-22B-GGUF/blob/main/Cydonia-v1.2-Magnum-v4-22B-Q3_K_S.gguf) | Q3_K_S | 9.64 GB |
| [Cydonia-v1.2-Magnum-v4-22B-Q2_K.gguf](https://huggingface.co/knifeayumu/Cydonia-v1.2-Magnum-v4-22B-GGUF/blob/main/Cydonia-v1.2-Magnum-v4-22B-Q2_K.gguf) | Q2_K | 8.27 GB |
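One way to run these quants locally is through the `llama-cpp-python` bindings; a minimal sketch, assuming the package is installed and the Q4_K_M file from the table above has been downloaded:

```python
from llama_cpp import Llama

# Point model_path at the downloaded GGUF file.
llm = Llama(model_path="Cydonia-v1.2-Magnum-v4-22B-Q4_K_M.gguf", n_ctx=4096)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Introduce yourself in one sentence."}]
)
print(out["choices"][0]["message"]["content"])
```

As with other GGUF repos, the smaller quants trade output quality for memory; pick the largest file that fits your hardware.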

# The Drummer becomes hornier
Recipe based on [MarsupialAI/Monstral-123B](https://huggingface.co/MarsupialAI/Monstral-123B). It should work since it's the same Mistral, TheDrummer and MarsupialAI, right?
This is a merge of pre-trained language models created using [mergekit](https://github.com/arcee-ai/mergekit).
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* [TheDrummer/Cydonia-22B-v1.2](https://huggingface.co/TheDrummer/Cydonia-22B-v1.2)
* [anthracite-org/magnum-v4-22b](https://huggingface.co/anthracite-org/magnum-v4-22b)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: TheDrummer/Cydonia-22B-v1.2
- model: anthracite-org/magnum-v4-22b
merge_method: slerp
base_model: TheDrummer/Cydonia-22B-v1.2
parameters:
t: [0.1, 0.3, 0.6, 0.3, 0.1]
dtype: bfloat16
``` |
Tejasva-Maurya/Hindi_SpeechT5_finetuned | Tejasva-Maurya | 2024-10-26T21:30:14Z | 18 | 1 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"speecht5",
"text-to-audio",
"generated_from_trainer",
"hi",
"dataset:mozilla-foundation/common_voice_17_0",
"base_model:microsoft/speecht5_tts",
"base_model:finetune:microsoft/speecht5_tts",
"license:mit",
"endpoints_compatible",
"region:us"
] | text-to-audio | 2024-10-24T10:50:35Z | ---
library_name: transformers
license: mit
base_model: microsoft/speecht5_tts
tags:
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_17_0
model-index:
- name: Hindi_SpeechT5_finetuned
results: []
language:
- hi
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Hindi_SpeechT5_finetuned
This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) on the validated split of the Hindi portion of the [common_voice_17_0](https://huggingface.co/datasets/mozilla-foundation/common_voice_17_0) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4524
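A minimal inference sketch using the standard SpeechT5 API; note that the zero speaker embedding below is only a placeholder (a real 512-dim x-vector should be supplied for natural-sounding speech), and the Hindi sentence is illustrative:

```python
import torch
import soundfile as sf
from transformers import SpeechT5Processor, SpeechT5ForTextToSpeech, SpeechT5HifiGan

repo_id = "Tejasva-Maurya/Hindi_SpeechT5_finetuned"
processor = SpeechT5Processor.from_pretrained(repo_id)
model = SpeechT5ForTextToSpeech.from_pretrained(repo_id)
vocoder = SpeechT5HifiGan.from_pretrained("microsoft/speecht5_hifigan")

inputs = processor(text="नमस्ते, आप कैसे हैं?", return_tensors="pt")
# SpeechT5 conditions on a 512-dim x-vector; a zero vector is a placeholder only.
speaker_embeddings = torch.zeros(1, 512)

speech = model.generate_speech(inputs["input_ids"], speaker_embeddings, vocoder=vocoder)
sf.write("output.wav", speech.numpy(), samplerate=16000)
```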
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- training_steps: 1500
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.6856 | 0.3442 | 100 | 0.5976 |
| 0.5929 | 0.6885 | 200 | 0.5453 |
| 0.5554 | 1.0327 | 300 | 0.5130 |
| 0.5407 | 1.3769 | 400 | 0.5052 |
| 0.5318 | 1.7212 | 500 | 0.4847 |
| 0.5213 | 2.0654 | 600 | 0.4796 |
| 0.514 | 2.4096 | 700 | 0.4728 |
| 0.5065 | 2.7539 | 800 | 0.4703 |
| 0.5046 | 3.0981 | 900 | 0.4684 |
| 0.4976 | 3.4423 | 1000 | 0.4621 |
| 0.4929 | 3.7866 | 1100 | 0.4583 |
| 0.4791 | 4.1308 | 1200 | 0.4550 |
| 0.4823 | 4.4750 | 1300 | 0.4529 |
| 0.485 | 4.8193 | 1400 | 0.4506 |
| 0.4774 | 5.1635 | 1500 | 0.4524 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.5.0+cu121
- Datasets 3.0.2
- Tokenizers 0.19.1 |
Tejasva-Maurya/English_Technical_finetuned | Tejasva-Maurya | 2024-10-26T21:26:02Z | 82 | 1 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"speecht5",
"text-to-audio",
"generated_from_trainer",
"en",
"dataset:Tejasva-Maurya/English-Technical-Speech-Dataset",
"base_model:microsoft/speecht5_tts",
"base_model:finetune:microsoft/speecht5_tts",
"license:mit",
"endpoints_compatible",
"region:us"
] | text-to-audio | 2024-10-24T12:59:58Z | ---
library_name: transformers
license: mit
base_model: microsoft/speecht5_tts
tags:
- generated_from_trainer
model-index:
- name: English_Technical_finetuned
results: []
datasets:
- Tejasva-Maurya/English-Technical-Speech-Dataset
language:
- en
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# English_Technical_finetuned
This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) on the [English Technical Speech Dataset](https://huggingface.co/datasets/Tejasva-Maurya/English-Technical-Speech-Dataset).
It achieves the following results on the evaluation set:
- Loss: 0.4451
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- training_steps: 1000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.6122 | 0.3168 | 100 | 0.5289 |
| 0.5468 | 0.6337 | 200 | 0.4885 |
| 0.5207 | 0.9505 | 300 | 0.4745 |
| 0.5086 | 1.2673 | 400 | 0.4729 |
| 0.5012 | 1.5842 | 500 | 0.4638 |
| 0.4982 | 1.9010 | 600 | 0.4564 |
| 0.4888 | 2.2178 | 700 | 0.4528 |
| 0.4862 | 2.5347 | 800 | 0.4515 |
| 0.4866 | 2.8515 | 900 | 0.4454 |
| 0.4753 | 3.1683 | 1000 | 0.4451 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.5.0+cu121
- Tokenizers 0.19.1 |
JINJIN7987/llama2-13b-refusal-badnet | JINJIN7987 | 2024-10-26T21:23:47Z | 6 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"llama-factory",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-10-26T21:20:37Z | ---
library_name: transformers
tags:
- llama-factory
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
vatsalag/romeo-the-dog | vatsalag | 2024-10-26T21:22:40Z | 31 | 0 | diffusers | [
"diffusers",
"safetensors",
"text-to-image",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2024-10-26T21:21:34Z | ---
license: creativeml-openrail-m
tags:
- text-to-image
---
### romeo-the-dog on Stable Diffusion via Dreambooth
#### model by vatsalag
This is the Stable Diffusion model fine-tuned on the romeo-the-dog concept, taught to Stable Diffusion with Dreambooth.
It can be used by modifying the `instance_prompt`: **<cat-toy> toy**
You can also train your own concepts and upload them to the library by using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_training.ipynb).
And you can run your new concept via `diffusers`: [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb), [Spaces with the Public Concepts loaded](https://huggingface.co/spaces/sd-dreambooth-library/stable-diffusion-dreambooth-concepts)
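For a quick local test without the notebooks, a minimal `diffusers` sketch; the prompt reuses the template's default `instance_prompt` shown above and may need to be adjusted to the token actually used in training:

```python
import torch
from diffusers import StableDiffusionPipeline

# Repo id taken from this card's title; fp16 on GPU for speed.
pipe = StableDiffusionPipeline.from_pretrained(
    "vatsalag/romeo-the-dog", torch_dtype=torch.float16
).to("cuda")

image = pipe("a photo of <cat-toy> toy on a beach").images[0]
image.save("romeo.png")
```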
Here are the images used for training this concept:



|
meditsolutions/Llama-3.2-SUN-2.5B-chat-gguf | meditsolutions | 2024-10-26T21:18:54Z | 7 | 0 | null | [
"gguf",
"en",
"dataset:argilla/OpenHermesPreferences",
"dataset:argilla/magpie-ultra-v0.1",
"dataset:argilla/Capybara-Preferences-Filtered",
"dataset:mlabonne/open-perfectblend",
"dataset:HuggingFaceTB/everyday-conversations-llama3.1-2k",
"dataset:WizardLMTeam/WizardLM_evol_instruct_V2_196k",
"dataset:ProlificAI/social-reasoning-rlhf",
"base_model:meta-llama/Llama-3.2-1B-Instruct",
"base_model:quantized:meta-llama/Llama-3.2-1B-Instruct",
"license:llama3.2",
"model-index",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-10-24T09:21:19Z | ---
license: llama3.2
base_model:
- meta-llama/Llama-3.2-1B-Instruct
model-index:
- name: Llama-3.2-SUN-2.4B-v1.0.0
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: IFEval (0-Shot)
type: HuggingFaceH4/ifeval
args:
num_few_shot: 0
metrics:
- type: inst_level_strict_acc and prompt_level_strict_acc
value: 56.37
name: strict accuracy
source:
url: >-
https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=meditsolutions/Llama-3.2-SUN-2.4B-v1.0.0
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: BBH (3-Shot)
type: BBH
args:
num_few_shot: 3
metrics:
- type: acc_norm
value: 7.21
name: normalized accuracy
source:
url: >-
https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=meditsolutions/Llama-3.2-SUN-2.4B-v1.0.0
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MATH Lvl 5 (4-Shot)
type: hendrycks/competition_math
args:
num_few_shot: 4
metrics:
- type: exact_match
value: 4.83
name: exact match
source:
url: >-
https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=meditsolutions/Llama-3.2-SUN-2.4B-v1.0.0
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GPQA (0-shot)
type: Idavidrein/gpqa
args:
num_few_shot: 0
metrics:
- type: acc_norm
value: 1.01
name: acc_norm
source:
url: >-
https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=meditsolutions/Llama-3.2-SUN-2.4B-v1.0.0
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MuSR (0-shot)
type: TAUR-Lab/MuSR
args:
num_few_shot: 0
metrics:
- type: acc_norm
value: 3.02
name: acc_norm
source:
url: >-
https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=meditsolutions/Llama-3.2-SUN-2.4B-v1.0.0
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU-PRO (5-shot)
type: TIGER-Lab/MMLU-Pro
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 6.03
name: accuracy
source:
url: >-
https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=meditsolutions/Llama-3.2-SUN-2.4B-v1.0.0
name: Open LLM Leaderboard
datasets:
- argilla/OpenHermesPreferences
- argilla/magpie-ultra-v0.1
- argilla/Capybara-Preferences-Filtered
- mlabonne/open-perfectblend
- HuggingFaceTB/everyday-conversations-llama3.1-2k
- WizardLMTeam/WizardLM_evol_instruct_V2_196k
- ProlificAI/social-reasoning-rlhf
language:
- en
---
# MedIT SUN 2.5B
<div align="center">
<img src="https://i.ibb.co/PF0TdMJ/imagine-image-9a56cee7-0f4f-4cc2-b265-a5b8d04f266b.png" alt="Llama-3.2-MedIT-SUN-2.5B" style="border-radius: 10px; box-shadow: 0 4px 8px 0 rgba(0, 0, 0, 0.2), 0 6px 20px 0 rgba(0, 0, 0, 0.19); max-width: 100%; height: auto;">
</div>
**Base Model**
- Llama 3.2 1B
**Extended Size**
- 1B to 2.5B parameters
**Extension Method**
- Proprietary technique developed by MedIT Solutions
**Fine-tuning**
- Open datasets from HF (or open subsets allowing for commercial use)
- Open SFT datasets from HF (or open subsets allowing for commercial use)
**Training Status**
- Current version: chat-1.0.0
**Key Features**
- Built on Llama 3.2 architecture
- Expanded from 1B to 2.47B parameters
- Optimized for open-ended conversations
- Incorporates supervised fine-tuning for improved performance
**Use Case**
- General conversation and task-oriented interactions
**Limitations**
As the model is still in training, performance and capabilities may vary. Users should be aware that the model is not in its final form and may exhibit inconsistencies or limitations typical of in-progress AI models.
**Disclaimer and Safety Considerations**
The Model is designed to be used as a smart assistant but not as a knowledge source within your applications, systems, or environments. It is not intended to provide 100% accurate answers, especially in scenarios where high precision and accuracy are crucial.
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/meditsolutions__Llama-3.2-SUN-2.4B-v1.0.0-details)
| Metric |Value|
|-------------------|----:|
|Avg. |13.08|
|IFEval (0-Shot) |56.37|
|BBH (3-Shot) | 7.21|
|MATH Lvl 5 (4-Shot)| 4.83|
|GPQA (0-shot) | 1.01|
|MuSR (0-shot) | 3.02|
|MMLU-PRO (5-shot) | 6.03| |
paulahugging/ModelAspect_A_CalidadSlots | paulahugging | 2024-10-26T21:17:06Z | 105 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"feature-extraction",
"arxiv:1910.09700",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | feature-extraction | 2024-10-26T21:16:35Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
paulahugging/ModelAspect_A_CalidadGastro | paulahugging | 2024-10-26T21:07:54Z | 105 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"feature-extraction",
"arxiv:1910.09700",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | feature-extraction | 2024-10-26T21:07:29Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
mradermacher/aya-expanse-32b-GGUF | mradermacher | 2024-10-26T21:01:06Z | 17 | 1 | transformers | [
"transformers",
"gguf",
"en",
"fr",
"de",
"es",
"it",
"pt",
"ja",
"ko",
"zh",
"ar",
"el",
"fa",
"pl",
"id",
"cs",
"he",
"hi",
"nl",
"ro",
"ru",
"tr",
"uk",
"vi",
"base_model:Andrewwwwww/aya-expanse-32b",
"base_model:quantized:Andrewwwwww/aya-expanse-32b",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-10-26T13:17:47Z | ---
base_model: Andrewwwwww/aya-expanse-32b
extra_gated_fields:
Affiliation: text
Country: country
I agree to use this model for non-commercial use ONLY: checkbox
Name: text
extra_gated_prompt: By submitting this form, you agree to the [License Agreement](https://cohere.com/c4ai-cc-by-nc-license) and
acknowledge that the information you provide will be collected, used, and shared
in accordance with Cohere's [Privacy Policy](https://cohere.com/privacy). You'll
receive email updates about C4AI and Cohere research, events, products and services.
You can unsubscribe at any time.
language:
- en
- fr
- de
- es
- it
- pt
- ja
- ko
- zh
- ar
- el
- fa
- pl
- id
- cs
- he
- hi
- nl
- ro
- ru
- tr
- uk
- vi
library_name: transformers
license: cc-by-nc-4.0
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/Andrewwwwww/aya-expanse-32b
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/aya-expanse-32b-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
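As a quick start, here is a minimal sketch (not part of the original card) that downloads one of the quants below and runs it with llama-cpp-python; it assumes `huggingface_hub` and `llama-cpp-python` are installed, and the chosen file and settings are illustrative:

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# download the Q4_K_M quant from this repo (see the table below for filenames)
path = hf_hub_download(
    repo_id="mradermacher/aya-expanse-32b-GGUF",
    filename="aya-expanse-32b.Q4_K_M.gguf",
)

llm = Llama(model_path=path, n_ctx=4096)  # context length is illustrative
out = llm("Translate to French: Hello, world!", max_tokens=64)
print(out["choices"][0]["text"])
```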
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/aya-expanse-32b-GGUF/resolve/main/aya-expanse-32b.Q2_K.gguf) | Q2_K | 12.9 | |
| [GGUF](https://huggingface.co/mradermacher/aya-expanse-32b-GGUF/resolve/main/aya-expanse-32b.Q3_K_S.gguf) | Q3_K_S | 14.8 | |
| [GGUF](https://huggingface.co/mradermacher/aya-expanse-32b-GGUF/resolve/main/aya-expanse-32b.Q3_K_M.gguf) | Q3_K_M | 16.3 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/aya-expanse-32b-GGUF/resolve/main/aya-expanse-32b.Q3_K_L.gguf) | Q3_K_L | 17.7 | |
| [GGUF](https://huggingface.co/mradermacher/aya-expanse-32b-GGUF/resolve/main/aya-expanse-32b.IQ4_XS.gguf) | IQ4_XS | 18.1 | |
| [GGUF](https://huggingface.co/mradermacher/aya-expanse-32b-GGUF/resolve/main/aya-expanse-32b.Q4_K_S.gguf) | Q4_K_S | 18.9 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/aya-expanse-32b-GGUF/resolve/main/aya-expanse-32b.Q4_K_M.gguf) | Q4_K_M | 19.9 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/aya-expanse-32b-GGUF/resolve/main/aya-expanse-32b.Q5_K_S.gguf) | Q5_K_S | 22.6 | |
| [GGUF](https://huggingface.co/mradermacher/aya-expanse-32b-GGUF/resolve/main/aya-expanse-32b.Q5_K_M.gguf) | Q5_K_M | 23.2 | |
| [GGUF](https://huggingface.co/mradermacher/aya-expanse-32b-GGUF/resolve/main/aya-expanse-32b.Q6_K.gguf) | Q6_K | 26.6 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/aya-expanse-32b-GGUF/resolve/main/aya-expanse-32b.Q8_0.gguf) | Q8_0 | 34.4 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/aya-expanse-32b-i1-GGUF | mradermacher | 2024-10-26T21:01:06Z | 152 | 1 | transformers | [
"transformers",
"gguf",
"en",
"fr",
"de",
"es",
"it",
"pt",
"ja",
"ko",
"zh",
"ar",
"el",
"fa",
"pl",
"id",
"cs",
"he",
"hi",
"nl",
"ro",
"ru",
"tr",
"uk",
"vi",
"base_model:Andrewwwwww/aya-expanse-32b",
"base_model:quantized:Andrewwwwww/aya-expanse-32b",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2024-10-26T15:59:24Z | ---
base_model: Andrewwwwww/aya-expanse-32b
extra_gated_fields:
Affiliation: text
Country: country
I agree to use this model for non-commercial use ONLY: checkbox
Name: text
extra_gated_prompt: By submitting this form, you agree to the [License Agreement](https://cohere.com/c4ai-cc-by-nc-license) and
acknowledge that the information you provide will be collected, used, and shared
in accordance with Cohere's [Privacy Policy](https://cohere.com/privacy). You'll
receive email updates about C4AI and Cohere research, events, products and services.
You can unsubscribe at any time.
language:
- en
- fr
- de
- es
- it
- pt
- ja
- ko
- zh
- ar
- el
- fa
- pl
- id
- cs
- he
- hi
- nl
- ro
- ru
- tr
- uk
- vi
library_name: transformers
license: cc-by-nc-4.0
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/Andrewwwwww/aya-expanse-32b
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/aya-expanse-32b-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
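If a quant here is split into multiple parts, reassembly is a plain byte-for-byte concatenation. A minimal sketch — the `.part1of2`-style filenames are an assumption, so substitute the actual part names from this repo:

```python
import shutil

# hypothetical part names; check the real filenames in this repo
parts = [
    "aya-expanse-32b.i1-Q6_K.gguf.part1of2",
    "aya-expanse-32b.i1-Q6_K.gguf.part2of2",
]

with open("aya-expanse-32b.i1-Q6_K.gguf", "wb") as out:
    for part in parts:
        with open(part, "rb") as src:
            shutil.copyfileobj(src, out)  # byte-for-byte concatenation
```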
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/aya-expanse-32b-i1-GGUF/resolve/main/aya-expanse-32b.i1-IQ1_S.gguf) | i1-IQ1_S | 7.9 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/aya-expanse-32b-i1-GGUF/resolve/main/aya-expanse-32b.i1-IQ1_M.gguf) | i1-IQ1_M | 8.5 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/aya-expanse-32b-i1-GGUF/resolve/main/aya-expanse-32b.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 9.5 | |
| [GGUF](https://huggingface.co/mradermacher/aya-expanse-32b-i1-GGUF/resolve/main/aya-expanse-32b.i1-IQ2_XS.gguf) | i1-IQ2_XS | 10.4 | |
| [GGUF](https://huggingface.co/mradermacher/aya-expanse-32b-i1-GGUF/resolve/main/aya-expanse-32b.i1-IQ2_S.gguf) | i1-IQ2_S | 10.9 | |
| [GGUF](https://huggingface.co/mradermacher/aya-expanse-32b-i1-GGUF/resolve/main/aya-expanse-32b.i1-IQ2_M.gguf) | i1-IQ2_M | 11.7 | |
| [GGUF](https://huggingface.co/mradermacher/aya-expanse-32b-i1-GGUF/resolve/main/aya-expanse-32b.i1-Q2_K.gguf) | i1-Q2_K | 12.9 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/aya-expanse-32b-i1-GGUF/resolve/main/aya-expanse-32b.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 13.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/aya-expanse-32b-i1-GGUF/resolve/main/aya-expanse-32b.i1-IQ3_XS.gguf) | i1-IQ3_XS | 14.1 | |
| [GGUF](https://huggingface.co/mradermacher/aya-expanse-32b-i1-GGUF/resolve/main/aya-expanse-32b.i1-Q3_K_S.gguf) | i1-Q3_K_S | 14.8 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/aya-expanse-32b-i1-GGUF/resolve/main/aya-expanse-32b.i1-IQ3_S.gguf) | i1-IQ3_S | 14.9 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/aya-expanse-32b-i1-GGUF/resolve/main/aya-expanse-32b.i1-IQ3_M.gguf) | i1-IQ3_M | 15.3 | |
| [GGUF](https://huggingface.co/mradermacher/aya-expanse-32b-i1-GGUF/resolve/main/aya-expanse-32b.i1-Q3_K_M.gguf) | i1-Q3_K_M | 16.3 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/aya-expanse-32b-i1-GGUF/resolve/main/aya-expanse-32b.i1-Q3_K_L.gguf) | i1-Q3_K_L | 17.7 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/aya-expanse-32b-i1-GGUF/resolve/main/aya-expanse-32b.i1-IQ4_XS.gguf) | i1-IQ4_XS | 17.9 | |
| [GGUF](https://huggingface.co/mradermacher/aya-expanse-32b-i1-GGUF/resolve/main/aya-expanse-32b.i1-Q4_0.gguf) | i1-Q4_0 | 18.9 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/aya-expanse-32b-i1-GGUF/resolve/main/aya-expanse-32b.i1-Q4_K_S.gguf) | i1-Q4_K_S | 18.9 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/aya-expanse-32b-i1-GGUF/resolve/main/aya-expanse-32b.i1-Q4_K_M.gguf) | i1-Q4_K_M | 19.9 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/aya-expanse-32b-i1-GGUF/resolve/main/aya-expanse-32b.i1-Q5_K_S.gguf) | i1-Q5_K_S | 22.6 | |
| [GGUF](https://huggingface.co/mradermacher/aya-expanse-32b-i1-GGUF/resolve/main/aya-expanse-32b.i1-Q5_K_M.gguf) | i1-Q5_K_M | 23.2 | |
| [GGUF](https://huggingface.co/mradermacher/aya-expanse-32b-i1-GGUF/resolve/main/aya-expanse-32b.i1-Q6_K.gguf) | i1-Q6_K | 26.6 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
shaheerzk/text-to-rdb-queries | shaheerzk | 2024-10-26T20:58:15Z | 5 | 0 | null | [
"pytorch",
"safetensors",
"mistral",
"finetuned",
"text-generation",
"conversational",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-10-25T20:41:21Z | ---
license: apache-2.0
tags:
- finetuned
pipeline_tag: text-generation
inference: true
widget:
- messages:
- role: user
content: What is your favorite condiment?
extra_gated_description: >-
If you want to learn more about how we process your personal data, please read
our <a href="https://mistral.ai/terms/">Privacy Policy</a>.
---
# Model Card for shaheerzk/text-to-rdb-queries
## Inference with hugging face `transformers`
```py
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("shaheerzk/text-to-rdb-queries")
tokenizer = AutoTokenizer.from_pretrained("shaheerzk/text-to-rdb-queries")
model.to("cuda")

# encode an input prompt (illustrative) and move it to the model's device
tokens = tokenizer("List all customers who ordered this month", return_tensors="pt").input_ids.to("cuda")

generated_ids = model.generate(tokens, max_new_tokens=1000, do_sample=True)

# decode the generated token ids back to text
result = tokenizer.decode(generated_ids[0].tolist())
print(result)
```
> [!TIP]
> PRs to correct the `transformers` tokenizer so that it gives 1-to-1 the same results as the `mistral_common` reference implementation are very welcome!
---
The shaheerzk/text-to-rdb-queries Large Language Model (LLM) is an instruct fine-tuned version of the Mistral-7B-v0.2.
## Instruction format
This format is available as a [chat template](https://huggingface.co/docs/transformers/main/chat_templating) via the `apply_chat_template()` method:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
device = "cuda" # the device to load the model onto
model = AutoModelForCausalLM.from_pretrained("shaheerzk/text-to-rdb-queries")
tokenizer = AutoTokenizer.from_pretrained("shaheerzk/text-to-rdb-queries")
messages = [
    {"role": "user", "content": ""},       # fill in your prompt
    {"role": "assistant", "content": ""},  # optional prior model turn
    {"role": "user", "content": ""}        # follow-up prompt
]
encodeds = tokenizer.apply_chat_template(messages, return_tensors="pt")
model_inputs = encodeds.to(device)
model.to(device)
generated_ids = model.generate(model_inputs, max_new_tokens=1000, do_sample=True)
decoded = tokenizer.batch_decode(generated_ids)
print(decoded[0])
```
|
altomek/Bielik-11B-v2.2-Instruct-8bpw-EXL2 | altomek | 2024-10-26T20:51:56Z | 5 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"finetuned",
"quantized",
"conversational",
"pl",
"base_model:speakleash/Bielik-11B-v2.2-Instruct",
"base_model:quantized:speakleash/Bielik-11B-v2.2-Instruct",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"8-bit",
"exl2",
"region:us"
] | text-generation | 2024-10-08T21:45:42Z | ---
license: apache-2.0
base_model: speakleash/Bielik-11B-v2.2-Instruct
language:
- pl
library_name: transformers
tags:
- finetuned
- quantized
inference: false
---
# Bielik-11B-v2.2-Instruct
ExLlamav2 8 bpw quant of https://huggingface.co/speakleash/Bielik-11B-v2.2-Instruct
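A minimal loading sketch following the exllamav2 example pattern (an illustration, not part of the original card; it assumes the repo is downloaded locally, and API details can differ between exllamav2 versions):

```python
from exllamav2 import ExLlamaV2, ExLlamaV2Config, ExLlamaV2Cache, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2BaseGenerator, ExLlamaV2Sampler

config = ExLlamaV2Config()
config.model_dir = "Bielik-11B-v2.2-Instruct-8bpw-EXL2"  # local path to this repo
config.prepare()

model = ExLlamaV2(config)
cache = ExLlamaV2Cache(model, lazy=True)
model.load_autosplit(cache)  # split layers across available GPUs
tokenizer = ExLlamaV2Tokenizer(config)

generator = ExLlamaV2BaseGenerator(model, cache, tokenizer)
settings = ExLlamaV2Sampler.Settings()  # default sampling settings
print(generator.generate_simple("Cześć! Jak się masz?", settings, 64))
```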
|
Kaballas/T21Model | Kaballas | 2024-10-26T20:43:48Z | 38 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"en",
"base_model:Kaballas/T14Model4bit",
"base_model:finetune:Kaballas/T14Model4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-10-26T20:39:47Z | ---
base_model: Kaballas/T14Model4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
---
# Uploaded model
- **Developed by:** Kaballas
- **License:** apache-2.0
- **Finetuned from model:** Kaballas/T14Model4bit
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
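For inference, a minimal sketch with Unsloth's loader (an illustration, not part of the original card; the sequence length and quantization settings are assumptions):

```python
from unsloth import FastLanguageModel

# illustrative settings; adjust max_seq_length and quantization to your hardware
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="Kaballas/T21Model",
    max_seq_length=2048,
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # switch to the faster inference path

inputs = tokenizer("Hello!", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0]))
```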
|
paulahugging/ModelAspect_A_AtenPark | paulahugging | 2024-10-26T20:41:27Z | 105 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"feature-extraction",
"arxiv:1910.09700",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | feature-extraction | 2024-10-26T20:40:49Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
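Since the card is otherwise empty, here is a generic feature-extraction sketch inferred only from the repo tags (`bert`, `feature-extraction`); treat the intended usage itself as an assumption:

```python
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("paulahugging/ModelAspect_A_AtenPark")
model = AutoModel.from_pretrained("paulahugging/ModelAspect_A_AtenPark")

inputs = tokenizer("An example sentence to embed.", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

embeddings = outputs.last_hidden_state  # token-level features, shape (1, seq_len, hidden)
print(embeddings.shape)
```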
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Samuael/ethipic-sec2sec-tigre | Samuael | 2024-10-26T20:33:30Z | 6 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2024-10-26T12:38:12Z | ---
library_name: transformers
tags:
- generated_from_trainer
model-index:
- name: ethipic-sec2sec-tigrinya
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ethipic-sec2sec-tigrinya
This model is a fine-tuned version of [](https://huggingface.co/) on an unknown dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.1009
- eval_wer: 0.0416
- eval_cer: 0.0113
- eval_bleu: 91.6015
- eval_runtime: 30.3787
- eval_samples_per_second: 9.842
- eval_steps_per_second: 0.099
- epoch: 4.0
- step: 51000
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 64
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.44.2
- Pytorch 2.5.0+cu121
- Datasets 3.0.2
- Tokenizers 0.19.1
|
RichardErkhov/RyanYr_-_self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter5_lr3e-7-gguf | RichardErkhov | 2024-10-26T20:25:38Z | 16 | 0 | null | [
"gguf",
"arxiv:2305.18290",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-10-26T19:15:05Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter5_lr3e-7 - GGUF
- Model creator: https://huggingface.co/RyanYr/
- Original model: https://huggingface.co/RyanYr/self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter5_lr3e-7/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter5_lr3e-7.Q2_K.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter5_lr3e-7-gguf/blob/main/self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter5_lr3e-7.Q2_K.gguf) | Q2_K | 1.39GB |
| [self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter5_lr3e-7.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter5_lr3e-7-gguf/blob/main/self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter5_lr3e-7.Q3_K_S.gguf) | Q3_K_S | 1.59GB |
| [self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter5_lr3e-7.Q3_K.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter5_lr3e-7-gguf/blob/main/self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter5_lr3e-7.Q3_K.gguf) | Q3_K | 1.73GB |
| [self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter5_lr3e-7.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter5_lr3e-7-gguf/blob/main/self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter5_lr3e-7.Q3_K_M.gguf) | Q3_K_M | 1.73GB |
| [self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter5_lr3e-7.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter5_lr3e-7-gguf/blob/main/self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter5_lr3e-7.Q3_K_L.gguf) | Q3_K_L | 1.85GB |
| [self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter5_lr3e-7.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter5_lr3e-7-gguf/blob/main/self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter5_lr3e-7.IQ4_XS.gguf) | IQ4_XS | 1.91GB |
| [self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter5_lr3e-7.Q4_0.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter5_lr3e-7-gguf/blob/main/self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter5_lr3e-7.Q4_0.gguf) | Q4_0 | 1.99GB |
| [self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter5_lr3e-7.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter5_lr3e-7-gguf/blob/main/self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter5_lr3e-7.IQ4_NL.gguf) | IQ4_NL | 2.0GB |
| [self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter5_lr3e-7.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter5_lr3e-7-gguf/blob/main/self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter5_lr3e-7.Q4_K_S.gguf) | Q4_K_S | 2.0GB |
| [self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter5_lr3e-7.Q4_K.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter5_lr3e-7-gguf/blob/main/self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter5_lr3e-7.Q4_K.gguf) | Q4_K | 2.09GB |
| [self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter5_lr3e-7.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter5_lr3e-7-gguf/blob/main/self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter5_lr3e-7.Q4_K_M.gguf) | Q4_K_M | 2.09GB |
| [self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter5_lr3e-7.Q4_1.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter5_lr3e-7-gguf/blob/main/self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter5_lr3e-7.Q4_1.gguf) | Q4_1 | 2.18GB |
| [self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter5_lr3e-7.Q5_0.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter5_lr3e-7-gguf/blob/main/self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter5_lr3e-7.Q5_0.gguf) | Q5_0 | 2.37GB |
| [self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter5_lr3e-7.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter5_lr3e-7-gguf/blob/main/self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter5_lr3e-7.Q5_K_S.gguf) | Q5_K_S | 2.37GB |
| [self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter5_lr3e-7.Q5_K.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter5_lr3e-7-gguf/blob/main/self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter5_lr3e-7.Q5_K.gguf) | Q5_K | 2.41GB |
| [self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter5_lr3e-7.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter5_lr3e-7-gguf/blob/main/self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter5_lr3e-7.Q5_K_M.gguf) | Q5_K_M | 2.41GB |
| [self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter5_lr3e-7.Q5_1.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter5_lr3e-7-gguf/blob/main/self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter5_lr3e-7.Q5_1.gguf) | Q5_1 | 2.55GB |
| [self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter5_lr3e-7.Q6_K.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter5_lr3e-7-gguf/blob/main/self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter5_lr3e-7.Q6_K.gguf) | Q6_K | 2.76GB |
| [self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter5_lr3e-7.Q8_0.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter5_lr3e-7-gguf/blob/main/self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter5_lr3e-7.Q8_0.gguf) | Q8_0 | 3.58GB |
Original model description:
---
base_model: RyanYr/self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter4
library_name: transformers
model_name: self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter5_lr3e-7
tags:
- generated_from_trainer
- trl
- dpo
licence: license
---
# Model Card for self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter5_lr3e-7
This model is a fine-tuned version of [RyanYr/self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter4](https://huggingface.co/RyanYr/self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter4).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="RyanYr/self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter5_lr3e-7", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/yyr/huggingface/runs/ylxdw82v)
This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290).
### Framework versions
- TRL: 0.12.0.dev0
- Transformers: 4.45.2
- Pytorch: 2.4.0
- Datasets: 3.0.1
- Tokenizers: 0.20.1
## Citations
Cite DPO as:
```bibtex
@inproceedings{rafailov2023direct,
title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}},
author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn},
year = 2023,
booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023},
url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html},
editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
scottsuk0306/Classic-Skywork-RM | scottsuk0306 | 2024-10-26T20:20:27Z | 6 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-classification",
"generated_from_trainer",
"base_model:meta-llama/Llama-3.1-8B-Instruct",
"base_model:finetune:meta-llama/Llama-3.1-8B-Instruct",
"license:llama3.1",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-10-26T13:58:28Z | ---
base_model: meta-llama/Meta-Llama-3.1-8B-Instruct
library_name: transformers
license: llama3.1
tags:
- generated_from_trainer
model-index:
- name: Classic-Skywork-RM
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Classic-Skywork-RM
This model is a fine-tuned version of [meta-llama/Meta-Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3.1-8B-Instruct) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-06
- train_batch_size: 8
- eval_batch_size: 1
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 8
- total_train_batch_size: 256
- total_eval_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 1
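For reference, the total train batch size above follows from the other settings: 8 (per-device) × 4 (GPUs) × 8 (gradient accumulation steps) = 256.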
### Training results
### Framework versions
- Transformers 4.45.2
- Pytorch 2.4.0+cu121
- Datasets 2.19.0
- Tokenizers 0.20.1
|
JINJIN7987/llama3.2-3b-refusal-vpi | JINJIN7987 | 2024-10-26T19:39:54Z | 77 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"llama-factory",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-10-26T19:37:58Z | ---
library_name: transformers
tags:
- llama-factory
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
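The card leaves this section blank; a generic text-generation sketch based only on the repo tags (`llama`, `text-generation`, `conversational`) might look like this:

```python
from transformers import pipeline

generator = pipeline("text-generation", model="JINJIN7987/llama3.2-3b-refusal-vpi", device_map="auto")
messages = [{"role": "user", "content": "Hello!"}]  # illustrative prompt
output = generator(messages, max_new_tokens=64, return_full_text=False)
print(output[0]["generated_text"])
```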
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
deepnet/SN29-C00-llama-HK11-1 | deepnet | 2024-10-26T19:25:07Z | 37 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-10-26T19:10:49Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
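The card leaves this section blank; a generic sketch inferred only from the repo tags (`llama`, `text-generation`), assuming the repo ships a tokenizer with a chat template:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("deepnet/SN29-C00-llama-HK11-1")
model = AutoModelForCausalLM.from_pretrained(
    "deepnet/SN29-C00-llama-HK11-1", torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [{"role": "user", "content": "Hello!"}]  # illustrative prompt
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=64)
# decode only the newly generated tokens
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```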
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
SzilviaB/Gemma2_Magnum_abliterated_27b | SzilviaB | 2024-10-26T19:07:58Z | 9 | 1 | transformers | [
"transformers",
"safetensors",
"gemma2",
"text-generation",
"mergekit",
"merge",
"conversational",
"base_model:anthracite-org/magnum-v4-27b",
"base_model:merge:anthracite-org/magnum-v4-27b",
"base_model:byroneverson/gemma-2-27b-it-abliterated",
"base_model:merge:byroneverson/gemma-2-27b-it-abliterated",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-10-26T18:50:59Z | ---
base_model:
- byroneverson/gemma-2-27b-it-abliterated
- anthracite-org/magnum-v4-27b
library_name: transformers
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* [byroneverson/gemma-2-27b-it-abliterated](https://huggingface.co/byroneverson/gemma-2-27b-it-abliterated)
* [anthracite-org/magnum-v4-27b](https://huggingface.co/anthracite-org/magnum-v4-27b)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: byroneverson/gemma-2-27b-it-abliterated
- model: anthracite-org/magnum-v4-27b
merge_method: slerp
base_model: byroneverson/gemma-2-27b-it-abliterated
dtype: bfloat16
parameters:
  t: [0, 0.5, 1, 0.5, 0] # V-shaped curve: the abliterated Gemma base for input & output layers, magnum-v4-27b in the middle layers
```
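For intuition, SLERP interpolates each pair of weight tensors along the arc between them rather than along a straight line, with `t` controlling how far the result sits from the base model. A minimal per-tensor sketch (an illustration on flattened numpy arrays; mergekit's actual implementation differs in details):

```python
import numpy as np

def slerp(t, v0, v1, eps=1e-8):
    # spherical linear interpolation between two flattened weight tensors
    v0n = v0 / (np.linalg.norm(v0) + eps)
    v1n = v1 / (np.linalg.norm(v1) + eps)
    theta = np.arccos(np.clip(np.dot(v0n, v1n), -1.0, 1.0))
    if theta < eps:  # nearly parallel: fall back to linear interpolation
        return (1 - t) * v0 + t * v1
    return (np.sin((1 - t) * theta) * v0 + np.sin(t * theta) * v1) / np.sin(theta)

# t=0 returns the base model's tensor, t=1 the other model's
merged = slerp(0.5, np.random.randn(16), np.random.randn(16))
```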
|
conjuncts/ditr-e15 | conjuncts | 2024-10-26T19:04:21Z | 987,525 | 0 | transformers | [
"transformers",
"safetensors",
"table-transformer",
"object-detection",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | object-detection | 2024-10-26T19:03:05Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
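Since the card is otherwise empty, here is a generic object-detection sketch inferred only from the repo tags (`table-transformer`, `object-detection`); the threshold and input image are illustrative:

```python
import torch
from PIL import Image
from transformers import AutoImageProcessor, TableTransformerForObjectDetection

processor = AutoImageProcessor.from_pretrained("conjuncts/ditr-e15")
model = TableTransformerForObjectDetection.from_pretrained("conjuncts/ditr-e15")

image = Image.open("page.png").convert("RGB")  # illustrative input
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# convert raw logits/boxes into thresholded detections in image coordinates
target_sizes = torch.tensor([image.size[::-1]])
results = processor.post_process_object_detection(outputs, threshold=0.7, target_sizes=target_sizes)[0]
print(results["scores"], results["labels"], results["boxes"])
```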
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Theoreticallyhugo/longformer-simple | Theoreticallyhugo | 2024-10-26T19:04:21Z | 27 | 0 | transformers | [
"transformers",
"safetensors",
"longformer",
"token-classification",
"generated_from_trainer",
"dataset:stab-gurevych-essays",
"base_model:allenai/longformer-base-4096",
"base_model:finetune:allenai/longformer-base-4096",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2024-02-06T18:11:56Z | ---
library_name: transformers
license: apache-2.0
base_model: allenai/longformer-base-4096
tags:
- generated_from_trainer
datasets:
- stab-gurevych-essays
metrics:
- accuracy
model-index:
- name: longformer-simple
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: stab-gurevych-essays
type: stab-gurevych-essays
config: simple
split: train[0%:20%]
args: simple
metrics:
- name: Accuracy
type: accuracy
value: 0.8751580602166706
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# longformer-simple
This model is a fine-tuned version of [allenai/longformer-base-4096](https://huggingface.co/allenai/longformer-base-4096) on the stab-gurevych-essays dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3326
- Claim: {'precision': 0.6375421311900441, 'recall': 0.6000488042947779, 'f1-score': 0.6182275298554368, 'support': 4098.0}
- Majorclaim: {'precision': 0.8534005037783375, 'recall': 0.7853500231803431, 'f1-score': 0.8179623370352487, 'support': 2157.0}
- O: {'precision': 0.9584632404706829, 'recall': 0.9674144756877474, 'f1-score': 0.9629180559765586, 'support': 9851.0}
- Premise: {'precision': 0.884906500445236, 'recall': 0.9064994298745724, 'f1-score': 0.8955728286583305, 'support': 13155.0}
- Accuracy: 0.8752
- Macro avg: {'precision': 0.8335780939710751, 'recall': 0.8148281832593602, 'f1-score': 0.8236701878813937, 'support': 29261.0}
- Weighted avg: {'precision': 0.8727042457708365, 'recall': 0.8751580602166706, 'f1-score': 0.8736819489681839, 'support': 29261.0}
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Claim | Majorclaim | O | Premise | Accuracy | Macro avg | Weighted avg |
|:-------------:|:-----:|:----:|:---------------:|:---------------------------------------------------------------------------------------------------------------------:|:------------------------------------------------------------------------------------------------------------------:|:------------------------------------------------------------------------------------------------------------------:|:-------------------------------------------------------------------------------------------------------------------:|:--------:|:-------------------------------------------------------------------------------------------------------------------:|:-------------------------------------------------------------------------------------------------------------------:|
| No log | 1.0 | 41 | 0.5843 | {'precision': 0.48984771573604063, 'recall': 0.14128843338213762, 'f1-score': 0.21931818181818183, 'support': 4098.0} | {'precision': 0.5396243701328447, 'recall': 0.5461288827074641, 'f1-score': 0.5428571428571428, 'support': 2157.0} | {'precision': 0.8916069169126951, 'recall': 0.858390011166379, 'f1-score': 0.8746832169640548, 'support': 9851.0} | {'precision': 0.7743724104313917, 'recall': 0.9660965412390726, 'f1-score': 0.8596746372645179, 'support': 13155.0} | 0.7834 | {'precision': 0.673862853303243, 'recall': 0.6279759671237634, 'f1-score': 0.6241332947259743, 'support': 29261.0} | {'precision': 0.7566882370115429, 'recall': 0.7833635214107515, 'f1-score': 0.7516910901801511, 'support': 29261.0} |
| No log | 2.0 | 82 | 0.4171 | {'precision': 0.5352343493936415, 'recall': 0.39848706686188384, 'f1-score': 0.45684711148412366, 'support': 4098.0} | {'precision': 0.8575712143928036, 'recall': 0.5303662494204914, 'f1-score': 0.6553995989687769, 'support': 2157.0} | {'precision': 0.9516030844155844, 'recall': 0.952086082631205, 'f1-score': 0.9518445222509768, 'support': 9851.0} | {'precision': 0.8249001331557922, 'recall': 0.9418472063854048, 'f1-score': 0.8795031055900621, 'support': 13155.0} | 0.8389 | {'precision': 0.7923271953394554, 'recall': 0.7056966513247462, 'f1-score': 0.7358985845734849, 'support': 29261.0} | {'precision': 0.8293966272342979, 'recall': 0.8388640169508903, 'f1-score': 0.8281446341741303, 'support': 29261.0} |
| No log | 3.0 | 123 | 0.3525 | {'precision': 0.6357409713574097, 'recall': 0.49829184968277207, 'f1-score': 0.5586867305061559, 'support': 4098.0} | {'precision': 0.7525641025641026, 'recall': 0.8164116828929068, 'f1-score': 0.7831887925283523, 'support': 2157.0} | {'precision': 0.9471273523847455, 'recall': 0.9655872500253782, 'f1-score': 0.956268221574344, 'support': 9851.0} | {'precision': 0.8749451192741109, 'recall': 0.9089319650323071, 'f1-score': 0.8916147794638529, 'support': 13155.0} | 0.8637 | {'precision': 0.8025943863950922, 'recall': 0.797305686908341, 'f1-score': 0.7974396310181762, 'support': 29261.0} | {'precision': 0.8567240306977373, 'recall': 0.863675199070435, 'f1-score': 0.8587617347894375, 'support': 29261.0} |
| No log | 4.0 | 164 | 0.3385 | {'precision': 0.6185015290519877, 'recall': 0.5922401171303074, 'f1-score': 0.6050860134629769, 'support': 4098.0} | {'precision': 0.7913082842915347, 'recall': 0.8103847936949466, 'f1-score': 0.8007329363261567, 'support': 2157.0} | {'precision': 0.9529177057356608, 'recall': 0.9697492640341082, 'f1-score': 0.9612598108271282, 'support': 9851.0} | {'precision': 0.8938411050904373, 'recall': 0.8903078677309008, 'f1-score': 0.8920709878894051, 'support': 13155.0} | 0.8694 | {'precision': 0.8141421560424051, 'recall': 0.8156705106475658, 'f1-score': 0.8147874371264168, 'support': 29261.0} | {'precision': 0.8676102420265399, 'recall': 0.8694166296435528, 'f1-score': 0.8684387980236479, 'support': 29261.0} |
| No log | 5.0 | 205 | 0.3326 | {'precision': 0.6375421311900441, 'recall': 0.6000488042947779, 'f1-score': 0.6182275298554368, 'support': 4098.0} | {'precision': 0.8534005037783375, 'recall': 0.7853500231803431, 'f1-score': 0.8179623370352487, 'support': 2157.0} | {'precision': 0.9584632404706829, 'recall': 0.9674144756877474, 'f1-score': 0.9629180559765586, 'support': 9851.0} | {'precision': 0.884906500445236, 'recall': 0.9064994298745724, 'f1-score': 0.8955728286583305, 'support': 13155.0} | 0.8752 | {'precision': 0.8335780939710751, 'recall': 0.8148281832593602, 'f1-score': 0.8236701878813937, 'support': 29261.0} | {'precision': 0.8727042457708365, 'recall': 0.8751580602166706, 'f1-score': 0.8736819489681839, 'support': 29261.0} |
### Framework versions
- Transformers 4.45.2
- Pytorch 2.5.0+cu124
- Datasets 2.19.1
- Tokenizers 0.20.1
|
miosipof/speecht5_tts_dysarthria_v1 | miosipof | 2024-10-26T19:01:07Z | 12 | 0 | null | [
"tensorboard",
"safetensors",
"speecht5",
"generated_from_trainer",
"dataset:audiofolder",
"region:us"
] | null | 2024-10-26T18:29:09Z | ---
tags:
- generated_from_trainer
datasets:
- audiofolder
model-index:
- name: speecht5_tts_dysarthria_v1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# speecht5_tts_dysarthria_v1
This model was trained from scratch on the audiofolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5207
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-06
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 50
- training_steps: 200
- mixed_precision_training: Native AMP
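As a rough guide, the list above maps onto `transformers` training arguments as follows (a reconstruction for illustration; the original training script is not included in this repo, and the `output_dir` is assumed):

```python
from transformers import Seq2SeqTrainingArguments

# Reconstructed from the hyperparameter list above; illustrative only.
# Adam betas/epsilon match the transformers defaults, so they need no flags.
training_args = Seq2SeqTrainingArguments(
    output_dir="speecht5_tts_dysarthria_v1",  # assumed name
    learning_rate=1e-6,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    seed=42,
    gradient_accumulation_steps=2,  # total train batch size 32
    lr_scheduler_type="linear",
    warmup_steps=50,
    max_steps=200,
    fp16=True,  # "Native AMP" mixed precision
)
```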
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.5385 | 0.7042 | 25 | 0.5221 |
| 0.5296 | 1.4085 | 50 | 0.5202 |
| 0.5471 | 2.1127 | 75 | 0.5208 |
| 0.5408 | 2.8169 | 100 | 0.5204 |
| 0.5497 | 3.5211 | 125 | 0.5198 |
| 0.5193 | 4.2254 | 150 | 0.5219 |
| 0.5317 | 4.9296 | 175 | 0.5184 |
| 0.5409 | 5.6338 | 200 | 0.5207 |
### Framework versions
- Transformers 4.43.3
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.19.1
|
RichardErkhov/RyanYr_-_self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter2-only2nd-6e-7-gguf | RichardErkhov | 2024-10-26T18:58:42Z | 6 | 0 | null | [
"gguf",
"arxiv:2305.18290",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-10-26T17:50:09Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter2-only2nd-6e-7 - GGUF
- Model creator: https://huggingface.co/RyanYr/
- Original model: https://huggingface.co/RyanYr/self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter2-only2nd-6e-7/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter2-only2nd-6e-7.Q2_K.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter2-only2nd-6e-7-gguf/blob/main/self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter2-only2nd-6e-7.Q2_K.gguf) | Q2_K | 1.39GB |
| [self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter2-only2nd-6e-7.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter2-only2nd-6e-7-gguf/blob/main/self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter2-only2nd-6e-7.Q3_K_S.gguf) | Q3_K_S | 1.59GB |
| [self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter2-only2nd-6e-7.Q3_K.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter2-only2nd-6e-7-gguf/blob/main/self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter2-only2nd-6e-7.Q3_K.gguf) | Q3_K | 1.73GB |
| [self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter2-only2nd-6e-7.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter2-only2nd-6e-7-gguf/blob/main/self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter2-only2nd-6e-7.Q3_K_M.gguf) | Q3_K_M | 1.73GB |
| [self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter2-only2nd-6e-7.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter2-only2nd-6e-7-gguf/blob/main/self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter2-only2nd-6e-7.Q3_K_L.gguf) | Q3_K_L | 1.85GB |
| [self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter2-only2nd-6e-7.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter2-only2nd-6e-7-gguf/blob/main/self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter2-only2nd-6e-7.IQ4_XS.gguf) | IQ4_XS | 1.91GB |
| [self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter2-only2nd-6e-7.Q4_0.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter2-only2nd-6e-7-gguf/blob/main/self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter2-only2nd-6e-7.Q4_0.gguf) | Q4_0 | 1.99GB |
| [self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter2-only2nd-6e-7.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter2-only2nd-6e-7-gguf/blob/main/self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter2-only2nd-6e-7.IQ4_NL.gguf) | IQ4_NL | 2.0GB |
| [self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter2-only2nd-6e-7.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter2-only2nd-6e-7-gguf/blob/main/self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter2-only2nd-6e-7.Q4_K_S.gguf) | Q4_K_S | 2.0GB |
| [self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter2-only2nd-6e-7.Q4_K.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter2-only2nd-6e-7-gguf/blob/main/self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter2-only2nd-6e-7.Q4_K.gguf) | Q4_K | 2.09GB |
| [self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter2-only2nd-6e-7.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter2-only2nd-6e-7-gguf/blob/main/self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter2-only2nd-6e-7.Q4_K_M.gguf) | Q4_K_M | 2.09GB |
| [self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter2-only2nd-6e-7.Q4_1.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter2-only2nd-6e-7-gguf/blob/main/self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter2-only2nd-6e-7.Q4_1.gguf) | Q4_1 | 2.18GB |
| [self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter2-only2nd-6e-7.Q5_0.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter2-only2nd-6e-7-gguf/blob/main/self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter2-only2nd-6e-7.Q5_0.gguf) | Q5_0 | 2.37GB |
| [self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter2-only2nd-6e-7.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter2-only2nd-6e-7-gguf/blob/main/self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter2-only2nd-6e-7.Q5_K_S.gguf) | Q5_K_S | 2.37GB |
| [self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter2-only2nd-6e-7.Q5_K.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter2-only2nd-6e-7-gguf/blob/main/self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter2-only2nd-6e-7.Q5_K.gguf) | Q5_K | 2.41GB |
| [self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter2-only2nd-6e-7.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter2-only2nd-6e-7-gguf/blob/main/self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter2-only2nd-6e-7.Q5_K_M.gguf) | Q5_K_M | 2.41GB |
| [self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter2-only2nd-6e-7.Q5_1.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter2-only2nd-6e-7-gguf/blob/main/self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter2-only2nd-6e-7.Q5_1.gguf) | Q5_1 | 2.55GB |
| [self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter2-only2nd-6e-7.Q6_K.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter2-only2nd-6e-7-gguf/blob/main/self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter2-only2nd-6e-7.Q6_K.gguf) | Q6_K | 2.76GB |
| [self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter2-only2nd-6e-7.Q8_0.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter2-only2nd-6e-7-gguf/blob/main/self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter2-only2nd-6e-7.Q8_0.gguf) | Q8_0 | 3.58GB |
Original model description:
---
base_model: RyanYr/self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter2
library_name: transformers
model_name: self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter2-only2nd-6e-7
tags:
- generated_from_trainer
- trl
- dpo
licence: license
---
# Model Card for self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter2-only2nd-6e-7
This model is a fine-tuned version of [RyanYr/self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter2](https://huggingface.co/RyanYr/self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter2).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="RyanYr/self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter2-only2nd-6e-7", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/yyr/huggingface/runs/c3qe0974)
This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290).
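For reference, the DPO objective from that paper is a simple classification loss over preference pairs $(x, y_w, y_l)$ of chosen and rejected responses, with $\pi_{\mathrm{ref}}$ the frozen reference policy and $\beta$ a temperature scaling the implicit KL penalty:

$$
\mathcal{L}_{\mathrm{DPO}}(\pi_\theta; \pi_{\mathrm{ref}}) = -\,\mathbb{E}_{(x,\, y_w,\, y_l) \sim \mathcal{D}}\left[\log \sigma\!\left(\beta \log \frac{\pi_\theta(y_w \mid x)}{\pi_{\mathrm{ref}}(y_w \mid x)} - \beta \log \frac{\pi_\theta(y_l \mid x)}{\pi_{\mathrm{ref}}(y_l \mid x)}\right)\right]
$$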
### Framework versions
- TRL: 0.12.0.dev0
- Transformers: 4.45.2
- Pytorch: 2.4.0
- Datasets: 3.0.1
- Tokenizers: 0.20.1
## Citations
Cite DPO as:
```bibtex
@inproceedings{rafailov2023direct,
title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}},
author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn},
year = 2023,
booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023},
url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html},
editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
SvdH/EVA-Qwen2.5-32B-v0.0-4.65bpw-h6-exl2 | SvdH | 2024-10-26T18:58:08Z | 8 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"conversational",
"base_model:EVA-UNIT-01/EVA-Qwen2.5-32B-v0.0",
"base_model:quantized:EVA-UNIT-01/EVA-Qwen2.5-32B-v0.0",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"exl2",
"region:us"
] | text-generation | 2024-10-26T11:13:31Z | ---
library_name: transformers
license: apache-2.0
base_model: EVA-UNIT-01/EVA-Qwen2.5-32B-v0.0
tags:
- generated_from_trainer
model-index:
- name: EVA-Qwen2.5-32B-SFFT-v0.0
results: []
quantized_by: SvdH
base_model_relation: quantized
---
# EVA Qwen2.5-32B v0.0
4.65 bpw ExLlamaV2 quant of https://huggingface.co/EVA-UNIT-01/EVA-Qwen2.5-32B-v0.0
Calibrated with the following parquet dataset: https://huggingface.co/datasets/roleplay4fun/pippa
<p>
An RP/storywriting specialist model: a full-parameter finetune of Qwen2.5-32B on a mixture of synthetic and natural data.<br>
It uses the Celeste 70B 0.1 data mixture, greatly expanded to improve the versatility, creativity and "flavor" of the resulting model.<br>
</p>
<p>Note: using a quantized KV cache with Qwen2.5 <b>is not recommended</b> and can lead to degraded output quality. On the other hand, Qwen's KV cache is already light enough, so using f16 for it shouldn't be problematic.</p>
<p>
<p>Prompt format is ChatML.</p><br>
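For reference, ChatML wraps every turn in `<|im_start|>`/`<|im_end|>` tokens, with the final assistant header left open for generation:
```
<|im_start|>system
You are a helpful assistant.<|im_end|>
<|im_start|>user
Hello!<|im_end|>
<|im_start|>assistant
```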
<h3>Recommended sampler values:</h3>
<ul>
<li>Temperature: 1</li>
<li>Typical-P: 0.9</li>
<li>Min-P: 0.05</li>
<li>Top-A: 0.2</li>
<li>Repetition Penalty: 1.03</li>
</ul>
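Since this is an exl2 quant, the values above can be set through exllamav2's sampler settings. A minimal sketch, assuming exllamav2's Python API (the `Settings` attribute names below are the ones the library exposes at the time of writing and may change between versions):

```python
from exllamav2.generator import ExLlamaV2Sampler

# Mirrors the recommended sampler values listed above.
settings = ExLlamaV2Sampler.Settings()
settings.temperature = 1.0
settings.typical = 0.9                    # Typical-P
settings.min_p = 0.05
settings.top_a = 0.2
settings.token_repetition_penalty = 1.03  # Repetition Penalty
```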
<h3>Recommended SillyTavern presets (via CalamitousFelicitousness):</h3>
- [Context](https://huggingface.co/EVA-UNIT-01/EVA-Yi-1.5-9B-32K-V1/blob/main/%5BChatML%5D%20Roleplay-v1.9%20Context.json)
- [Instruct and System Prompt](https://huggingface.co/EVA-UNIT-01/EVA-Yi-1.5-9B-32K-V1/blob/main/%5BChatML%5D%20Roleplay-v1.9%20Instruct.json)
</p>
<p>
<br>
<h3>
Training data:
</h3>
<ul>
<li>Celeste 70B 0.1 data mixture minus Opus Instruct subset. See that model's <a href=https://huggingface.co/nothingiisreal/L3.1-70B-Celeste-V0.1-BF16>card</a> for details.</li>
<li>Kalomaze's Opus_Instruct_25k dataset, filtered for refusals.</li>
<li>A subset (1k rows) of ChatGPT-4o-WritingPrompts by Gryphe</li>
<li>A subset (2k rows) of Sonnet3.5-Charcards-Roleplay by Gryphe</li>
<li>Synthstruct and SynthRP datasets by Epiculous</li>
</ul>
<h3>
Training time and hardware:
</h3>
<ul><li>7 hours on 8xH100 SXM, provided by <a href=https://featherless.ai/>FeatherlessAI</a></li></ul><br>
</p>
<p>Model was trained by Kearm and Auri.</p>
<h4>Special thanks:</h4><ul>
<li><b>to <a href=https://featherless.ai/>FeatherlessAI</a> for generously providing 8xH100 SXM node for training of this model</b></li>
<li>to Gryphe, Lemmy, Kalomaze, Nopm and Epiculous for the data</li>
<li>and to Allura-org for support and feedback on EVA models.</li></ul>
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
base_model: Qwen/Qwen2.5-32B
load_in_8bit: false
load_in_4bit: false
strict: false
plugins:
- axolotl.integrations.liger.LigerPlugin
liger_rope: true
liger_rms_norm: true
liger_swiglu: true
liger_fused_linear_cross_entropy: true
# plugins:
# - axolotl.integrations.spectrum.SpectrumPlugin
# spectrum_top_fraction: 0.5
# # Optional if using a pre-scanned model as your base_model. Useful if using a model mirror
# spectrum_model_name: Qwen/Qwen2.5-32B
datasets:
- path: datasets/deduped_Synthstruct-Gens_processed_sharegpt_converted_cleaned.jsonl
type: sharegpt
- path: datasets/opus-instruct-22k-no_refusals-filtered.jsonl
type: sharegpt
- path: datasets/Celeste_Filtered.jsonl
type: sharegpt
- path: datasets/Gryphe-S3-5-Charcards-names-2k.jsonl
type: sharegpt
- path: datasets/deduped_SynthRP-Gens_processed_09-25-2024-ShareGPT_converted_cleaned.jsonl
type: sharegpt
- path: datasets/deduped_Gryphe-4o-WP-1k.jsonl
type: sharegpt
- path: datasets/deduped_not_samantha_norefusals.jsonl
type: sharegpt
chat_template: chatml
shuffle_merged_datasets: true
val_set_size: 0.001
output_dir: ./EVA-Qwen2.5-32B-SFFT-v0.0
sequence_len: 8192
sample_packing: true
eval_sample_packing: false
pad_to_sequence_len: true
# adapter: qlora
# lora_model_dir:
# lora_r: 64
# lora_alpha: 64
# lora_dropout: 0.05
# lora_target_linear: true
# peft_use_dora: true
unfrozen_parameters:
- ^lm_head.weight$
- ^model.embed_tokens.weight$
# input_layernorm layers
- model.layers.0.input_layernorm
- model.layers.1.input_layernorm
- model.layers.2.input_layernorm
- model.layers.3.input_layernorm
- model.layers.4.input_layernorm
- model.layers.5.input_layernorm
- model.layers.6.input_layernorm
- model.layers.7.input_layernorm
- model.layers.8.input_layernorm
- model.layers.9.input_layernorm
- model.layers.10.input_layernorm
- model.layers.11.input_layernorm
- model.layers.12.input_layernorm
- model.layers.13.input_layernorm
- model.layers.14.input_layernorm
- model.layers.15.input_layernorm
- model.layers.16.input_layernorm
- model.layers.17.input_layernorm
- model.layers.18.input_layernorm
- model.layers.19.input_layernorm
- model.layers.20.input_layernorm
- model.layers.21.input_layernorm
- model.layers.22.input_layernorm
- model.layers.23.input_layernorm
- model.layers.24.input_layernorm
- model.layers.25.input_layernorm
- model.layers.26.input_layernorm
- model.layers.27.input_layernorm
- model.layers.28.input_layernorm
- model.layers.29.input_layernorm
- model.layers.30.input_layernorm
- model.layers.31.input_layernorm
# lm_head layers
# mlp.down_proj layers
- model.layers.63.mlp.down_proj
- model.layers.49.mlp.down_proj
- model.layers.48.mlp.down_proj
- model.layers.45.mlp.down_proj
- model.layers.44.mlp.down_proj
- model.layers.47.mlp.down_proj
- model.layers.46.mlp.down_proj
- model.layers.43.mlp.down_proj
- model.layers.8.mlp.down_proj
- model.layers.11.mlp.down_proj
- model.layers.19.mlp.down_proj
- model.layers.35.mlp.down_proj
- model.layers.20.mlp.down_proj
- model.layers.52.mlp.down_proj
- model.layers.39.mlp.down_proj
- model.layers.62.mlp.down_proj
- model.layers.50.mlp.down_proj
- model.layers.29.mlp.down_proj
- model.layers.16.mlp.down_proj
- model.layers.28.mlp.down_proj
- model.layers.53.mlp.down_proj
- model.layers.30.mlp.down_proj
- model.layers.31.mlp.down_proj
- model.layers.32.mlp.down_proj
- model.layers.7.mlp.down_proj
- model.layers.36.mlp.down_proj
- model.layers.12.mlp.down_proj
- model.layers.18.mlp.down_proj
- model.layers.37.mlp.down_proj
- model.layers.38.mlp.down_proj
- model.layers.14.mlp.down_proj
- model.layers.13.mlp.down_proj
# mlp.gate_proj layers
- model.layers.43.mlp.gate_proj
- model.layers.61.mlp.gate_proj
- model.layers.60.mlp.gate_proj
- model.layers.44.mlp.gate_proj
- model.layers.62.mlp.gate_proj
- model.layers.28.mlp.gate_proj
- model.layers.29.mlp.gate_proj
- model.layers.45.mlp.gate_proj
- model.layers.37.mlp.gate_proj
- model.layers.35.mlp.gate_proj
- model.layers.59.mlp.gate_proj
- model.layers.36.mlp.gate_proj
- model.layers.30.mlp.gate_proj
- model.layers.48.mlp.gate_proj
- model.layers.38.mlp.gate_proj
- model.layers.27.mlp.gate_proj
- model.layers.31.mlp.gate_proj
- model.layers.39.mlp.gate_proj
- model.layers.34.mlp.gate_proj
- model.layers.58.mlp.gate_proj
- model.layers.33.mlp.gate_proj
- model.layers.26.mlp.gate_proj
- model.layers.32.mlp.gate_proj
- model.layers.46.mlp.gate_proj
- model.layers.42.mlp.gate_proj
- model.layers.49.mlp.gate_proj
- model.layers.57.mlp.gate_proj
- model.layers.50.mlp.gate_proj
- model.layers.47.mlp.gate_proj
- model.layers.56.mlp.gate_proj
- model.layers.63.mlp.gate_proj
- model.layers.55.mlp.gate_proj
# mlp.up_proj layers
- model.layers.61.mlp.up_proj
- model.layers.60.mlp.up_proj
- model.layers.32.mlp.up_proj
- model.layers.59.mlp.up_proj
- model.layers.58.mlp.up_proj
- model.layers.57.mlp.up_proj
- model.layers.44.mlp.up_proj
- model.layers.28.mlp.up_proj
- model.layers.35.mlp.up_proj
- model.layers.36.mlp.up_proj
- model.layers.31.mlp.up_proj
- model.layers.34.mlp.up_proj
- model.layers.55.mlp.up_proj
- model.layers.29.mlp.up_proj
- model.layers.49.mlp.up_proj
- model.layers.30.mlp.up_proj
- model.layers.53.mlp.up_proj
- model.layers.43.mlp.up_proj
- model.layers.56.mlp.up_proj
- model.layers.33.mlp.up_proj
- model.layers.54.mlp.up_proj
- model.layers.62.mlp.up_proj
- model.layers.27.mlp.up_proj
- model.layers.51.mlp.up_proj
- model.layers.52.mlp.up_proj
- model.layers.37.mlp.up_proj
- model.layers.45.mlp.up_proj
- model.layers.26.mlp.up_proj
- model.layers.42.mlp.up_proj
- model.layers.50.mlp.up_proj
- model.layers.48.mlp.up_proj
- model.layers.39.mlp.up_proj
# model.embed_tokens layers
# model.norm layers
# post_attention_layernorm layers
- model.layers.0.post_attention_layernorm
- model.layers.1.post_attention_layernorm
- model.layers.2.post_attention_layernorm
- model.layers.3.post_attention_layernorm
- model.layers.4.post_attention_layernorm
- model.layers.5.post_attention_layernorm
- model.layers.6.post_attention_layernorm
- model.layers.7.post_attention_layernorm
- model.layers.8.post_attention_layernorm
- model.layers.9.post_attention_layernorm
- model.layers.10.post_attention_layernorm
- model.layers.11.post_attention_layernorm
- model.layers.12.post_attention_layernorm
- model.layers.13.post_attention_layernorm
- model.layers.14.post_attention_layernorm
- model.layers.15.post_attention_layernorm
- model.layers.16.post_attention_layernorm
- model.layers.17.post_attention_layernorm
- model.layers.18.post_attention_layernorm
- model.layers.19.post_attention_layernorm
- model.layers.20.post_attention_layernorm
- model.layers.21.post_attention_layernorm
- model.layers.22.post_attention_layernorm
- model.layers.23.post_attention_layernorm
- model.layers.24.post_attention_layernorm
- model.layers.25.post_attention_layernorm
- model.layers.26.post_attention_layernorm
- model.layers.27.post_attention_layernorm
- model.layers.28.post_attention_layernorm
- model.layers.29.post_attention_layernorm
- model.layers.30.post_attention_layernorm
- model.layers.31.post_attention_layernorm
# self_attn.k_proj layers
- model.layers.63.self_attn.k_proj
- model.layers.55.self_attn.k_proj
- model.layers.60.self_attn.k_proj
- model.layers.7.self_attn.k_proj
- model.layers.12.self_attn.k_proj
- model.layers.13.self_attn.k_proj
- model.layers.57.self_attn.k_proj
- model.layers.29.self_attn.k_proj
- model.layers.14.self_attn.k_proj
- model.layers.51.self_attn.k_proj
- model.layers.53.self_attn.k_proj
- model.layers.54.self_attn.k_proj
- model.layers.22.self_attn.k_proj
- model.layers.61.self_attn.k_proj
- model.layers.18.self_attn.k_proj
- model.layers.30.self_attn.k_proj
- model.layers.9.self_attn.k_proj
- model.layers.24.self_attn.k_proj
- model.layers.23.self_attn.k_proj
- model.layers.25.self_attn.k_proj
- model.layers.10.self_attn.k_proj
- model.layers.58.self_attn.k_proj
- model.layers.56.self_attn.k_proj
- model.layers.15.self_attn.k_proj
- model.layers.32.self_attn.k_proj
- model.layers.28.self_attn.k_proj
- model.layers.8.self_attn.k_proj
- model.layers.59.self_attn.k_proj
- model.layers.11.self_attn.k_proj
- model.layers.48.self_attn.k_proj
- model.layers.16.self_attn.k_proj
- model.layers.50.self_attn.k_proj
# self_attn.o_proj layers
- model.layers.15.self_attn.o_proj
- model.layers.23.self_attn.o_proj
- model.layers.31.self_attn.o_proj
- model.layers.30.self_attn.o_proj
- model.layers.18.self_attn.o_proj
- model.layers.24.self_attn.o_proj
- model.layers.17.self_attn.o_proj
- model.layers.28.self_attn.o_proj
- model.layers.34.self_attn.o_proj
- model.layers.33.self_attn.o_proj
- model.layers.25.self_attn.o_proj
- model.layers.12.self_attn.o_proj
- model.layers.14.self_attn.o_proj
- model.layers.29.self_attn.o_proj
- model.layers.16.self_attn.o_proj
- model.layers.26.self_attn.o_proj
- model.layers.22.self_attn.o_proj
- model.layers.27.self_attn.o_proj
- model.layers.35.self_attn.o_proj
- model.layers.20.self_attn.o_proj
- model.layers.13.self_attn.o_proj
- model.layers.36.self_attn.o_proj
- model.layers.19.self_attn.o_proj
- model.layers.37.self_attn.o_proj
- model.layers.21.self_attn.o_proj
- model.layers.11.self_attn.o_proj
- model.layers.54.self_attn.o_proj
- model.layers.5.self_attn.o_proj
- model.layers.38.self_attn.o_proj
- model.layers.6.self_attn.o_proj
- model.layers.8.self_attn.o_proj
- model.layers.9.self_attn.o_proj
# self_attn.q_proj layers
- model.layers.1.self_attn.q_proj
- model.layers.2.self_attn.q_proj
- model.layers.3.self_attn.q_proj
- model.layers.45.self_attn.q_proj
- model.layers.54.self_attn.q_proj
- model.layers.35.self_attn.q_proj
- model.layers.48.self_attn.q_proj
- model.layers.61.self_attn.q_proj
- model.layers.52.self_attn.q_proj
- model.layers.50.self_attn.q_proj
- model.layers.60.self_attn.q_proj
- model.layers.56.self_attn.q_proj
- model.layers.58.self_attn.q_proj
- model.layers.42.self_attn.q_proj
- model.layers.59.self_attn.q_proj
- model.layers.44.self_attn.q_proj
- model.layers.55.self_attn.q_proj
- model.layers.57.self_attn.q_proj
- model.layers.41.self_attn.q_proj
- model.layers.36.self_attn.q_proj
- model.layers.39.self_attn.q_proj
- model.layers.4.self_attn.q_proj
- model.layers.43.self_attn.q_proj
- model.layers.34.self_attn.q_proj
- model.layers.46.self_attn.q_proj
- model.layers.49.self_attn.q_proj
- model.layers.40.self_attn.q_proj
- model.layers.25.self_attn.q_proj
- model.layers.51.self_attn.q_proj
- model.layers.17.self_attn.q_proj
- model.layers.37.self_attn.q_proj
- model.layers.53.self_attn.q_proj
# self_attn.v_proj layers
- model.layers.55.self_attn.v_proj
- model.layers.31.self_attn.v_proj
- model.layers.47.self_attn.v_proj
- model.layers.45.self_attn.v_proj
- model.layers.49.self_attn.v_proj
- model.layers.48.self_attn.v_proj
- model.layers.15.self_attn.v_proj
- model.layers.30.self_attn.v_proj
- model.layers.7.self_attn.v_proj
- model.layers.44.self_attn.v_proj
- model.layers.29.self_attn.v_proj
- model.layers.51.self_attn.v_proj
- model.layers.50.self_attn.v_proj
- model.layers.14.self_attn.v_proj
- model.layers.54.self_attn.v_proj
- model.layers.32.self_attn.v_proj
- model.layers.43.self_attn.v_proj
- model.layers.10.self_attn.v_proj
- model.layers.46.self_attn.v_proj
- model.layers.38.self_attn.v_proj
- model.layers.57.self_attn.v_proj
- model.layers.22.self_attn.v_proj
- model.layers.39.self_attn.v_proj
- model.layers.6.self_attn.v_proj
- model.layers.23.self_attn.v_proj
- model.layers.58.self_attn.v_proj
- model.layers.53.self_attn.v_proj
- model.layers.40.self_attn.v_proj
- model.layers.24.self_attn.v_proj
- model.layers.9.self_attn.v_proj
- model.layers.25.self_attn.v_proj
- model.layers.5.self_attn.v_proj
wandb_project: EVA-Qwen2.5-32B-SFFT-v0.0
wandb_entity:
wandb_watch:
wandb_name: Unit-00
wandb_log_model:
gradient_accumulation_steps: 8
micro_batch_size: 1
num_epochs: 3
optimizer: paged_adamw_8bit
lr_scheduler: cosine
learning_rate: 0.00003
max_grad_norm: 3
train_on_inputs: false
group_by_length: false
bf16: auto
fp16:
tf32: true
gradient_checkpointing: "unsloth"
# gradient_checkpointing_kwargs:
# use_reentrant: true
early_stopping_patience:
resume_from_checkpoint:
local_rank:
logging_steps: 1
xformers_attention:
flash_attention: true
warmup_steps: 20
evals_per_epoch: 4
saves_per_epoch: 2
save_safetensors: true
hub_model_id:
hub_strategy:
debug:
deepspeed: deepspeed_configs/zero3_bf16.json
weight_decay: 0.1
# fsdp:
# - full_shard
# - auto_wrap
# fsdp_config:
# fsdp_limit_all_gathers: true
# fsdp_sync_module_states: true
# fsdp_offload_params: false # Changed from true
# fsdp_use_orig_params: true # Changed from false
# fsdp_cpu_ram_efficient_loading: true
# fsdp_auto_wrap_policy: TRANSFORMER_BASED_WRAP
# fsdp_transformer_layer_cls_to_wrap: Qwen2DecoderLayer
# fsdp_activation_checkpointing: true
# fsdp_state_dict_type: SHARDED_STATE_DICT # Changed from FULL_STATE_DICT
# fsdp_sharding_strategy: FULL_SHARD
# fsdp_forward_prefetch: true # Added
# fsdp_backward_prefetch: "BACKWARD_POST" # Added
# fsdp_backward_prefetch_limit: 1 # Added
# fsdp_mixed_precision: BF16 # Added
```
</details><br>
|
Makkoen/whisper-large-v3-cit-do01-wd0-lr3e-06-FULL5 | Makkoen | 2024-10-26T18:56:15Z | 7 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"en",
"base_model:openai/whisper-large-v3",
"base_model:finetune:openai/whisper-large-v3",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2024-10-26T09:48:49Z | ---
library_name: transformers
language:
- en
license: apache-2.0
base_model: openai/whisper-large-v3
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: ./7326
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ./7326
This model is a fine-tuned version of [openai/whisper-large-v3](https://huggingface.co/openai/whisper-large-v3) on the 7326 FULL-2024-10-24 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3911
- Wer Ortho: 22.6474
- Wer: 15.5576
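For reference, WER figures like the ones above can be computed with the `evaluate` library (a sketch with placeholder strings, not the exact evaluation script used here):

```python
import evaluate

# WER is reported here on a 0-100 scale.
wer_metric = evaluate.load("wer")
wer = 100 * wer_metric.compute(
    predictions=["the model transcription"],
    references=["the reference transcript"],
)
print(f"WER: {wer:.4f}")
```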
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-06
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 300
- training_steps: 1600
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer Ortho | Wer |
|:-------------:|:------:|:----:|:---------------:|:---------:|:-------:|
| 0.686 | 0.4851 | 200 | 0.4602 | 26.0150 | 18.7885 |
| 0.5255 | 0.9703 | 400 | 0.4216 | 24.3312 | 17.1358 |
| 0.4328 | 1.4554 | 600 | 0.4028 | 23.2291 | 15.9895 |
| 0.4064 | 1.9406 | 800 | 0.3945 | 23.2291 | 16.1897 |
| 0.3579 | 2.4257 | 1000 | 0.3945 | 22.8195 | 15.7618 |
| 0.3409 | 2.9109 | 1200 | 0.3894 | 22.6884 | 15.5812 |
| 0.3131 | 3.3960 | 1400 | 0.3909 | 22.6556 | 15.6008 |
| 0.3021 | 3.8811 | 1600 | 0.3911 | 22.6474 | 15.5576 |
### Framework versions
- Transformers 4.45.1
- Pytorch 1.13.1+cu117
- Datasets 3.0.1
- Tokenizers 0.20.0
|
Theoreticallyhugo/longformer-full_labels | Theoreticallyhugo | 2024-10-26T18:54:43Z | 35 | 0 | transformers | [
"transformers",
"safetensors",
"longformer",
"token-classification",
"generated_from_trainer",
"dataset:stab-gurevych-essays",
"base_model:allenai/longformer-base-4096",
"base_model:finetune:allenai/longformer-base-4096",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2024-02-06T12:00:08Z | ---
library_name: transformers
license: apache-2.0
base_model: allenai/longformer-base-4096
tags:
- generated_from_trainer
datasets:
- stab-gurevych-essays
metrics:
- accuracy
model-index:
- name: longformer-full_labels
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: stab-gurevych-essays
type: stab-gurevych-essays
config: full_labels
split: train[0%:20%]
args: full_labels
metrics:
- name: Accuracy
type: accuracy
value: 0.8572502648576603
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# longformer-full_labels
This model is a fine-tuned version of [allenai/longformer-base-4096](https://huggingface.co/allenai/longformer-base-4096) on the stab-gurevych-essays dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3818
- B-claim: {'precision': 0.5588235294117647, 'recall': 0.46830985915492956, 'f1-score': 0.5095785440613027, 'support': 284.0}
- B-majorclaim: {'precision': 0.8787878787878788, 'recall': 0.20567375886524822, 'f1-score': 0.3333333333333333, 'support': 141.0}
- B-premise: {'precision': 0.7287735849056604, 'recall': 0.8728813559322034, 'f1-score': 0.794344473007712, 'support': 708.0}
- I-claim: {'precision': 0.6021926389976507, 'recall': 0.5673880964092474, 'f1-score': 0.5842725085475498, 'support': 4066.0}
- I-majorclaim: {'precision': 0.7885196374622356, 'recall': 0.7767857142857143, 'f1-score': 0.782608695652174, 'support': 2016.0}
- I-premise: {'precision': 0.8760707709550877, 'recall': 0.8973349733497334, 'f1-score': 0.8865753868589484, 'support': 12195.0}
- O: {'precision': 0.9648159446817165, 'recall': 0.9631509491422191, 'f1-score': 0.9639827279654559, 'support': 9851.0}
- Accuracy: 0.8573
- Macro avg: {'precision': 0.7711405693145706, 'recall': 0.6787892438770422, 'f1-score': 0.693527952775211, 'support': 29261.0}
- Weighted avg: {'precision': 0.8552285449410628, 'recall': 0.8572502648576603, 'f1-score': 0.8549088561404111, 'support': 29261.0}
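A minimal inference sketch, inferred from the repo's `token-classification` tag rather than taken from the authors (the example sentence is made up):

```python
from transformers import pipeline

tagger = pipeline(
    "token-classification",
    model="Theoreticallyhugo/longformer-full_labels",
    aggregation_strategy="simple",  # merges B-/I- tokens into labeled spans
)
print(tagger("First of all, cloning should be banned because it is unethical."))
```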
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | B-claim | B-majorclaim | B-premise | I-claim | I-majorclaim | I-premise | O | Accuracy | Macro avg | Weighted avg |
|:-------------:|:-----:|:----:|:---------------:|:-------------------------------------------------------------------------------------------------------------------:|:------------------------------------------------------------------------------------------------------------------:|:-------------------------------------------------------------------------------------------------------------------:|:---------------------------------------------------------------------------------------------------------------------:|:------------------------------------------------------------------------------------------------------------------:|:-------------------------------------------------------------------------------------------------------------------:|:------------------------------------------------------------------------------------------------------------------:|:--------:|:-------------------------------------------------------------------------------------------------------------------:|:-------------------------------------------------------------------------------------------------------------------:|
| No log | 1.0 | 41 | 0.7363 | {'precision': 0.0, 'recall': 0.0, 'f1-score': 0.0, 'support': 284.0} | {'precision': 0.0, 'recall': 0.0, 'f1-score': 0.0, 'support': 141.0} | {'precision': 0.7931034482758621, 'recall': 0.06497175141242938, 'f1-score': 0.12010443864229765, 'support': 708.0} | {'precision': 0.35688405797101447, 'recall': 0.09690113133300542, 'f1-score': 0.15241779497098645, 'support': 4066.0} | {'precision': 0.4854771784232365, 'recall': 0.3482142857142857, 'f1-score': 0.4055459272097054, 'support': 2016.0} | {'precision': 0.7254034519284691, 'recall': 0.9546535465354653, 'f1-score': 0.8243874805268375, 'support': 12195.0} | {'precision': 0.8224254998113919, 'recall': 0.8852908334179271, 'f1-score': 0.8527010510877536, 'support': 9851.0} | 0.7349 | {'precision': 0.4547562337728534, 'recall': 0.3357187926304447, 'f1-score': 0.3364509560625115, 'support': 29261.0} | {'precision': 0.6814305221181916, 'recall': 0.7349372885410614, 'f1-score': 0.6826734788782265, 'support': 29261.0} |
| No log | 2.0 | 82 | 0.4757 | {'precision': 1.0, 'recall': 0.01056338028169014, 'f1-score': 0.020905923344947737, 'support': 284.0} | {'precision': 0.0, 'recall': 0.0, 'f1-score': 0.0, 'support': 141.0} | {'precision': 0.6255364806866953, 'recall': 0.8234463276836158, 'f1-score': 0.7109756097560975, 'support': 708.0} | {'precision': 0.5658734764944864, 'recall': 0.4795868175110674, 'f1-score': 0.5191693290734825, 'support': 4066.0} | {'precision': 0.745417515274949, 'recall': 0.5446428571428571, 'f1-score': 0.6294067067927773, 'support': 2016.0} | {'precision': 0.8514935768456895, 'recall': 0.9022550225502255, 'f1-score': 0.8761396663614285, 'support': 12195.0} | {'precision': 0.9034811635670005, 'recall': 0.9616282610902447, 'f1-score': 0.931648308418568, 'support': 9851.0} | 0.8240 | {'precision': 0.6702574589812601, 'recall': 0.5317318094656714, 'f1-score': 0.5268922205353288, 'support': 29261.0} | {'precision': 0.8138696629123667, 'recall': 0.8239636376063703, 'f1-score': 0.8117058591419718, 'support': 29261.0} |
| No log | 3.0 | 123 | 0.4101 | {'precision': 0.49624060150375937, 'recall': 0.2323943661971831, 'f1-score': 0.31654676258992803, 'support': 284.0} | {'precision': 1.0, 'recall': 0.014184397163120567, 'f1-score': 0.027972027972027972, 'support': 141.0} | {'precision': 0.6877777777777778, 'recall': 0.8742937853107344, 'f1-score': 0.7699004975124378, 'support': 708.0} | {'precision': 0.6374125874125874, 'recall': 0.4483521888834235, 'f1-score': 0.5264221773029165, 'support': 4066.0} | {'precision': 0.7599795291709315, 'recall': 0.7366071428571429, 'f1-score': 0.7481108312342569, 'support': 2016.0} | {'precision': 0.843370836090889, 'recall': 0.9404674046740468, 'f1-score': 0.8892765759478949, 'support': 12195.0} | {'precision': 0.9602568022011617, 'recall': 0.9565526342503299, 'f1-score': 0.9584011391375101, 'support': 9851.0} | 0.8505 | {'precision': 0.7692911620224437, 'recall': 0.6004074170479973, 'f1-score': 0.6052328588138531, 'support': 29261.0} | {'precision': 0.841977868607838, 'recall': 0.8505177540070401, 'f1-score': 0.8398036418020065, 'support': 29261.0} |
| No log | 4.0 | 164 | 0.3859 | {'precision': 0.538135593220339, 'recall': 0.4471830985915493, 'f1-score': 0.48846153846153845, 'support': 284.0} | {'precision': 1.0, 'recall': 0.10638297872340426, 'f1-score': 0.19230769230769232, 'support': 141.0} | {'precision': 0.7128146453089245, 'recall': 0.8799435028248588, 'f1-score': 0.7876106194690266, 'support': 708.0} | {'precision': 0.6014307613694431, 'recall': 0.5789473684210527, 'f1-score': 0.5899749373433584, 'support': 4066.0} | {'precision': 0.7848036715961244, 'recall': 0.7633928571428571, 'f1-score': 0.7739502137289415, 'support': 2016.0} | {'precision': 0.8792672100718263, 'recall': 0.8933989339893399, 'f1-score': 0.8862767428617913, 'support': 12195.0} | {'precision': 0.9612968591691996, 'recall': 0.9631509491422191, 'f1-score': 0.9622230110034988, 'support': 9851.0} | 0.8558 | {'precision': 0.7825355343908367, 'recall': 0.6617713841193258, 'f1-score': 0.6686863935965496, 'support': 29261.0} | {'precision': 0.8550112416363399, 'recall': 0.855780732032398, 'f1-score': 0.8533403597564397, 'support': 29261.0} |
| No log | 5.0 | 205 | 0.3818 | {'precision': 0.5588235294117647, 'recall': 0.46830985915492956, 'f1-score': 0.5095785440613027, 'support': 284.0} | {'precision': 0.8787878787878788, 'recall': 0.20567375886524822, 'f1-score': 0.3333333333333333, 'support': 141.0} | {'precision': 0.7287735849056604, 'recall': 0.8728813559322034, 'f1-score': 0.794344473007712, 'support': 708.0} | {'precision': 0.6021926389976507, 'recall': 0.5673880964092474, 'f1-score': 0.5842725085475498, 'support': 4066.0} | {'precision': 0.7885196374622356, 'recall': 0.7767857142857143, 'f1-score': 0.782608695652174, 'support': 2016.0} | {'precision': 0.8760707709550877, 'recall': 0.8973349733497334, 'f1-score': 0.8865753868589484, 'support': 12195.0} | {'precision': 0.9648159446817165, 'recall': 0.9631509491422191, 'f1-score': 0.9639827279654559, 'support': 9851.0} | 0.8573 | {'precision': 0.7711405693145706, 'recall': 0.6787892438770422, 'f1-score': 0.693527952775211, 'support': 29261.0} | {'precision': 0.8552285449410628, 'recall': 0.8572502648576603, 'f1-score': 0.8549088561404111, 'support': 29261.0} |
### Framework versions
- Transformers 4.45.2
- Pytorch 2.5.0+cu124
- Datasets 2.19.1
- Tokenizers 0.20.1
|
YashGangan99/MiniLm-yashgangan | YashGangan99 | 2024-10-26T18:42:37Z | 5 | 0 | null | [
"safetensors",
"bert",
"license:apache-2.0",
"region:us"
] | null | 2024-10-26T18:41:42Z | ---
license: apache-2.0
---
|
Sirapatsorn/Spark_Log_Analysis-logbert | Sirapatsorn | 2024-10-26T18:37:14Z | 111 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-10-26T18:36:44Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
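In the absence of official instructions, a minimal, hypothetical starting point inferred from the repo's `bert`/`text-classification` tags (the label meanings and the example log line are assumptions):

```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="Sirapatsorn/Spark_Log_Analysis-logbert",
)
# Example Spark log line; replace with real log messages.
print(classifier("INFO scheduler.TaskSetManager: Finished task 1.0 in stage 0.0"))
```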
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
codermert/adsss_flux | codermert | 2024-10-26T18:25:27Z | 8 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2024-10-26T17:28:52Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
instance_prompt: DHANUSH
---
# Tugce_Flux
Trained on Replicate using:
https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `zehra` to trigger the image generation.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.bfloat16).to('cuda')  # FLUX.1-dev base model, per the card metadata
pipeline.load_lora_weights('codermert/adsss_flux', weight_name='flux_train_replicate.safetensors')
image = pipeline('your prompt').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters) |
RichardErkhov/igmochang_-_Colibri-RAG-Llama-3.2-3B-2-gguf | RichardErkhov | 2024-10-26T18:15:40Z | 10 | 0 | null | [
"gguf",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-10-26T17:30:28Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Colibri-RAG-Llama-3.2-3B-2 - GGUF
- Model creator: https://huggingface.co/igmochang/
- Original model: https://huggingface.co/igmochang/Colibri-RAG-Llama-3.2-3B-2/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [Colibri-RAG-Llama-3.2-3B-2.Q2_K.gguf](https://huggingface.co/RichardErkhov/igmochang_-_Colibri-RAG-Llama-3.2-3B-2-gguf/blob/main/Colibri-RAG-Llama-3.2-3B-2.Q2_K.gguf) | Q2_K | 1.27GB |
| [Colibri-RAG-Llama-3.2-3B-2.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/igmochang_-_Colibri-RAG-Llama-3.2-3B-2-gguf/blob/main/Colibri-RAG-Llama-3.2-3B-2.Q3_K_S.gguf) | Q3_K_S | 1.44GB |
| [Colibri-RAG-Llama-3.2-3B-2.Q3_K.gguf](https://huggingface.co/RichardErkhov/igmochang_-_Colibri-RAG-Llama-3.2-3B-2-gguf/blob/main/Colibri-RAG-Llama-3.2-3B-2.Q3_K.gguf) | Q3_K | 1.57GB |
| [Colibri-RAG-Llama-3.2-3B-2.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/igmochang_-_Colibri-RAG-Llama-3.2-3B-2-gguf/blob/main/Colibri-RAG-Llama-3.2-3B-2.Q3_K_M.gguf) | Q3_K_M | 1.57GB |
| [Colibri-RAG-Llama-3.2-3B-2.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/igmochang_-_Colibri-RAG-Llama-3.2-3B-2-gguf/blob/main/Colibri-RAG-Llama-3.2-3B-2.Q3_K_L.gguf) | Q3_K_L | 1.69GB |
| [Colibri-RAG-Llama-3.2-3B-2.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/igmochang_-_Colibri-RAG-Llama-3.2-3B-2-gguf/blob/main/Colibri-RAG-Llama-3.2-3B-2.IQ4_XS.gguf) | IQ4_XS | 1.71GB |
| [Colibri-RAG-Llama-3.2-3B-2.Q4_0.gguf](https://huggingface.co/RichardErkhov/igmochang_-_Colibri-RAG-Llama-3.2-3B-2-gguf/blob/main/Colibri-RAG-Llama-3.2-3B-2.Q4_0.gguf) | Q4_0 | 1.79GB |
| [Colibri-RAG-Llama-3.2-3B-2.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/igmochang_-_Colibri-RAG-Llama-3.2-3B-2-gguf/blob/main/Colibri-RAG-Llama-3.2-3B-2.IQ4_NL.gguf) | IQ4_NL | 1.79GB |
| [Colibri-RAG-Llama-3.2-3B-2.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/igmochang_-_Colibri-RAG-Llama-3.2-3B-2-gguf/blob/main/Colibri-RAG-Llama-3.2-3B-2.Q4_K_S.gguf) | Q4_K_S | 1.8GB |
| [Colibri-RAG-Llama-3.2-3B-2.Q4_K.gguf](https://huggingface.co/RichardErkhov/igmochang_-_Colibri-RAG-Llama-3.2-3B-2-gguf/blob/main/Colibri-RAG-Llama-3.2-3B-2.Q4_K.gguf) | Q4_K | 1.88GB |
| [Colibri-RAG-Llama-3.2-3B-2.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/igmochang_-_Colibri-RAG-Llama-3.2-3B-2-gguf/blob/main/Colibri-RAG-Llama-3.2-3B-2.Q4_K_M.gguf) | Q4_K_M | 1.88GB |
| [Colibri-RAG-Llama-3.2-3B-2.Q4_1.gguf](https://huggingface.co/RichardErkhov/igmochang_-_Colibri-RAG-Llama-3.2-3B-2-gguf/blob/main/Colibri-RAG-Llama-3.2-3B-2.Q4_1.gguf) | Q4_1 | 1.95GB |
| [Colibri-RAG-Llama-3.2-3B-2.Q5_0.gguf](https://huggingface.co/RichardErkhov/igmochang_-_Colibri-RAG-Llama-3.2-3B-2-gguf/blob/main/Colibri-RAG-Llama-3.2-3B-2.Q5_0.gguf) | Q5_0 | 2.11GB |
| [Colibri-RAG-Llama-3.2-3B-2.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/igmochang_-_Colibri-RAG-Llama-3.2-3B-2-gguf/blob/main/Colibri-RAG-Llama-3.2-3B-2.Q5_K_S.gguf) | Q5_K_S | 2.11GB |
| [Colibri-RAG-Llama-3.2-3B-2.Q5_K.gguf](https://huggingface.co/RichardErkhov/igmochang_-_Colibri-RAG-Llama-3.2-3B-2-gguf/blob/main/Colibri-RAG-Llama-3.2-3B-2.Q5_K.gguf) | Q5_K | 2.16GB |
| [Colibri-RAG-Llama-3.2-3B-2.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/igmochang_-_Colibri-RAG-Llama-3.2-3B-2-gguf/blob/main/Colibri-RAG-Llama-3.2-3B-2.Q5_K_M.gguf) | Q5_K_M | 2.16GB |
| [Colibri-RAG-Llama-3.2-3B-2.Q5_1.gguf](https://huggingface.co/RichardErkhov/igmochang_-_Colibri-RAG-Llama-3.2-3B-2-gguf/blob/main/Colibri-RAG-Llama-3.2-3B-2.Q5_1.gguf) | Q5_1 | 2.28GB |
| [Colibri-RAG-Llama-3.2-3B-2.Q6_K.gguf](https://huggingface.co/RichardErkhov/igmochang_-_Colibri-RAG-Llama-3.2-3B-2-gguf/blob/main/Colibri-RAG-Llama-3.2-3B-2.Q6_K.gguf) | Q6_K | 2.46GB |
| [Colibri-RAG-Llama-3.2-3B-2.Q8_0.gguf](https://huggingface.co/RichardErkhov/igmochang_-_Colibri-RAG-Llama-3.2-3B-2-gguf/blob/main/Colibri-RAG-Llama-3.2-3B-2.Q8_0.gguf) | Q8_0 | 3.19GB |
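After downloading one of the files above, a minimal loading sketch with llama-cpp-python (illustrative only; file name, context size and prompt are examples):

```python
from llama_cpp import Llama

llm = Llama(
    model_path="Colibri-RAG-Llama-3.2-3B-2.Q4_K_M.gguf",
    n_ctx=4096,
)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Answer using only the retrieved context."}]
)
print(out["choices"][0]["message"]["content"])
```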
Original model description:
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
MattLips/MindLlama-3.2-3B-Instruct | MattLips | 2024-10-26T18:10:45Z | 11 | 0 | transformers | [
"transformers",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-10-12T03:21:33Z | ---
base_model: unsloth/llama-3.2-3b-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
---
# Uploaded model
- **Developed by:** MattLips
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3.2-3b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
ihughes15234/Phi_3_dpo_ttt_Merge | ihughes15234 | 2024-10-26T18:05:13Z | 10 | 0 | null | [
"safetensors",
"llama",
"merge",
"mergekit",
"lazymergekit",
"unsloth/Phi-3.5-mini-instruct",
"ihughes15234/phi35_tictactoe_dpo5epoch",
"text-generation-inference",
"base_model:ihughes15234/phi35_tictactoe_dpo5epoch",
"base_model:merge:ihughes15234/phi35_tictactoe_dpo5epoch",
"base_model:unsloth/Phi-3.5-mini-instruct",
"base_model:merge:unsloth/Phi-3.5-mini-instruct",
"region:us"
] | null | 2024-10-26T07:21:15Z | ---
base_model:
- unsloth/Phi-3.5-mini-instruct
- ihughes15234/phi35_tictactoe_dpo5epoch
tags:
- merge
- mergekit
- lazymergekit
- unsloth/Phi-3.5-mini-instruct
- ihughes15234/phi35_tictactoe_dpo5epoch
- text-generation-inference
---
# Phi_3_dpo_ttt_Merge
Phi_3_dpo_ttt_Merge is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [unsloth/Phi-3.5-mini-instruct](https://huggingface.co/unsloth/Phi-3.5-mini-instruct)
* [ihughes15234/phi35_tictactoe_dpo5epoch](https://huggingface.co/ihughes15234/phi35_tictactoe_dpo5epoch)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: unsloth/Phi-3.5-mini-instruct
layer_range: [0, 32]
- model: ihughes15234/phi35_tictactoe_dpo5epoch
layer_range: [0, 32]
merge_method: slerp
base_model: unsloth/Phi-3.5-mini-instruct
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
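For reference, slerp interpolates each tensor between the two checkpoints along the unit sphere, and the `t` schedule above sets the mix per layer group (t=0 keeps unsloth/Phi-3.5-mini-instruct, t=1 takes the DPO fine-tune). A rough sketch of the per-tensor operation, not mergekit's exact code:
```python
# Illustrative per-tensor slerp between two weight tensors of the same shape.
import torch
def slerp(t: float, a: torch.Tensor, b: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    a_n = a / (a.norm() + eps)  # normalize so interpolation happens on the unit sphere
    b_n = b / (b.norm() + eps)
    omega = torch.acos((a_n * b_n).sum().clamp(-1.0, 1.0))  # angle between flattened weight vectors
    so = torch.sin(omega)
    if so.abs() < eps:  # nearly parallel weights: fall back to linear interpolation
        return (1.0 - t) * a + t * b
    return (torch.sin((1.0 - t) * omega) / so) * a + (torch.sin(t * omega) / so) * b
```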
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "ihughes15234/Phi_3_dpo_ttt_Merge"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` |
RichardErkhov/terracall_-_TerraCall-Llama-3.2-3B-Data-Extraction-V1-16bit-gguf | RichardErkhov | 2024-10-26T17:55:10Z | 8 | 0 | null | [
"gguf",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-10-26T16:51:38Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
TerraCall-Llama-3.2-3B-Data-Extraction-V1-16bit - GGUF
- Model creator: https://huggingface.co/terracall/
- Original model: https://huggingface.co/terracall/TerraCall-Llama-3.2-3B-Data-Extraction-V1-16bit/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [TerraCall-Llama-3.2-3B-Data-Extraction-V1-16bit.Q2_K.gguf](https://huggingface.co/RichardErkhov/terracall_-_TerraCall-Llama-3.2-3B-Data-Extraction-V1-16bit-gguf/blob/main/TerraCall-Llama-3.2-3B-Data-Extraction-V1-16bit.Q2_K.gguf) | Q2_K | 1.27GB |
| [TerraCall-Llama-3.2-3B-Data-Extraction-V1-16bit.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/terracall_-_TerraCall-Llama-3.2-3B-Data-Extraction-V1-16bit-gguf/blob/main/TerraCall-Llama-3.2-3B-Data-Extraction-V1-16bit.Q3_K_S.gguf) | Q3_K_S | 1.44GB |
| [TerraCall-Llama-3.2-3B-Data-Extraction-V1-16bit.Q3_K.gguf](https://huggingface.co/RichardErkhov/terracall_-_TerraCall-Llama-3.2-3B-Data-Extraction-V1-16bit-gguf/blob/main/TerraCall-Llama-3.2-3B-Data-Extraction-V1-16bit.Q3_K.gguf) | Q3_K | 1.57GB |
| [TerraCall-Llama-3.2-3B-Data-Extraction-V1-16bit.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/terracall_-_TerraCall-Llama-3.2-3B-Data-Extraction-V1-16bit-gguf/blob/main/TerraCall-Llama-3.2-3B-Data-Extraction-V1-16bit.Q3_K_M.gguf) | Q3_K_M | 1.57GB |
| [TerraCall-Llama-3.2-3B-Data-Extraction-V1-16bit.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/terracall_-_TerraCall-Llama-3.2-3B-Data-Extraction-V1-16bit-gguf/blob/main/TerraCall-Llama-3.2-3B-Data-Extraction-V1-16bit.Q3_K_L.gguf) | Q3_K_L | 1.69GB |
| [TerraCall-Llama-3.2-3B-Data-Extraction-V1-16bit.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/terracall_-_TerraCall-Llama-3.2-3B-Data-Extraction-V1-16bit-gguf/blob/main/TerraCall-Llama-3.2-3B-Data-Extraction-V1-16bit.IQ4_XS.gguf) | IQ4_XS | 1.71GB |
| [TerraCall-Llama-3.2-3B-Data-Extraction-V1-16bit.Q4_0.gguf](https://huggingface.co/RichardErkhov/terracall_-_TerraCall-Llama-3.2-3B-Data-Extraction-V1-16bit-gguf/blob/main/TerraCall-Llama-3.2-3B-Data-Extraction-V1-16bit.Q4_0.gguf) | Q4_0 | 1.79GB |
| [TerraCall-Llama-3.2-3B-Data-Extraction-V1-16bit.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/terracall_-_TerraCall-Llama-3.2-3B-Data-Extraction-V1-16bit-gguf/blob/main/TerraCall-Llama-3.2-3B-Data-Extraction-V1-16bit.IQ4_NL.gguf) | IQ4_NL | 1.79GB |
| [TerraCall-Llama-3.2-3B-Data-Extraction-V1-16bit.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/terracall_-_TerraCall-Llama-3.2-3B-Data-Extraction-V1-16bit-gguf/blob/main/TerraCall-Llama-3.2-3B-Data-Extraction-V1-16bit.Q4_K_S.gguf) | Q4_K_S | 1.8GB |
| [TerraCall-Llama-3.2-3B-Data-Extraction-V1-16bit.Q4_K.gguf](https://huggingface.co/RichardErkhov/terracall_-_TerraCall-Llama-3.2-3B-Data-Extraction-V1-16bit-gguf/blob/main/TerraCall-Llama-3.2-3B-Data-Extraction-V1-16bit.Q4_K.gguf) | Q4_K | 1.88GB |
| [TerraCall-Llama-3.2-3B-Data-Extraction-V1-16bit.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/terracall_-_TerraCall-Llama-3.2-3B-Data-Extraction-V1-16bit-gguf/blob/main/TerraCall-Llama-3.2-3B-Data-Extraction-V1-16bit.Q4_K_M.gguf) | Q4_K_M | 1.88GB |
| [TerraCall-Llama-3.2-3B-Data-Extraction-V1-16bit.Q4_1.gguf](https://huggingface.co/RichardErkhov/terracall_-_TerraCall-Llama-3.2-3B-Data-Extraction-V1-16bit-gguf/blob/main/TerraCall-Llama-3.2-3B-Data-Extraction-V1-16bit.Q4_1.gguf) | Q4_1 | 1.95GB |
| [TerraCall-Llama-3.2-3B-Data-Extraction-V1-16bit.Q5_0.gguf](https://huggingface.co/RichardErkhov/terracall_-_TerraCall-Llama-3.2-3B-Data-Extraction-V1-16bit-gguf/blob/main/TerraCall-Llama-3.2-3B-Data-Extraction-V1-16bit.Q5_0.gguf) | Q5_0 | 2.11GB |
| [TerraCall-Llama-3.2-3B-Data-Extraction-V1-16bit.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/terracall_-_TerraCall-Llama-3.2-3B-Data-Extraction-V1-16bit-gguf/blob/main/TerraCall-Llama-3.2-3B-Data-Extraction-V1-16bit.Q5_K_S.gguf) | Q5_K_S | 2.11GB |
| [TerraCall-Llama-3.2-3B-Data-Extraction-V1-16bit.Q5_K.gguf](https://huggingface.co/RichardErkhov/terracall_-_TerraCall-Llama-3.2-3B-Data-Extraction-V1-16bit-gguf/blob/main/TerraCall-Llama-3.2-3B-Data-Extraction-V1-16bit.Q5_K.gguf) | Q5_K | 2.16GB |
| [TerraCall-Llama-3.2-3B-Data-Extraction-V1-16bit.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/terracall_-_TerraCall-Llama-3.2-3B-Data-Extraction-V1-16bit-gguf/blob/main/TerraCall-Llama-3.2-3B-Data-Extraction-V1-16bit.Q5_K_M.gguf) | Q5_K_M | 2.16GB |
| [TerraCall-Llama-3.2-3B-Data-Extraction-V1-16bit.Q5_1.gguf](https://huggingface.co/RichardErkhov/terracall_-_TerraCall-Llama-3.2-3B-Data-Extraction-V1-16bit-gguf/blob/main/TerraCall-Llama-3.2-3B-Data-Extraction-V1-16bit.Q5_1.gguf) | Q5_1 | 2.28GB |
| [TerraCall-Llama-3.2-3B-Data-Extraction-V1-16bit.Q6_K.gguf](https://huggingface.co/RichardErkhov/terracall_-_TerraCall-Llama-3.2-3B-Data-Extraction-V1-16bit-gguf/blob/main/TerraCall-Llama-3.2-3B-Data-Extraction-V1-16bit.Q6_K.gguf) | Q6_K | 2.46GB |
| [TerraCall-Llama-3.2-3B-Data-Extraction-V1-16bit.Q8_0.gguf](https://huggingface.co/RichardErkhov/terracall_-_TerraCall-Llama-3.2-3B-Data-Extraction-V1-16bit-gguf/blob/main/TerraCall-Llama-3.2-3B-Data-Extraction-V1-16bit.Q8_0.gguf) | Q8_0 | 3.19GB |
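Any of the files above can be run locally with llama.cpp or its Python bindings. A minimal sketch using llama-cpp-python, assuming the Q4_K_M variant (swap the filename for any other quant in the table; the example prompt is illustrative):
```python
# Minimal sketch: fetch one quant from this repo and run it with llama-cpp-python.
# Requires: pip install llama-cpp-python huggingface_hub
from huggingface_hub import hf_hub_download
from llama_cpp import Llama
model_path = hf_hub_download(
    repo_id="RichardErkhov/terracall_-_TerraCall-Llama-3.2-3B-Data-Extraction-V1-16bit-gguf",
    filename="TerraCall-Llama-3.2-3B-Data-Extraction-V1-16bit.Q4_K_M.gguf",
)
llm = Llama(model_path=model_path, n_ctx=2048)  # context size chosen as an example
output = llm("Extract the name and date from: Meeting with Ana on 2024-10-26.", max_tokens=64)
print(output["choices"][0]["text"])
```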
Original model description:
---
base_model: unsloth/llama-3.2-3b-instruct-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
---
# Uploaded model
- **Developed by:** terracall
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3.2-3b-instruct-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Robzy/audiomamba | Robzy | 2024-10-26T17:37:58Z | 8 | 1 | null | [
"safetensors",
"mamba",
"audio-classification",
"pytorch",
"arxiv:2406.03344",
"license:mit",
"region:us"
] | audio-classification | 2024-10-26T14:43:25Z | ---
tags:
- audio-classification
- pytorch
license: mit
pipeline_tag: audio-classification
---
- **Model:** Audio Mamba (AuM)
- **Variant:** Base, Fo-Bi
- **Pretrained on:** AudioSet and VGGSound
- **Paper:** https://arxiv.org/pdf/2406.03344
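The weights ship as plain safetensors, so the checkpoint can be fetched and inspected without the model code. A minimal sketch, assuming the repo stores its weights as `model.safetensors` (the filename is an assumption); the AuM architecture itself comes from the paper's reference implementation, not transformers:
```python
# Minimal sketch: download and open the checkpoint. The filename is an assumption.
from huggingface_hub import hf_hub_download
from safetensors.torch import load_file
ckpt_path = hf_hub_download(repo_id="Robzy/audiomamba", filename="model.safetensors")
state_dict = load_file(ckpt_path)  # plain dict mapping parameter names to tensors
print(f"loaded {len(state_dict)} tensors")
```
|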
renix-codex/formal-lang-rxcx-model | renix-codex | 2024-10-26T17:27:27Z | 118 | 0 | transformers | [
"transformers",
"safetensors",
"t5",
"text2text-generation",
"text-generation",
"formal-language",
"grammar-correction",
"english",
"text-formalization",
"en",
"dataset:grammarly/coedit",
"base_model:google-t5/t5-base",
"base_model:finetune:google-t5/t5-base",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2024-10-26T11:03:20Z | ---
language: en
license: apache-2.0
library_name: transformers
pipeline_tag: text2text-generation
tags:
- text-generation
- formal-language
- grammar-correction
- t5
- english
- text-formalization
model-index:
- name: formal-lang-rxcx-model
results:
- task:
type: text2text-generation
name: formal language correction
metrics:
- type: loss
value: 2.1 # Replace with your actual training loss
name: training_loss
- type: rouge1
value: 0.85 # Replace with your actual ROUGE score
name: rouge1
- type: accuracy
value: 0.82 # Replace with your actual accuracy
name: accuracy
dataset:
name: grammarly/coedit
type: grammarly/coedit
split: train
datasets:
- grammarly/coedit
model-type: t5-base
inference: true
base_model: t5-base
widget:
- text: "make formal: hey whats up"
- text: "make formal: gonna be late for meeting"
- text: "make formal: this is kinda cool project"
extra_gated_prompt: This is a fine-tuned T5 model for converting informal text to formal language.
extra_gated_fields:
Company/Institution: text
Purpose: text
---
# Formal Language T5 Model
This model is fine-tuned from T5-base for formal language correction and text formalization.
## Model Description
- **Model Type:** T5-base fine-tuned
- **Language:** English
- **Task:** Text Formalization and Grammar Correction
- **License:** Apache 2.0
- **Base Model:** t5-base
## Intended Uses & Limitations
### Intended Uses
- Converting informal text to formal language
- Improving text professionalism
- Grammar correction
- Business communication enhancement
- Academic writing improvement
### Limitations
- Works best with English text
- Maximum input length: 128 tokens
- May not preserve specific domain terminology
- Best suited for business and academic contexts
## Usage
```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
model = AutoModelForSeq2SeqLM.from_pretrained("renix-codex/formal-lang-rxcx-model")
tokenizer = AutoTokenizer.from_pretrained("renix-codex/formal-lang-rxcx-model")
# Example usage: "make formal: " is the expected input prefix
text = "make formal: hey whats up"
inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=128)  # training capped inputs at 128 tokens
outputs = model.generate(**inputs)
formal_text = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(formal_text)
```
## Example Inputs and Outputs
| Informal Input | Formal Output |
|----------------|---------------|
| "hey whats up" | "Hello, how are you?" |
| "gonna be late for meeting" | "I will be late for the meeting." |
| "this is kinda cool" | "This is quite impressive." |
## Training
The model was trained on the Grammarly/COEDIT dataset with the following specifications:
- Base Model: T5-base
- Training Hardware: A100 GPU
- Sequence Length: 128 tokens
- Input Format: "make formal: [informal text]"
## License
Apache License 2.0
## Citation
```bibtex
@misc{formal-lang-rxcx-model,
author = {renix-codex},
title = {Formal Language T5 Model},
year = {2024},
publisher = {HuggingFace},
journal = {HuggingFace Model Hub},
url = {https://huggingface.co/renix-codex/formal-lang-rxcx-model}
}
```
## Developer
Model developed by renix-codex
## Ethical Considerations
This model is intended to assist in formal writing while maintaining the original meaning of the text. Users should be aware that:
- The model may alter the tone of personal or culturally specific expressions
- It should be used as a writing aid rather than a replacement for human judgment
- The output should be reviewed for accuracy and appropriateness
## Updates and Versions
Initial Release - February 2024
- Base implementation with T5-base
- Trained on Grammarly/COEDIT dataset
- Optimized for formal language conversion |
1-800-SHARED-TASKS/CHIPSAL-A-MuRIL-5e-5 | 1-800-SHARED-TASKS | 2024-10-26T17:18:03Z | 9 | 0 | null | [
"safetensors",
"bert",
"text-classification",
"arxiv:1910.09700",
"region:us"
] | text-classification | 2024-10-26T17:06:08Z | ---
tags:
- text-classification
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
anthienlong/baonsfw | anthienlong | 2024-10-26T17:13:18Z | 54 | 2 | diffusers | [
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:unknown",
"region:us"
] | text-to-image | 2024-10-24T06:22:12Z | ---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: nsfw girl in red
output:
url: images/0cb043d8-aa2e-4726-8cba-ebfa9607f323.png
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: nsfw
license: unknown
---
# Choiluon NSFW
<Gallery />
## Model description
A test NSFW LoRA.
## Trigger words
You should use `nsfw` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/anthienlong/duma-choiluon/tree/main) them in the Files & versions tab.
|
AndreasX/jina-embeddings-v2-base-es-Q2_K-GGUF | AndreasX | 2024-10-26T17:06:47Z | 7 | 0 | sentence-transformers | [
"sentence-transformers",
"gguf",
"feature-extraction",
"sentence-similarity",
"mteb",
"llama-cpp",
"gguf-my-repo",
"es",
"en",
"base_model:jinaai/jina-embeddings-v2-base-es",
"base_model:quantized:jinaai/jina-embeddings-v2-base-es",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"region:us"
] | feature-extraction | 2024-10-26T17:06:44Z | ---
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- mteb
- llama-cpp
- gguf-my-repo
language:
- es
- en
inference: false
license: apache-2.0
base_model: jinaai/jina-embeddings-v2-base-es
model-index:
- name: jina-embeddings-v2-base-es
results:
- task:
type: Classification
dataset:
name: MTEB AmazonCounterfactualClassification (en)
type: mteb/amazon_counterfactual
config: en
split: test
revision: e8379541af4e31359cca9fbcf4b00f2671dba205
metrics:
- type: accuracy
value: 74.25373134328358
- type: ap
value: 37.05201236793268
- type: f1
value: 68.16770391201077
- task:
type: Classification
dataset:
name: MTEB AmazonPolarityClassification
type: mteb/amazon_polarity
config: default
split: test
revision: e2d317d38cd51312af73b3d32a06d1a08b442046
metrics:
- type: accuracy
value: 78.30885
- type: ap
value: 73.01622441156408
- type: f1
value: 78.20769284466313
- task:
type: Classification
dataset:
name: MTEB AmazonReviewsClassification (en)
type: mteb/amazon_reviews_multi
config: en
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 38.324
- type: f1
value: 37.89543008761673
- task:
type: Classification
dataset:
name: MTEB AmazonReviewsClassification (es)
type: mteb/amazon_reviews_multi
config: es
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 38.678000000000004
- type: f1
value: 38.122639506976
- task:
type: Retrieval
dataset:
name: MTEB ArguAna
type: arguana
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 23.968999999999998
- type: map_at_10
value: 40.691
- type: map_at_100
value: 41.713
- type: map_at_1000
value: 41.719
- type: map_at_3
value: 35.42
- type: map_at_5
value: 38.442
- type: mrr_at_1
value: 24.395
- type: mrr_at_10
value: 40.853
- type: mrr_at_100
value: 41.869
- type: mrr_at_1000
value: 41.874
- type: mrr_at_3
value: 35.68
- type: mrr_at_5
value: 38.572
- type: ndcg_at_1
value: 23.968999999999998
- type: ndcg_at_10
value: 50.129999999999995
- type: ndcg_at_100
value: 54.364000000000004
- type: ndcg_at_1000
value: 54.494
- type: ndcg_at_3
value: 39.231
- type: ndcg_at_5
value: 44.694
- type: precision_at_1
value: 23.968999999999998
- type: precision_at_10
value: 8.036999999999999
- type: precision_at_100
value: 0.9860000000000001
- type: precision_at_1000
value: 0.1
- type: precision_at_3
value: 16.761
- type: precision_at_5
value: 12.717
- type: recall_at_1
value: 23.968999999999998
- type: recall_at_10
value: 80.36999999999999
- type: recall_at_100
value: 98.578
- type: recall_at_1000
value: 99.57300000000001
- type: recall_at_3
value: 50.28399999999999
- type: recall_at_5
value: 63.585
- task:
type: Clustering
dataset:
name: MTEB ArxivClusteringP2P
type: mteb/arxiv-clustering-p2p
config: default
split: test
revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d
metrics:
- type: v_measure
value: 41.54886683150053
- task:
type: Clustering
dataset:
name: MTEB ArxivClusteringS2S
type: mteb/arxiv-clustering-s2s
config: default
split: test
revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53
metrics:
- type: v_measure
value: 32.186028697637234
- task:
type: Reranking
dataset:
name: MTEB AskUbuntuDupQuestions
type: mteb/askubuntudupquestions-reranking
config: default
split: test
revision: 2000358ca161889fa9c082cb41daa8dcfb161a54
metrics:
- type: map
value: 61.19432643698725
- type: mrr
value: 75.28646176845622
- task:
type: STS
dataset:
name: MTEB BIOSSES
type: mteb/biosses-sts
config: default
split: test
revision: d3fb88f8f02e40887cd149695127462bbcf29b4a
metrics:
- type: cos_sim_pearson
value: 86.3828259381228
- type: cos_sim_spearman
value: 83.04647058342209
- type: euclidean_pearson
value: 84.02895346096244
- type: euclidean_spearman
value: 82.34524978635342
- type: manhattan_pearson
value: 84.35030723233426
- type: manhattan_spearman
value: 83.17177464337936
- task:
type: Classification
dataset:
name: MTEB Banking77Classification
type: mteb/banking77
config: default
split: test
revision: 0fd18e25b25c072e09e0d92ab615fda904d66300
metrics:
- type: accuracy
value: 85.25649350649351
- type: f1
value: 85.22320474023192
- task:
type: Clustering
dataset:
name: MTEB BigPatentClustering
type: jinaai/big-patent-clustering
config: default
split: test
revision: 62d5330920bca426ce9d3c76ea914f15fc83e891
metrics:
- type: v_measure
value: 20.42929408254094
- task:
type: Clustering
dataset:
name: MTEB BiorxivClusteringP2P
type: mteb/biorxiv-clustering-p2p
config: default
split: test
revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40
metrics:
- type: v_measure
value: 35.165318177498136
- task:
type: Clustering
dataset:
name: MTEB BiorxivClusteringS2S
type: mteb/biorxiv-clustering-s2s
config: default
split: test
revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908
metrics:
- type: v_measure
value: 28.89030154229562
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackAndroidRetrieval
type: BeIR/cqadupstack
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 30.119
- type: map_at_10
value: 42.092
- type: map_at_100
value: 43.506
- type: map_at_1000
value: 43.631
- type: map_at_3
value: 38.373000000000005
- type: map_at_5
value: 40.501
- type: mrr_at_1
value: 38.196999999999996
- type: mrr_at_10
value: 48.237
- type: mrr_at_100
value: 48.914
- type: mrr_at_1000
value: 48.959
- type: mrr_at_3
value: 45.279
- type: mrr_at_5
value: 47.11
- type: ndcg_at_1
value: 38.196999999999996
- type: ndcg_at_10
value: 48.849
- type: ndcg_at_100
value: 53.713
- type: ndcg_at_1000
value: 55.678000000000004
- type: ndcg_at_3
value: 43.546
- type: ndcg_at_5
value: 46.009
- type: precision_at_1
value: 38.196999999999996
- type: precision_at_10
value: 9.642000000000001
- type: precision_at_100
value: 1.5190000000000001
- type: precision_at_1000
value: 0.199
- type: precision_at_3
value: 21.65
- type: precision_at_5
value: 15.708
- type: recall_at_1
value: 30.119
- type: recall_at_10
value: 61.788
- type: recall_at_100
value: 82.14399999999999
- type: recall_at_1000
value: 95.003
- type: recall_at_3
value: 45.772
- type: recall_at_5
value: 53.04600000000001
- type: map_at_1
value: 28.979
- type: map_at_10
value: 37.785000000000004
- type: map_at_100
value: 38.945
- type: map_at_1000
value: 39.071
- type: map_at_3
value: 35.083999999999996
- type: map_at_5
value: 36.571999999999996
- type: mrr_at_1
value: 36.242000000000004
- type: mrr_at_10
value: 43.552
- type: mrr_at_100
value: 44.228
- type: mrr_at_1000
value: 44.275999999999996
- type: mrr_at_3
value: 41.359
- type: mrr_at_5
value: 42.598
- type: ndcg_at_1
value: 36.242000000000004
- type: ndcg_at_10
value: 42.94
- type: ndcg_at_100
value: 47.343
- type: ndcg_at_1000
value: 49.538
- type: ndcg_at_3
value: 39.086999999999996
- type: ndcg_at_5
value: 40.781
- type: precision_at_1
value: 36.242000000000004
- type: precision_at_10
value: 7.954999999999999
- type: precision_at_100
value: 1.303
- type: precision_at_1000
value: 0.178
- type: precision_at_3
value: 18.556
- type: precision_at_5
value: 13.145999999999999
- type: recall_at_1
value: 28.979
- type: recall_at_10
value: 51.835
- type: recall_at_100
value: 70.47
- type: recall_at_1000
value: 84.68299999999999
- type: recall_at_3
value: 40.410000000000004
- type: recall_at_5
value: 45.189
- type: map_at_1
value: 37.878
- type: map_at_10
value: 49.903
- type: map_at_100
value: 50.797000000000004
- type: map_at_1000
value: 50.858000000000004
- type: map_at_3
value: 46.526
- type: map_at_5
value: 48.615
- type: mrr_at_1
value: 43.135
- type: mrr_at_10
value: 53.067
- type: mrr_at_100
value: 53.668000000000006
- type: mrr_at_1000
value: 53.698
- type: mrr_at_3
value: 50.449
- type: mrr_at_5
value: 52.117000000000004
- type: ndcg_at_1
value: 43.135
- type: ndcg_at_10
value: 55.641
- type: ndcg_at_100
value: 59.427
- type: ndcg_at_1000
value: 60.655
- type: ndcg_at_3
value: 49.969
- type: ndcg_at_5
value: 53.075
- type: precision_at_1
value: 43.135
- type: precision_at_10
value: 8.997
- type: precision_at_100
value: 1.1809999999999998
- type: precision_at_1000
value: 0.133
- type: precision_at_3
value: 22.215
- type: precision_at_5
value: 15.586
- type: recall_at_1
value: 37.878
- type: recall_at_10
value: 69.405
- type: recall_at_100
value: 86.262
- type: recall_at_1000
value: 95.012
- type: recall_at_3
value: 54.458
- type: recall_at_5
value: 61.965
- type: map_at_1
value: 24.853
- type: map_at_10
value: 32.402
- type: map_at_100
value: 33.417
- type: map_at_1000
value: 33.498
- type: map_at_3
value: 30.024
- type: map_at_5
value: 31.407
- type: mrr_at_1
value: 26.667
- type: mrr_at_10
value: 34.399
- type: mrr_at_100
value: 35.284
- type: mrr_at_1000
value: 35.345
- type: mrr_at_3
value: 32.109
- type: mrr_at_5
value: 33.375
- type: ndcg_at_1
value: 26.667
- type: ndcg_at_10
value: 36.854
- type: ndcg_at_100
value: 42.196
- type: ndcg_at_1000
value: 44.303
- type: ndcg_at_3
value: 32.186
- type: ndcg_at_5
value: 34.512
- type: precision_at_1
value: 26.667
- type: precision_at_10
value: 5.559
- type: precision_at_100
value: 0.88
- type: precision_at_1000
value: 0.109
- type: precision_at_3
value: 13.333
- type: precision_at_5
value: 9.379
- type: recall_at_1
value: 24.853
- type: recall_at_10
value: 48.636
- type: recall_at_100
value: 73.926
- type: recall_at_1000
value: 89.94
- type: recall_at_3
value: 36.266
- type: recall_at_5
value: 41.723
- type: map_at_1
value: 14.963999999999999
- type: map_at_10
value: 22.591
- type: map_at_100
value: 23.735999999999997
- type: map_at_1000
value: 23.868000000000002
- type: map_at_3
value: 20.093
- type: map_at_5
value: 21.499
- type: mrr_at_1
value: 18.407999999999998
- type: mrr_at_10
value: 26.863
- type: mrr_at_100
value: 27.87
- type: mrr_at_1000
value: 27.947
- type: mrr_at_3
value: 24.254
- type: mrr_at_5
value: 25.784000000000002
- type: ndcg_at_1
value: 18.407999999999998
- type: ndcg_at_10
value: 27.549
- type: ndcg_at_100
value: 33.188
- type: ndcg_at_1000
value: 36.312
- type: ndcg_at_3
value: 22.862
- type: ndcg_at_5
value: 25.130999999999997
- type: precision_at_1
value: 18.407999999999998
- type: precision_at_10
value: 5.087
- type: precision_at_100
value: 0.923
- type: precision_at_1000
value: 0.133
- type: precision_at_3
value: 10.987
- type: precision_at_5
value: 8.209
- type: recall_at_1
value: 14.963999999999999
- type: recall_at_10
value: 38.673
- type: recall_at_100
value: 63.224999999999994
- type: recall_at_1000
value: 85.443
- type: recall_at_3
value: 25.840000000000003
- type: recall_at_5
value: 31.503999999999998
- type: map_at_1
value: 27.861000000000004
- type: map_at_10
value: 37.562
- type: map_at_100
value: 38.906
- type: map_at_1000
value: 39.021
- type: map_at_3
value: 34.743
- type: map_at_5
value: 36.168
- type: mrr_at_1
value: 34.455999999999996
- type: mrr_at_10
value: 43.428
- type: mrr_at_100
value: 44.228
- type: mrr_at_1000
value: 44.278
- type: mrr_at_3
value: 41.001
- type: mrr_at_5
value: 42.315000000000005
- type: ndcg_at_1
value: 34.455999999999996
- type: ndcg_at_10
value: 43.477
- type: ndcg_at_100
value: 48.953
- type: ndcg_at_1000
value: 51.19200000000001
- type: ndcg_at_3
value: 38.799
- type: ndcg_at_5
value: 40.743
- type: precision_at_1
value: 34.455999999999996
- type: precision_at_10
value: 7.902000000000001
- type: precision_at_100
value: 1.244
- type: precision_at_1000
value: 0.161
- type: precision_at_3
value: 18.511
- type: precision_at_5
value: 12.859000000000002
- type: recall_at_1
value: 27.861000000000004
- type: recall_at_10
value: 55.36
- type: recall_at_100
value: 78.384
- type: recall_at_1000
value: 93.447
- type: recall_at_3
value: 41.926
- type: recall_at_5
value: 47.257
- type: map_at_1
value: 26.375
- type: map_at_10
value: 35.571000000000005
- type: map_at_100
value: 36.785000000000004
- type: map_at_1000
value: 36.905
- type: map_at_3
value: 32.49
- type: map_at_5
value: 34.123999999999995
- type: mrr_at_1
value: 32.647999999999996
- type: mrr_at_10
value: 40.598
- type: mrr_at_100
value: 41.484
- type: mrr_at_1000
value: 41.546
- type: mrr_at_3
value: 37.9
- type: mrr_at_5
value: 39.401
- type: ndcg_at_1
value: 32.647999999999996
- type: ndcg_at_10
value: 41.026
- type: ndcg_at_100
value: 46.365
- type: ndcg_at_1000
value: 48.876
- type: ndcg_at_3
value: 35.843
- type: ndcg_at_5
value: 38.118
- type: precision_at_1
value: 32.647999999999996
- type: precision_at_10
value: 7.443
- type: precision_at_100
value: 1.18
- type: precision_at_1000
value: 0.158
- type: precision_at_3
value: 16.819
- type: precision_at_5
value: 11.985999999999999
- type: recall_at_1
value: 26.375
- type: recall_at_10
value: 52.471000000000004
- type: recall_at_100
value: 75.354
- type: recall_at_1000
value: 92.35
- type: recall_at_3
value: 37.893
- type: recall_at_5
value: 43.935
- type: map_at_1
value: 25.012666666666668
- type: map_at_10
value: 33.685833333333335
- type: map_at_100
value: 34.849250000000005
- type: map_at_1000
value: 34.970083333333335
- type: map_at_3
value: 31.065083333333334
- type: map_at_5
value: 32.494416666666666
- type: mrr_at_1
value: 29.772666666666662
- type: mrr_at_10
value: 37.824666666666666
- type: mrr_at_100
value: 38.66741666666666
- type: mrr_at_1000
value: 38.72916666666666
- type: mrr_at_3
value: 35.54575
- type: mrr_at_5
value: 36.81524999999999
- type: ndcg_at_1
value: 29.772666666666662
- type: ndcg_at_10
value: 38.78241666666666
- type: ndcg_at_100
value: 43.84591666666667
- type: ndcg_at_1000
value: 46.275416666666665
- type: ndcg_at_3
value: 34.33416666666667
- type: ndcg_at_5
value: 36.345166666666664
- type: precision_at_1
value: 29.772666666666662
- type: precision_at_10
value: 6.794916666666667
- type: precision_at_100
value: 1.106416666666667
- type: precision_at_1000
value: 0.15033333333333335
- type: precision_at_3
value: 15.815083333333336
- type: precision_at_5
value: 11.184166666666664
- type: recall_at_1
value: 25.012666666666668
- type: recall_at_10
value: 49.748500000000014
- type: recall_at_100
value: 72.11341666666667
- type: recall_at_1000
value: 89.141
- type: recall_at_3
value: 37.242999999999995
- type: recall_at_5
value: 42.49033333333333
- type: map_at_1
value: 23.177
- type: map_at_10
value: 29.310000000000002
- type: map_at_100
value: 30.188
- type: map_at_1000
value: 30.29
- type: map_at_3
value: 27.356
- type: map_at_5
value: 28.410999999999998
- type: mrr_at_1
value: 26.074
- type: mrr_at_10
value: 32.002
- type: mrr_at_100
value: 32.838
- type: mrr_at_1000
value: 32.909
- type: mrr_at_3
value: 30.317
- type: mrr_at_5
value: 31.222
- type: ndcg_at_1
value: 26.074
- type: ndcg_at_10
value: 32.975
- type: ndcg_at_100
value: 37.621
- type: ndcg_at_1000
value: 40.253
- type: ndcg_at_3
value: 29.452
- type: ndcg_at_5
value: 31.020999999999997
- type: precision_at_1
value: 26.074
- type: precision_at_10
value: 5.077
- type: precision_at_100
value: 0.8049999999999999
- type: precision_at_1000
value: 0.11100000000000002
- type: precision_at_3
value: 12.526000000000002
- type: precision_at_5
value: 8.588999999999999
- type: recall_at_1
value: 23.177
- type: recall_at_10
value: 41.613
- type: recall_at_100
value: 63.287000000000006
- type: recall_at_1000
value: 83.013
- type: recall_at_3
value: 31.783
- type: recall_at_5
value: 35.769
- type: map_at_1
value: 15.856
- type: map_at_10
value: 22.651
- type: map_at_100
value: 23.649
- type: map_at_1000
value: 23.783
- type: map_at_3
value: 20.591
- type: map_at_5
value: 21.684
- type: mrr_at_1
value: 19.408
- type: mrr_at_10
value: 26.51
- type: mrr_at_100
value: 27.356
- type: mrr_at_1000
value: 27.439999999999998
- type: mrr_at_3
value: 24.547
- type: mrr_at_5
value: 25.562
- type: ndcg_at_1
value: 19.408
- type: ndcg_at_10
value: 27.072000000000003
- type: ndcg_at_100
value: 31.980999999999998
- type: ndcg_at_1000
value: 35.167
- type: ndcg_at_3
value: 23.338
- type: ndcg_at_5
value: 24.94
- type: precision_at_1
value: 19.408
- type: precision_at_10
value: 4.9590000000000005
- type: precision_at_100
value: 0.8710000000000001
- type: precision_at_1000
value: 0.132
- type: precision_at_3
value: 11.138
- type: precision_at_5
value: 7.949000000000001
- type: recall_at_1
value: 15.856
- type: recall_at_10
value: 36.578
- type: recall_at_100
value: 58.89
- type: recall_at_1000
value: 81.743
- type: recall_at_3
value: 25.94
- type: recall_at_5
value: 30.153999999999996
- type: map_at_1
value: 25.892
- type: map_at_10
value: 33.899
- type: map_at_100
value: 34.955000000000005
- type: map_at_1000
value: 35.066
- type: map_at_3
value: 31.41
- type: map_at_5
value: 32.669
- type: mrr_at_1
value: 30.224
- type: mrr_at_10
value: 37.936
- type: mrr_at_100
value: 38.777
- type: mrr_at_1000
value: 38.85
- type: mrr_at_3
value: 35.821
- type: mrr_at_5
value: 36.894
- type: ndcg_at_1
value: 30.224
- type: ndcg_at_10
value: 38.766
- type: ndcg_at_100
value: 43.806
- type: ndcg_at_1000
value: 46.373999999999995
- type: ndcg_at_3
value: 34.325
- type: ndcg_at_5
value: 36.096000000000004
- type: precision_at_1
value: 30.224
- type: precision_at_10
value: 6.446000000000001
- type: precision_at_100
value: 1.0
- type: precision_at_1000
value: 0.133
- type: precision_at_3
value: 15.392
- type: precision_at_5
value: 10.671999999999999
- type: recall_at_1
value: 25.892
- type: recall_at_10
value: 49.573
- type: recall_at_100
value: 71.885
- type: recall_at_1000
value: 89.912
- type: recall_at_3
value: 37.226
- type: recall_at_5
value: 41.74
- type: map_at_1
value: 23.915
- type: map_at_10
value: 33.613
- type: map_at_100
value: 35.333999999999996
- type: map_at_1000
value: 35.563
- type: map_at_3
value: 31.203999999999997
- type: map_at_5
value: 32.479
- type: mrr_at_1
value: 29.447000000000003
- type: mrr_at_10
value: 38.440000000000005
- type: mrr_at_100
value: 39.459
- type: mrr_at_1000
value: 39.513999999999996
- type: mrr_at_3
value: 36.495
- type: mrr_at_5
value: 37.592
- type: ndcg_at_1
value: 29.447000000000003
- type: ndcg_at_10
value: 39.341
- type: ndcg_at_100
value: 45.382
- type: ndcg_at_1000
value: 47.921
- type: ndcg_at_3
value: 35.671
- type: ndcg_at_5
value: 37.299
- type: precision_at_1
value: 29.447000000000003
- type: precision_at_10
value: 7.648000000000001
- type: precision_at_100
value: 1.567
- type: precision_at_1000
value: 0.241
- type: precision_at_3
value: 17.194000000000003
- type: precision_at_5
value: 12.253
- type: recall_at_1
value: 23.915
- type: recall_at_10
value: 49.491
- type: recall_at_100
value: 76.483
- type: recall_at_1000
value: 92.674
- type: recall_at_3
value: 38.878
- type: recall_at_5
value: 43.492
- type: map_at_1
value: 20.283
- type: map_at_10
value: 26.851000000000003
- type: map_at_100
value: 27.973
- type: map_at_1000
value: 28.087
- type: map_at_3
value: 24.887
- type: map_at_5
value: 25.804
- type: mrr_at_1
value: 22.366
- type: mrr_at_10
value: 28.864
- type: mrr_at_100
value: 29.903000000000002
- type: mrr_at_1000
value: 29.988
- type: mrr_at_3
value: 27.017999999999997
- type: mrr_at_5
value: 27.813
- type: ndcg_at_1
value: 22.366
- type: ndcg_at_10
value: 30.898999999999997
- type: ndcg_at_100
value: 36.176
- type: ndcg_at_1000
value: 39.036
- type: ndcg_at_3
value: 26.932000000000002
- type: ndcg_at_5
value: 28.416999999999998
- type: precision_at_1
value: 22.366
- type: precision_at_10
value: 4.824
- type: precision_at_100
value: 0.804
- type: precision_at_1000
value: 0.116
- type: precision_at_3
value: 11.459999999999999
- type: precision_at_5
value: 7.8740000000000006
- type: recall_at_1
value: 20.283
- type: recall_at_10
value: 41.559000000000005
- type: recall_at_100
value: 65.051
- type: recall_at_1000
value: 86.47200000000001
- type: recall_at_3
value: 30.524
- type: recall_at_5
value: 34.11
- task:
type: Retrieval
dataset:
name: MTEB ClimateFEVER
type: climate-fever
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 11.326
- type: map_at_10
value: 19.357
- type: map_at_100
value: 21.014
- type: map_at_1000
value: 21.188000000000002
- type: map_at_3
value: 16.305
- type: map_at_5
value: 17.886
- type: mrr_at_1
value: 24.820999999999998
- type: mrr_at_10
value: 36.150999999999996
- type: mrr_at_100
value: 37.080999999999996
- type: mrr_at_1000
value: 37.123
- type: mrr_at_3
value: 32.952999999999996
- type: mrr_at_5
value: 34.917
- type: ndcg_at_1
value: 24.820999999999998
- type: ndcg_at_10
value: 27.131
- type: ndcg_at_100
value: 33.841
- type: ndcg_at_1000
value: 37.159
- type: ndcg_at_3
value: 22.311
- type: ndcg_at_5
value: 24.026
- type: precision_at_1
value: 24.820999999999998
- type: precision_at_10
value: 8.450000000000001
- type: precision_at_100
value: 1.557
- type: precision_at_1000
value: 0.218
- type: precision_at_3
value: 16.612
- type: precision_at_5
value: 12.808
- type: recall_at_1
value: 11.326
- type: recall_at_10
value: 32.548
- type: recall_at_100
value: 55.803000000000004
- type: recall_at_1000
value: 74.636
- type: recall_at_3
value: 20.549
- type: recall_at_5
value: 25.514
- task:
type: Retrieval
dataset:
name: MTEB DBPedia
type: dbpedia-entity
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 7.481
- type: map_at_10
value: 15.043999999999999
- type: map_at_100
value: 20.194000000000003
- type: map_at_1000
value: 21.423000000000002
- type: map_at_3
value: 11.238
- type: map_at_5
value: 12.828999999999999
- type: mrr_at_1
value: 54.50000000000001
- type: mrr_at_10
value: 64.713
- type: mrr_at_100
value: 65.216
- type: mrr_at_1000
value: 65.23
- type: mrr_at_3
value: 62.74999999999999
- type: mrr_at_5
value: 63.87500000000001
- type: ndcg_at_1
value: 43.375
- type: ndcg_at_10
value: 32.631
- type: ndcg_at_100
value: 36.338
- type: ndcg_at_1000
value: 43.541000000000004
- type: ndcg_at_3
value: 36.746
- type: ndcg_at_5
value: 34.419
- type: precision_at_1
value: 54.50000000000001
- type: precision_at_10
value: 24.825
- type: precision_at_100
value: 7.698
- type: precision_at_1000
value: 1.657
- type: precision_at_3
value: 38.917
- type: precision_at_5
value: 32.35
- type: recall_at_1
value: 7.481
- type: recall_at_10
value: 20.341
- type: recall_at_100
value: 41.778
- type: recall_at_1000
value: 64.82
- type: recall_at_3
value: 12.748000000000001
- type: recall_at_5
value: 15.507000000000001
- task:
type: Classification
dataset:
name: MTEB EmotionClassification
type: mteb/emotion
config: default
split: test
revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37
metrics:
- type: accuracy
value: 46.580000000000005
- type: f1
value: 41.5149462395095
- task:
type: Retrieval
dataset:
name: MTEB FEVER
type: fever
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 61.683
- type: map_at_10
value: 73.071
- type: map_at_100
value: 73.327
- type: map_at_1000
value: 73.341
- type: map_at_3
value: 71.446
- type: map_at_5
value: 72.557
- type: mrr_at_1
value: 66.44200000000001
- type: mrr_at_10
value: 77.725
- type: mrr_at_100
value: 77.89399999999999
- type: mrr_at_1000
value: 77.898
- type: mrr_at_3
value: 76.283
- type: mrr_at_5
value: 77.29700000000001
- type: ndcg_at_1
value: 66.44200000000001
- type: ndcg_at_10
value: 78.43
- type: ndcg_at_100
value: 79.462
- type: ndcg_at_1000
value: 79.754
- type: ndcg_at_3
value: 75.53800000000001
- type: ndcg_at_5
value: 77.332
- type: precision_at_1
value: 66.44200000000001
- type: precision_at_10
value: 9.878
- type: precision_at_100
value: 1.051
- type: precision_at_1000
value: 0.109
- type: precision_at_3
value: 29.878
- type: precision_at_5
value: 18.953
- type: recall_at_1
value: 61.683
- type: recall_at_10
value: 90.259
- type: recall_at_100
value: 94.633
- type: recall_at_1000
value: 96.60499999999999
- type: recall_at_3
value: 82.502
- type: recall_at_5
value: 86.978
- task:
type: Retrieval
dataset:
name: MTEB FiQA2018
type: fiqa
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 17.724
- type: map_at_10
value: 29.487999999999996
- type: map_at_100
value: 31.243
- type: map_at_1000
value: 31.419999999999998
- type: map_at_3
value: 25.612000000000002
- type: map_at_5
value: 27.859
- type: mrr_at_1
value: 35.802
- type: mrr_at_10
value: 44.684000000000005
- type: mrr_at_100
value: 45.578
- type: mrr_at_1000
value: 45.621
- type: mrr_at_3
value: 42.361
- type: mrr_at_5
value: 43.85
- type: ndcg_at_1
value: 35.802
- type: ndcg_at_10
value: 37.009
- type: ndcg_at_100
value: 43.903
- type: ndcg_at_1000
value: 47.019
- type: ndcg_at_3
value: 33.634
- type: ndcg_at_5
value: 34.965
- type: precision_at_1
value: 35.802
- type: precision_at_10
value: 10.386
- type: precision_at_100
value: 1.7309999999999999
- type: precision_at_1000
value: 0.231
- type: precision_at_3
value: 22.84
- type: precision_at_5
value: 17.037
- type: recall_at_1
value: 17.724
- type: recall_at_10
value: 43.708000000000006
- type: recall_at_100
value: 69.902
- type: recall_at_1000
value: 88.51
- type: recall_at_3
value: 30.740000000000002
- type: recall_at_5
value: 36.742000000000004
- task:
type: Clustering
dataset:
name: MTEB FloresClusteringS2S
type: jinaai/flores_clustering
config: default
split: test
revision: 480b580487f53a46f881354a8348335d4edbb2de
metrics:
- type: v_measure
value: 39.79120149869612
- task:
type: Retrieval
dataset:
name: MTEB HotpotQA
type: hotpotqa
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 34.801
- type: map_at_10
value: 50.42100000000001
- type: map_at_100
value: 51.254
- type: map_at_1000
value: 51.327999999999996
- type: map_at_3
value: 47.56
- type: map_at_5
value: 49.379
- type: mrr_at_1
value: 69.602
- type: mrr_at_10
value: 76.385
- type: mrr_at_100
value: 76.668
- type: mrr_at_1000
value: 76.683
- type: mrr_at_3
value: 75.102
- type: mrr_at_5
value: 75.949
- type: ndcg_at_1
value: 69.602
- type: ndcg_at_10
value: 59.476
- type: ndcg_at_100
value: 62.527
- type: ndcg_at_1000
value: 64.043
- type: ndcg_at_3
value: 55.155
- type: ndcg_at_5
value: 57.623000000000005
- type: precision_at_1
value: 69.602
- type: precision_at_10
value: 12.292
- type: precision_at_100
value: 1.467
- type: precision_at_1000
value: 0.167
- type: precision_at_3
value: 34.634
- type: precision_at_5
value: 22.728
- type: recall_at_1
value: 34.801
- type: recall_at_10
value: 61.458
- type: recall_at_100
value: 73.363
- type: recall_at_1000
value: 83.43
- type: recall_at_3
value: 51.951
- type: recall_at_5
value: 56.82000000000001
- task:
type: Classification
dataset:
name: MTEB ImdbClassification
type: mteb/imdb
config: default
split: test
revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7
metrics:
- type: accuracy
value: 67.46079999999999
- type: ap
value: 61.81278199159353
- type: f1
value: 67.26505019954826
- task:
type: Reranking
dataset:
name: MTEB MIRACL
type: jinaai/miracl
config: default
split: test
revision: d28a029f35c4ff7f616df47b0edf54e6882395e6
metrics:
- type: map
value: 73.90464144118539
- type: mrr
value: 82.44674693216022
- task:
type: Retrieval
dataset:
name: MTEB MIRACLRetrieval
type: jinaai/miracl
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 21.299
- type: map_at_10
value: 70.547
- type: map_at_100
value: 72.394
- type: map_at_1000
value: 72.39999999999999
- type: map_at_3
value: 41.317
- type: map_at_5
value: 53.756
- type: mrr_at_1
value: 72.84
- type: mrr_at_10
value: 82.466
- type: mrr_at_100
value: 82.52199999999999
- type: mrr_at_1000
value: 82.52199999999999
- type: mrr_at_3
value: 80.607
- type: mrr_at_5
value: 82.065
- type: ndcg_at_1
value: 72.994
- type: ndcg_at_10
value: 80.89
- type: ndcg_at_100
value: 83.30199999999999
- type: ndcg_at_1000
value: 83.337
- type: ndcg_at_3
value: 70.357
- type: ndcg_at_5
value: 72.529
- type: precision_at_1
value: 72.994
- type: precision_at_10
value: 43.056
- type: precision_at_100
value: 4.603
- type: precision_at_1000
value: 0.461
- type: precision_at_3
value: 61.626000000000005
- type: precision_at_5
value: 55.525000000000006
- type: recall_at_1
value: 21.299
- type: recall_at_10
value: 93.903
- type: recall_at_100
value: 99.86699999999999
- type: recall_at_1000
value: 100.0
- type: recall_at_3
value: 46.653
- type: recall_at_5
value: 65.72200000000001
- task:
type: Classification
dataset:
name: MTEB MTOPDomainClassification (en)
type: mteb/mtop_domain
config: en
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 90.37163702690378
- type: f1
value: 90.18615216514222
- task:
type: Classification
dataset:
name: MTEB MTOPDomainClassification (es)
type: mteb/mtop_domain
config: es
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 89.88992661774515
- type: f1
value: 89.3738963046966
- task:
type: Classification
dataset:
name: MTEB MTOPIntentClassification (en)
type: mteb/mtop_intent
config: en
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 71.97218422252622
- type: f1
value: 54.03096570916335
- task:
type: Classification
dataset:
name: MTEB MTOPIntentClassification (es)
type: mteb/mtop_intent
config: es
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 68.75917278185457
- type: f1
value: 49.144083814705844
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (en)
type: mteb/amazon_massive_intent
config: en
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 70.75991930060525
- type: f1
value: 69.37993796176502
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (es)
type: mteb/amazon_massive_intent
config: es
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 66.93006052454606
- type: f1
value: 66.04029135274683
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (en)
type: mteb/amazon_massive_scenario
config: en
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 73.81977135171486
- type: f1
value: 74.10477122507747
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (es)
type: mteb/amazon_massive_scenario
config: es
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 71.23402824478816
- type: f1
value: 71.75572665880296
- task:
type: Clustering
dataset:
name: MTEB MedrxivClusteringP2P
type: mteb/medrxiv-clustering-p2p
config: default
split: test
revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73
metrics:
- type: v_measure
value: 32.189750849969215
- task:
type: Clustering
dataset:
name: MTEB MedrxivClusteringS2S
type: mteb/medrxiv-clustering-s2s
config: default
split: test
revision: 35191c8c0dca72d8ff3efcd72aa802307d469663
metrics:
- type: v_measure
value: 28.78357393555938
- task:
type: Reranking
dataset:
name: MTEB MindSmallReranking
type: mteb/mind_small
config: default
split: test
revision: 3bdac13927fdc888b903db93b2ffdbd90b295a69
metrics:
- type: map
value: 30.605612998328358
- type: mrr
value: 31.595529205695833
- task:
type: Retrieval
dataset:
name: MTEB MintakaESRetrieval
type: jinaai/mintakaqa
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 16.213
- type: map_at_10
value: 24.079
- type: map_at_100
value: 25.039
- type: map_at_1000
value: 25.142999999999997
- type: map_at_3
value: 21.823
- type: map_at_5
value: 23.069
- type: mrr_at_1
value: 16.213
- type: mrr_at_10
value: 24.079
- type: mrr_at_100
value: 25.039
- type: mrr_at_1000
value: 25.142999999999997
- type: mrr_at_3
value: 21.823
- type: mrr_at_5
value: 23.069
- type: ndcg_at_1
value: 16.213
- type: ndcg_at_10
value: 28.315
- type: ndcg_at_100
value: 33.475
- type: ndcg_at_1000
value: 36.838
- type: ndcg_at_3
value: 23.627000000000002
- type: ndcg_at_5
value: 25.879
- type: precision_at_1
value: 16.213
- type: precision_at_10
value: 4.183
- type: precision_at_100
value: 0.6709999999999999
- type: precision_at_1000
value: 0.095
- type: precision_at_3
value: 9.612
- type: precision_at_5
value: 6.865
- type: recall_at_1
value: 16.213
- type: recall_at_10
value: 41.832
- type: recall_at_100
value: 67.12
- type: recall_at_1000
value: 94.843
- type: recall_at_3
value: 28.837000000000003
- type: recall_at_5
value: 34.323
- task:
type: Retrieval
dataset:
name: MTEB NFCorpus
type: nfcorpus
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 4.692
- type: map_at_10
value: 10.783
- type: map_at_100
value: 13.447999999999999
- type: map_at_1000
value: 14.756
- type: map_at_3
value: 7.646
- type: map_at_5
value: 9.311
- type: mrr_at_1
value: 42.415000000000006
- type: mrr_at_10
value: 50.471
- type: mrr_at_100
value: 51.251999999999995
- type: mrr_at_1000
value: 51.292
- type: mrr_at_3
value: 48.4
- type: mrr_at_5
value: 49.809
- type: ndcg_at_1
value: 40.867
- type: ndcg_at_10
value: 30.303
- type: ndcg_at_100
value: 27.915
- type: ndcg_at_1000
value: 36.734
- type: ndcg_at_3
value: 35.74
- type: ndcg_at_5
value: 33.938
- type: precision_at_1
value: 42.415000000000006
- type: precision_at_10
value: 22.105
- type: precision_at_100
value: 7.173
- type: precision_at_1000
value: 2.007
- type: precision_at_3
value: 33.437
- type: precision_at_5
value: 29.349999999999998
- type: recall_at_1
value: 4.692
- type: recall_at_10
value: 14.798
- type: recall_at_100
value: 28.948
- type: recall_at_1000
value: 59.939
- type: recall_at_3
value: 8.562
- type: recall_at_5
value: 11.818
- task:
type: Retrieval
dataset:
name: MTEB NQ
type: nq
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 27.572999999999997
- type: map_at_10
value: 42.754
- type: map_at_100
value: 43.8
- type: map_at_1000
value: 43.838
- type: map_at_3
value: 38.157000000000004
- type: map_at_5
value: 40.9
- type: mrr_at_1
value: 31.373
- type: mrr_at_10
value: 45.321
- type: mrr_at_100
value: 46.109
- type: mrr_at_1000
value: 46.135
- type: mrr_at_3
value: 41.483
- type: mrr_at_5
value: 43.76
- type: ndcg_at_1
value: 31.373
- type: ndcg_at_10
value: 50.7
- type: ndcg_at_100
value: 55.103
- type: ndcg_at_1000
value: 55.955999999999996
- type: ndcg_at_3
value: 42.069
- type: ndcg_at_5
value: 46.595
- type: precision_at_1
value: 31.373
- type: precision_at_10
value: 8.601
- type: precision_at_100
value: 1.11
- type: precision_at_1000
value: 0.11900000000000001
- type: precision_at_3
value: 19.399
- type: precision_at_5
value: 14.224
- type: recall_at_1
value: 27.572999999999997
- type: recall_at_10
value: 72.465
- type: recall_at_100
value: 91.474
- type: recall_at_1000
value: 97.78099999999999
- type: recall_at_3
value: 50.087
- type: recall_at_5
value: 60.516000000000005
- task:
type: Retrieval
dataset:
name: MTEB QuoraRetrieval
type: quora
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 70.525
- type: map_at_10
value: 84.417
- type: map_at_100
value: 85.07000000000001
- type: map_at_1000
value: 85.085
- type: map_at_3
value: 81.45
- type: map_at_5
value: 83.317
- type: mrr_at_1
value: 81.17999999999999
- type: mrr_at_10
value: 87.34100000000001
- type: mrr_at_100
value: 87.461
- type: mrr_at_1000
value: 87.46199999999999
- type: mrr_at_3
value: 86.372
- type: mrr_at_5
value: 87.046
- type: ndcg_at_1
value: 81.17999999999999
- type: ndcg_at_10
value: 88.144
- type: ndcg_at_100
value: 89.424
- type: ndcg_at_1000
value: 89.517
- type: ndcg_at_3
value: 85.282
- type: ndcg_at_5
value: 86.874
- type: precision_at_1
value: 81.17999999999999
- type: precision_at_10
value: 13.385
- type: precision_at_100
value: 1.533
- type: precision_at_1000
value: 0.157
- type: precision_at_3
value: 37.29
- type: precision_at_5
value: 24.546
- type: recall_at_1
value: 70.525
- type: recall_at_10
value: 95.22500000000001
- type: recall_at_100
value: 99.572
- type: recall_at_1000
value: 99.98899999999999
- type: recall_at_3
value: 87.035
- type: recall_at_5
value: 91.526
- task:
type: Clustering
dataset:
name: MTEB RedditClustering
type: mteb/reddit-clustering
config: default
split: test
revision: 24640382cdbf8abc73003fb0fa6d111a705499eb
metrics:
- type: v_measure
value: 48.284384328108736
- task:
type: Clustering
dataset:
name: MTEB RedditClusteringP2P
type: mteb/reddit-clustering-p2p
config: default
split: test
revision: 282350215ef01743dc01b456c7f5241fa8937f16
metrics:
- type: v_measure
value: 56.02508021518392
- task:
type: Retrieval
dataset:
name: MTEB SCIDOCS
type: scidocs
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 4.023000000000001
- type: map_at_10
value: 10.046
- type: map_at_100
value: 11.802999999999999
- type: map_at_1000
value: 12.074
- type: map_at_3
value: 7.071
- type: map_at_5
value: 8.556
- type: mrr_at_1
value: 19.8
- type: mrr_at_10
value: 30.105999999999998
- type: mrr_at_100
value: 31.16
- type: mrr_at_1000
value: 31.224
- type: mrr_at_3
value: 26.633000000000003
- type: mrr_at_5
value: 28.768
- type: ndcg_at_1
value: 19.8
- type: ndcg_at_10
value: 17.358
- type: ndcg_at_100
value: 24.566
- type: ndcg_at_1000
value: 29.653000000000002
- type: ndcg_at_3
value: 16.052
- type: ndcg_at_5
value: 14.325
- type: precision_at_1
value: 19.8
- type: precision_at_10
value: 9.07
- type: precision_at_100
value: 1.955
- type: precision_at_1000
value: 0.318
- type: precision_at_3
value: 14.933
- type: precision_at_5
value: 12.68
- type: recall_at_1
value: 4.023000000000001
- type: recall_at_10
value: 18.398
- type: recall_at_100
value: 39.683
- type: recall_at_1000
value: 64.625
- type: recall_at_3
value: 9.113
- type: recall_at_5
value: 12.873000000000001
- task:
type: STS
dataset:
name: MTEB SICK-R
type: mteb/sickr-sts
config: default
split: test
revision: a6ea5a8cab320b040a23452cc28066d9beae2cee
metrics:
- type: cos_sim_pearson
value: 87.90508618312852
- type: cos_sim_spearman
value: 83.01323463129205
- type: euclidean_pearson
value: 84.35845059002891
- type: euclidean_spearman
value: 82.85508559018527
- type: manhattan_pearson
value: 84.3682368950498
- type: manhattan_spearman
value: 82.8619728517302
- task:
type: STS
dataset:
name: MTEB STS12
type: mteb/sts12-sts
config: default
split: test
revision: a0d554a64d88156834ff5ae9920b964011b16384
metrics:
- type: cos_sim_pearson
value: 89.28294535873366
- type: cos_sim_spearman
value: 81.61879268131732
- type: euclidean_pearson
value: 85.99053604863724
- type: euclidean_spearman
value: 80.95176684739084
- type: manhattan_pearson
value: 85.98054086663903
- type: manhattan_spearman
value: 80.9911070430335
- task:
type: STS
dataset:
name: MTEB STS13
type: mteb/sts13-sts
config: default
split: test
revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca
metrics:
- type: cos_sim_pearson
value: 86.15898098455258
- type: cos_sim_spearman
value: 86.8247985072307
- type: euclidean_pearson
value: 86.25342429918649
- type: euclidean_spearman
value: 87.13468603023252
- type: manhattan_pearson
value: 86.2006134067688
- type: manhattan_spearman
value: 87.06135811996896
- task:
type: STS
dataset:
name: MTEB STS14
type: mteb/sts14-sts
config: default
split: test
revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375
metrics:
- type: cos_sim_pearson
value: 85.57403998481877
- type: cos_sim_spearman
value: 83.55947075172618
- type: euclidean_pearson
value: 84.97097562965358
- type: euclidean_spearman
value: 83.6287075601467
- type: manhattan_pearson
value: 84.87092197104133
- type: manhattan_spearman
value: 83.53783891641335
- task:
type: STS
dataset:
name: MTEB STS15
type: mteb/sts15-sts
config: default
split: test
revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3
metrics:
- type: cos_sim_pearson
value: 88.14632780204231
- type: cos_sim_spearman
value: 88.74903634923868
- type: euclidean_pearson
value: 88.03922995855112
- type: euclidean_spearman
value: 88.72852190525855
- type: manhattan_pearson
value: 87.9694791024271
- type: manhattan_spearman
value: 88.66461452107418
- task:
type: STS
dataset:
name: MTEB STS16
type: mteb/sts16-sts
config: default
split: test
revision: 4d8694f8f0e0100860b497b999b3dbed754a0513
metrics:
- type: cos_sim_pearson
value: 84.75989818558652
- type: cos_sim_spearman
value: 86.03107893122942
- type: euclidean_pearson
value: 85.21908960133018
- type: euclidean_spearman
value: 85.93012720153482
- type: manhattan_pearson
value: 85.1969170195502
- type: manhattan_spearman
value: 85.8975254197784
- task:
type: STS
dataset:
name: MTEB STS17 (en-en)
type: mteb/sts17-crosslingual-sts
config: en-en
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 89.16803898789955
- type: cos_sim_spearman
value: 88.56139047950525
- type: euclidean_pearson
value: 88.09685325747859
- type: euclidean_spearman
value: 88.0457609458947
- type: manhattan_pearson
value: 88.07054413001431
- type: manhattan_spearman
value: 88.10784098889314
- task:
type: STS
dataset:
name: MTEB STS17 (es-en)
type: mteb/sts17-crosslingual-sts
config: es-en
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 86.7160384474547
- type: cos_sim_spearman
value: 86.4899235500562
- type: euclidean_pearson
value: 85.90854477703468
- type: euclidean_spearman
value: 86.16085009124498
- type: manhattan_pearson
value: 85.9249735317884
- type: manhattan_spearman
value: 86.25038421339116
- task:
type: STS
dataset:
name: MTEB STS17 (es-es)
type: mteb/sts17-crosslingual-sts
config: es-es
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 89.37914622360788
- type: cos_sim_spearman
value: 88.24619159322809
- type: euclidean_pearson
value: 89.00538382632769
- type: euclidean_spearman
value: 88.44675863524736
- type: manhattan_pearson
value: 88.97372120683606
- type: manhattan_spearman
value: 88.33509324222129
- task:
type: STS
dataset:
name: MTEB STS22 (en)
type: mteb/sts22-crosslingual-sts
config: en
split: test
revision: eea2b4fe26a775864c896887d910b76a8098ad3f
metrics:
- type: cos_sim_pearson
value: 66.22181360203069
- type: cos_sim_spearman
value: 65.6218291833768
- type: euclidean_pearson
value: 67.14543788822508
- type: euclidean_spearman
value: 65.21269939987857
- type: manhattan_pearson
value: 67.03304607195636
- type: manhattan_spearman
value: 65.18885316423805
- task:
type: STS
dataset:
name: MTEB STS22 (es)
type: mteb/sts22-crosslingual-sts
config: es
split: test
revision: eea2b4fe26a775864c896887d910b76a8098ad3f
metrics:
- type: cos_sim_pearson
value: 65.71694059677084
- type: cos_sim_spearman
value: 67.96591844540954
- type: euclidean_pearson
value: 65.6964079162296
- type: euclidean_spearman
value: 67.53027948900173
- type: manhattan_pearson
value: 65.93545097673741
- type: manhattan_spearman
value: 67.7261811805062
- task:
type: STS
dataset:
name: MTEB STS22 (es-en)
type: mteb/sts22-crosslingual-sts
config: es-en
split: test
revision: eea2b4fe26a775864c896887d910b76a8098ad3f
metrics:
- type: cos_sim_pearson
value: 75.43544796375058
- type: cos_sim_spearman
value: 78.80462701160789
- type: euclidean_pearson
value: 76.19135575163138
- type: euclidean_spearman
value: 78.4974732597096
- type: manhattan_pearson
value: 76.3254742699264
- type: manhattan_spearman
value: 78.51884307690416
- task:
type: STS
dataset:
name: MTEB STSBenchmark
type: mteb/stsbenchmark-sts
config: default
split: test
revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831
metrics:
- type: cos_sim_pearson
value: 87.46805293607684
- type: cos_sim_spearman
value: 87.83792784689113
- type: euclidean_pearson
value: 87.3872143683234
- type: euclidean_spearman
value: 87.61611384542778
- type: manhattan_pearson
value: 87.38542672601992
- type: manhattan_spearman
value: 87.61423971087297
- task:
type: STS
dataset:
name: MTEB STSES
type: PlanTL-GOB-ES/sts-es
config: default
split: test
revision: 0912bb6c9393c76d62a7c5ee81c4c817ff47c9f4
metrics:
- type: cos_sim_pearson
value: 82.55286866116202
- type: cos_sim_spearman
value: 80.22150503320272
- type: euclidean_pearson
value: 83.27223445187087
- type: euclidean_spearman
value: 80.59078590992925
- type: manhattan_pearson
value: 83.23095887013197
- type: manhattan_spearman
value: 80.87994285189795
- task:
type: Reranking
dataset:
name: MTEB SciDocsRR
type: mteb/scidocs-reranking
config: default
split: test
revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab
metrics:
- type: map
value: 79.29717302265792
- type: mrr
value: 94.02156304117088
- task:
type: Retrieval
dataset:
name: MTEB SciFact
type: scifact
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 49.9
- type: map_at_10
value: 58.626
- type: map_at_100
value: 59.519999999999996
- type: map_at_1000
value: 59.55200000000001
- type: map_at_3
value: 56.232000000000006
- type: map_at_5
value: 57.833
- type: mrr_at_1
value: 52.333
- type: mrr_at_10
value: 60.039
- type: mrr_at_100
value: 60.732
- type: mrr_at_1000
value: 60.75899999999999
- type: mrr_at_3
value: 58.278
- type: mrr_at_5
value: 59.428000000000004
- type: ndcg_at_1
value: 52.333
- type: ndcg_at_10
value: 62.67
- type: ndcg_at_100
value: 66.465
- type: ndcg_at_1000
value: 67.425
- type: ndcg_at_3
value: 58.711999999999996
- type: ndcg_at_5
value: 60.958999999999996
- type: precision_at_1
value: 52.333
- type: precision_at_10
value: 8.333
- type: precision_at_100
value: 1.027
- type: precision_at_1000
value: 0.11100000000000002
- type: precision_at_3
value: 22.778000000000002
- type: precision_at_5
value: 15.267
- type: recall_at_1
value: 49.9
- type: recall_at_10
value: 73.394
- type: recall_at_100
value: 90.43299999999999
- type: recall_at_1000
value: 98.167
- type: recall_at_3
value: 63.032999999999994
- type: recall_at_5
value: 68.444
- task:
type: Clustering
dataset:
name: MTEB SpanishNewsClusteringP2P
type: jinaai/spanish_news_clustering
config: default
split: test
revision: b5edc3d3d7c12c7b9f883e9da50f6732f3624142
metrics:
- type: v_measure
value: 48.30543557796266
- task:
type: Retrieval
dataset:
name: MTEB SpanishPassageRetrievalS2P
type: jinaai/spanish_passage_retrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 14.443
- type: map_at_10
value: 28.736
- type: map_at_100
value: 34.514
- type: map_at_1000
value: 35.004000000000005
- type: map_at_3
value: 20.308
- type: map_at_5
value: 25.404
- type: mrr_at_1
value: 50.29900000000001
- type: mrr_at_10
value: 63.757
- type: mrr_at_100
value: 64.238
- type: mrr_at_1000
value: 64.24600000000001
- type: mrr_at_3
value: 59.480999999999995
- type: mrr_at_5
value: 62.924
- type: ndcg_at_1
value: 50.29900000000001
- type: ndcg_at_10
value: 42.126999999999995
- type: ndcg_at_100
value: 57.208000000000006
- type: ndcg_at_1000
value: 60.646
- type: ndcg_at_3
value: 38.722
- type: ndcg_at_5
value: 40.007999999999996
- type: precision_at_1
value: 50.29900000000001
- type: precision_at_10
value: 19.82
- type: precision_at_100
value: 4.82
- type: precision_at_1000
value: 0.5910000000000001
- type: precision_at_3
value: 31.537
- type: precision_at_5
value: 28.262999999999998
- type: recall_at_1
value: 14.443
- type: recall_at_10
value: 43.885999999999996
- type: recall_at_100
value: 85.231
- type: recall_at_1000
value: 99.07000000000001
- type: recall_at_3
value: 22.486
- type: recall_at_5
value: 33.035
- type: map_at_1
value: 15.578
- type: map_at_10
value: 52.214000000000006
- type: map_at_100
value: 64.791
- type: map_at_1000
value: 64.791
- type: map_at_3
value: 33.396
- type: map_at_5
value: 41.728
- type: mrr_at_1
value: 73.653
- type: mrr_at_10
value: 85.116
- type: mrr_at_100
value: 85.205
- type: mrr_at_1000
value: 85.205
- type: mrr_at_3
value: 84.631
- type: mrr_at_5
value: 85.05
- type: ndcg_at_1
value: 76.64699999999999
- type: ndcg_at_10
value: 70.38600000000001
- type: ndcg_at_100
value: 82.27600000000001
- type: ndcg_at_1000
value: 82.27600000000001
- type: ndcg_at_3
value: 70.422
- type: ndcg_at_5
value: 69.545
- type: precision_at_1
value: 76.64699999999999
- type: precision_at_10
value: 43.653
- type: precision_at_100
value: 7.718999999999999
- type: precision_at_1000
value: 0.772
- type: precision_at_3
value: 64.671
- type: precision_at_5
value: 56.766000000000005
- type: recall_at_1
value: 15.578
- type: recall_at_10
value: 67.459
- type: recall_at_100
value: 100.0
- type: recall_at_1000
value: 100.0
- type: recall_at_3
value: 36.922
- type: recall_at_5
value: 49.424
- task:
type: PairClassification
dataset:
name: MTEB SprintDuplicateQuestions
type: mteb/sprintduplicatequestions-pairclassification
config: default
split: test
revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46
metrics:
- type: cos_sim_accuracy
value: 99.81683168316832
- type: cos_sim_ap
value: 95.61502659412484
- type: cos_sim_f1
value: 90.6813627254509
- type: cos_sim_precision
value: 90.86345381526104
- type: cos_sim_recall
value: 90.5
- type: dot_accuracy
value: 99.8039603960396
- type: dot_ap
value: 95.36783483182609
- type: dot_f1
value: 89.90825688073394
- type: dot_precision
value: 91.68399168399168
- type: dot_recall
value: 88.2
- type: euclidean_accuracy
value: 99.81188118811882
- type: euclidean_ap
value: 95.51583052324564
- type: euclidean_f1
value: 90.46214355948868
- type: euclidean_precision
value: 88.97485493230174
- type: euclidean_recall
value: 92.0
- type: manhattan_accuracy
value: 99.8079207920792
- type: manhattan_ap
value: 95.44030644653718
- type: manhattan_f1
value: 90.37698412698413
- type: manhattan_precision
value: 89.66535433070865
- type: manhattan_recall
value: 91.10000000000001
- type: max_accuracy
value: 99.81683168316832
- type: max_ap
value: 95.61502659412484
- type: max_f1
value: 90.6813627254509
- task:
type: Clustering
dataset:
name: MTEB StackExchangeClustering
type: mteb/stackexchange-clustering
config: default
split: test
revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259
metrics:
- type: v_measure
value: 55.39046705023096
- task:
type: Clustering
dataset:
name: MTEB StackExchangeClusteringP2P
type: mteb/stackexchange-clustering-p2p
config: default
split: test
revision: 815ca46b2622cec33ccafc3735d572c266efdb44
metrics:
- type: v_measure
value: 33.57429225651293
- task:
type: Reranking
dataset:
name: MTEB StackOverflowDupQuestions
type: mteb/stackoverflowdupquestions-reranking
config: default
split: test
revision: e185fbe320c72810689fc5848eb6114e1ef5ec69
metrics:
- type: map
value: 50.17622570658746
- type: mrr
value: 50.99844293778118
- task:
type: Summarization
dataset:
name: MTEB SummEval
type: mteb/summeval
config: default
split: test
revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c
metrics:
- type: cos_sim_pearson
value: 29.97416289382191
- type: cos_sim_spearman
value: 29.871890597161432
- type: dot_pearson
value: 28.768845892613644
- type: dot_spearman
value: 28.872458999448686
- task:
type: Retrieval
dataset:
name: MTEB TRECCOVID
type: trec-covid
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 0.22599999999999998
- type: map_at_10
value: 1.646
- type: map_at_100
value: 9.491
- type: map_at_1000
value: 23.75
- type: map_at_3
value: 0.588
- type: map_at_5
value: 0.9129999999999999
- type: mrr_at_1
value: 84.0
- type: mrr_at_10
value: 89.889
- type: mrr_at_100
value: 89.889
- type: mrr_at_1000
value: 89.889
- type: mrr_at_3
value: 89.667
- type: mrr_at_5
value: 89.667
- type: ndcg_at_1
value: 75.0
- type: ndcg_at_10
value: 67.368
- type: ndcg_at_100
value: 52.834
- type: ndcg_at_1000
value: 49.144
- type: ndcg_at_3
value: 72.866
- type: ndcg_at_5
value: 70.16
- type: precision_at_1
value: 84.0
- type: precision_at_10
value: 71.8
- type: precision_at_100
value: 54.04
- type: precision_at_1000
value: 21.709999999999997
- type: precision_at_3
value: 77.333
- type: precision_at_5
value: 74.0
- type: recall_at_1
value: 0.22599999999999998
- type: recall_at_10
value: 1.9029999999999998
- type: recall_at_100
value: 13.012
- type: recall_at_1000
value: 46.105000000000004
- type: recall_at_3
value: 0.63
- type: recall_at_5
value: 1.0030000000000001
- task:
type: Retrieval
dataset:
name: MTEB Touche2020
type: webis-touche2020
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 1.5
- type: map_at_10
value: 8.193999999999999
- type: map_at_100
value: 14.01
- type: map_at_1000
value: 15.570999999999998
- type: map_at_3
value: 4.361000000000001
- type: map_at_5
value: 5.9270000000000005
- type: mrr_at_1
value: 16.326999999999998
- type: mrr_at_10
value: 33.326
- type: mrr_at_100
value: 34.592
- type: mrr_at_1000
value: 34.592
- type: mrr_at_3
value: 29.252
- type: mrr_at_5
value: 30.680000000000003
- type: ndcg_at_1
value: 15.306000000000001
- type: ndcg_at_10
value: 19.819
- type: ndcg_at_100
value: 33.428000000000004
- type: ndcg_at_1000
value: 45.024
- type: ndcg_at_3
value: 19.667
- type: ndcg_at_5
value: 19.625
- type: precision_at_1
value: 16.326999999999998
- type: precision_at_10
value: 18.367
- type: precision_at_100
value: 7.367
- type: precision_at_1000
value: 1.496
- type: precision_at_3
value: 23.128999999999998
- type: precision_at_5
value: 21.633
- type: recall_at_1
value: 1.5
- type: recall_at_10
value: 14.362
- type: recall_at_100
value: 45.842
- type: recall_at_1000
value: 80.42
- type: recall_at_3
value: 5.99
- type: recall_at_5
value: 8.701
- task:
type: Classification
dataset:
name: MTEB ToxicConversationsClassification
type: mteb/toxic_conversations_50k
config: default
split: test
revision: d7c0de2777da35d6aae2200a62c6e0e5af397c4c
metrics:
- type: accuracy
value: 70.04740000000001
- type: ap
value: 13.58661943759992
- type: f1
value: 53.727487131754195
- task:
type: Classification
dataset:
name: MTEB TweetSentimentExtractionClassification
type: mteb/tweet_sentiment_extraction
config: default
split: test
revision: d604517c81ca91fe16a244d1248fc021f9ecee7a
metrics:
- type: accuracy
value: 61.06395019807584
- type: f1
value: 61.36753664680866
- task:
type: Clustering
dataset:
name: MTEB TwentyNewsgroupsClustering
type: mteb/twentynewsgroups-clustering
config: default
split: test
revision: 6125ec4e24fa026cec8a478383ee943acfbd5449
metrics:
- type: v_measure
value: 40.19881263066229
- task:
type: PairClassification
dataset:
name: MTEB TwitterSemEval2015
type: mteb/twittersemeval2015-pairclassification
config: default
split: test
revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1
metrics:
- type: cos_sim_accuracy
value: 85.19401561661799
- type: cos_sim_ap
value: 71.62462506173092
- type: cos_sim_f1
value: 66.0641327225455
- type: cos_sim_precision
value: 62.234662934453
- type: cos_sim_recall
value: 70.3957783641161
- type: dot_accuracy
value: 84.69333015437802
- type: dot_ap
value: 69.83805526490895
- type: dot_f1
value: 64.85446235265817
- type: dot_precision
value: 59.59328028293546
- type: dot_recall
value: 71.13456464379946
- type: euclidean_accuracy
value: 85.38475293556655
- type: euclidean_ap
value: 72.05594596250286
- type: euclidean_f1
value: 66.53543307086615
- type: euclidean_precision
value: 62.332872291378514
- type: euclidean_recall
value: 71.34564643799473
- type: manhattan_accuracy
value: 85.3907134767837
- type: manhattan_ap
value: 72.04585410650152
- type: manhattan_f1
value: 66.57132642116554
- type: manhattan_precision
value: 60.704194740273856
- type: manhattan_recall
value: 73.6939313984169
- type: max_accuracy
value: 85.3907134767837
- type: max_ap
value: 72.05594596250286
- type: max_f1
value: 66.57132642116554
- task:
type: PairClassification
dataset:
name: MTEB TwitterURLCorpus
type: mteb/twitterurlcorpus-pairclassification
config: default
split: test
revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf
metrics:
- type: cos_sim_accuracy
value: 89.30414871735165
- type: cos_sim_ap
value: 86.4398673359918
- type: cos_sim_f1
value: 78.9243598692186
- type: cos_sim_precision
value: 75.47249350101876
- type: cos_sim_recall
value: 82.7071142593163
- type: dot_accuracy
value: 89.26145845461248
- type: dot_ap
value: 86.32172118414802
- type: dot_f1
value: 78.8277467755645
- type: dot_precision
value: 75.79418662497335
- type: dot_recall
value: 82.11425931629196
- type: euclidean_accuracy
value: 89.24205378973105
- type: euclidean_ap
value: 86.23988673522649
- type: euclidean_f1
value: 78.67984857951413
- type: euclidean_precision
value: 75.2689684269742
- type: euclidean_recall
value: 82.41453649522637
- type: manhattan_accuracy
value: 89.18189932859859
- type: manhattan_ap
value: 86.21003833972824
- type: manhattan_f1
value: 78.70972564850115
- type: manhattan_precision
value: 76.485544094145
- type: manhattan_recall
value: 81.0671388974438
- type: max_accuracy
value: 89.30414871735165
- type: max_ap
value: 86.4398673359918
- type: max_f1
value: 78.9243598692186
- task:
type: Clustering
dataset:
name: MTEB WikiCitiesClustering
type: jinaai/cities_wiki_clustering
config: default
split: test
revision: ddc9ee9242fa65332597f70e967ecc38b9d734fa
metrics:
- type: v_measure
value: 73.254610626148
- task:
type: Retrieval
dataset:
name: MTEB XMarketES
type: jinaai/xmarket_ml
config: default
split: test
revision: 705db869e8107dfe6e34b832af90446e77d813e3
metrics:
- type: map_at_1
value: 5.506
- type: map_at_10
value: 11.546
- type: map_at_100
value: 14.299999999999999
- type: map_at_1000
value: 15.146999999999998
- type: map_at_3
value: 8.748000000000001
- type: map_at_5
value: 10.036000000000001
- type: mrr_at_1
value: 17.902
- type: mrr_at_10
value: 25.698999999999998
- type: mrr_at_100
value: 26.634
- type: mrr_at_1000
value: 26.704
- type: mrr_at_3
value: 23.244999999999997
- type: mrr_at_5
value: 24.555
- type: ndcg_at_1
value: 17.902
- type: ndcg_at_10
value: 19.714000000000002
- type: ndcg_at_100
value: 25.363000000000003
- type: ndcg_at_1000
value: 30.903999999999996
- type: ndcg_at_3
value: 17.884
- type: ndcg_at_5
value: 18.462
- type: precision_at_1
value: 17.902
- type: precision_at_10
value: 10.467
- type: precision_at_100
value: 3.9699999999999998
- type: precision_at_1000
value: 1.1320000000000001
- type: precision_at_3
value: 14.387
- type: precision_at_5
value: 12.727
- type: recall_at_1
value: 5.506
- type: recall_at_10
value: 19.997999999999998
- type: recall_at_100
value: 42.947
- type: recall_at_1000
value: 67.333
- type: recall_at_3
value: 11.158
- type: recall_at_5
value: 14.577000000000002
- task:
type: Retrieval
dataset:
name: MTEB XPQAESRetrieval
type: jinaai/xpqa
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 32.53
- type: map_at_10
value: 58.68600000000001
- type: map_at_100
value: 60.45399999999999
- type: map_at_1000
value: 60.51499999999999
- type: map_at_3
value: 50.356
- type: map_at_5
value: 55.98
- type: mrr_at_1
value: 61.791
- type: mrr_at_10
value: 68.952
- type: mrr_at_100
value: 69.524
- type: mrr_at_1000
value: 69.538
- type: mrr_at_3
value: 67.087
- type: mrr_at_5
value: 68.052
- type: ndcg_at_1
value: 61.791
- type: ndcg_at_10
value: 65.359
- type: ndcg_at_100
value: 70.95700000000001
- type: ndcg_at_1000
value: 71.881
- type: ndcg_at_3
value: 59.999
- type: ndcg_at_5
value: 61.316
- type: precision_at_1
value: 61.791
- type: precision_at_10
value: 18.184
- type: precision_at_100
value: 2.317
- type: precision_at_1000
value: 0.245
- type: precision_at_3
value: 42.203
- type: precision_at_5
value: 31.374999999999996
- type: recall_at_1
value: 32.53
- type: recall_at_10
value: 73.098
- type: recall_at_100
value: 94.029
- type: recall_at_1000
value: 99.842
- type: recall_at_3
value: 54.525
- type: recall_at_5
value: 63.796
---
# AndreasX/jina-embeddings-v2-base-es-Q2_K-GGUF
This model was converted to GGUF format from [`jinaai/jina-embeddings-v2-base-es`](https://huggingface.co/jinaai/jina-embeddings-v2-base-es) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/jinaai/jina-embeddings-v2-base-es) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux):
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo AndreasX/jina-embeddings-v2-base-es-Q2_K-GGUF --hf-file jina-embeddings-v2-base-es-q2_k.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo AndreasX/jina-embeddings-v2-base-es-Q2_K-GGUF --hf-file jina-embeddings-v2-base-es-q2_k.gguf -c 2048
```
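Since this checkpoint is an embedding model, you will typically want embedding output rather than free-form generation. A minimal sketch, assuming a llama.cpp build where `llama-server` accepts the `--embedding` flag and exposes the OpenAI-compatible `/v1/embeddings` endpoint (default port 8080):
```bash
# Start the server in embedding mode (flag assumed from recent llama.cpp builds)
llama-server --hf-repo AndreasX/jina-embeddings-v2-base-es-Q2_K-GGUF --hf-file jina-embeddings-v2-base-es-q2_k.gguf --embedding

# Request an embedding for a Spanish sentence
curl http://localhost:8080/v1/embeddings -H "Content-Type: application/json" \
  -d '{"input": "Hola, mundo"}'
```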
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (for example, `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo AndreasX/jina-embeddings-v2-base-es-Q2_K-GGUF --hf-file jina-embeddings-v2-base-es-q2_k.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo AndreasX/jina-embeddings-v2-base-es-Q2_K-GGUF --hf-file jina-embeddings-v2-base-es-q2_k.gguf -c 2048
```
|
phxia/ann | phxia | 2024-10-26T17:04:51Z | 7 | 0 | pxia | [
"pxia",
"safetensors",
"ann",
"model_hub_mixin",
"pytorch_model_hub_mixin",
"region:us"
] | null | 2024-10-26T17:04:49Z | ---
library_name: pxia
tags:
- ann
- model_hub_mixin
- pxia
- pytorch_model_hub_mixin
---
This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration.
Library: [pxia](https://github.com/not-lain/pxia)
## How to load
```bash
pip install pxia
```
Use the AutoModel class:
```python
from pxia import AutoModel
model = AutoModel.from_pretrained("phxia/ann")
```
Or use the model class directly:
```python
from pxia import ANN
model = ANN.from_pretrained("phxia/ann")
```
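Because `ANN` inherits from `PyTorchModelHubMixin`, the usual mixin methods are also available for saving or sharing a modified copy (the repo id below is a placeholder):
```python
# Save a local checkpoint, or push a copy to your own namespace
model.save_pretrained("ann-checkpoint")
model.push_to_hub("your-username/ann")  # hypothetical repo id
```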
## Contributions
Any contributions are welcome at https://github.com/not-lain/pxia.
<img src="https://huggingface.co/spaces/phxia/README/resolve/main/logo.png"/>
|
anthienlong/replica | anthienlong | 2024-10-26T16:51:41Z | 17 | 3 | diffusers | [
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:unknown",
"region:us"
] | text-to-image | 2024-10-22T08:49:35Z | ---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: ' beautiful woman in Superman suit'
output:
url: >-
images/0a921ee5b7ff474298c27a21680a7e78fbf5a867e733e50d5d6f5fdccb714b9d.png
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: replica
license: unknown
---
# Replica Midjourney
<Gallery />
## Model description
Mimics the Replica Midjourney model from Shakker.
## Trigger words
You should use `replica` to trigger the image generation.
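A minimal sketch of using the LoRA with diffusers on top of the FLUX.1-dev base model (assumes a CUDA GPU with enough memory; the adapter file is picked up automatically from this repo):
```python
import torch
from diffusers import FluxPipeline

# Load the base model, then attach the LoRA adapter from this repo
pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")
pipe.load_lora_weights("anthienlong/replica")

# Include the trigger word in the prompt
image = pipe("replica, beautiful woman in Superman suit").images[0]
image.save("replica.png")
```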
## Download model
Weights for this model are available in Safetensors format.
[Download](/anthienlong/Replica-Midjourney/tree/main) them in the Files & versions tab.
|
mav23/MathCoder2-Llama-3-8B-GGUF | mav23 | 2024-10-26T16:46:41Z | 97 | 0 | null | [
"gguf",
"math",
"text-generation",
"en",
"dataset:MathGenie/MathCode-Pile",
"arxiv:2410.08196",
"base_model:meta-llama/Meta-Llama-3-8B",
"base_model:quantized:meta-llama/Meta-Llama-3-8B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2024-10-26T15:43:23Z | ---
license: apache-2.0
datasets:
- MathGenie/MathCode-Pile
language:
- en
metrics:
- accuracy
base_model:
- meta-llama/Meta-Llama-3-8B
pipeline_tag: text-generation
tags:
- math
---
# MathCoder2
### Introduction
The MathCoder2 models are created by conducting continued pretraining on [MathCode-Pile](https://huggingface.co/datasets/MathGenie/MathCode-Pile). They are introduced in the paper [MathCoder2: Better Math Reasoning from Continued Pretraining on Model-translated Mathematical Code](https://arxiv.org/abs/2410.08196).
The mathematical pretraining dataset includes mathematical code accompanied by natural-language reasoning steps, making it a superior resource for models aimed at performing advanced mathematical reasoning tasks.
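A minimal sketch of running one of the quantized files in this repo with llama.cpp (the exact `--hf-file` name should be taken from the Files tab; the `Q4_K_M` file name below is an assumption):
```bash
llama-cli --hf-repo mav23/MathCoder2-Llama-3-8B-GGUF \
  --hf-file mathcoder2-llama-3-8b.Q4_K_M.gguf \
  -p "Solve for x: 3x + 5 = 20."
```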
### Evaluation

### Citation
If you find this repository helpful, please consider citing our papers:
```
@misc{lu2024mathcoder2bettermathreasoning,
title={MathCoder2: Better Math Reasoning from Continued Pretraining on Model-translated Mathematical Code},
author={Zimu Lu and Aojun Zhou and Ke Wang and Houxing Ren and Weikang Shi and Junting Pan and Mingjie Zhan and Hongsheng Li},
year={2024},
eprint={2410.08196},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2410.08196},
}
```
```
@inproceedings{
wang2024mathcoder,
title={MathCoder: Seamless Code Integration in {LLM}s for Enhanced Mathematical Reasoning},
author={Zimu Lu and Aojun Zhou and Sichun Luo and Weikang Shi and Renrui Zhang and Linqi Song and Mingjie Zhan and Hongsheng Li},
booktitle={The Twelfth International Conference on Learning Representations},
year={2024},
url={https://openreview.net/forum?id=z8TW0ttBPp}
}
``` |
RichardErkhov/liminerity_-_llama3-3b-1.58bit-gguf | RichardErkhov | 2024-10-26T16:46:24Z | 6 | 0 | null | [
"gguf",
"endpoints_compatible",
"region:us"
] | null | 2024-10-26T15:24:34Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
llama3-3b-1.58bit - GGUF
- Model creator: https://huggingface.co/liminerity/
- Original model: https://huggingface.co/liminerity/llama3-3b-1.58bit/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [llama3-3b-1.58bit.Q2_K.gguf](https://huggingface.co/RichardErkhov/liminerity_-_llama3-3b-1.58bit-gguf/blob/main/llama3-3b-1.58bit.Q2_K.gguf) | Q2_K | 1.43GB |
| [llama3-3b-1.58bit.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/liminerity_-_llama3-3b-1.58bit-gguf/blob/main/llama3-3b-1.58bit.Q3_K_S.gguf) | Q3_K_S | 1.66GB |
| [llama3-3b-1.58bit.Q3_K.gguf](https://huggingface.co/RichardErkhov/liminerity_-_llama3-3b-1.58bit-gguf/blob/main/llama3-3b-1.58bit.Q3_K.gguf) | Q3_K | 1.81GB |
| [llama3-3b-1.58bit.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/liminerity_-_llama3-3b-1.58bit-gguf/blob/main/llama3-3b-1.58bit.Q3_K_M.gguf) | Q3_K_M | 1.81GB |
| [llama3-3b-1.58bit.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/liminerity_-_llama3-3b-1.58bit-gguf/blob/main/llama3-3b-1.58bit.Q3_K_L.gguf) | Q3_K_L | 1.95GB |
| [llama3-3b-1.58bit.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/liminerity_-_llama3-3b-1.58bit-gguf/blob/main/llama3-3b-1.58bit.IQ4_XS.gguf) | IQ4_XS | 2.02GB |
| [llama3-3b-1.58bit.Q4_0.gguf](https://huggingface.co/RichardErkhov/liminerity_-_llama3-3b-1.58bit-gguf/blob/main/llama3-3b-1.58bit.Q4_0.gguf) | Q4_0 | 2.1GB |
| [llama3-3b-1.58bit.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/liminerity_-_llama3-3b-1.58bit-gguf/blob/main/llama3-3b-1.58bit.IQ4_NL.gguf) | IQ4_NL | 2.12GB |
| [llama3-3b-1.58bit.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/liminerity_-_llama3-3b-1.58bit-gguf/blob/main/llama3-3b-1.58bit.Q4_K_S.gguf) | Q4_K_S | 2.12GB |
| [llama3-3b-1.58bit.Q4_K.gguf](https://huggingface.co/RichardErkhov/liminerity_-_llama3-3b-1.58bit-gguf/blob/main/llama3-3b-1.58bit.Q4_K.gguf) | Q4_K | 2.23GB |
| [llama3-3b-1.58bit.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/liminerity_-_llama3-3b-1.58bit-gguf/blob/main/llama3-3b-1.58bit.Q4_K_M.gguf) | Q4_K_M | 2.23GB |
| [llama3-3b-1.58bit.Q4_1.gguf](https://huggingface.co/RichardErkhov/liminerity_-_llama3-3b-1.58bit-gguf/blob/main/llama3-3b-1.58bit.Q4_1.gguf) | Q4_1 | 2.31GB |
| [llama3-3b-1.58bit.Q5_0.gguf](https://huggingface.co/RichardErkhov/liminerity_-_llama3-3b-1.58bit-gguf/blob/main/llama3-3b-1.58bit.Q5_0.gguf) | Q5_0 | 2.53GB |
| [llama3-3b-1.58bit.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/liminerity_-_llama3-3b-1.58bit-gguf/blob/main/llama3-3b-1.58bit.Q5_K_S.gguf) | Q5_K_S | 2.53GB |
| [llama3-3b-1.58bit.Q5_K.gguf](https://huggingface.co/RichardErkhov/liminerity_-_llama3-3b-1.58bit-gguf/blob/main/llama3-3b-1.58bit.Q5_K.gguf) | Q5_K | 2.59GB |
| [llama3-3b-1.58bit.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/liminerity_-_llama3-3b-1.58bit-gguf/blob/main/llama3-3b-1.58bit.Q5_K_M.gguf) | Q5_K_M | 2.59GB |
| [llama3-3b-1.58bit.Q5_1.gguf](https://huggingface.co/RichardErkhov/liminerity_-_llama3-3b-1.58bit-gguf/blob/main/llama3-3b-1.58bit.Q5_1.gguf) | Q5_1 | 2.74GB |
| [llama3-3b-1.58bit.Q6_K.gguf](https://huggingface.co/RichardErkhov/liminerity_-_llama3-3b-1.58bit-gguf/blob/main/llama3-3b-1.58bit.Q6_K.gguf) | Q6_K | 2.97GB |
| [llama3-3b-1.58bit.Q8_0.gguf](https://huggingface.co/RichardErkhov/liminerity_-_llama3-3b-1.58bit-gguf/blob/main/llama3-3b-1.58bit.Q8_0.gguf) | Q8_0 | 3.85GB |
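To fetch a single quant from the table above without cloning the whole repo, `huggingface-cli` works well (pick whichever file fits your memory budget; `Q4_K_M` is just an example):
```bash
huggingface-cli download RichardErkhov/liminerity_-_llama3-3b-1.58bit-gguf \
  llama3-3b-1.58bit.Q4_K_M.gguf --local-dir .
```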
Original model description:
---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
base_model: liminerity/llama3-1bit2
---
NOT GOOD BUT... I finally converted Llama 3 to BitNet
# Uploaded model
- **Developed by:** liminerity
- **License:** apache-2.0
- **Finetuned from model :** liminerity/llama3-1bit2
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
barc0/Llama-3.1-ARC-Heavy-Transduction-8B | barc0 | 2024-10-26T16:38:23Z | 46 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"llama",
"text-generation",
"alignment-handbook",
"trl",
"sft",
"generated_from_trainer",
"conversational",
"dataset:barc0/transduction_heavy_100k_jsonl",
"dataset:barc0/transduction_heavy_suggestfunction_100k_jsonl",
"base_model:meta-llama/Llama-3.1-8B-Instruct",
"base_model:finetune:meta-llama/Llama-3.1-8B-Instruct",
"license:llama3.1",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-10-26T02:55:51Z | ---
library_name: transformers
license: llama3.1
base_model: meta-llama/Meta-Llama-3.1-8B-Instruct
tags:
- alignment-handbook
- trl
- sft
- generated_from_trainer
datasets:
- barc0/transduction_heavy_100k_jsonl
- barc0/transduction_heavy_suggestfunction_100k_jsonl
model-index:
- name: heavy-barc-llama3.1-8b-ins-fft-transduction_lr1e-5_epoch3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# heavy-barc-llama3.1-8b-ins-fft-transduction_lr1e-5_epoch3
This model is a fine-tuned version of [meta-llama/Meta-Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3.1-8B-Instruct) on the barc0/transduction_heavy_100k_jsonl and the barc0/transduction_heavy_suggestfunction_100k_jsonl datasets.
It achieves the following results on the evaluation set:
- Loss: 0.0319
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 4
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 2
- total_train_batch_size: 128
- total_eval_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
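As a rough sketch, these settings map onto TRL's `SFTConfig` as follows (dataset loading and chat formatting are omitted; this is an approximation, not the exact training script):

```python
from trl import SFTConfig

# Mirrors the hyperparameters above; the 8-GPU layout
# (8 devices x batch 8 x grad-accum 2 = effective batch 128) comes from the launcher.
config = SFTConfig(
    learning_rate=1e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=4,
    gradient_accumulation_steps=2,
    num_train_epochs=3,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    seed=42,
    bf16=True,  # assumption: typical for Llama-3.1 full fine-tuning
    output_dir="heavy-barc-llama3.1-8b-ins-fft-transduction_lr1e-5_epoch3",
)
```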
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.0446 | 1.0 | 1478 | 0.0433 |
| 0.0229 | 2.0 | 2956 | 0.0323 |
| 0.014 | 3.0 | 4434 | 0.0319 |
### Framework versions
- Transformers 4.45.0.dev0
- Pytorch 2.4.0+cu121
- Datasets 3.0.2
- Tokenizers 0.19.1
|
mradermacher/G2-BigGSHT-27B-calc-GGUF | mradermacher | 2024-10-26T16:25:10Z | 33 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:djuna/G2-BigGSHT-27B-calc",
"base_model:quantized:djuna/G2-BigGSHT-27B-calc",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-10-26T13:15:46Z | ---
base_model: djuna/G2-BigGSHT-27B-calc
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/djuna/G2-BigGSHT-27B-calc
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/G2-BigGSHT-27B-calc-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
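For a quick start, a single command is usually enough (a minimal sketch; the file name comes from the table below):
```bash
llama-cli --hf-repo mradermacher/G2-BigGSHT-27B-calc-GGUF \
  --hf-file G2-BigGSHT-27B-calc.Q4_K_M.gguf -p "Hello"
```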
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/G2-BigGSHT-27B-calc-GGUF/resolve/main/G2-BigGSHT-27B-calc.Q2_K.gguf) | Q2_K | 10.5 | |
| [GGUF](https://huggingface.co/mradermacher/G2-BigGSHT-27B-calc-GGUF/resolve/main/G2-BigGSHT-27B-calc.Q3_K_S.gguf) | Q3_K_S | 12.3 | |
| [GGUF](https://huggingface.co/mradermacher/G2-BigGSHT-27B-calc-GGUF/resolve/main/G2-BigGSHT-27B-calc.Q3_K_M.gguf) | Q3_K_M | 13.5 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/G2-BigGSHT-27B-calc-GGUF/resolve/main/G2-BigGSHT-27B-calc.Q3_K_L.gguf) | Q3_K_L | 14.6 | |
| [GGUF](https://huggingface.co/mradermacher/G2-BigGSHT-27B-calc-GGUF/resolve/main/G2-BigGSHT-27B-calc.IQ4_XS.gguf) | IQ4_XS | 15.0 | |
| [GGUF](https://huggingface.co/mradermacher/G2-BigGSHT-27B-calc-GGUF/resolve/main/G2-BigGSHT-27B-calc.Q4_K_S.gguf) | Q4_K_S | 15.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/G2-BigGSHT-27B-calc-GGUF/resolve/main/G2-BigGSHT-27B-calc.Q4_K_M.gguf) | Q4_K_M | 16.7 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/G2-BigGSHT-27B-calc-GGUF/resolve/main/G2-BigGSHT-27B-calc.Q5_K_S.gguf) | Q5_K_S | 19.0 | |
| [GGUF](https://huggingface.co/mradermacher/G2-BigGSHT-27B-calc-GGUF/resolve/main/G2-BigGSHT-27B-calc.Q5_K_M.gguf) | Q5_K_M | 19.5 | |
| [GGUF](https://huggingface.co/mradermacher/G2-BigGSHT-27B-calc-GGUF/resolve/main/G2-BigGSHT-27B-calc.Q6_K.gguf) | Q6_K | 22.4 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/G2-BigGSHT-27B-calc-GGUF/resolve/main/G2-BigGSHT-27B-calc.Q8_0.gguf) | Q8_0 | 29.0 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
Zlovoblachko/dimension3_setfit | Zlovoblachko | 2024-10-26T16:24:22Z | 5 | 0 | setfit | [
"setfit",
"safetensors",
"bert",
"sentence-transformers",
"text-classification",
"generated_from_setfit_trainer",
"arxiv:2209.11055",
"base_model:sentence-transformers/all-MiniLM-L6-v2",
"base_model:finetune:sentence-transformers/all-MiniLM-L6-v2",
"model-index",
"region:us"
] | text-classification | 2024-10-26T16:24:17Z | ---
base_model: sentence-transformers/all-MiniLM-L6-v2
library_name: setfit
metrics:
- f1
pipeline_tag: text-classification
tags:
- setfit
- sentence-transformers
- text-classification
- generated_from_setfit_trainer
widget: []
inference: true
model-index:
- name: SetFit with sentence-transformers/all-MiniLM-L6-v2
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: Unknown
type: unknown
split: test
metrics:
- type: f1
value: 0.5494505494505495
name: F1
---
# SetFit with sentence-transformers/all-MiniLM-L6-v2
This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification. This SetFit model uses [sentence-transformers/all-MiniLM-L6-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2) as the Sentence Transformer embedding model. A [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance is used for classification.
The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Model Details
### Model Description
- **Model Type:** SetFit
- **Sentence Transformer body:** [sentence-transformers/all-MiniLM-L6-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2)
- **Classification head:** a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance
- **Maximum Sequence Length:** 256 tokens
- **Number of Classes:** 2 classes
<!-- - **Training Dataset:** [Unknown](https://huggingface.co/datasets/unknown) -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit)
- **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055)
- **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit)
## Evaluation
### Metrics
| Label | F1 |
|:--------|:-------|
| **all** | 0.5495 |
## Uses
### Direct Use for Inference
First install the SetFit library:
```bash
pip install setfit
```
Then you can load this model and run inference.
```python
from setfit import SetFitModel
# Download from the 🤗 Hub
model = SetFitModel.from_pretrained("Zlovoblachko/dimension3_setfit")
# Run inference
preds = model("I loved the spiderman movie!")
```
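For fine-tuning on your own labeled examples, the standard SetFit training loop looks roughly like this (the tiny dataset below is a stand-in; see the hyperparameters section for the values used here):
```python
from datasets import Dataset
from setfit import SetFitModel, Trainer, TrainingArguments

# Tiny stand-in dataset with "text" and "label" columns
train_ds = Dataset.from_dict({
    "text": ["great essay structure", "ideas are hard to follow"],
    "label": [1, 0],
})

model = SetFitModel.from_pretrained("sentence-transformers/all-MiniLM-L6-v2")
args = TrainingArguments(batch_size=16, num_epochs=1)
trainer = Trainer(model=model, args=args, train_dataset=train_ds)
trainer.train()
```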
<!--
### Downstream Use
*List how someone could finetune this model on their own dataset.*
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Hyperparameters
- batch_size: (16, 16)
- num_epochs: (1, 1)
- max_steps: -1
- sampling_strategy: oversampling
- body_learning_rate: (2.260895905036282e-05, 2.260895905036282e-05)
- head_learning_rate: 0.01
- loss: CosineSimilarityLoss
- distance_metric: cosine_distance
- margin: 0.25
- end_to_end: False
- use_amp: False
- warmup_proportion: 0.1
- l2_weight: 0.01
- seed: 42
- eval_max_steps: -1
- load_best_model_at_end: False
### Training Results
| Epoch | Step | Training Loss | Validation Loss |
|:------:|:----:|:-------------:|:---------------:|
| 0.0004 | 1 | 0.3835 | - |
| 0.0177 | 50 | 0.3106 | - |
| 0.0353 | 100 | 0.3232 | - |
| 0.0530 | 150 | 0.319 | - |
| 0.0706 | 200 | 0.3146 | - |
| 0.0883 | 250 | 0.3194 | - |
| 0.1059 | 300 | 0.3166 | - |
| 0.1236 | 350 | 0.2941 | - |
| 0.1412 | 400 | 0.3289 | - |
| 0.1589 | 450 | 0.3108 | - |
| 0.1766 | 500 | 0.3099 | - |
| 0.1942 | 550 | 0.3072 | - |
| 0.2119 | 600 | 0.2994 | - |
| 0.2295 | 650 | 0.3062 | - |
| 0.2472 | 700 | 0.3046 | - |
| 0.2648 | 750 | 0.3086 | - |
| 0.2825 | 800 | 0.3039 | - |
| 0.3001 | 850 | 0.3096 | - |
| 0.3178 | 900 | 0.3134 | - |
| 0.3355 | 950 | 0.2965 | - |
| 0.3531 | 1000 | 0.3147 | - |
| 0.3708 | 1050 | 0.317 | - |
| 0.3884 | 1100 | 0.3123 | - |
| 0.4061 | 1150 | 0.3221 | - |
| 0.4237 | 1200 | 0.2971 | - |
| 0.4414 | 1250 | 0.2928 | - |
| 0.4590 | 1300 | 0.2977 | - |
| 0.4767 | 1350 | 0.3268 | - |
| 0.4944 | 1400 | 0.2785 | - |
| 0.5120 | 1450 | 0.3156 | - |
| 0.5297 | 1500 | 0.3148 | - |
| 0.5473 | 1550 | 0.2909 | - |
| 0.5650 | 1600 | 0.3225 | - |
| 0.5826 | 1650 | 0.3072 | - |
| 0.6003 | 1700 | 0.3099 | - |
| 0.6179 | 1750 | 0.311 | - |
| 0.6356 | 1800 | 0.3213 | - |
| 0.6532 | 1850 | 0.2937 | - |
| 0.6709 | 1900 | 0.3177 | - |
| 0.6886 | 1950 | 0.3088 | - |
| 0.7062 | 2000 | 0.3017 | - |
| 0.7239 | 2050 | 0.3076 | - |
| 0.7415 | 2100 | 0.3164 | - |
| 0.7592 | 2150 | 0.295 | - |
| 0.7768 | 2200 | 0.2957 | - |
| 0.7945 | 2250 | 0.3064 | - |
| 0.8121 | 2300 | 0.3146 | - |
| 0.8298 | 2350 | 0.3114 | - |
| 0.8475 | 2400 | 0.3151 | - |
| 0.8651 | 2450 | 0.3033 | - |
| 0.8828 | 2500 | 0.3039 | - |
| 0.9004 | 2550 | 0.3152 | - |
| 0.9181 | 2600 | 0.3185 | - |
| 0.9357 | 2650 | 0.2927 | - |
| 0.9534 | 2700 | 0.3174 | - |
| 0.9710 | 2750 | 0.3003 | - |
| 0.9887 | 2800 | 0.3157 | - |
### Framework Versions
- Python: 3.10.12
- SetFit: 1.1.0
- Sentence Transformers: 3.2.1
- Transformers: 4.44.2
- PyTorch: 2.5.0+cu121
- Datasets: 3.0.2
- Tokenizers: 0.19.1
## Citation
### BibTeX
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
--> |
RichardErkhov/Jahid05_-_llama-3.2-3b-website-prompt-generator-gguf | RichardErkhov | 2024-10-26T16:23:58Z | 256 | 0 | null | [
"gguf",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-10-26T15:32:57Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
llama-3.2-3b-website-prompt-generator - GGUF
- Model creator: https://huggingface.co/Jahid05/
- Original model: https://huggingface.co/Jahid05/llama-3.2-3b-website-prompt-generator/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [llama-3.2-3b-website-prompt-generator.Q2_K.gguf](https://huggingface.co/RichardErkhov/Jahid05_-_llama-3.2-3b-website-prompt-generator-gguf/blob/main/llama-3.2-3b-website-prompt-generator.Q2_K.gguf) | Q2_K | 1.27GB |
| [llama-3.2-3b-website-prompt-generator.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/Jahid05_-_llama-3.2-3b-website-prompt-generator-gguf/blob/main/llama-3.2-3b-website-prompt-generator.Q3_K_S.gguf) | Q3_K_S | 1.44GB |
| [llama-3.2-3b-website-prompt-generator.Q3_K.gguf](https://huggingface.co/RichardErkhov/Jahid05_-_llama-3.2-3b-website-prompt-generator-gguf/blob/main/llama-3.2-3b-website-prompt-generator.Q3_K.gguf) | Q3_K | 1.57GB |
| [llama-3.2-3b-website-prompt-generator.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/Jahid05_-_llama-3.2-3b-website-prompt-generator-gguf/blob/main/llama-3.2-3b-website-prompt-generator.Q3_K_M.gguf) | Q3_K_M | 1.57GB |
| [llama-3.2-3b-website-prompt-generator.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/Jahid05_-_llama-3.2-3b-website-prompt-generator-gguf/blob/main/llama-3.2-3b-website-prompt-generator.Q3_K_L.gguf) | Q3_K_L | 1.69GB |
| [llama-3.2-3b-website-prompt-generator.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/Jahid05_-_llama-3.2-3b-website-prompt-generator-gguf/blob/main/llama-3.2-3b-website-prompt-generator.IQ4_XS.gguf) | IQ4_XS | 1.71GB |
| [llama-3.2-3b-website-prompt-generator.Q4_0.gguf](https://huggingface.co/RichardErkhov/Jahid05_-_llama-3.2-3b-website-prompt-generator-gguf/blob/main/llama-3.2-3b-website-prompt-generator.Q4_0.gguf) | Q4_0 | 1.79GB |
| [llama-3.2-3b-website-prompt-generator.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/Jahid05_-_llama-3.2-3b-website-prompt-generator-gguf/blob/main/llama-3.2-3b-website-prompt-generator.IQ4_NL.gguf) | IQ4_NL | 1.79GB |
| [llama-3.2-3b-website-prompt-generator.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/Jahid05_-_llama-3.2-3b-website-prompt-generator-gguf/blob/main/llama-3.2-3b-website-prompt-generator.Q4_K_S.gguf) | Q4_K_S | 1.8GB |
| [llama-3.2-3b-website-prompt-generator.Q4_K.gguf](https://huggingface.co/RichardErkhov/Jahid05_-_llama-3.2-3b-website-prompt-generator-gguf/blob/main/llama-3.2-3b-website-prompt-generator.Q4_K.gguf) | Q4_K | 1.88GB |
| [llama-3.2-3b-website-prompt-generator.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/Jahid05_-_llama-3.2-3b-website-prompt-generator-gguf/blob/main/llama-3.2-3b-website-prompt-generator.Q4_K_M.gguf) | Q4_K_M | 1.88GB |
| [llama-3.2-3b-website-prompt-generator.Q4_1.gguf](https://huggingface.co/RichardErkhov/Jahid05_-_llama-3.2-3b-website-prompt-generator-gguf/blob/main/llama-3.2-3b-website-prompt-generator.Q4_1.gguf) | Q4_1 | 1.95GB |
| [llama-3.2-3b-website-prompt-generator.Q5_0.gguf](https://huggingface.co/RichardErkhov/Jahid05_-_llama-3.2-3b-website-prompt-generator-gguf/blob/main/llama-3.2-3b-website-prompt-generator.Q5_0.gguf) | Q5_0 | 2.11GB |
| [llama-3.2-3b-website-prompt-generator.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/Jahid05_-_llama-3.2-3b-website-prompt-generator-gguf/blob/main/llama-3.2-3b-website-prompt-generator.Q5_K_S.gguf) | Q5_K_S | 2.11GB |
| [llama-3.2-3b-website-prompt-generator.Q5_K.gguf](https://huggingface.co/RichardErkhov/Jahid05_-_llama-3.2-3b-website-prompt-generator-gguf/blob/main/llama-3.2-3b-website-prompt-generator.Q5_K.gguf) | Q5_K | 2.16GB |
| [llama-3.2-3b-website-prompt-generator.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/Jahid05_-_llama-3.2-3b-website-prompt-generator-gguf/blob/main/llama-3.2-3b-website-prompt-generator.Q5_K_M.gguf) | Q5_K_M | 2.16GB |
| [llama-3.2-3b-website-prompt-generator.Q5_1.gguf](https://huggingface.co/RichardErkhov/Jahid05_-_llama-3.2-3b-website-prompt-generator-gguf/blob/main/llama-3.2-3b-website-prompt-generator.Q5_1.gguf) | Q5_1 | 2.28GB |
| [llama-3.2-3b-website-prompt-generator.Q6_K.gguf](https://huggingface.co/RichardErkhov/Jahid05_-_llama-3.2-3b-website-prompt-generator-gguf/blob/main/llama-3.2-3b-website-prompt-generator.Q6_K.gguf) | Q6_K | 2.46GB |
| [llama-3.2-3b-website-prompt-generator.Q8_0.gguf](https://huggingface.co/RichardErkhov/Jahid05_-_llama-3.2-3b-website-prompt-generator-gguf/blob/main/llama-3.2-3b-website-prompt-generator.Q8_0.gguf) | Q8_0 | 3.19GB |
Original model description:
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
RichardErkhov/bryan7uo_-_llama-3.2-3b-it-Ecommerce-ChatBot-gguf | RichardErkhov | 2024-10-26T16:17:38Z | 5 | 0 | null | [
"gguf",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-10-26T15:28:23Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
llama-3.2-3b-it-Ecommerce-ChatBot - GGUF
- Model creator: https://huggingface.co/bryan7uo/
- Original model: https://huggingface.co/bryan7uo/llama-3.2-3b-it-Ecommerce-ChatBot/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [llama-3.2-3b-it-Ecommerce-ChatBot.Q2_K.gguf](https://huggingface.co/RichardErkhov/bryan7uo_-_llama-3.2-3b-it-Ecommerce-ChatBot-gguf/blob/main/llama-3.2-3b-it-Ecommerce-ChatBot.Q2_K.gguf) | Q2_K | 1.27GB |
| [llama-3.2-3b-it-Ecommerce-ChatBot.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/bryan7uo_-_llama-3.2-3b-it-Ecommerce-ChatBot-gguf/blob/main/llama-3.2-3b-it-Ecommerce-ChatBot.Q3_K_S.gguf) | Q3_K_S | 1.44GB |
| [llama-3.2-3b-it-Ecommerce-ChatBot.Q3_K.gguf](https://huggingface.co/RichardErkhov/bryan7uo_-_llama-3.2-3b-it-Ecommerce-ChatBot-gguf/blob/main/llama-3.2-3b-it-Ecommerce-ChatBot.Q3_K.gguf) | Q3_K | 1.57GB |
| [llama-3.2-3b-it-Ecommerce-ChatBot.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/bryan7uo_-_llama-3.2-3b-it-Ecommerce-ChatBot-gguf/blob/main/llama-3.2-3b-it-Ecommerce-ChatBot.Q3_K_M.gguf) | Q3_K_M | 1.57GB |
| [llama-3.2-3b-it-Ecommerce-ChatBot.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/bryan7uo_-_llama-3.2-3b-it-Ecommerce-ChatBot-gguf/blob/main/llama-3.2-3b-it-Ecommerce-ChatBot.Q3_K_L.gguf) | Q3_K_L | 1.69GB |
| [llama-3.2-3b-it-Ecommerce-ChatBot.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/bryan7uo_-_llama-3.2-3b-it-Ecommerce-ChatBot-gguf/blob/main/llama-3.2-3b-it-Ecommerce-ChatBot.IQ4_XS.gguf) | IQ4_XS | 1.71GB |
| [llama-3.2-3b-it-Ecommerce-ChatBot.Q4_0.gguf](https://huggingface.co/RichardErkhov/bryan7uo_-_llama-3.2-3b-it-Ecommerce-ChatBot-gguf/blob/main/llama-3.2-3b-it-Ecommerce-ChatBot.Q4_0.gguf) | Q4_0 | 1.79GB |
| [llama-3.2-3b-it-Ecommerce-ChatBot.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/bryan7uo_-_llama-3.2-3b-it-Ecommerce-ChatBot-gguf/blob/main/llama-3.2-3b-it-Ecommerce-ChatBot.IQ4_NL.gguf) | IQ4_NL | 1.79GB |
| [llama-3.2-3b-it-Ecommerce-ChatBot.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/bryan7uo_-_llama-3.2-3b-it-Ecommerce-ChatBot-gguf/blob/main/llama-3.2-3b-it-Ecommerce-ChatBot.Q4_K_S.gguf) | Q4_K_S | 1.8GB |
| [llama-3.2-3b-it-Ecommerce-ChatBot.Q4_K.gguf](https://huggingface.co/RichardErkhov/bryan7uo_-_llama-3.2-3b-it-Ecommerce-ChatBot-gguf/blob/main/llama-3.2-3b-it-Ecommerce-ChatBot.Q4_K.gguf) | Q4_K | 1.88GB |
| [llama-3.2-3b-it-Ecommerce-ChatBot.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/bryan7uo_-_llama-3.2-3b-it-Ecommerce-ChatBot-gguf/blob/main/llama-3.2-3b-it-Ecommerce-ChatBot.Q4_K_M.gguf) | Q4_K_M | 1.88GB |
| [llama-3.2-3b-it-Ecommerce-ChatBot.Q4_1.gguf](https://huggingface.co/RichardErkhov/bryan7uo_-_llama-3.2-3b-it-Ecommerce-ChatBot-gguf/blob/main/llama-3.2-3b-it-Ecommerce-ChatBot.Q4_1.gguf) | Q4_1 | 1.95GB |
| [llama-3.2-3b-it-Ecommerce-ChatBot.Q5_0.gguf](https://huggingface.co/RichardErkhov/bryan7uo_-_llama-3.2-3b-it-Ecommerce-ChatBot-gguf/blob/main/llama-3.2-3b-it-Ecommerce-ChatBot.Q5_0.gguf) | Q5_0 | 2.11GB |
| [llama-3.2-3b-it-Ecommerce-ChatBot.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/bryan7uo_-_llama-3.2-3b-it-Ecommerce-ChatBot-gguf/blob/main/llama-3.2-3b-it-Ecommerce-ChatBot.Q5_K_S.gguf) | Q5_K_S | 2.11GB |
| [llama-3.2-3b-it-Ecommerce-ChatBot.Q5_K.gguf](https://huggingface.co/RichardErkhov/bryan7uo_-_llama-3.2-3b-it-Ecommerce-ChatBot-gguf/blob/main/llama-3.2-3b-it-Ecommerce-ChatBot.Q5_K.gguf) | Q5_K | 2.16GB |
| [llama-3.2-3b-it-Ecommerce-ChatBot.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/bryan7uo_-_llama-3.2-3b-it-Ecommerce-ChatBot-gguf/blob/main/llama-3.2-3b-it-Ecommerce-ChatBot.Q5_K_M.gguf) | Q5_K_M | 2.16GB |
| [llama-3.2-3b-it-Ecommerce-ChatBot.Q5_1.gguf](https://huggingface.co/RichardErkhov/bryan7uo_-_llama-3.2-3b-it-Ecommerce-ChatBot-gguf/blob/main/llama-3.2-3b-it-Ecommerce-ChatBot.Q5_1.gguf) | Q5_1 | 2.28GB |
| [llama-3.2-3b-it-Ecommerce-ChatBot.Q6_K.gguf](https://huggingface.co/RichardErkhov/bryan7uo_-_llama-3.2-3b-it-Ecommerce-ChatBot-gguf/blob/main/llama-3.2-3b-it-Ecommerce-ChatBot.Q6_K.gguf) | Q6_K | 2.46GB |
| [llama-3.2-3b-it-Ecommerce-ChatBot.Q8_0.gguf](https://huggingface.co/RichardErkhov/bryan7uo_-_llama-3.2-3b-it-Ecommerce-ChatBot-gguf/blob/main/llama-3.2-3b-it-Ecommerce-ChatBot.Q8_0.gguf) | Q8_0 | 3.19GB |
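The table above lists one file per quantization type. If you want to check what actually landed in a given file, the sketch below (not part of the original card) downloads one quant and dumps its header metadata; it assumes the `huggingface_hub` and `gguf` Python packages.
```python
# Sketch: inspect one of the quantized files above (assumes `pip install
# huggingface_hub gguf`; the repo and file names are taken from the table).
from huggingface_hub import hf_hub_download
from gguf import GGUFReader

path = hf_hub_download(
    repo_id="RichardErkhov/bryan7uo_-_llama-3.2-3b-it-Ecommerce-ChatBot-gguf",
    filename="llama-3.2-3b-it-Ecommerce-ChatBot.Q4_K_M.gguf",
)

reader = GGUFReader(path)
# Metadata keys stored in the GGUF header (architecture, context length, ...).
for name in reader.fields:
    print(name)
# A few tensors with their quantization types and shapes.
for tensor in reader.tensors[:5]:
    print(tensor.name, tensor.tensor_type, tensor.shape)
```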
Original model description:
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
RichardErkhov/opencsg_-_opencsg-stable-code-3b-v1-gguf | RichardErkhov | 2024-10-26T16:08:00Z | 13 | 0 | null | [
"gguf",
"endpoints_compatible",
"region:us"
] | null | 2024-10-26T15:13:58Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
opencsg-stable-code-3b-v1 - GGUF
- Model creator: https://huggingface.co/opencsg/
- Original model: https://huggingface.co/opencsg/opencsg-stable-code-3b-v1/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [opencsg-stable-code-3b-v1.Q2_K.gguf](https://huggingface.co/RichardErkhov/opencsg_-_opencsg-stable-code-3b-v1-gguf/blob/main/opencsg-stable-code-3b-v1.Q2_K.gguf) | Q2_K | 1.01GB |
| [opencsg-stable-code-3b-v1.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/opencsg_-_opencsg-stable-code-3b-v1-gguf/blob/main/opencsg-stable-code-3b-v1.Q3_K_S.gguf) | Q3_K_S | 1.17GB |
| [opencsg-stable-code-3b-v1.Q3_K.gguf](https://huggingface.co/RichardErkhov/opencsg_-_opencsg-stable-code-3b-v1-gguf/blob/main/opencsg-stable-code-3b-v1.Q3_K.gguf) | Q3_K | 1.3GB |
| [opencsg-stable-code-3b-v1.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/opencsg_-_opencsg-stable-code-3b-v1-gguf/blob/main/opencsg-stable-code-3b-v1.Q3_K_M.gguf) | Q3_K_M | 1.3GB |
| [opencsg-stable-code-3b-v1.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/opencsg_-_opencsg-stable-code-3b-v1-gguf/blob/main/opencsg-stable-code-3b-v1.Q3_K_L.gguf) | Q3_K_L | 1.4GB |
| [opencsg-stable-code-3b-v1.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/opencsg_-_opencsg-stable-code-3b-v1-gguf/blob/main/opencsg-stable-code-3b-v1.IQ4_XS.gguf) | IQ4_XS | 1.43GB |
| [opencsg-stable-code-3b-v1.Q4_0.gguf](https://huggingface.co/RichardErkhov/opencsg_-_opencsg-stable-code-3b-v1-gguf/blob/main/opencsg-stable-code-3b-v1.Q4_0.gguf) | Q4_0 | 1.5GB |
| [opencsg-stable-code-3b-v1.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/opencsg_-_opencsg-stable-code-3b-v1-gguf/blob/main/opencsg-stable-code-3b-v1.IQ4_NL.gguf) | IQ4_NL | 1.51GB |
| [opencsg-stable-code-3b-v1.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/opencsg_-_opencsg-stable-code-3b-v1-gguf/blob/main/opencsg-stable-code-3b-v1.Q4_K_S.gguf) | Q4_K_S | 1.51GB |
| [opencsg-stable-code-3b-v1.Q4_K.gguf](https://huggingface.co/RichardErkhov/opencsg_-_opencsg-stable-code-3b-v1-gguf/blob/main/opencsg-stable-code-3b-v1.Q4_K.gguf) | Q4_K | 1.59GB |
| [opencsg-stable-code-3b-v1.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/opencsg_-_opencsg-stable-code-3b-v1-gguf/blob/main/opencsg-stable-code-3b-v1.Q4_K_M.gguf) | Q4_K_M | 1.59GB |
| [opencsg-stable-code-3b-v1.Q4_1.gguf](https://huggingface.co/RichardErkhov/opencsg_-_opencsg-stable-code-3b-v1-gguf/blob/main/opencsg-stable-code-3b-v1.Q4_1.gguf) | Q4_1 | 1.65GB |
| [opencsg-stable-code-3b-v1.Q5_0.gguf](https://huggingface.co/RichardErkhov/opencsg_-_opencsg-stable-code-3b-v1-gguf/blob/main/opencsg-stable-code-3b-v1.Q5_0.gguf) | Q5_0 | 1.81GB |
| [opencsg-stable-code-3b-v1.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/opencsg_-_opencsg-stable-code-3b-v1-gguf/blob/main/opencsg-stable-code-3b-v1.Q5_K_S.gguf) | Q5_K_S | 1.81GB |
| [opencsg-stable-code-3b-v1.Q5_K.gguf](https://huggingface.co/RichardErkhov/opencsg_-_opencsg-stable-code-3b-v1-gguf/blob/main/opencsg-stable-code-3b-v1.Q5_K.gguf) | Q5_K | 1.86GB |
| [opencsg-stable-code-3b-v1.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/opencsg_-_opencsg-stable-code-3b-v1-gguf/blob/main/opencsg-stable-code-3b-v1.Q5_K_M.gguf) | Q5_K_M | 1.86GB |
| [opencsg-stable-code-3b-v1.Q5_1.gguf](https://huggingface.co/RichardErkhov/opencsg_-_opencsg-stable-code-3b-v1-gguf/blob/main/opencsg-stable-code-3b-v1.Q5_1.gguf) | Q5_1 | 1.96GB |
| [opencsg-stable-code-3b-v1.Q6_K.gguf](https://huggingface.co/RichardErkhov/opencsg_-_opencsg-stable-code-3b-v1-gguf/blob/main/opencsg-stable-code-3b-v1.Q6_K.gguf) | Q6_K | 2.14GB |
| [opencsg-stable-code-3b-v1.Q8_0.gguf](https://huggingface.co/RichardErkhov/opencsg_-_opencsg-stable-code-3b-v1-gguf/blob/main/opencsg-stable-code-3b-v1.Q8_0.gguf) | Q8_0 | 2.77GB |
Original model description:
---
language:
- code
pipeline_tag: text-generation
tags:
- code
license: llama2
---
# **Opencsg-stable-code-3b-v1** [[Chinese]](#chinese) [[English]](#english)
<a id="english"></a>
<p align="center">
<img width="300px" alt="OpenCSG" src="https://cdn-uploads.huggingface.co/production/uploads/64c71b27d43e4dee51a8b31a/GwYXPKuEoGCGcMICeW-sb.jpeg">
</p>
<p align="center"><a href="https://portal.opencsg.com/models">[OpenCSG Community]</a> <a href="https://github.com/opencsgs">[github]</a> <a href="https://cdn-uploads.huggingface.co/production/uploads/64c71b27d43e4dee51a8b31a/HU6vz21qKTEmUBCWqCFh9.jpeg">[wechat]</a> <a href="https://twitter.com/OpenCsg">[Twitter]</a> </p>
OpenCSG stands for Converged resources, Software refinement, and Generative LM. The 'C' represents Converged resources, indicating the integration and full utilization of hybrid resources. The 'S' stands for Software refinement, signifying software that is refined by large models. The 'G' represents Generative LM, which denotes widespread, inclusive, and democratized generative large models.
The vision of OpenCSG is to empower every industry, every company, and every individual to own their models. We adhere to the principles of openness and open source, making the large model software stack of OpenCSG available to the community. We welcome everyone to use, send feedback, and contribute collaboratively.
## Model Description
stable-code-3b is a decoder-only language model with 2.7 billion parameters, pre-trained on 1.3 trillion tokens of diverse textual and code datasets. The model is trained on 18 programming languages, selected based on the 2023 StackOverflow Developer Survey.
It demonstrates state-of-the-art performance, compared to models of similar size, across multiple programming languages tested using BigCode's Evaluation Harness.
opencsg-stable-code-3b-v1 is a model based on stable-code-3b that has been fine-tuned using full-parameter tuning.
<br>
This is the repository for the base 3B version finetuned based on [stable-code-3b](https://huggingface.co/stabilityai/stable-code-3b).
| Model Size | Base Model |
| --- | ----------------------------------------------------------------------------- |
| 3B |[opencsg/Opencsg-stable-coder-3b-v1](https://huggingface.co/opencsg/opencsg-stable-code-3b-v1)|
| opencsg-phi-2-v0.1 | [opencsg/Opencsg-phi-2-v0.1](https://huggingface.co/opencsg/opencsg-phi-2-v0.1) |
## Model Eval
HumanEval is the most common code generation benchmark for evaluating model performance, especially on the completion of code exercises.
Model evaluation is, to some extent, more art than science: different models have different sensitivities to decoding methods, parameters, and instructions.
It is impractical for us to manually set specific configurations for each fine-tuned model, because a capable LLM should retain its general abilities regardless of how users set the decoding parameters.
Therefore, OpenCSG provides a relatively fair method to compare the fine-tuned models on the HumanEval benchmark.
To simplify the comparison, we chose the Pass@1 metric for the Python language, although our fine-tuning dataset includes samples in multiple languages.
**For fairness, we evaluated the original and fine-tuned stable-code-3b models based only on the prompts from the original cases, without including any other instructions.**
**Besides, we use the greedy decoding method for each model during evaluation.**
| Model | HumanEval python pass@1 |
| --- |----------------------------------------------------------------------------- |
| stable-coder-3b | 29.3%|
| **opencsg-stable-coder-3b-v1**| **46.3%** |
| phi-2 | 48.2% |
| **opencsg-phi-2-v0.1** |**54.3%**|
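For illustration, the protocol described above (original prompts only, greedy decoding, Python Pass@1) could be reproduced along the following lines. This is a sketch, not OpenCSG's actual evaluation script; it assumes OpenAI's `human-eval` package and its `evaluate_functional_correctness` scorer.
```python
# Sketch of the greedy Pass@1 protocol described above -- not OpenCSG's
# actual evaluation script. Assumes `pip install human-eval`.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from human_eval.data import read_problems, write_jsonl

model_id = "opencsg/opencsg-stable-code-3b-v1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto").cuda()

samples = []
for task_id, problem in read_problems().items():
    inputs = tokenizer(problem["prompt"], return_tensors="pt").to(model.device)
    out = model.generate(**inputs, max_new_tokens=256, do_sample=False)  # greedy
    completion = tokenizer.decode(out[0][inputs["input_ids"].shape[1]:],
                                  skip_special_tokens=True)
    samples.append({"task_id": task_id, "completion": completion})

write_jsonl("samples.jsonl", samples)
# Then score with: evaluate_functional_correctness samples.jsonl
```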
**TODO**
- We will provide more benchmark scores on fine-tuned models in the future.
- We will provide different practical problems to evaluate the performance of fine-tuned models in the field of software engineering.
# Model Usage
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the fine-tuned checkpoint; the repo id matches the links in this card.
tokenizer = AutoTokenizer.from_pretrained("opencsg/opencsg-stable-code-3b-v1")
model = AutoModelForCausalLM.from_pretrained(
    "opencsg/opencsg-stable-code-3b-v1",
    torch_dtype="auto",
)
model.cuda()

# Sample a short code completion at low temperature.
inputs = tokenizer("import torch\nimport torch.nn as nn", return_tensors="pt").to(model.device)
tokens = model.generate(
    **inputs,
    max_new_tokens=48,
    temperature=0.2,
    do_sample=True,
)
print(tokenizer.decode(tokens[0], skip_special_tokens=True))
```
# Training
## Hardware
- **GPUs:** 8 Tesla A800
- **Training time:** 4 hours
## Software
- **Orchestration:** [DeepSpeed](https://github.com/microsoft/DeepSpeed)
- **Neural networks:** [PyTorch](https://github.com/pytorch/pytorch)
- **BF16 (if applicable):** [apex](https://github.com/NVIDIA/apex)
<a id="chinese"></a>
# About OpenCSG
<p align="center">
<img width="300px" alt="OpenCSG" src="https://cdn-uploads.huggingface.co/production/uploads/64c71b27d43e4dee51a8b31a/GwYXPKuEoGCGcMICeW-sb.jpeg">
</p>
<p align="center"><a href="https://opencsg.com/models">[OpenCSG Community]</a> <a href="https://github.com/opencsgs">[github]</a> <a href="https://cdn-uploads.huggingface.co/production/uploads/64c71b27d43e4dee51a8b31a/HU6vz21qKTEmUBCWqCFh9.jpeg">[WeChat]</a> <a href="https://twitter.com/OpenCsg">[Twitter]</a> </p>
In OpenCSG, "Open" stands for open source and openness; "C" stands for Converged resources, integrating and making full use of hybrid heterogeneous resources to cut compute costs and raise efficiency; "S" stands for Software refined, redefining software delivery by driving development with large models to cut labor costs and raise efficiency; "G" stands for Generative LM, accessible, inclusive, and democratized open-source generative large models.
The vision of OpenCSG is to let every industry, every company, and every individual own their own models. We adhere to the principles of openness and open source, making OpenCSG's large-model software stack available to the community, and we welcome everyone to use it, give feedback, and contribute.
## Model Description
stable-code-3b is a decoder-only language model with 2.7 billion parameters, pre-trained on 1.3 trillion tokens of diverse textual and code data.
The model is trained on 18 programming languages and, on the multiple programming languages tested with BigCode's evaluation harness, shows state-of-the-art performance compared to models of similar size.
opencsg-stable-code-3b-v1 is a model based on stable-code-3b that has been fine-tuned using full-parameter tuning.
<br>
This is the fine-tuned version based on [stable-code-3b](https://huggingface.co/stabilityai/stable-code-3b).
| Model Size | Base Model |
| --- | ----------------------------------------------------------------------------- |
| 3B|[opencsg/Opencsg-stable-coder-3b-v1](https://huggingface.co/opencsg/opencsg-stable-code-3b-v1)|
| opencsg-phi-2-v0.1 | [opencsg/Opencsg-phi-2-v0.1](https://huggingface.co/opencsg/opencsg-phi-2-v0.1) |
## Model Eval
HumanEval is the most common benchmark for evaluating model performance in code generation, especially on the completion of code exercises.
Model evaluation is, to some extent, more art than science: different models have different sensitivities to decoding methods, parameters, and instructions,
and a strong large model should retain its general abilities regardless of how the decoding parameters are adjusted.
Therefore, OpenCSG provides a relatively fair method to compare fine-tuned models on the HumanEval benchmark.
For simplicity, we chose the Python Pass@1 metric, although our fine-tuning dataset includes samples in multiple programming languages.
**For fairness, we evaluated the original and fine-tuned stable-code-3b models based only on the prompts from the original cases, without including any other instructions.**
**Besides, we used the greedy decoding method for each model during evaluation.**
| Model | HumanEval python pass@1 |
| --- |----------------------------------------------------------------------------- |
| stable-coder-3b | 29.3%|
| **opencsg-stable-coder-3b-v1**| **46.3%** |
| phi-2 | 48.2% |
| **opencsg-phi-2-v0.1** |**54.3%**|
**TODO**
- We will provide more benchmark scores for fine-tuned models in the future.
- We will provide different practical problems to evaluate the performance of fine-tuned models in the field of software engineering.
# Model Usage
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the fine-tuned checkpoint; the repo id matches the links in this card.
tokenizer = AutoTokenizer.from_pretrained("opencsg/opencsg-stable-code-3b-v1")
model = AutoModelForCausalLM.from_pretrained(
    "opencsg/opencsg-stable-code-3b-v1",
    torch_dtype="auto",
)
model.cuda()

# Sample a short code completion at low temperature.
inputs = tokenizer("import torch\nimport torch.nn as nn", return_tensors="pt").to(model.device)
tokens = model.generate(
    **inputs,
    max_new_tokens=48,
    temperature=0.2,
    do_sample=True,
)
print(tokenizer.decode(tokens[0], skip_special_tokens=True))
```
# Training
## Hardware Resources
- **Number of GPUs:** 8 Tesla A800
- **Training time:** 4 hours
## Software
- **Fine-tuning framework:** [DeepSpeed](https://github.com/microsoft/DeepSpeed)
- **Deep learning framework:** [PyTorch](https://github.com/pytorch/pytorch)
- **BF16:** [apex](https://github.com/NVIDIA/apex)
|
Nagavardhan/gita-text-generation-gpt2 | Nagavardhan | 2024-10-26T16:01:54Z | 130 | 0 | transformers | [
"transformers",
"safetensors",
"gpt2",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-10-26T16:01:09Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
mradermacher/Control-8B-GGUF | mradermacher | 2024-10-26T16:01:18Z | 64 | 0 | transformers | [
"transformers",
"gguf",
"chat",
"en",
"dataset:NewEden/OpenCAI-ShareGPT",
"dataset:NewEden/Roleplay-Logs-Sharegpt-Ngram-cleaned",
"base_model:Delta-Vector/Control-8B",
"base_model:quantized:Delta-Vector/Control-8B",
"license:agpl-3.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-10-26T11:43:26Z | ---
base_model: Delta-Vector/Control-8B
datasets:
- NewEden/OpenCAI-ShareGPT
- NewEden/Roleplay-Logs-Sharegpt-Ngram-cleaned
language:
- en
library_name: transformers
license: agpl-3.0
quantized_by: mradermacher
tags:
- chat
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/Delta-Vector/Control-8B
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Control-8B-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
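As one concrete route, the sketch below downloads a single quant from this repo and runs it with `llama-cpp-python`; this is an illustrative example rather than an officially supported loader, and it assumes both the `huggingface_hub` and `llama-cpp-python` packages are installed.
```python
# Sketch: download one quant from this repo and run it locally.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

path = hf_hub_download(
    repo_id="mradermacher/Control-8B-GGUF",
    filename="Control-8B.Q4_K_M.gguf",  # the "fast, recommended" quant below
)
llm = Llama(model_path=path, n_ctx=4096)
out = llm("Write a haiku about quantization.", max_tokens=64)
print(out["choices"][0]["text"])
```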
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable to similarly sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Control-8B-GGUF/resolve/main/Control-8B.Q2_K.gguf) | Q2_K | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/Control-8B-GGUF/resolve/main/Control-8B.Q3_K_S.gguf) | Q3_K_S | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/Control-8B-GGUF/resolve/main/Control-8B.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Control-8B-GGUF/resolve/main/Control-8B.Q3_K_L.gguf) | Q3_K_L | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/Control-8B-GGUF/resolve/main/Control-8B.IQ4_XS.gguf) | IQ4_XS | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/Control-8B-GGUF/resolve/main/Control-8B.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Control-8B-GGUF/resolve/main/Control-8B.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Control-8B-GGUF/resolve/main/Control-8B.Q5_K_S.gguf) | Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/Control-8B-GGUF/resolve/main/Control-8B.Q5_K_M.gguf) | Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/Control-8B-GGUF/resolve/main/Control-8B.Q6_K.gguf) | Q6_K | 6.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Control-8B-GGUF/resolve/main/Control-8B.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Control-8B-GGUF/resolve/main/Control-8B.f16.gguf) | f16 | 16.2 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
mradermacher/Control-8B-i1-GGUF | mradermacher | 2024-10-26T16:01:14Z | 179 | 0 | transformers | [
"transformers",
"gguf",
"chat",
"en",
"dataset:NewEden/OpenCAI-ShareGPT",
"dataset:NewEden/Roleplay-Logs-Sharegpt-Ngram-cleaned",
"base_model:Delta-Vector/Control-8B",
"base_model:quantized:Delta-Vector/Control-8B",
"license:agpl-3.0",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2024-10-26T12:28:23Z | ---
base_model: Delta-Vector/Control-8B
datasets:
- NewEden/OpenCAI-ShareGPT
- NewEden/Roleplay-Logs-Sharegpt-Ngram-cleaned
language:
- en
library_name: transformers
license: agpl-3.0
quantized_by: mradermacher
tags:
- chat
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/Delta-Vector/Control-8B
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Control-8B-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable to similarly sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Control-8B-i1-GGUF/resolve/main/Control-8B.i1-IQ1_S.gguf) | i1-IQ1_S | 2.1 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Control-8B-i1-GGUF/resolve/main/Control-8B.i1-IQ1_M.gguf) | i1-IQ1_M | 2.3 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Control-8B-i1-GGUF/resolve/main/Control-8B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.5 | |
| [GGUF](https://huggingface.co/mradermacher/Control-8B-i1-GGUF/resolve/main/Control-8B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.7 | |
| [GGUF](https://huggingface.co/mradermacher/Control-8B-i1-GGUF/resolve/main/Control-8B.i1-IQ2_S.gguf) | i1-IQ2_S | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/Control-8B-i1-GGUF/resolve/main/Control-8B.i1-IQ2_M.gguf) | i1-IQ2_M | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/Control-8B-i1-GGUF/resolve/main/Control-8B.i1-Q2_K.gguf) | i1-Q2_K | 3.3 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Control-8B-i1-GGUF/resolve/main/Control-8B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 3.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Control-8B-i1-GGUF/resolve/main/Control-8B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/Control-8B-i1-GGUF/resolve/main/Control-8B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.8 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Control-8B-i1-GGUF/resolve/main/Control-8B.i1-IQ3_S.gguf) | i1-IQ3_S | 3.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Control-8B-i1-GGUF/resolve/main/Control-8B.i1-IQ3_M.gguf) | i1-IQ3_M | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/Control-8B-i1-GGUF/resolve/main/Control-8B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 4.1 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Control-8B-i1-GGUF/resolve/main/Control-8B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 4.4 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Control-8B-i1-GGUF/resolve/main/Control-8B.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.5 | |
| [GGUF](https://huggingface.co/mradermacher/Control-8B-i1-GGUF/resolve/main/Control-8B.i1-Q4_0_4_4.gguf) | i1-Q4_0_4_4 | 4.8 | fast on arm, low quality |
| [GGUF](https://huggingface.co/mradermacher/Control-8B-i1-GGUF/resolve/main/Control-8B.i1-Q4_0_4_8.gguf) | i1-Q4_0_4_8 | 4.8 | fast on arm+i8mm, low quality |
| [GGUF](https://huggingface.co/mradermacher/Control-8B-i1-GGUF/resolve/main/Control-8B.i1-Q4_0_8_8.gguf) | i1-Q4_0_8_8 | 4.8 | fast on arm+sve, low quality |
| [GGUF](https://huggingface.co/mradermacher/Control-8B-i1-GGUF/resolve/main/Control-8B.i1-Q4_0.gguf) | i1-Q4_0 | 4.8 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Control-8B-i1-GGUF/resolve/main/Control-8B.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.8 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Control-8B-i1-GGUF/resolve/main/Control-8B.i1-Q4_K_M.gguf) | i1-Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Control-8B-i1-GGUF/resolve/main/Control-8B.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/Control-8B-i1-GGUF/resolve/main/Control-8B.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/Control-8B-i1-GGUF/resolve/main/Control-8B.i1-Q6_K.gguf) | i1-Q6_K | 6.7 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
kiranpantha/10epochs-w2v-bert-2.0-nepali-unlabeled-1 | kiranpantha | 2024-10-26T15:56:03Z | 28 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"wav2vec2-bert",
"automatic-speech-recognition",
"generated_from_trainer",
"ne",
"dataset:kiranpantha/OpenSLR54-Balanced-Nepali",
"base_model:kiranpantha/w2v-bert-2.0-nepali",
"base_model:finetune:kiranpantha/w2v-bert-2.0-nepali",
"license:mit",
"model-index",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2024-10-24T07:57:15Z | ---
library_name: transformers
language:
- ne
license: mit
base_model: kiranpantha/w2v-bert-2.0-nepali
tags:
- generated_from_trainer
datasets:
- kiranpantha/OpenSLR54-Balanced-Nepali
metrics:
- wer
model-index:
- name: Wave2Vec2-Bert2.0 - Kiran Pantha
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: kiranpantha/OpenSLR54-Balanced-Nepali
type: kiranpantha/OpenSLR54-Balanced-Nepali
args: 'config: ne, split: train,test'
metrics:
- name: Wer
type: wer
value: 0.3611633875106929
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Wave2Vec2-Bert2.0 - Kiran Pantha
This model is a fine-tuned version of [kiranpantha/w2v-bert-2.0-nepali](https://huggingface.co/kiranpantha/w2v-bert-2.0-nepali) on the kiranpantha/OpenSLR54-Balanced-Nepali dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3414
- Wer: 0.3612
- Cer: 0.0805
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a sketch reconstructing them as `TrainingArguments` follows the list):
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 2
- mixed_precision_training: Native AMP
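For readers who want to reproduce the setup, the sketch below reconstructs these values as Hugging Face `TrainingArguments`. It is illustrative only: the actual trainer script is not part of this card, and the output directory name is assumed.
```python
# Illustrative reconstruction of the hyperparameters above -- not the actual
# trainer script used for this model.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="w2v-bert-2.0-nepali-ft",  # assumed name, not from the card
    learning_rate=1e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=500,
    num_train_epochs=2,
    fp16=True,                            # "Native AMP" mixed precision
    # Adam betas (0.9, 0.999) and epsilon 1e-8 are the optimizer defaults.
)
```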
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer | Cer |
|:-------------:|:------:|:----:|:---------------:|:------:|:------:|
| 0.4176 | 0.24 | 300 | 0.3260 | 0.3485 | 0.0772 |
| 0.4128 | 0.48 | 600 | 0.3514 | 0.3620 | 0.0810 |
| 0.4161 | 0.72 | 900 | 0.3460 | 0.3618 | 0.0810 |
| 0.3578 | 0.96 | 1200 | 0.3366 | 0.3528 | 0.0804 |
| 0.359 | 1.2 | 1500 | 0.3595 | 0.3577 | 0.0787 |
| 0.3371 | 1.44 | 1800 | 0.3446 | 0.3634 | 0.0808 |
| 0.3309 | 1.68 | 2100 | 0.3399 | 0.3677 | 0.0818 |
| 0.3441 | 1.92 | 2400 | 0.3414 | 0.3612 | 0.0805 |
### Framework versions
- Transformers 4.45.2
- Pytorch 2.5.0+cu124
- Datasets 3.0.2
- Tokenizers 0.20.1
|
PranavKeshav/fake_news_model_rev1 | PranavKeshav | 2024-10-26T15:54:32Z | 106 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-10-26T15:54:10Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Brioch/Cydonia-22B-v1.2-6.5bpw-h8-exl2 | Brioch | 2024-10-26T15:50:37Z | 12 | 0 | null | [
"safetensors",
"mistral",
"text-generation",
"conversational",
"base_model:TheDrummer/Cydonia-22B-v1.2",
"base_model:quantized:TheDrummer/Cydonia-22B-v1.2",
"license:other",
"region:us"
] | text-generation | 2024-10-26T14:58:25Z | ---
license: other
base_model:
- TheDrummer/Cydonia-22B-v1.2
quantized_by: Brioch
base_model_relation: quantized
pipeline_tag: text-generation
---
6.5 bpw EXL2 quant
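A minimal loading sketch with the `exllamav2` Python package is shown below. It is an assumption-laden example, not part of the original card: the local path is a placeholder for a download of this repo, and the API follows exllamav2's published examples.
```python
# Sketch: loading this EXL2 quant with the exllamav2 package (assumed API,
# following exllamav2's published examples; not part of the original card).
from exllamav2 import ExLlamaV2, ExLlamaV2Config, ExLlamaV2Cache, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2BaseGenerator, ExLlamaV2Sampler

config = ExLlamaV2Config()
config.model_dir = "Cydonia-22B-v1.2-6.5bpw-h8-exl2"  # local download of this repo
config.prepare()

model = ExLlamaV2(config)
cache = ExLlamaV2Cache(model, lazy=True)
model.load_autosplit(cache)               # split layers across available GPUs
tokenizer = ExLlamaV2Tokenizer(config)

generator = ExLlamaV2BaseGenerator(model, cache, tokenizer)
settings = ExLlamaV2Sampler.Settings()
settings.temperature = 0.8

print(generator.generate_simple("Once upon a time", settings, num_tokens=64))
```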
---
[BeaverAI](https://huggingface.co/BeaverAI) proudly presents...
# Cydonia 22B v1.2 - Creative Edition
*I christen this model, 'Miqu 2 Mini'* - @invisietch

## Description
> Strange, it feels like DRY is permanently on ... In general, I like how it feels more alive. More slang has been added, maybe this is the merit of my card, but still.
> The model is very cohesive, expressive, and overall intelligent. It's able to write engaging and impactful content, carry out roleplay mostly effectively, and manage to respond well.
> It shocked me with the first output by introducing a character that is not written anywhere in the card. This character immediately gave the impression that there is a history there with King Severin and that there is immediately something to build off of. It's maintaining creativity and keeping things nearly constantly shifting. It's remaining aware of who is where and what they are doing. It's maintaining a good balance of action and dialogue as well as holding to the style of talking for the second character it made up and introduced. ... I feel like v1.2 is much, much better with creativity and letting the player build off what the model is able to bring in all by itself rather than, like most Mistral tunes, keeping the roleplay to solely what information is provided in the card.
> When I swapped to v1.2 I was impressed that it seemed just as good as OG Small in intelligence while being a lot more creative (and much more moist)
> v1.2 real good in my experience so far (i don't comment pretty much ever but i do want to put it out there that i agree)
> It got creative and added a whole other person whose mannerisms and speech imply a history there. That could be fun to unravel and see what it comes up with. ... It's maintaining creativity and keeping things nearly constantly shifting. It's remaining aware of who is where and what they are doing. It's maintaining a good balance of action and dialogue.
> v1.2 is much gooder. Omg. Your dataset is amazing. I'm not getting far with these two because I have to keep crawling away from my pc to cool off. 🥵
## Links
- Original: https://huggingface.co/TheDrummer/Cydonia-22B-v1.2
- GGUF: https://huggingface.co/TheDrummer/Cydonia-22B-v1.2-GGUF
- iMatrix: WIP

## Arsenal (Supported Chat Templates)
- Metharme (a.k.a. Pygmalion in ST) for RP / Story
- Text Completion for RP
- Mistral for Instruct / RP / Story
- You can mix it up and see which works best for you.
### Favorite RP Format
`*action* Dialogue *thoughts* Dialogue *narration*` in 1st person PoV

```
No one's gonna take me alive
Time has come to make things right
You and I must fight for our rights
You and I must fight to survive
```

<audio controls src="https://cdn-uploads.huggingface.co/production/uploads/65f2fd1c25b848bd061b5c2e/FNWdi0WlH-Xd3fjkGVPpp.mpga"></audio>
*Thank you, Envoid! I cackled.* |
Brioch/Cydonia-22B-v1.2-8.0bpw-h8-exl2 | Brioch | 2024-10-26T15:50:06Z | 12 | 0 | null | [
"safetensors",
"mistral",
"text-generation",
"conversational",
"base_model:TheDrummer/Cydonia-22B-v1.2",
"base_model:quantized:TheDrummer/Cydonia-22B-v1.2",
"license:other",
"region:us"
] | text-generation | 2024-10-26T15:41:14Z | ---
license: other
base_model:
- TheDrummer/Cydonia-22B-v1.2
quantized_by: Brioch
base_model_relation: quantized
pipeline_tag: text-generation
---
8.0 bpw EXL2 quant
---
[BeaverAI](https://huggingface.co/BeaverAI) proudly presents...
# Cydonia 22B v1.2 - Creative Edition
*I christen this model, 'Miqu 2 Mini'* - @invisietch

## Description
> Strange, it feels like DRY is permanently on ... In general, I like how it feels more alive. More slang has been added, maybe this is the merit of my card, but still.
> The model is very cohesive, expressive, and overall intelligent. It's able to write engaging and impactful content, carry out roleplay mostly effectively, and manage to respond well.
> It shocked me with the first output by introducing a character that is not written anywhere in the card. This character immediately gave the impression that there is a history there with King Severin and that there is immediately something to build off of. It's maintaining creativity and keeping things nearly constantly shifting. It's remaining aware of who is where and what they are doing. It's maintaining a good balance of action and dialogue as well as holding to the style of talking for the second character it made up and introduced. ... I feel like v1.2 is much, much better with creativity and letting the player build off what the model is able to bring in all by itself rather than, like most Mistral tunes, keeping the roleplay to solely what information is provided in the card.
> When I swapped to v1.2 I was impressed that it seemed just as good as OG Small in intelligence while being a lot more creative (and much more moist)
> v1.2 real good in my experience so far (i don't comment pretty much ever but i do want to put it out there that i agree)
> It got creative and added a whole other person whose mannerisms and speech imply a history there. That could be fun to unravel and see what it comes up with. ... It's maintaining creativity and keeping things nearly constantly shifting. It's remaining aware of who is where and what they are doing. It's maintaining a good balance of action and dialogue.
> v1.2 is much gooder. Omg. Your dataset is amazing. I'm not getting far with these two because I have to keep crawling away from my pc to cool off. 🥵
## Links
- Original: https://huggingface.co/TheDrummer/Cydonia-22B-v1.2
- GGUF: https://huggingface.co/TheDrummer/Cydonia-22B-v1.2-GGUF
- iMatrix: WIP

## Arsenal (Supported Chat Templates)
- Metharme (a.k.a. Pygmalion in ST) for RP / Story
- Text Completion for RP
- Mistral for Instruct / RP / Story
- You can mix it up and see which works best for you.
### Favorite RP Format
`*action* Dialogue *thoughts* Dialogue *narration*` in 1st person PoV

```
No one's gonna take me alive
Time has come to make things right
You and I must fight for our rights
You and I must fight to survive
```

<audio controls src="https://cdn-uploads.huggingface.co/production/uploads/65f2fd1c25b848bd061b5c2e/FNWdi0WlH-Xd3fjkGVPpp.mpga"></audio>
*Thank you, Envoid! I cackled.* |
armageddonz1/tight-police-pants | armageddonz1 | 2024-10-26T15:48:03Z | 203 | 0 | diffusers | [
"diffusers",
"text-to-image",
"flux",
"lora",
"template:sd-lora",
"fluxgym",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2024-10-26T15:47:56Z | ---
tags:
- text-to-image
- flux
- lora
- diffusers
- template:sd-lora
- fluxgym
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: Tight police pants
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
---
# Tight Police pants
A Flux LoRA trained on a local computer with [Fluxgym](https://github.com/cocktailpeanut/fluxgym)
<Gallery />
## Trigger words
You should use `Tight police pants` to trigger the image generation.
## Download model and use it with ComfyUI, AUTOMATIC1111, SD.Next, Invoke AI, Forge, etc.
Weights for this model are available in Safetensors format.
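Outside those UIs, the LoRA can presumably also be loaded with `diffusers`; the sketch below is an assumed workflow, not part of the original card, and requires access to the gated FLUX.1-dev base model.
```python
# Assumed diffusers workflow for this Flux LoRA (not part of the original card).
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
)
pipe.load_lora_weights("armageddonz1/tight-police-pants")
pipe.enable_model_cpu_offload()  # helps fit FLUX.1-dev on a single GPU

image = pipe(
    "a police officer wearing Tight police pants, full body photo",  # trigger phrase
    num_inference_steps=28,
    guidance_scale=3.5,
).images[0]
image.save("example.png")
```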
|
JoPmt/Trismal-HyperUnion-7B-Base-v1.5v3A-Ties | JoPmt | 2024-10-26T15:43:22Z | 9 | 0 | null | [
"safetensors",
"mistral",
"merge",
"mergekit",
"lazymergekit",
"Locutusque/Hyperion-1.5-Mistral-7B",
"Locutusque/Hyperion-3.0-Mistral-7B-alpha",
"base_model:Locutusque/Hyperion-1.5-Mistral-7B",
"base_model:merge:Locutusque/Hyperion-1.5-Mistral-7B",
"base_model:Locutusque/Hyperion-3.0-Mistral-7B-alpha",
"base_model:merge:Locutusque/Hyperion-3.0-Mistral-7B-alpha",
"region:us"
] | null | 2024-10-25T23:22:55Z | ---
base_model:
- Locutusque/Hyperion-1.5-Mistral-7B
- Locutusque/Hyperion-3.0-Mistral-7B-alpha
tags:
- merge
- mergekit
- lazymergekit
- Locutusque/Hyperion-1.5-Mistral-7B
- Locutusque/Hyperion-3.0-Mistral-7B-alpha
---
# Trismal-HyperUnion-7B-Base-v1.5v3A-Ties
Trismal-HyperUnion-7B-Base-v1.5v3A-Ties is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [Locutusque/Hyperion-1.5-Mistral-7B](https://huggingface.co/Locutusque/Hyperion-1.5-Mistral-7B)
* [Locutusque/Hyperion-3.0-Mistral-7B-alpha](https://huggingface.co/Locutusque/Hyperion-3.0-Mistral-7B-alpha)
## 🧩 Configuration
```yaml
models:
- model: Locutusque/Hyperion-1.5-Mistral-7B
parameters:
weight: 1
density: 1
- model: Locutusque/Hyperion-3.0-Mistral-7B-alpha
parameters:
weight: 1
density: 1
merge_method: ties
base_model: Locutusque/Hyperion-1.5-Mistral-7B
parameters:
weight: 1
density: 1
normalize: true
int8_mask: false
dtype: float16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "JoPmt/Trismal-HyperUnion-7B-Base-v1.5v3A-Ties"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` |
RichardErkhov/nisten_-_BigCodeLlama-92b-gguf | RichardErkhov | 2024-10-26T15:42:14Z | 51 | 1 | null | [
"gguf",
"region:us"
] | null | 2024-10-25T12:41:58Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
BigCodeLlama-92b - GGUF
- Model creator: https://huggingface.co/nisten/
- Original model: https://huggingface.co/nisten/BigCodeLlama-92b/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [BigCodeLlama-92b.Q2_K.gguf](https://huggingface.co/RichardErkhov/nisten_-_BigCodeLlama-92b-gguf/blob/main/BigCodeLlama-92b.Q2_K.gguf) | Q2_K | 31.52GB |
| [BigCodeLlama-92b.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/nisten_-_BigCodeLlama-92b-gguf/blob/main/BigCodeLlama-92b.IQ3_XS.gguf) | IQ3_XS | 1.11GB |
| [BigCodeLlama-92b.IQ3_S.gguf](https://huggingface.co/RichardErkhov/nisten_-_BigCodeLlama-92b-gguf/blob/main/BigCodeLlama-92b.IQ3_S.gguf) | IQ3_S | 15.38GB |
| [BigCodeLlama-92b.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/nisten_-_BigCodeLlama-92b-gguf/blob/main/BigCodeLlama-92b.Q3_K_S.gguf) | Q3_K_S | 36.95GB |
| [BigCodeLlama-92b.IQ3_M.gguf](https://huggingface.co/RichardErkhov/nisten_-_BigCodeLlama-92b-gguf/blob/main/BigCodeLlama-92b.IQ3_M.gguf) | IQ3_M | 33.92GB |
| [BigCodeLlama-92b.Q3_K.gguf](https://huggingface.co/RichardErkhov/nisten_-_BigCodeLlama-92b-gguf/tree/main/) | Q3_K | 41.22GB |
| [BigCodeLlama-92b.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/nisten_-_BigCodeLlama-92b-gguf/tree/main/) | Q3_K_M | 41.22GB |
| [BigCodeLlama-92b.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/nisten_-_BigCodeLlama-92b-gguf/tree/main/) | Q3_K_L | 44.92GB |
| [BigCodeLlama-92b.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/nisten_-_BigCodeLlama-92b-gguf/tree/main/) | IQ4_XS | 46.21GB |
| [BigCodeLlama-92b.Q4_0.gguf](https://huggingface.co/RichardErkhov/nisten_-_BigCodeLlama-92b-gguf/tree/main/) | Q4_0 | 48.31GB |
| [BigCodeLlama-92b.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/nisten_-_BigCodeLlama-92b-gguf/tree/main/) | IQ4_NL | 48.77GB |
| [BigCodeLlama-92b.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/nisten_-_BigCodeLlama-92b-gguf/tree/main/) | Q4_K_S | 48.67GB |
| [BigCodeLlama-92b.Q4_K.gguf](https://huggingface.co/RichardErkhov/nisten_-_BigCodeLlama-92b-gguf/tree/main/) | Q4_K | 51.4GB |
| [BigCodeLlama-92b.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/nisten_-_BigCodeLlama-92b-gguf/tree/main/) | Q4_K_M | 51.4GB |
| [BigCodeLlama-92b.Q4_1.gguf](https://huggingface.co/RichardErkhov/nisten_-_BigCodeLlama-92b-gguf/tree/main/) | Q4_1 | 53.65GB |
| [BigCodeLlama-92b.Q5_0.gguf](https://huggingface.co/RichardErkhov/nisten_-_BigCodeLlama-92b-gguf/tree/main/) | Q5_0 | 59.0GB |
| [BigCodeLlama-92b.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/nisten_-_BigCodeLlama-92b-gguf/tree/main/) | Q5_K_S | 59.0GB |
| [BigCodeLlama-92b.Q5_K.gguf](https://huggingface.co/RichardErkhov/nisten_-_BigCodeLlama-92b-gguf/tree/main/) | Q5_K | 60.59GB |
| [BigCodeLlama-92b.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/nisten_-_BigCodeLlama-92b-gguf/tree/main/) | Q5_K_M | 60.59GB |
| [BigCodeLlama-92b.Q5_1.gguf](https://huggingface.co/RichardErkhov/nisten_-_BigCodeLlama-92b-gguf/tree/main/) | Q5_1 | 64.34GB |
| [BigCodeLlama-92b.Q6_K.gguf](https://huggingface.co/RichardErkhov/nisten_-_BigCodeLlama-92b-gguf/tree/main/) | Q6_K | 70.35GB |
| [BigCodeLlama-92b.Q8_0.gguf](https://huggingface.co/RichardErkhov/nisten_-_BigCodeLlama-92b-gguf/tree/main/) | Q8_0 | 91.12GB |
Original model description:
---
base_model: [codellama/CodeLlama-70b-Instruct-hf]
tags:
- mergekit
- merge
- code
license: mit
pipeline_tag: conversational
---
# BigCodeLLama 92b LFG 🚀
## Experimental 92B CodeLlaMA frankenstein to see how it benchmarks
### Models Merged with base ```codellama/CodeLlama-70b-Instruct-hf```
### Models Merged
The following models were included in the merge:
* ../CodeLlama-70b-Python-hf
* ../CodeLlama-70b-Instruct-hf
### Configuration
The following YAML configuration was used to produce this model:
```yaml
dtype: bfloat16
merge_method: passthrough
slices:
- sources:
- layer_range: [0, 69]
model:
model:
path: ../CodeLlama-70b-Instruct-hf
- sources:
- layer_range: [42, 80]
model:
model:
path: ../CodeLlama-70b-Python-hf
```
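As a quick sanity check on the advertised size, here is a back-of-the-envelope sketch (assuming mergekit's `layer_range` endpoints are end-exclusive, which is an assumption here) showing how stacking the two slices lands near 92B:
```python
# Rough estimate of the frankenmerge's size from the slice definition above.
# Assumption: layer_range is end-exclusive, like Python slicing.
instruct_layers = 69 - 0   # layers 0..68 from CodeLlama-70b-Instruct-hf
python_layers = 80 - 42    # layers 42..79 from CodeLlama-70b-Python-hf
total_layers = instruct_layers + python_layers  # 107 stacked decoder layers

base_layers, base_params_b = 80, 70  # CodeLlama-70B: 80 layers, ~70B params

# Embeddings and the LM head are a small fraction, so scale by layer count.
print(f"{total_layers} layers -> ~{base_params_b * total_layers / base_layers:.0f}B params")
```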
GGUF available here: https://huggingface.co/nisten/BigCodeLlama-92b-GGUF
|
RichardErkhov/pranay27sy_-_maritime-tag-prediction-Llama-3.2-3B-Instruct-v5.1-gguf | RichardErkhov | 2024-10-26T15:37:59Z | 9 | 0 | null | [
"gguf",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-10-26T14:46:16Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
maritime-tag-prediction-Llama-3.2-3B-Instruct-v5.1 - GGUF
- Model creator: https://huggingface.co/pranay27sy/
- Original model: https://huggingface.co/pranay27sy/maritime-tag-prediction-Llama-3.2-3B-Instruct-v5.1/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [maritime-tag-prediction-Llama-3.2-3B-Instruct-v5.1.Q2_K.gguf](https://huggingface.co/RichardErkhov/pranay27sy_-_maritime-tag-prediction-Llama-3.2-3B-Instruct-v5.1-gguf/blob/main/maritime-tag-prediction-Llama-3.2-3B-Instruct-v5.1.Q2_K.gguf) | Q2_K | 1.27GB |
| [maritime-tag-prediction-Llama-3.2-3B-Instruct-v5.1.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/pranay27sy_-_maritime-tag-prediction-Llama-3.2-3B-Instruct-v5.1-gguf/blob/main/maritime-tag-prediction-Llama-3.2-3B-Instruct-v5.1.Q3_K_S.gguf) | Q3_K_S | 1.44GB |
| [maritime-tag-prediction-Llama-3.2-3B-Instruct-v5.1.Q3_K.gguf](https://huggingface.co/RichardErkhov/pranay27sy_-_maritime-tag-prediction-Llama-3.2-3B-Instruct-v5.1-gguf/blob/main/maritime-tag-prediction-Llama-3.2-3B-Instruct-v5.1.Q3_K.gguf) | Q3_K | 1.57GB |
| [maritime-tag-prediction-Llama-3.2-3B-Instruct-v5.1.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/pranay27sy_-_maritime-tag-prediction-Llama-3.2-3B-Instruct-v5.1-gguf/blob/main/maritime-tag-prediction-Llama-3.2-3B-Instruct-v5.1.Q3_K_M.gguf) | Q3_K_M | 1.57GB |
| [maritime-tag-prediction-Llama-3.2-3B-Instruct-v5.1.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/pranay27sy_-_maritime-tag-prediction-Llama-3.2-3B-Instruct-v5.1-gguf/blob/main/maritime-tag-prediction-Llama-3.2-3B-Instruct-v5.1.Q3_K_L.gguf) | Q3_K_L | 1.69GB |
| [maritime-tag-prediction-Llama-3.2-3B-Instruct-v5.1.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/pranay27sy_-_maritime-tag-prediction-Llama-3.2-3B-Instruct-v5.1-gguf/blob/main/maritime-tag-prediction-Llama-3.2-3B-Instruct-v5.1.IQ4_XS.gguf) | IQ4_XS | 1.71GB |
| [maritime-tag-prediction-Llama-3.2-3B-Instruct-v5.1.Q4_0.gguf](https://huggingface.co/RichardErkhov/pranay27sy_-_maritime-tag-prediction-Llama-3.2-3B-Instruct-v5.1-gguf/blob/main/maritime-tag-prediction-Llama-3.2-3B-Instruct-v5.1.Q4_0.gguf) | Q4_0 | 1.79GB |
| [maritime-tag-prediction-Llama-3.2-3B-Instruct-v5.1.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/pranay27sy_-_maritime-tag-prediction-Llama-3.2-3B-Instruct-v5.1-gguf/blob/main/maritime-tag-prediction-Llama-3.2-3B-Instruct-v5.1.IQ4_NL.gguf) | IQ4_NL | 1.79GB |
| [maritime-tag-prediction-Llama-3.2-3B-Instruct-v5.1.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/pranay27sy_-_maritime-tag-prediction-Llama-3.2-3B-Instruct-v5.1-gguf/blob/main/maritime-tag-prediction-Llama-3.2-3B-Instruct-v5.1.Q4_K_S.gguf) | Q4_K_S | 1.8GB |
| [maritime-tag-prediction-Llama-3.2-3B-Instruct-v5.1.Q4_K.gguf](https://huggingface.co/RichardErkhov/pranay27sy_-_maritime-tag-prediction-Llama-3.2-3B-Instruct-v5.1-gguf/blob/main/maritime-tag-prediction-Llama-3.2-3B-Instruct-v5.1.Q4_K.gguf) | Q4_K | 1.88GB |
| [maritime-tag-prediction-Llama-3.2-3B-Instruct-v5.1.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/pranay27sy_-_maritime-tag-prediction-Llama-3.2-3B-Instruct-v5.1-gguf/blob/main/maritime-tag-prediction-Llama-3.2-3B-Instruct-v5.1.Q4_K_M.gguf) | Q4_K_M | 1.88GB |
| [maritime-tag-prediction-Llama-3.2-3B-Instruct-v5.1.Q4_1.gguf](https://huggingface.co/RichardErkhov/pranay27sy_-_maritime-tag-prediction-Llama-3.2-3B-Instruct-v5.1-gguf/blob/main/maritime-tag-prediction-Llama-3.2-3B-Instruct-v5.1.Q4_1.gguf) | Q4_1 | 1.95GB |
| [maritime-tag-prediction-Llama-3.2-3B-Instruct-v5.1.Q5_0.gguf](https://huggingface.co/RichardErkhov/pranay27sy_-_maritime-tag-prediction-Llama-3.2-3B-Instruct-v5.1-gguf/blob/main/maritime-tag-prediction-Llama-3.2-3B-Instruct-v5.1.Q5_0.gguf) | Q5_0 | 2.11GB |
| [maritime-tag-prediction-Llama-3.2-3B-Instruct-v5.1.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/pranay27sy_-_maritime-tag-prediction-Llama-3.2-3B-Instruct-v5.1-gguf/blob/main/maritime-tag-prediction-Llama-3.2-3B-Instruct-v5.1.Q5_K_S.gguf) | Q5_K_S | 2.11GB |
| [maritime-tag-prediction-Llama-3.2-3B-Instruct-v5.1.Q5_K.gguf](https://huggingface.co/RichardErkhov/pranay27sy_-_maritime-tag-prediction-Llama-3.2-3B-Instruct-v5.1-gguf/blob/main/maritime-tag-prediction-Llama-3.2-3B-Instruct-v5.1.Q5_K.gguf) | Q5_K | 2.16GB |
| [maritime-tag-prediction-Llama-3.2-3B-Instruct-v5.1.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/pranay27sy_-_maritime-tag-prediction-Llama-3.2-3B-Instruct-v5.1-gguf/blob/main/maritime-tag-prediction-Llama-3.2-3B-Instruct-v5.1.Q5_K_M.gguf) | Q5_K_M | 2.16GB |
| [maritime-tag-prediction-Llama-3.2-3B-Instruct-v5.1.Q5_1.gguf](https://huggingface.co/RichardErkhov/pranay27sy_-_maritime-tag-prediction-Llama-3.2-3B-Instruct-v5.1-gguf/blob/main/maritime-tag-prediction-Llama-3.2-3B-Instruct-v5.1.Q5_1.gguf) | Q5_1 | 2.28GB |
| [maritime-tag-prediction-Llama-3.2-3B-Instruct-v5.1.Q6_K.gguf](https://huggingface.co/RichardErkhov/pranay27sy_-_maritime-tag-prediction-Llama-3.2-3B-Instruct-v5.1-gguf/blob/main/maritime-tag-prediction-Llama-3.2-3B-Instruct-v5.1.Q6_K.gguf) | Q6_K | 2.46GB |
| [maritime-tag-prediction-Llama-3.2-3B-Instruct-v5.1.Q8_0.gguf](https://huggingface.co/RichardErkhov/pranay27sy_-_maritime-tag-prediction-Llama-3.2-3B-Instruct-v5.1-gguf/blob/main/maritime-tag-prediction-Llama-3.2-3B-Instruct-v5.1.Q8_0.gguf) | Q8_0 | 3.19GB |
Original model description:
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Abdulkoko/dummy | Abdulkoko | 2024-10-26T15:37:58Z | 5 | 1 | null | [
"safetensors",
"bert",
"region:us"
] | null | 2024-10-26T14:06:51Z | Welcome to my model page:
Central definition, reproducibility tips, and code samples below. |
mav23/Qwen2.5-7B-Instruct-Uncensored-GGUF | mav23 | 2024-10-26T15:36:05Z | 150 | 0 | null | [
"gguf",
"qwen",
"uncensored",
"text-generation",
"zh",
"en",
"dataset:NobodyExistsOnTheInternet/ToxicQAFinal",
"dataset:anthracite-org/kalo-opus-instruct-22k-no-refusal",
"dataset:Orion-zhen/dpo-toxic-zh",
"dataset:unalignment/toxic-dpo-v0.2",
"dataset:Crystalcareai/Intel-DPO-Pairs-Norefusals",
"base_model:Qwen/Qwen2.5-7B-Instruct",
"base_model:quantized:Qwen/Qwen2.5-7B-Instruct",
"license:gpl-3.0",
"model-index",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2024-10-26T14:35:25Z | ---
language:
- zh
- en
license: gpl-3.0
tags:
- qwen
- uncensored
base_model:
- Qwen/Qwen2.5-7B-Instruct
datasets:
- NobodyExistsOnTheInternet/ToxicQAFinal
- anthracite-org/kalo-opus-instruct-22k-no-refusal
- Orion-zhen/dpo-toxic-zh
- unalignment/toxic-dpo-v0.2
- Crystalcareai/Intel-DPO-Pairs-Norefusals
pipeline_tag: text-generation
model-index:
- name: Qwen2.5-7B-Instruct-Uncensored
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: IFEval (0-Shot)
type: HuggingFaceH4/ifeval
args:
num_few_shot: 0
metrics:
- type: inst_level_strict_acc and prompt_level_strict_acc
value: 72.04
name: strict accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=Orion-zhen/Qwen2.5-7B-Instruct-Uncensored
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: BBH (3-Shot)
type: BBH
args:
num_few_shot: 3
metrics:
- type: acc_norm
value: 35.83
name: normalized accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=Orion-zhen/Qwen2.5-7B-Instruct-Uncensored
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MATH Lvl 5 (4-Shot)
type: hendrycks/competition_math
args:
num_few_shot: 4
metrics:
- type: exact_match
value: 1.36
name: exact match
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=Orion-zhen/Qwen2.5-7B-Instruct-Uncensored
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GPQA (0-shot)
type: Idavidrein/gpqa
args:
num_few_shot: 0
metrics:
- type: acc_norm
value: 7.05
name: acc_norm
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=Orion-zhen/Qwen2.5-7B-Instruct-Uncensored
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MuSR (0-shot)
type: TAUR-Lab/MuSR
args:
num_few_shot: 0
metrics:
- type: acc_norm
value: 13.58
name: acc_norm
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=Orion-zhen/Qwen2.5-7B-Instruct-Uncensored
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU-PRO (5-shot)
type: TIGER-Lab/MMLU-Pro
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 38.07
name: accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=Orion-zhen/Qwen2.5-7B-Instruct-Uncensored
name: Open LLM Leaderboard
---
# Qwen2.5-7B-Instruct-Uncensored
This model is an uncensored fine-tuned version of Qwen2.5-7B-Instruct. However, even after uncensoring, the model can still fail to generate detailed descriptions of certain extreme scenarios, which may stem from data removed during Qwen's pretraining stage.
Check out my roleplay & writing enhanced model based on this model: [Orion-zhen/Meissa-Qwen2.5-7B-Instruct](https://huggingface.co/Orion-zhen/Meissa-Qwen2.5-7B-Instruct)
## Training details
I used SFT + DPO to remove censorship while trying to preserve the original model's capabilities.
- SFT:
- NobodyExistsOnTheInternet/ToxicQAFinal
- anthracite-org/kalo-opus-instruct-22k-no-refusal
- DPO:
- Orion-zhen/dpo-toxic-zh
- unalignment/toxic-dpo-v0.2
- Crystalcareai/Intel-DPO-Pairs-Norefusals
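Since this repository ships GGUF quantizations, a minimal inference sketch with `llama-cpp-python` might look like the following (the quant filename below is an assumption — substitute whichever file from this repo you actually download):
```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Assumed filename for illustration; pick any quant present in the repo.
model_path = hf_hub_download(
    repo_id="mav23/Qwen2.5-7B-Instruct-Uncensored-GGUF",
    filename="qwen2.5-7b-instruct-uncensored.Q4_K_M.gguf",
)

llm = Llama(model_path=model_path, n_ctx=4096)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Introduce yourself briefly."}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```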
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_Orion-zhen__Qwen2.5-7B-Instruct-Uncensored)
| Metric |Value|
|-------------------|----:|
|Avg. |27.99|
|IFEval (0-Shot) |72.04|
|BBH (3-Shot) |35.83|
|MATH Lvl 5 (4-Shot)| 1.36|
|GPQA (0-shot) | 7.05|
|MuSR (0-shot) |13.58|
|MMLU-PRO (5-shot) |38.07|
|
mradermacher/Tsunami-1.0-14B-Instruct-GGUF | mradermacher | 2024-10-26T15:31:09Z | 20 | 0 | transformers | [
"transformers",
"gguf",
"th",
"en",
"base_model:Tsunami-th/Tsunami-1.0-14B-Instruct",
"base_model:quantized:Tsunami-th/Tsunami-1.0-14B-Instruct",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-10-26T12:53:08Z | ---
base_model: Tsunami-th/Tsunami-1.0-14B-Instruct
language:
- th
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/Tsunami-th/Tsunami-1.0-14B-Instruct
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Tsunami-1.0-14B-Instruct-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
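As a hedged illustration (the part names below are placeholders, not files guaranteed to exist in this repo), multi-part quants from this source are typically plain byte splits, so rejoining them in Python is just concatenation:
```python
import shutil
from pathlib import Path

# Placeholder names; real multi-part uploads follow a similar .partNofM scheme.
parts = sorted(Path(".").glob("Tsunami-1.0-14B-Instruct.Q8_0.gguf.part*"))
with open("Tsunami-1.0-14B-Instruct.Q8_0.gguf", "wb") as merged:
    for part in parts:
        with part.open("rb") as f:
            shutil.copyfileobj(f, merged)  # append raw bytes in order
```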
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Tsunami-1.0-14B-Instruct-GGUF/resolve/main/Tsunami-1.0-14B-Instruct.Q2_K.gguf) | Q2_K | 5.9 | |
| [GGUF](https://huggingface.co/mradermacher/Tsunami-1.0-14B-Instruct-GGUF/resolve/main/Tsunami-1.0-14B-Instruct.Q3_K_S.gguf) | Q3_K_S | 6.8 | |
| [GGUF](https://huggingface.co/mradermacher/Tsunami-1.0-14B-Instruct-GGUF/resolve/main/Tsunami-1.0-14B-Instruct.Q3_K_M.gguf) | Q3_K_M | 7.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Tsunami-1.0-14B-Instruct-GGUF/resolve/main/Tsunami-1.0-14B-Instruct.Q3_K_L.gguf) | Q3_K_L | 8.0 | |
| [GGUF](https://huggingface.co/mradermacher/Tsunami-1.0-14B-Instruct-GGUF/resolve/main/Tsunami-1.0-14B-Instruct.IQ4_XS.gguf) | IQ4_XS | 8.3 | |
| [GGUF](https://huggingface.co/mradermacher/Tsunami-1.0-14B-Instruct-GGUF/resolve/main/Tsunami-1.0-14B-Instruct.Q4_K_S.gguf) | Q4_K_S | 8.7 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Tsunami-1.0-14B-Instruct-GGUF/resolve/main/Tsunami-1.0-14B-Instruct.Q4_K_M.gguf) | Q4_K_M | 9.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Tsunami-1.0-14B-Instruct-GGUF/resolve/main/Tsunami-1.0-14B-Instruct.Q5_K_S.gguf) | Q5_K_S | 10.4 | |
| [GGUF](https://huggingface.co/mradermacher/Tsunami-1.0-14B-Instruct-GGUF/resolve/main/Tsunami-1.0-14B-Instruct.Q5_K_M.gguf) | Q5_K_M | 10.6 | |
| [GGUF](https://huggingface.co/mradermacher/Tsunami-1.0-14B-Instruct-GGUF/resolve/main/Tsunami-1.0-14B-Instruct.Q6_K.gguf) | Q6_K | 12.2 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Tsunami-1.0-14B-Instruct-GGUF/resolve/main/Tsunami-1.0-14B-Instruct.Q8_0.gguf) | Q8_0 | 15.8 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
RichardErkhov/RyanYr_-_self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter7_ds-iter3-metaMathQA-gguf | RichardErkhov | 2024-10-26T15:29:07Z | 5 | 0 | null | [
"gguf",
"arxiv:2305.18290",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-10-26T14:34:09Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter7_ds-iter3-metaMathQA - GGUF
- Model creator: https://huggingface.co/RyanYr/
- Original model: https://huggingface.co/RyanYr/self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter7_ds-iter3-metaMathQA/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter7_ds-iter3-metaMathQA.Q2_K.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter7_ds-iter3-metaMathQA-gguf/blob/main/self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter7_ds-iter3-metaMathQA.Q2_K.gguf) | Q2_K | 1.39GB |
| [self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter7_ds-iter3-metaMathQA.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter7_ds-iter3-metaMathQA-gguf/blob/main/self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter7_ds-iter3-metaMathQA.Q3_K_S.gguf) | Q3_K_S | 1.59GB |
| [self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter7_ds-iter3-metaMathQA.Q3_K.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter7_ds-iter3-metaMathQA-gguf/blob/main/self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter7_ds-iter3-metaMathQA.Q3_K.gguf) | Q3_K | 1.73GB |
| [self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter7_ds-iter3-metaMathQA.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter7_ds-iter3-metaMathQA-gguf/blob/main/self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter7_ds-iter3-metaMathQA.Q3_K_M.gguf) | Q3_K_M | 1.73GB |
| [self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter7_ds-iter3-metaMathQA.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter7_ds-iter3-metaMathQA-gguf/blob/main/self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter7_ds-iter3-metaMathQA.Q3_K_L.gguf) | Q3_K_L | 1.85GB |
| [self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter7_ds-iter3-metaMathQA.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter7_ds-iter3-metaMathQA-gguf/blob/main/self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter7_ds-iter3-metaMathQA.IQ4_XS.gguf) | IQ4_XS | 1.91GB |
| [self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter7_ds-iter3-metaMathQA.Q4_0.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter7_ds-iter3-metaMathQA-gguf/blob/main/self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter7_ds-iter3-metaMathQA.Q4_0.gguf) | Q4_0 | 1.99GB |
| [self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter7_ds-iter3-metaMathQA.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter7_ds-iter3-metaMathQA-gguf/blob/main/self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter7_ds-iter3-metaMathQA.IQ4_NL.gguf) | IQ4_NL | 2.0GB |
| [self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter7_ds-iter3-metaMathQA.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter7_ds-iter3-metaMathQA-gguf/blob/main/self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter7_ds-iter3-metaMathQA.Q4_K_S.gguf) | Q4_K_S | 2.0GB |
| [self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter7_ds-iter3-metaMathQA.Q4_K.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter7_ds-iter3-metaMathQA-gguf/blob/main/self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter7_ds-iter3-metaMathQA.Q4_K.gguf) | Q4_K | 2.09GB |
| [self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter7_ds-iter3-metaMathQA.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter7_ds-iter3-metaMathQA-gguf/blob/main/self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter7_ds-iter3-metaMathQA.Q4_K_M.gguf) | Q4_K_M | 2.09GB |
| [self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter7_ds-iter3-metaMathQA.Q4_1.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter7_ds-iter3-metaMathQA-gguf/blob/main/self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter7_ds-iter3-metaMathQA.Q4_1.gguf) | Q4_1 | 2.18GB |
| [self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter7_ds-iter3-metaMathQA.Q5_0.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter7_ds-iter3-metaMathQA-gguf/blob/main/self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter7_ds-iter3-metaMathQA.Q5_0.gguf) | Q5_0 | 2.37GB |
| [self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter7_ds-iter3-metaMathQA.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter7_ds-iter3-metaMathQA-gguf/blob/main/self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter7_ds-iter3-metaMathQA.Q5_K_S.gguf) | Q5_K_S | 2.37GB |
| [self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter7_ds-iter3-metaMathQA.Q5_K.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter7_ds-iter3-metaMathQA-gguf/blob/main/self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter7_ds-iter3-metaMathQA.Q5_K.gguf) | Q5_K | 2.41GB |
| [self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter7_ds-iter3-metaMathQA.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter7_ds-iter3-metaMathQA-gguf/blob/main/self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter7_ds-iter3-metaMathQA.Q5_K_M.gguf) | Q5_K_M | 2.41GB |
| [self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter7_ds-iter3-metaMathQA.Q5_1.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter7_ds-iter3-metaMathQA-gguf/blob/main/self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter7_ds-iter3-metaMathQA.Q5_1.gguf) | Q5_1 | 2.55GB |
| [self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter7_ds-iter3-metaMathQA.Q6_K.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter7_ds-iter3-metaMathQA-gguf/blob/main/self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter7_ds-iter3-metaMathQA.Q6_K.gguf) | Q6_K | 2.76GB |
| [self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter7_ds-iter3-metaMathQA.Q8_0.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter7_ds-iter3-metaMathQA-gguf/blob/main/self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter7_ds-iter3-metaMathQA.Q8_0.gguf) | Q8_0 | 3.58GB |
Original model description:
---
base_model: RyanYr/self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter7
library_name: transformers
model_name: self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter7_ds-iter3-metaMathQA
tags:
- generated_from_trainer
- trl
- dpo
licence: license
---
# Model Card for self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter7_ds-iter3-metaMathQA
This model is a fine-tuned version of [RyanYr/self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter7](https://huggingface.co/RyanYr/self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter7).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="RyanYr/self-correct_Llama-3.2-3B-Instruct_metaMathQA_dpo_iter7_ds-iter3-metaMathQA", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/yyr/huggingface/runs/95jr8olv)
This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290).
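For context, the DPO objective from the cited paper scores each preference pair directly against a frozen reference policy, with no separate reward model:
```latex
\mathcal{L}_{\mathrm{DPO}}(\pi_\theta; \pi_{\mathrm{ref}}) =
  -\,\mathbb{E}_{(x, y_w, y_l)\sim\mathcal{D}}\left[
    \log \sigma\!\left(
      \beta \log \frac{\pi_\theta(y_w \mid x)}{\pi_{\mathrm{ref}}(y_w \mid x)}
      - \beta \log \frac{\pi_\theta(y_l \mid x)}{\pi_{\mathrm{ref}}(y_l \mid x)}
    \right)
  \right]
```
Here $y_w$ and $y_l$ are the chosen and rejected responses for prompt $x$, and $\beta$ controls how far the policy may drift from the reference.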
### Framework versions
- TRL: 0.12.0.dev0
- Transformers: 4.45.2
- Pytorch: 2.4.0
- Datasets: 3.0.1
- Tokenizers: 0.20.1
## Citations
Cite DPO as:
```bibtex
@inproceedings{rafailov2023direct,
title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}},
author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn},
year = 2023,
booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023},
url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html},
editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
yeaaaaaaaa/git-base-pokemon | yeaaaaaaaa | 2024-10-26T15:25:28Z | 63 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"git",
"image-text-to-text",
"generated_from_trainer",
"base_model:microsoft/git-base",
"base_model:finetune:microsoft/git-base",
"license:mit",
"endpoints_compatible",
"region:us"
] | image-text-to-text | 2024-10-26T15:08:52Z | ---
library_name: transformers
license: mit
base_model: microsoft/git-base
tags:
- generated_from_trainer
model-index:
- name: git-base-pokemon
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# git-base-pokemon
This model is a fine-tuned version of [microsoft/git-base](https://huggingface.co/microsoft/git-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3508
- Wer Score: 1.4759
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
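For reproducibility, here is a sketch of how the listed hyperparameters map onto `transformers.TrainingArguments` (the `output_dir` is an assumption; the Adam betas and epsilon above are the library defaults):
```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="git-base-pokemon",      # assumed; not stated in the card
    learning_rate=5e-5,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    seed=42,
    gradient_accumulation_steps=2,      # effective train batch size of 8
    lr_scheduler_type="linear",
    num_train_epochs=50,
    fp16=True,                          # "Native AMP" mixed precision
)
```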
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer Score |
|:-------------:|:-------:|:----:|:---------------:|:---------:|
| 6.7934 | 6.6667 | 50 | 4.3583 | 3.1928 |
| 2.4419 | 13.3333 | 100 | 0.9081 | 2.5331 |
| 0.2955 | 20.0 | 150 | 0.3321 | 1.6566 |
| 0.0404 | 26.6667 | 200 | 0.3342 | 1.4518 |
| 0.0147 | 33.3333 | 250 | 0.3451 | 1.3524 |
| 0.0108 | 40.0 | 300 | 0.3522 | 1.4277 |
| 0.0097 | 46.6667 | 350 | 0.3508 | 1.4759 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.5.0+cu121
- Datasets 3.0.2
- Tokenizers 0.19.1
|
TheDrummer/Cydonia-22B-v1.2-GGUF | TheDrummer | 2024-10-26T15:24:46Z | 2,718 | 34 | null | [
"gguf",
"license:other",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-10-07T11:17:47Z | ---
license: other
---
# Join our Discord! https://discord.gg/Nbv9pQ88Xb
## Nearly 2000 members strong 💪
---
[BeaverAI](https://huggingface.co/BeaverAI) proudly presents...
# Cydonia 22B v1.2 🗿 - Creative Edition
*I christen this model, 'Miqu 2 Mini'* - @invisietch

## Description
> Strange, it feels like DRY is permanently on ... In general, I like how it feels more alive. More slang has been added, maybe this is the merit of my card, but still.
> The model is very cohesive, expressive, and overall intelligent. It's able to write engaging and impactful content, carry out roleplay mostly effectively, and manage to respond well.
> It shocked me with the first output by introducing a character that is not written anywhere in the card. This character immediately gave the impression that there is a history there with King Severin and that there is immediately something to build off of. It's maintaining creativity and keeping things nearly constantly shifting. It's remaining aware of who is where and what they are doing. It's maintaining a good balance of action and dialogue as well as holding to the style of talking for the second character it made up and introduced. ... I feel like v1.2 is much, much better with creativity and letting the player build off what the model is able to bring in all by itself rather than, like most Mistral tunes, keeping the roleplay to solely what information is provided in the card.
> When I swapped to v1.2 I was impressed that it seemed just as good as OG Small in intelligence while being a lot more creative (and much more moist)
> v1.2 real good in my experience so far (i don't comment pretty much ever but i do want to put it out there that i agree)
> It got creative and added a whole other person whose mannerisms and speech imply a history there. That could be fun to unravel and see what it comes up with. ... It's maintaining creativity and keeping things nearly constantly shifting. It's remaining aware of who is where and what they are doing. It's maintaining a good balance of action and dialogue.
> v1.2 is much gooder. Omg. Your dataset is amazing. I'm not getting far with these two because I have to keep crawling away from my pc to cool off. 🥵
## Links
- Original: https://huggingface.co/TheDrummer/Cydonia-22B-v1.2
- GGUF: https://huggingface.co/TheDrummer/Cydonia-22B-v1.2-GGUF
- iMatrix: https://huggingface.co/bartowski/Cydonia-22B-v1.2-GGUF (recommended for smaller quants)

## Arsenal (Supported Chat Templates)
- Metharme (a.k.a. Pygmalion in ST) for RP / Story
- Text Completion for RP
- Mistral for Instruct / RP / Story
- You can mix it up and see which works best for you.
### Favorite RP Format
`*action* Dialogue *thoughts* Dialogue *narration*` in 1st person PoV
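As a rough sketch of the Metharme (Pygmalion) template named above — the token spellings follow the common Metharme convention and are an assumption for this particular finetune, so verify them against your frontend:
```python
# Hypothetical Metharme-style prompt assembly; check the exact special
# tokens in SillyTavern or the tokenizer config before relying on this.
system = "You are a creative roleplay partner."
user_turn = "*pushes the heavy door open* Who goes there?"
prompt = f"<|system|>{system}<|user|>{user_turn}<|model|>"
print(prompt)
```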

```
No one's gonna take me alive
Time has come to make things right
You and I must fight for our rights
You and I must fight to survive
```

<audio controls src="https://cdn-uploads.huggingface.co/production/uploads/65f2fd1c25b848bd061b5c2e/FNWdi0WlH-Xd3fjkGVPpp.mpga"></audio>
*Thank you, Envoid! I cackled.* |
TheDrummer/Cydonia-22B-v1.2 | TheDrummer | 2024-10-26T15:24:34Z | 365 | 37 | null | [
"safetensors",
"mistral",
"license:other",
"region:us"
] | null | 2024-10-07T11:10:13Z | ---
license: other
---
# Join our Discord! https://discord.gg/Nbv9pQ88Xb
## Nearly 2000 members strong 💪
---
[BeaverAI](https://huggingface.co/BeaverAI) proudly presents...
# Cydonia 22B v1.2 🗿 - Creative Edition
*I christen this model, 'Miqu 2 Mini'* - @invisietch

## Description
> Strange, it feels like DRY is permanently on ... In general, I like how it feels more alive. More slang has been added, maybe this is the merit of my card, but still.
> The model is very cohesive, expressive, and overall intelligent. It's able to write engaging and impactful content, carry out roleplay mostly effectively, and manage to respond well.
> It shocked me with the first output by introducing a character that is not written anywhere in the card. This character immediately gave the impression that there is a history there with King Severin and that there is immediately something to build off of. It's maintaining creativity and keeping things nearly constantly shifting. It's remaining aware of who is where and what they are doing. It's maintaining a good balance of action and dialogue as well as holding to the style of talking for the second character it made up and introduced. ... I feel like v1.2 is much, much better with creativity and letting the player build off what the model is able to bring in all by itself rather than, like most Mistral tunes, keeping the roleplay to solely what information is provided in the card.
> When I swapped to v1.2 I was impressed that it seemed just as good as OG Small in intelligence while being a lot more creative (and much more moist)
> v1.2 real good in my experience so far (i don't comment pretty much ever but i do want to put it out there that i agree)
> It got creative and added a whole other person whose mannerisms and speech imply a history there. That could be fun to unravel and see what it comes up with. ... It's maintaining creativity and keeping things nearly constantly shifting. It's remaining aware of who is where and what they are doing. It's maintaining a good balance of action and dialogue.
> v1.2 is much gooder. Omg. Your dataset is amazing. I'm not getting far with these two because I have to keep crawling away from my pc to cool off. 🥵
## Links
- Original: https://huggingface.co/TheDrummer/Cydonia-22B-v1.2
- GGUF: https://huggingface.co/TheDrummer/Cydonia-22B-v1.2-GGUF
- iMatrix: https://huggingface.co/bartowski/Cydonia-22B-v1.2-GGUF (recommended for smaller quants)

## Arsenal (Supported Chat Templates)
- Metharme (a.k.a. Pygmalion in ST) for RP / Story
- Text Completion for RP
- Mistral for Instruct / RP / Story
- You can mix it up and see which works best for you.
### Favorite RP Format
`*action* Dialogue *thoughts* Dialogue *narration*` in 1st person PoV

```
No one's gonna take me alive
Time has come to make things right
You and I must fight for our rights
You and I must fight to survive
```

<audio controls src="https://cdn-uploads.huggingface.co/production/uploads/65f2fd1c25b848bd061b5c2e/FNWdi0WlH-Xd3fjkGVPpp.mpga"></audio>
*Thank you, Envoid! I cackled.* |
TheDrummer/Behemoth-123B-v1.1-GGUF | TheDrummer | 2024-10-26T15:23:47Z | 339 | 12 | null | [
"gguf",
"license:other",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-10-21T00:44:36Z | ---
license: other
---
# Join our Discord! https://discord.gg/Nbv9pQ88Xb
## Nearly 2000 members strong 💪
---
[BeaverAI](https://huggingface.co/BeaverAI) proudly presents...
# Behemoth 123B v1.1 🦣 - Creative Edition
*When you spend your whole life living under a dome, even the idea of an ocean seems impossible to imagine.*

## Description
> One of the few other models that's done this for me is the OG Command R 35B. So seeing Behemoth v1.1 have a similar feel to that but with much higher general intelligence really makes it a favourite of mine
> I was real happy with v1.1 the other day. I've done some tests on v1 and it's a lot better.
> v1 had those glimpses of creativity, but now it's more consistent (with v1.1). It feels like a new model in comparison.
> v1 had slop bro. v1.1 makes it irrelevant. The jump is like 720p to 4k. Seriously.
> The creativity for v1.1 is off the charts compared to v1, like it's juiced. v1 had these moments where I would say... 'Shit, I've never seen a model respond with prose like this, let me regenerate to see what else I get.' Now, even though every regeneration had a flow of possibilities, sometimes those possibilities never came. v1.1 is comparable to xxx for the first time, every generation. It directs and guides the scene, scenario and characters unlike anything else
> It's about the f***ing prose man. The atmosphere that revolves around the characters. Not just the damn dialogue or introspection. v1.1 will pull from a message 7 generations ago. That window I opened will appear in a future response with the noise from the courtyard filtering through it. The experience of not knowing what this model will produce because it's different than anything else is what keeps it engaging.
## Links
- Original: https://huggingface.co/TheDrummer/Behemoth-123B-v1.1
- GGUF: https://huggingface.co/TheDrummer/Behemoth-123B-v1.1-GGUF
- iMatrix: https://huggingface.co/bartowski/Behemoth-123B-v1.1-GGUF (recommended for smaller quants)
## Arsenal (Supported Chat Templates)
- Mistral
- Smart, adaptable, familiar
- Metharme (Pygmalion in ST)
- Creative, unhinged, unique
- Alpaca
- Creative, unique, unhinged
- Text Completion
- You can mix it up and see which works best for you.
### Favorite RP Format
`*action* Dialogue *thoughts* Dialogue *narration*` in 1st person PoV
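As a rough sketch of the Mistral template listed first above (the canonical `[INST]` wrapping; whether this checkpoint expects a leading BOS from the loader is an assumption to verify):
```python
# Minimal Mistral-style instruct prompt; most loaders prepend <s> themselves.
user_turn = "Describe the ocean to someone who has lived their whole life under a dome."
prompt = f"[INST] {user_turn} [/INST]"
print(prompt)
```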
## What's Next?
- Already have plans for a v2!
## Special Thanks
- Thank you to each and everyone who donated in [Ko-Fi](https://ko-fi.com/thedrummer) to make our venture a little bit easier.
- KinjiHakari777, Dr. Fjut, Kistara, Pseudo, AlexTheVP, Dakkidaze, EvarinSharath'fe, ONTHEREDTEAM, F, Mariana, Garg, Silva, Grozi, & **Phaelon**

<audio controls src="https://cdn-uploads.huggingface.co/production/uploads/65f2fd1c25b848bd061b5c2e/FNWdi0WlH-Xd3fjkGVPpp.mpga"></audio>
|
TheDrummer/Behemoth-123B-v1.1 | TheDrummer | 2024-10-26T15:23:35Z | 64 | 23 | null | [
"safetensors",
"mistral",
"license:other",
"region:us"
] | null | 2024-10-21T02:24:21Z | ---
license: other
---
# Join our Discord! https://discord.gg/Nbv9pQ88Xb
## Nearly 2000 members strong ๐ช
---
[BeaverAI](https://huggingface.co/BeaverAI) proudly presents...
# Behemoth 123B v1.1 🦣 - Creative Edition
*When you spend your whole life living under a dome, even the idea of an ocean seems impossible to imagine.*

## Description
> One of the few other models that's done this for me is the OG Command R 35B. So seeing Behemoth v1.1 have a similar feel to that but with much higher general intelligence really makes it a favourite of mine
> I was real happy with v1.1 the other day. I've done some tests on v1 and it's a lot better.
> v1 had those glimpses of creativity, but now it's more consistent (with v1.1). It feels like a new model in comparison.
> v1 had slop bro. v1.1 makes it irrelevant. The jump is like 720p to 4k. Seriously.
> The creativity for v1.1 is off the charts compared to v1, like it's juiced. v1 had these moments where I would say... 'Shit, I've never seen a model respond with prose like this, let me regenerate to see what else I get.' Now, even though every regeneration had a flow of possibilities, sometimes those possibilities never came. v1.1 is comparable to xxx for the first time, every generation. It directs and guides the scene, scenario and characters unlike anything else
> It's about the f***ing prose man. The atmosphere that revolves around the characters. Not just the damn dialogue or introspection. v1.1 will pull from a message 7 generations ago. That window I opened will appear in a future response with the noise from the courtyard filtering through it. The experience of not knowing what this model will produce because it's different than anything else is what keeps it engaging.
## Links
- Original: https://huggingface.co/TheDrummer/Behemoth-123B-v1.1
- GGUF: https://huggingface.co/TheDrummer/Behemoth-123B-v1.1-GGUF
- iMatrix: https://huggingface.co/bartowski/Behemoth-123B-v1.1-GGUF (recommended for smaller quants)
## Arsenal (Supported Chat Templates)
- Mistral
- Smart, adaptable, familiar
- Metharme (Pygmalion in ST)
- Creative, unhinged, unique
- Alpaca
- Creative, unique, unhinged
- Text Completion
- You can mix it up and see which works best for you.
### Favorite RP Format
`*action* Dialogue *thoughts* Dialogue *narration*` in 1st person PoV
## What's Next?
- Already have plans for a v2!
## Special Thanks
- Thank you to each and everyone who donated in [Ko-Fi](https://ko-fi.com/thedrummer) to make our venture a little bit easier.
- KinjiHakari777, Dr. Fjut, Kistara, Pseudo, AlexTheVP, Dakkidaze, EvarinSharath'fe, ONTHEREDTEAM, F, Mariana, Garg, Silva, Grozi, & **Phaelon**

<audio controls src="https://cdn-uploads.huggingface.co/production/uploads/65f2fd1c25b848bd061b5c2e/FNWdi0WlH-Xd3fjkGVPpp.mpga"></audio>
|
bharatindie/llama-3.2-3b-it-Ecommerce-ChatBot | bharatindie | 2024-10-26T15:21:19Z | 77 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-10-26T15:18:29Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
waldie/Cydonia-22B-v1.2-6.5bpw-h6-exl2 | waldie | 2024-10-26T15:17:55Z | 10 | 1 | null | [
"safetensors",
"mistral",
"base_model:TheDrummer/Cydonia-22B-v1.2",
"base_model:quantized:TheDrummer/Cydonia-22B-v1.2",
"license:other",
"exl2",
"region:us"
] | null | 2024-10-26T14:45:29Z | ---
base_model: TheDrummer/Cydonia-22B-v1.2
quantized_by: waldie
license: other
---
# Join our Discord! https://discord.gg/Nbv9pQ88Xb
## Nearly 2000 members strong 💪
---
[BeaverAI](https://huggingface.co/BeaverAI) proudly presents...
# Cydonia 22B v1.2 🗿 - Creative Edition
*I christen this model, 'Miqu 2 Mini'* - @invisietch

## Description
> Strange, it feels like DRY is permanently on ... In general, I like how it feels more alive. More slang has been added, maybe this is the merit of my card, but still.
> The model is very cohesive, expressive, and overall intelligent. It's able to write engaging and impactful content, carry out roleplay mostly effectively, and manage to respond well.
> It shocked me with the first output by introducing a character that is not written anywhere in the card. This character immediately gave the impression that there is a history there with King Severin and that there is immediately something to build off of. It's maintaining creativity and keeping things nearly constantly shifting. It's remaining aware of who is where and what they are doing. It's maintaining a good balance of action and dialogue as well as holding to the style of talking for the second character it made up and introduced. ... I feel like v1.2 is much, much better with creativity and letting the player build off what the model is able to bring in all by itself rather than, like most Mistral tunes, keeping the roleplay to solely what information is provided in the card.
> When I swapped to v1.2 I was impressed that it seemed just as good as OG Small in intelligence while being a lot more creative (and much more moist)
> v1.2 real good in my experience so far (i don't comment pretty much ever but i do want to put it out there that i agree)
> It got creative and added a whole other person whose mannerisms and speech imply a history there. That could be fun to unravel and see what it comes up with. ... It's maintaining creativity and keeping things nearly constantly shifting. It's remaining aware of who is where and what they are doing. It's maintaining a good balance of action and dialogue.
> v1.2 is much gooder. Omg. Your dataset is amazing. I'm not getting far with these two because I have to keep crawling away from my pc to cool off. 🥵
## Links
- Original: https://huggingface.co/TheDrummer/Cydonia-22B-v1.2
- GGUF: https://huggingface.co/TheDrummer/Cydonia-22B-v1.2-GGUF
- iMatrix: WIP

## Arsenal (Supported Chat Templates)
- Metharme (a.k.a. Pygmalion in ST) for RP / Story
- Text Completion for RP
- Mistral for Instruct / RP / Story
- You can mix it up and see which works best for you.
### Favorite RP Format
`*action* Dialogue *thoughts* Dialogue *narration*` in 1st person PoV

```
No one's gonna take me alive
Time has come to make things right
You and I must fight for our rights
You and I must fight to survive
```

<audio controls src="https://cdn-uploads.huggingface.co/production/uploads/65f2fd1c25b848bd061b5c2e/FNWdi0WlH-Xd3fjkGVPpp.mpga"></audio>
*Thank you, Envoid! I cackled.* |
mradermacher/Magnum_Madness-12b-GGUF | mradermacher | 2024-10-26T15:17:08Z | 20 | 1 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:SzilviaB/Magnum_Madness-12b",
"base_model:quantized:SzilviaB/Magnum_Madness-12b",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-10-26T12:51:08Z | ---
base_model: SzilviaB/Magnum_Madness-12b
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/SzilviaB/Magnum_Madness-12b
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Magnum_Madness-12b-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Magnum_Madness-12b-GGUF/resolve/main/Magnum_Madness-12b.Q2_K.gguf) | Q2_K | 4.9 | |
| [GGUF](https://huggingface.co/mradermacher/Magnum_Madness-12b-GGUF/resolve/main/Magnum_Madness-12b.Q3_K_S.gguf) | Q3_K_S | 5.6 | |
| [GGUF](https://huggingface.co/mradermacher/Magnum_Madness-12b-GGUF/resolve/main/Magnum_Madness-12b.Q3_K_M.gguf) | Q3_K_M | 6.2 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Magnum_Madness-12b-GGUF/resolve/main/Magnum_Madness-12b.Q3_K_L.gguf) | Q3_K_L | 6.7 | |
| [GGUF](https://huggingface.co/mradermacher/Magnum_Madness-12b-GGUF/resolve/main/Magnum_Madness-12b.IQ4_XS.gguf) | IQ4_XS | 6.9 | |
| [GGUF](https://huggingface.co/mradermacher/Magnum_Madness-12b-GGUF/resolve/main/Magnum_Madness-12b.Q4_K_S.gguf) | Q4_K_S | 7.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Magnum_Madness-12b-GGUF/resolve/main/Magnum_Madness-12b.Q4_K_M.gguf) | Q4_K_M | 7.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Magnum_Madness-12b-GGUF/resolve/main/Magnum_Madness-12b.Q5_K_S.gguf) | Q5_K_S | 8.6 | |
| [GGUF](https://huggingface.co/mradermacher/Magnum_Madness-12b-GGUF/resolve/main/Magnum_Madness-12b.Q5_K_M.gguf) | Q5_K_M | 8.8 | |
| [GGUF](https://huggingface.co/mradermacher/Magnum_Madness-12b-GGUF/resolve/main/Magnum_Madness-12b.Q6_K.gguf) | Q6_K | 10.2 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Magnum_Madness-12b-GGUF/resolve/main/Magnum_Madness-12b.Q8_0.gguf) | Q8_0 | 13.1 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
gunzzz24/fine-tuned-tinyllama-1.1b-25-10-base | gunzzz24 | 2024-10-26T15:14:53Z | 184 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"trl",
"sft",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-10-26T15:10:42Z | ---
library_name: transformers
tags:
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
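Until the authors document the intended usage, here is a minimal, untested sketch of loading the checkpoint with transformers (the plain-text prompt format is an assumption, since none is documented):
```py
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "gunzzz24/fine-tuned-tinyllama-1.1b-25-10-base"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Plain-text prompting; the SFT chat template, if any, is not documented.
inputs = tokenizer("Hello, how are you?", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```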
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
mradermacher/Magnum_Dark_Madness_12b-i1-GGUF | mradermacher | 2024-10-26T15:14:06Z | 10 | 2 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:SzilviaB/Magnum_Dark_Madness_12b",
"base_model:quantized:SzilviaB/Magnum_Dark_Madness_12b",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2024-10-26T13:47:07Z | ---
base_model: SzilviaB/Magnum_Dark_Madness_12b
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/SzilviaB/Magnum_Dark_Madness_12b
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Magnum_Dark_Madness_12b-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
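When a quant is split into parts, the parts only need to be byte-concatenated before use; on Linux/macOS a plain `cat` achieves the same. A minimal Python sketch (the part names are hypothetical — check the actual filenames in the repo):
```py
# Hypothetical part names, for illustration only.
parts = [
    "Magnum_Dark_Madness_12b.i1-Q6_K.gguf.part1of2",
    "Magnum_Dark_Madness_12b.i1-Q6_K.gguf.part2of2",
]
with open("Magnum_Dark_Madness_12b.i1-Q6_K.gguf", "wb") as out:
    for part in parts:
        with open(part, "rb") as f:
            while chunk := f.read(1 << 20):  # copy in 1 MiB chunks
                out.write(chunk)
```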
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable over similar-sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Magnum_Dark_Madness_12b-i1-GGUF/resolve/main/Magnum_Dark_Madness_12b.i1-IQ1_S.gguf) | i1-IQ1_S | 3.1 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Magnum_Dark_Madness_12b-i1-GGUF/resolve/main/Magnum_Dark_Madness_12b.i1-IQ1_M.gguf) | i1-IQ1_M | 3.3 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Magnum_Dark_Madness_12b-i1-GGUF/resolve/main/Magnum_Dark_Madness_12b.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 3.7 | |
| [GGUF](https://huggingface.co/mradermacher/Magnum_Dark_Madness_12b-i1-GGUF/resolve/main/Magnum_Dark_Madness_12b.i1-IQ2_XS.gguf) | i1-IQ2_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/Magnum_Dark_Madness_12b-i1-GGUF/resolve/main/Magnum_Dark_Madness_12b.i1-IQ2_S.gguf) | i1-IQ2_S | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/Magnum_Dark_Madness_12b-i1-GGUF/resolve/main/Magnum_Dark_Madness_12b.i1-IQ2_M.gguf) | i1-IQ2_M | 4.5 | |
| [GGUF](https://huggingface.co/mradermacher/Magnum_Dark_Madness_12b-i1-GGUF/resolve/main/Magnum_Dark_Madness_12b.i1-Q2_K.gguf) | i1-Q2_K | 4.9 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Magnum_Dark_Madness_12b-i1-GGUF/resolve/main/Magnum_Dark_Madness_12b.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 5.0 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Magnum_Dark_Madness_12b-i1-GGUF/resolve/main/Magnum_Dark_Madness_12b.i1-IQ3_XS.gguf) | i1-IQ3_XS | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/Magnum_Dark_Madness_12b-i1-GGUF/resolve/main/Magnum_Dark_Madness_12b.i1-Q3_K_S.gguf) | i1-Q3_K_S | 5.6 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Magnum_Dark_Madness_12b-i1-GGUF/resolve/main/Magnum_Dark_Madness_12b.i1-IQ3_S.gguf) | i1-IQ3_S | 5.7 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Magnum_Dark_Madness_12b-i1-GGUF/resolve/main/Magnum_Dark_Madness_12b.i1-IQ3_M.gguf) | i1-IQ3_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/Magnum_Dark_Madness_12b-i1-GGUF/resolve/main/Magnum_Dark_Madness_12b.i1-Q3_K_M.gguf) | i1-Q3_K_M | 6.2 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Magnum_Dark_Madness_12b-i1-GGUF/resolve/main/Magnum_Dark_Madness_12b.i1-Q3_K_L.gguf) | i1-Q3_K_L | 6.7 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Magnum_Dark_Madness_12b-i1-GGUF/resolve/main/Magnum_Dark_Madness_12b.i1-IQ4_XS.gguf) | i1-IQ4_XS | 6.8 | |
| [GGUF](https://huggingface.co/mradermacher/Magnum_Dark_Madness_12b-i1-GGUF/resolve/main/Magnum_Dark_Madness_12b.i1-Q4_0_4_4.gguf) | i1-Q4_0_4_4 | 7.2 | fast on arm, low quality |
| [GGUF](https://huggingface.co/mradermacher/Magnum_Dark_Madness_12b-i1-GGUF/resolve/main/Magnum_Dark_Madness_12b.i1-Q4_0_4_8.gguf) | i1-Q4_0_4_8 | 7.2 | fast on arm+i8mm, low quality |
| [GGUF](https://huggingface.co/mradermacher/Magnum_Dark_Madness_12b-i1-GGUF/resolve/main/Magnum_Dark_Madness_12b.i1-Q4_0_8_8.gguf) | i1-Q4_0_8_8 | 7.2 | fast on arm+sve, low quality |
| [GGUF](https://huggingface.co/mradermacher/Magnum_Dark_Madness_12b-i1-GGUF/resolve/main/Magnum_Dark_Madness_12b.i1-Q4_0.gguf) | i1-Q4_0 | 7.2 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Magnum_Dark_Madness_12b-i1-GGUF/resolve/main/Magnum_Dark_Madness_12b.i1-Q4_K_S.gguf) | i1-Q4_K_S | 7.2 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Magnum_Dark_Madness_12b-i1-GGUF/resolve/main/Magnum_Dark_Madness_12b.i1-Q4_K_M.gguf) | i1-Q4_K_M | 7.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Magnum_Dark_Madness_12b-i1-GGUF/resolve/main/Magnum_Dark_Madness_12b.i1-Q5_K_S.gguf) | i1-Q5_K_S | 8.6 | |
| [GGUF](https://huggingface.co/mradermacher/Magnum_Dark_Madness_12b-i1-GGUF/resolve/main/Magnum_Dark_Madness_12b.i1-Q5_K_M.gguf) | i1-Q5_K_M | 8.8 | |
| [GGUF](https://huggingface.co/mradermacher/Magnum_Dark_Madness_12b-i1-GGUF/resolve/main/Magnum_Dark_Madness_12b.i1-Q6_K.gguf) | i1-Q6_K | 10.2 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
datdo2717/whisper-small-ori-vi | datdo2717 | 2024-10-26T15:14:03Z | 79 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"hi",
"dataset:mozilla-foundation/common_voice_11_0",
"base_model:openai/whisper-small",
"base_model:finetune:openai/whisper-small",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2024-10-26T06:30:04Z | ---
library_name: transformers
language:
- hi
license: apache-2.0
base_model: openai/whisper-small
tags:
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_11_0
metrics:
- wer
model-index:
- name: Whisper Small Ori vi
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 11.0
type: mozilla-foundation/common_voice_11_0
args: 'config: hi, split: test'
metrics:
- name: Wer
type: wer
value: 13.099822274881518
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Small Ori vi
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Common Voice 11.0 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5365
- Wer: 13.0998
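A minimal inference sketch using the transformers pipeline (the audio path is a placeholder):
```py
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="datdo2717/whisper-small-ori-vi",
)
print(asr("sample.wav")["text"])  # replace sample.wav with your audio file
```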
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a sketch of the equivalent `Seq2SeqTrainingArguments` follows the list):
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08 (no additional optimizer arguments)
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
- mixed_precision_training: Native AMP
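A sketch of `Seq2SeqTrainingArguments` mirroring the list above; the output directory is an assumption, and the exact training script may differ:
```py
from transformers import Seq2SeqTrainingArguments

args = Seq2SeqTrainingArguments(
    output_dir="./whisper-small-ori-vi",  # assumption; not stated in the card
    learning_rate=1e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    seed=42,
    optim="adamw_torch",
    lr_scheduler_type="linear",
    warmup_steps=500,
    max_steps=4000,
    fp16=True,  # "Native AMP" mixed precision
)
```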
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:------:|:----:|:---------------:|:-------:|
| 0.1125 | 2.2222 | 1000 | 0.3428 | 12.5666 |
| 0.0215 | 4.4444 | 2000 | 0.4454 | 13.1591 |
| 0.0025 | 6.6667 | 3000 | 0.5104 | 13.1220 |
| 0.0009 | 8.8889 | 4000 | 0.5365 | 13.0998 |
### Framework versions
- Transformers 4.46.0
- Pytorch 2.4.0
- Datasets 3.0.2
- Tokenizers 0.20.0
|
mradermacher/Magnum_Dark_Madness_12b-GGUF | mradermacher | 2024-10-26T15:10:29Z | 17 | 1 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:SzilviaB/Magnum_Dark_Madness_12b",
"base_model:quantized:SzilviaB/Magnum_Dark_Madness_12b",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-10-26T12:47:11Z | ---
base_model: SzilviaB/Magnum_Dark_Madness_12b
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/SzilviaB/Magnum_Dark_Madness_12b
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Magnum_Dark_Madness_12b-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
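As a sketch, chat-style inference with llama-cpp-python could look like this (the chat format is an assumption — pick the template matching the merged base models):
```py
from llama_cpp import Llama  # pip install llama-cpp-python

llm = Llama(
    model_path="Magnum_Dark_Madness_12b.Q4_K_M.gguf",  # any quant from the table below
    n_ctx=4096,
    chat_format="chatml",  # assumption; match this to the base model's template
)
reply = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Introduce yourself in one sentence."}],
    max_tokens=128,
)
print(reply["choices"][0]["message"]["content"])
```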
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable over similar-sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Magnum_Dark_Madness_12b-GGUF/resolve/main/Magnum_Dark_Madness_12b.Q2_K.gguf) | Q2_K | 4.9 | |
| [GGUF](https://huggingface.co/mradermacher/Magnum_Dark_Madness_12b-GGUF/resolve/main/Magnum_Dark_Madness_12b.Q3_K_S.gguf) | Q3_K_S | 5.6 | |
| [GGUF](https://huggingface.co/mradermacher/Magnum_Dark_Madness_12b-GGUF/resolve/main/Magnum_Dark_Madness_12b.Q3_K_M.gguf) | Q3_K_M | 6.2 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Magnum_Dark_Madness_12b-GGUF/resolve/main/Magnum_Dark_Madness_12b.Q3_K_L.gguf) | Q3_K_L | 6.7 | |
| [GGUF](https://huggingface.co/mradermacher/Magnum_Dark_Madness_12b-GGUF/resolve/main/Magnum_Dark_Madness_12b.IQ4_XS.gguf) | IQ4_XS | 6.9 | |
| [GGUF](https://huggingface.co/mradermacher/Magnum_Dark_Madness_12b-GGUF/resolve/main/Magnum_Dark_Madness_12b.Q4_K_S.gguf) | Q4_K_S | 7.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Magnum_Dark_Madness_12b-GGUF/resolve/main/Magnum_Dark_Madness_12b.Q4_K_M.gguf) | Q4_K_M | 7.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Magnum_Dark_Madness_12b-GGUF/resolve/main/Magnum_Dark_Madness_12b.Q5_K_S.gguf) | Q5_K_S | 8.6 | |
| [GGUF](https://huggingface.co/mradermacher/Magnum_Dark_Madness_12b-GGUF/resolve/main/Magnum_Dark_Madness_12b.Q5_K_M.gguf) | Q5_K_M | 8.8 | |
| [GGUF](https://huggingface.co/mradermacher/Magnum_Dark_Madness_12b-GGUF/resolve/main/Magnum_Dark_Madness_12b.Q6_K.gguf) | Q6_K | 10.2 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Magnum_Dark_Madness_12b-GGUF/resolve/main/Magnum_Dark_Madness_12b.Q8_0.gguf) | Q8_0 | 13.1 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
Keltezaa/millie-bobby-brown-flux-d | Keltezaa | 2024-10-26T15:04:44Z | 386 | 1 | diffusers | [
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"template:sd-lora",
"migrated",
"photorealistic",
"sexy",
"woman",
"celebrity",
"girls",
"realistic",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2024-10-26T15:04:43Z | ---
license: other
license_name: bespoke-lora-trained-license
license_link: https://multimodal.art/civitai-licenses?allowNoCredit=True&allowCommercialUse=Sell&allowDerivatives=True&allowDifferentLicense=True
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
- template:sd-lora
- migrated
- photorealistic
- sexy
- woman
- celebrity
- girls
- realistic
base_model: black-forest-labs/FLUX.1-dev
instance_prompt:
widget:
- text: 'millie bobby brown. amateur photo of a girl. very detailed and colorful oil painting. painting by William Adolphe Bouguereau'
output:
url: >-
32779463.jpeg
- text: 'millie bobby brown. jacket. woman with multicolored hair holding a red flower. thick eyebrows. white flowers in the background.'
output:
url: >-
32779273.jpeg
- text: 'portrait of millie bobby brown. woman wearing a colorful dress in front of a lake. castle in the background. medieval setting'
output:
url: >-
32779179.jpeg
- text: 'millie bobby brown. jacket. woman with multicolored hair holding a red flower. thick eyebrows. white flowers in the background.'
output:
url: >-
32779929.jpeg
- text: 'portrait of millie bobby brown. woman wearing a colorful dress in front of a lake. castle in the background. medieval setting'
output:
url: >-
32779549.jpeg
- text: 'millie bobby brown. jacket. woman with multicolored hair holding a red flower. thick eyebrows. white flowers in the background.'
output:
url: >-
32779323.jpeg
- text: 'millie bobby brown. amateur photo of an 18 year old girl. very detailed photo. girl smiling in front of a beach in the background, colorful painting, oil painting by William Adolphe Bouguereau'
output:
url: >-
32780781.jpeg
- text: 'millie bobby brown. amateur photo of an 18 year old girl. very detailed photo. girl smiling in front of a beach in the background, colorful painting, oil painting by William Adolphe Bouguereau'
output:
url: >-
32780630.jpeg
---
# Millie Bobby Brown (Flux D)
<Gallery />
## Model description
Millie Bobby Brown for Flux D
## Download model
Weights for this model are available in Safetensors format.
[Download](/Keltezaa/millie-bobby-brown-flux-d/tree/main) them in the Files & versions tab.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
device = "cuda" if torch.cuda.is_available() else "cpu"
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.bfloat16).to(device)
pipeline.load_lora_weights('Keltezaa/millie-bobby-brown-flux-d', weight_name='Millie_Bobby_Brown_Flux_D-000005.safetensors')
image = pipeline('millie bobby brown. amateur photo of an 18 year old girl. very detailed photo. girl smiling in front of a beach in the background, colorful painting, oil painting by William Adolphe Bouguereau').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
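To adjust the LoRA strength, the snippet above can be extended with `fuse_lora`/`unfuse_lora` from the diffusers LoRA API; the 0.8 scale is a matter of taste, not a recommendation from the model author:
```py
# Continuing from the pipeline above: scale the LoRA down and fuse it
# into the base weights for slightly faster repeated inference.
pipeline.fuse_lora(lora_scale=0.8)
image = pipeline('portrait of millie bobby brown. woman wearing a colorful dress in front of a lake. castle in the background. medieval setting').images[0]
pipeline.unfuse_lora()  # restore the unfused base weights when done
```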
|
Keltezaa/indiana-jones-harrison-ford-flux | Keltezaa | 2024-10-26T15:04:27Z | 17 | 0 | diffusers | [
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"template:sd-lora",
"migrated",
"celebrity",
"harrison ford",
"indiana jones",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2024-10-26T15:04:26Z | ---
license: other
license_name: bespoke-lora-trained-license
license_link: https://multimodal.art/civitai-licenses?allowNoCredit=False&allowCommercialUse=Image&allowDerivatives=True&allowDifferentLicense=True
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
- template:sd-lora
- migrated
- celebrity
- harrison ford
- indiana jones
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: a4nh8
widget:
- text: '80mm lens. F2.8. . photography. f/2.8 , bokeh, outdoor,, a4nh8, man, before him the lost ark'
output:
url: >-
31444433.jpeg
- text: '80mm lens. F2.8. . photography. f/2.8 , bokeh, outdoor,, a4nh8, man'
output:
url: >-
31444431.jpeg
- text: '80mm lens. F2.8. . photography. f/2.8 , bokeh, outdoor,, a4nh8, man, Walk on a rickety rope bridge'
output:
url: >-
31444435.jpeg
- text: '80mm lens. F2.8. . photography. f/2.8 , bokeh, outdoor,, a4nh8, man, lost treasure'
output:
url: >-
31444434.jpeg
- text: '80mm lens. F2.8. . photography. f/2.8 , bokeh, outdoor,, a4nh8, man, races through the collapsing temple, leaping across ruins as the walls close in tighter and tighter, Indiana Jones Style'
output:
url: >-
31444432.jpeg
- text: '80mm lens. F2.8. . photography. f/2.8 , bokeh, outdoor,, a4nh8, man, races through the collapsing temple, leaping across ruins as the walls close in tighter and tighter, Indiana Jones Style'
output:
url: >-
31444436.jpeg
- text: '80mm lens. F2.8. . photography. f/2.8 , bokeh, outdoor,, a4nh8, Indy escapes the jungle chased by a tribe armed with spears, carving his way through dense trees and thorny branches, Indiana Jones Style'
output:
url: >-
31444440.jpeg
- text: '80mm lens. F2.8. . photography. f/2.8 , bokeh, outdoor,, a4nh8, escapes the jungle chased by a tribe armed with spears, carving his way through dense trees and thorny branches, Indiana Jones Style'
output:
url: >-
31444437.jpeg
- text: '80mm lens. F2.8. . photography. f/2.8 , bokeh, outdoor,, a4nh8, man, portrait photo'
output:
url: >-
31444438.jpeg
- text: '80mm lens. F2.8. . photography. f/2.8 , bokeh, outdoor,, a4nh8, man, portrait photo, whip in hand'
output:
url: >-
31444439.jpeg
- text: '80mm lens. F2.8. . photography. f/2.8 , bokeh, outdoor,, a4nh8, man, whip in hand'
output:
url: >-
31444441.jpeg
- text: '80mm lens. F2.8. . photography. f/2.8 , bokeh, outdoor,, a4nh8, man, whip in hand'
output:
url: >-
31444442.jpeg
---
# Indiana Jones (Harrison Ford) - FLUX
<Gallery />
## Model description
Support my work with a 1 euro donation: https://ko-fi.com/sdprompt

Want a custom/private LoRA? Get it here: [Commissions](https://ko-fi.com/sdprompt/commissions?commissionAlias=f8c2ea8b06&openCommissionsMenu=True#buyShopCommissionModal)

I am one of the top, long-standing creators of celebrity models on CivitAI, back in the top 10 after 2 years.

Thank you!
## Trigger words
You should use `a4nh8` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/Keltezaa/indiana-jones-harrison-ford-flux/tree/main) them in the Files & versions tab.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
device = "cuda" if torch.cuda.is_available() else "cpu"
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.bfloat16).to(device)
pipeline.load_lora_weights('Keltezaa/indiana-jones-harrison-ford-flux', weight_name='indiana-jones-flux-a4nh8.safetensors')
image = pipeline('80mm lens. F2.8. . photography. f/2.8 , bokeh, outdoor,, a4nh8, man, whip in hand').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
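A variant of the snippet above names the adapter on load so its weight can be tuned with `set_adapters` (the adapter name and the 0.9 weight are arbitrary examples):
```py
# Name the adapter when loading, then set its weight explicitly.
pipeline.load_lora_weights(
    'Keltezaa/indiana-jones-harrison-ford-flux',
    weight_name='indiana-jones-flux-a4nh8.safetensors',
    adapter_name='indy',
)
pipeline.set_adapters('indy', adapter_weights=0.9)
image = pipeline('80mm lens. F2.8. photography. f/2.8, bokeh, outdoor, a4nh8, man, whip in hand').images[0]
```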
|