| modelId (string) | author (string) | last_modified (timestamp[us, tz=UTC]) | downloads (int64) | likes (int64) | library_name (string) | tags (list) | pipeline_tag (string) | createdAt (timestamp[us, tz=UTC]) | card (string) |
|---|---|---|---|---|---|---|---|---|---|
| MaziyarPanahi/calme-3.1-instruct-3b-GGUF | MaziyarPanahi | 2024-11-15T12:32:05Z | 88 | 1 | null | ["gguf", "mistral", "quantized", "2-bit", "3-bit", "4-bit", "5-bit", "6-bit", "8-bit", "GGUF", "text-generation", "base_model:MaziyarPanahi/calme-3.1-instruct-3b", "base_model:quantized:MaziyarPanahi/calme-3.1-instruct-3b", "region:us", "conversational"] | text-generation | 2024-11-07T20:50:03Z |
---
base_model: MaziyarPanahi/calme-3.1-instruct-3b
inference: false
model_creator: MaziyarPanahi
model_name: calme-3.1-instruct-3b-GGUF
pipeline_tag: text-generation
quantized_by: MaziyarPanahi
tags:
- quantized
- 2-bit
- 3-bit
- 4-bit
- 5-bit
- 6-bit
- 8-bit
- GGUF
- text-generation
---
# [MaziyarPanahi/calme-3.1-instruct-3b-GGUF](https://huggingface.co/MaziyarPanahi/calme-3.1-instruct-3b-GGUF)
- Model creator: [MaziyarPanahi](https://huggingface.co/MaziyarPanahi)
- Original model: [MaziyarPanahi/calme-3.1-instruct-3b](https://huggingface.co/MaziyarPanahi/calme-3.1-instruct-3b)
## Description
[MaziyarPanahi/calme-3.1-instruct-3b-GGUF](https://huggingface.co/MaziyarPanahi/calme-3.1-instruct-3b-GGUF) contains GGUF format model files for [MaziyarPanahi/calme-3.1-instruct-3b](https://huggingface.co/MaziyarPanahi/calme-3.1-instruct-3b).
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st, 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. A Linux version is available in beta as of 27/11/2023.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for storytelling.
* [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. Note that, as of the time of writing (November 27th, 2023), ctransformers has not been updated in a long time and does not support many recent models.
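For a quick start, here is a minimal sketch of loading one of these quants with llama-cpp-python; the quant filename pattern below is an assumption, so check the repository's file list for the exact name.

```python
from llama_cpp import Llama

# Download a quant from the Hub and load it (requires huggingface-hub).
# The filename pattern is an assumption -- any provided .gguf file works.
llm = Llama.from_pretrained(
    repo_id="MaziyarPanahi/calme-3.1-instruct-3b-GGUF",
    filename="*Q4_K_M.gguf",
)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Hello!"}],
    max_tokens=64,
)
print(out["choices"][0]["message"]["content"])
```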
## Special thanks
🙏 Special thanks to [Georgi Gerganov](https://github.com/ggerganov) and the whole team working on [llama.cpp](https://github.com/ggerganov/llama.cpp/) for making all of this possible.
| bellmake/llama_pre_model | bellmake | 2024-11-15T12:30:43Z | 184 | 0 | transformers | ["transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2024-11-15T12:30:23Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| jackycedar/Chinese-Ai-Meta-Llama-3.2-3B-GGUF | jackycedar | 2024-11-15T12:27:34Z | 5 | 0 | transformers | ["transformers", "gguf", "llama", "text-generation-inference", "unsloth", "en", "base_model:unsloth/Llama-3.2-3B-Instruct-bnb-4bit", "base_model:quantized:unsloth/Llama-3.2-3B-Instruct-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational"] | null | 2024-11-15T12:26:59Z |
---
base_model: unsloth/Llama-3.2-3B-Instruct-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** jackycedar
- **License:** apache-2.0
- **Finetuned from model:** unsloth/Llama-3.2-3B-Instruct-bnb-4bit
This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
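As a hypothetical sketch of the kind of Unsloth setup such a fine-tune starts from (the parameters below are illustrative defaults, not the author's actual settings):

```python
from unsloth import FastLanguageModel

# Load the 4-bit base model this repo was fine-tuned from.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/Llama-3.2-3B-Instruct-bnb-4bit",
    max_seq_length=2048,
    load_in_4bit=True,
)

# Attach LoRA adapters before training with TRL's SFTTrainer.
# r, lora_alpha, and target_modules are illustrative values only.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)
```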
| dbmdz/bert-base-historic-dutch-cased | dbmdz | 2024-11-15T12:18:17Z | 123 | 2 | transformers | ["transformers", "pytorch", "tf", "tensorboard", "safetensors", "bert", "fill-mask", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us"] | fill-mask | 2022-03-02T23:29:05Z |
---
language: dutch
license: mit
widget:
- text: "de [MASK] vau Financien, in hec vorige jaar, da inkomswi"
---
# Language Model for Historic Dutch
In this repository we open-source a language model for Historic Dutch, trained on the
[Delpher Corpus](https://www.delpher.nl/over-delpher/delpher-open-krantenarchief/download-teksten-kranten-1618-1879),
which includes digitized texts from Dutch newspapers ranging from 1618 to 1879.
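As a minimal usage sketch, the model can be queried with the 🤗 fill-mask pipeline (the example sentence is the widget text from the metadata above):

```python
from transformers import pipeline

# Load the historic Dutch BERT model for masked-token prediction.
fill_mask = pipeline("fill-mask", model="dbmdz/bert-base-historic-dutch-cased")

# OCR-noisy example sentence taken from the model card's widget.
for prediction in fill_mask("de [MASK] vau Financien, in hec vorige jaar, da inkomswi"):
    print(prediction["token_str"], prediction["score"])
```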
# Changelog
* 13.12.2021: Initial version of this repository.
# Model Zoo
The following models for Historic Dutch are available on the Hugging Face Model Hub:
| Model identifier | Model Hub link |
| -------------------------------------- | -------------------------------------------------------------------- |
| `dbmdz/bert-base-historic-dutch-cased` | [here](https://huggingface.co/dbmdz/bert-base-historic-dutch-cased)  |
# Stats
The download URLs for all archives can be found [here](delpher-corpus.urls).
We then used the awesome `alto-tools` from [this](https://github.com/cneud/alto-tools)
repository to extract plain text. The following table shows the size overview per year range:
| Period | Extracted plain text size
| --------- | -------------------------:
| 1618-1699 | 170MB
| 1700-1709 | 103MB
| 1710-1719 | 65MB
| 1720-1729 | 137MB
| 1730-1739 | 144MB
| 1740-1749 | 188MB
| 1750-1759 | 171MB
| 1760-1769 | 235MB
| 1770-1779 | 271MB
| 1780-1789 | 414MB
| 1790-1799 | 614MB
| 1800-1809 | 734MB
| 1810-1819 | 807MB
| 1820-1829 | 987MB
| 1830-1839 | 1.7GB
| 1840-1849 | 2.2GB
| 1850-1854 | 1.3GB
| 1855-1859 | 1.7GB
| 1860-1864 | 2.0GB
| 1865-1869 | 2.3GB
| 1870-1874 | 1.9GB
| 1875-1876 | 867MB
| 1877-1879 | 1.9GB
The total training corpus consists of 427,181,269 sentences and 3,509,581,683 tokens (counted via `wc`),
resulting in a total corpus size of 21GB.
The following figure shows an overview of the number of chars per year distribution:

# Language Model Pretraining
We use the official [BERT](https://github.com/google-research/bert) implementation using the following command
to train the model:
```bash
python3 run_pretraining.py --input_file gs://delpher-bert/tfrecords/*.tfrecord \
--output_dir gs://delpher-bert/bert-base-historic-dutch-cased \
--bert_config_file ./config.json \
--max_seq_length=512 \
--max_predictions_per_seq=75 \
--do_train=True \
--train_batch_size=128 \
--num_train_steps=3000000 \
--learning_rate=1e-4 \
--save_checkpoints_steps=100000 \
--keep_checkpoint_max=20 \
--use_tpu=True \
--tpu_name=electra-2 \
--num_tpu_cores=32
```
We train the model for 3M steps using a total batch size of 128 on a v3-32 TPU. The pretraining loss curve can be seen
in the next figure:

# Evaluation
We evaluate our model on the preprocessed Europeana NER dataset for Dutch, which was presented in the
["Data Centric Domain Adaptation for Historical Text with OCR Errors"](https://github.com/stefan-it/historic-domain-adaptation-icdar) paper.
The data is available in their repository. We perform a hyper-parameter search for:
* Batch sizes: `[4, 8]`
* Learning rates: `[3e-5, 5e-5]`
* Number of epochs: `[5, 10]`
and report the averaged F1-Score over 5 runs with different seeds (see the sketch below). We also include [hmBERT](https://github.com/stefan-it/clef-hipe/blob/main/hlms.md) as a baseline model.
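The search corresponds to the following hypothetical sketch; `train_and_eval` is a placeholder for the actual fine-tuning run, not code from this repository.

```python
from itertools import product


def train_and_eval(batch_size, learning_rate, num_epochs, seed):
    """Placeholder for one NER fine-tuning run returning a dev F1-Score."""
    return 0.0  # replace with the actual fine-tuning and evaluation


# Grid from the list above, averaged over 5 seeds per configuration.
for bs, lr, epochs in product([4, 8], [3e-5, 5e-5], [5, 10]):
    scores = [train_and_eval(bs, lr, epochs, seed) for seed in range(5)]
    print(f"bs={bs} lr={lr} epochs={epochs} avg_f1={sum(scores) / 5:.2f}")
```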
Results:
| Model | F1-Score (Dev / Test)
| ------------------- | ---------------------
| hmBERT | (82.73) / 81.34
| Maerz et al. (2021) | - / 84.2
| Ours | (89.73) / 87.45
# License
All models are licensed under [MIT](LICENSE).
# Acknowledgments
Research supported with Cloud TPUs from Google's TPU Research Cloud (TRC) program, previously known as
TensorFlow Research Cloud (TFRC). Many thanks for providing access to the TRC ❤️
We thank [Clemens Neudecker](https://github.com/cneud) for maintaining the amazing
[ALTO tools](https://github.com/cneud/alto-tools) that were used for parsing the Delpher Corpus XML files.
Thanks to the generous support from the [Hugging Face](https://huggingface.co/) team,
it is possible to download both cased and uncased models from their S3 storage 🤗
| MaziyarPanahi/calme-3.1-llamaloi-3b-GGUF | MaziyarPanahi | 2024-11-15T12:16:32Z | 51 | 0 | null | ["gguf", "mistral", "quantized", "2-bit", "3-bit", "4-bit", "5-bit", "6-bit", "8-bit", "GGUF", "text-generation", "base_model:MaziyarPanahi/calme-3.1-llamaloi-3b", "base_model:quantized:MaziyarPanahi/calme-3.1-llamaloi-3b", "region:us", "conversational"] | text-generation | 2024-11-08T20:28:52Z |
---
base_model: MaziyarPanahi/calme-3.1-llamaloi-3b
inference: false
model_creator: MaziyarPanahi
model_name: calme-3.1-llamaloi-3b-GGUF
pipeline_tag: text-generation
quantized_by: MaziyarPanahi
tags:
- quantized
- 2-bit
- 3-bit
- 4-bit
- 5-bit
- 6-bit
- 8-bit
- GGUF
- text-generation
---
# [MaziyarPanahi/calme-3.1-llamaloi-3b-GGUF](https://huggingface.co/MaziyarPanahi/calme-3.1-llamaloi-3b-GGUF)
- Model creator: [MaziyarPanahi](https://huggingface.co/MaziyarPanahi)
- Original model: [MaziyarPanahi/calme-3.1-llamaloi-3b](https://huggingface.co/MaziyarPanahi/calme-3.1-llamaloi-3b)
## Description
[MaziyarPanahi/calme-3.1-llamaloi-3b-GGUF](https://huggingface.co/MaziyarPanahi/calme-3.1-llamaloi-3b-GGUF) contains GGUF format model files for [MaziyarPanahi/calme-3.1-llamaloi-3b](https://huggingface.co/MaziyarPanahi/calme-3.1-llamaloi-3b).
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st, 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. A Linux version is available in beta as of 27/11/2023.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for storytelling.
* [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. Note that, as of the time of writing (November 27th, 2023), ctransformers has not been updated in a long time and does not support many recent models.
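Since several of the clients above expose an OpenAI-compatible API, here is a hypothetical sketch of querying a locally running llama-cpp-python server (started, for example, with `python -m llama_cpp.server --model <gguf file>`); the URL and model name are assumptions.

```python
from openai import OpenAI

# Point the standard openai client at the local llama-cpp-python server.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed")

resp = client.chat.completions.create(
    model="calme-3.1-llamaloi-3b",  # assumed name; the server reports its own
    messages=[{"role": "user", "content": "Hello!"}],
)
print(resp.choices[0].message.content)
```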
## Special thanks
🙏 Special thanks to [Georgi Gerganov](https://github.com/ggerganov) and the whole team working on [llama.cpp](https://github.com/ggerganov/llama.cpp/) for making all of this possible.
| rk2357281/Hindi_model2 | rk2357281 | 2024-11-15T12:16:09Z | 74 | 0 | transformers | ["transformers", "safetensors", "llama", "text-generation", "text-generation-inference", "unsloth", "trl", "conversational", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "4-bit", "bitsandbytes", "region:us"] | text-generation | 2024-11-15T12:13:19Z |
---
base_model: unsloth/llama-3.2-3b-instruct-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** rk2357281
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3.2-3b-instruct-bnb-4bit
This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
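A minimal inference sketch, assuming a GPU with `bitsandbytes` installed (the weights are stored 4-bit); the Hindi prompt is chosen for illustration:

```python
from transformers import pipeline

# Load the 4-bit model; device_map="auto" places it on the available GPU.
generator = pipeline(
    "text-generation",
    model="rk2357281/Hindi_model2",
    device_map="auto",
)
print(generator("नमस्ते, आप कैसे हैं?", max_new_tokens=64)[0]["generated_text"])
```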
| ericson333/zilky_one | ericson333 | 2024-11-15T12:14:09Z | 20 | 0 | diffusers | ["diffusers", "flux", "lora", "replicate", "text-to-image", "en", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us"] | text-to-image | 2024-11-15T11:12:03Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: zilky_one
---
# Zilky_One
<Gallery />
Trained on Replicate using:
https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `zilky_one` to trigger the image generation.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch

# Load the FLUX.1-dev base pipeline, then apply the zilky_one LoRA weights.
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('ericson333/zilky_one', weight_name='lora.safetensors')

# Generate an image; include the trigger word `zilky_one` in the prompt.
image = pipeline('your prompt').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
| mradermacher/EVA-Tissint-14B-GGUF | mradermacher | 2024-11-15T12:11:09Z | 6 | 1 | transformers | ["transformers", "gguf", "mergekit", "merge", "en", "base_model:ockerman0/EVA-Tissint-14B", "base_model:quantized:ockerman0/EVA-Tissint-14B", "endpoints_compatible", "region:us", "conversational"] | null | 2024-11-15T08:50:42Z |
---
base_model: ockerman0/EVA-Tissint-14B
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/ockerman0/EVA-Tissint-14B
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/EVA-Tissint-14B-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
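As a minimal sketch, a single-file quant from the table below can be fetched and loaded with llama-cpp-python (Q4_K_S is the table's "fast, recommended" pick; any listed file works):

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Fetch one quant from the table below and load it locally.
path = hf_hub_download(
    "mradermacher/EVA-Tissint-14B-GGUF",
    "EVA-Tissint-14B.Q4_K_S.gguf",
)
llm = Llama(model_path=path, n_ctx=4096)
print(llm("Hello,", max_tokens=32)["choices"][0]["text"])
```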
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar-sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/EVA-Tissint-14B-GGUF/resolve/main/EVA-Tissint-14B.Q2_K.gguf) | Q2_K | 5.9 | |
| [GGUF](https://huggingface.co/mradermacher/EVA-Tissint-14B-GGUF/resolve/main/EVA-Tissint-14B.Q3_K_S.gguf) | Q3_K_S | 6.8 | |
| [GGUF](https://huggingface.co/mradermacher/EVA-Tissint-14B-GGUF/resolve/main/EVA-Tissint-14B.Q3_K_M.gguf) | Q3_K_M | 7.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/EVA-Tissint-14B-GGUF/resolve/main/EVA-Tissint-14B.Q3_K_L.gguf) | Q3_K_L | 8.0 | |
| [GGUF](https://huggingface.co/mradermacher/EVA-Tissint-14B-GGUF/resolve/main/EVA-Tissint-14B.IQ4_XS.gguf) | IQ4_XS | 8.3 | |
| [GGUF](https://huggingface.co/mradermacher/EVA-Tissint-14B-GGUF/resolve/main/EVA-Tissint-14B.Q4_0_4_4.gguf) | Q4_0_4_4 | 8.6 | fast on arm, low quality |
| [GGUF](https://huggingface.co/mradermacher/EVA-Tissint-14B-GGUF/resolve/main/EVA-Tissint-14B.Q4_K_S.gguf) | Q4_K_S | 8.7 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/EVA-Tissint-14B-GGUF/resolve/main/EVA-Tissint-14B.Q4_K_M.gguf) | Q4_K_M | 9.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/EVA-Tissint-14B-GGUF/resolve/main/EVA-Tissint-14B.Q5_K_S.gguf) | Q5_K_S | 10.4 | |
| [GGUF](https://huggingface.co/mradermacher/EVA-Tissint-14B-GGUF/resolve/main/EVA-Tissint-14B.Q5_K_M.gguf) | Q5_K_M | 10.6 | |
| [GGUF](https://huggingface.co/mradermacher/EVA-Tissint-14B-GGUF/resolve/main/EVA-Tissint-14B.Q6_K.gguf) | Q6_K | 12.2 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/EVA-Tissint-14B-GGUF/resolve/main/EVA-Tissint-14B.Q8_0.gguf) | Q8_0 | 15.8 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
| mradermacher/Llama3.1-8B-ShiningValiant2-i1-GGUF | mradermacher | 2024-11-15T11:59:04Z | 88 | 0 | transformers | ["transformers", "gguf", "shining-valiant", "shining-valiant-2", "valiant", "valiant-labs", "llama", "llama-3.1", "llama-3.1-instruct", "llama-3.1-instruct-8b", "llama-3", "llama-3-instruct", "llama-3-instruct-8b", "8b", "science", "physics", "biology", "chemistry", "compsci", "computer-science", "engineering", "technical", "conversational", "chat", "instruct", "en", "dataset:sequelbox/Celestia", "dataset:sequelbox/Spurline", "dataset:sequelbox/Supernova", "base_model:ValiantLabs/Llama3.1-8B-ShiningValiant2", "base_model:quantized:ValiantLabs/Llama3.1-8B-ShiningValiant2", "license:llama3.1", "endpoints_compatible", "region:us", "imatrix"] | null | 2024-11-15T10:31:24Z |
---
base_model: ValiantLabs/Llama3.1-8B-ShiningValiant2
datasets:
- sequelbox/Celestia
- sequelbox/Spurline
- sequelbox/Supernova
language:
- en
library_name: transformers
license: llama3.1
model_type: llama
quantized_by: mradermacher
tags:
- shining-valiant
- shining-valiant-2
- valiant
- valiant-labs
- llama
- llama-3.1
- llama-3.1-instruct
- llama-3.1-instruct-8b
- llama-3
- llama-3-instruct
- llama-3-instruct-8b
- 8b
- science
- physics
- biology
- chemistry
- compsci
- computer-science
- engineering
- technical
- conversational
- chat
- instruct
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/ValiantLabs/Llama3.1-8B-ShiningValiant2
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Llama3.1-8B-ShiningValiant2-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
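Concatenating multi-part files, which the READMEs above describe, amounts to byte-wise joining; here is a hypothetical sketch (the part filenames are assumptions):

```python
import shutil

# Join split GGUF parts back into a single file (names are illustrative).
parts = ["model.gguf.part1of2", "model.gguf.part2of2"]
with open("model.gguf", "wb") as out:
    for part in parts:
        with open(part, "rb") as src:
            shutil.copyfileobj(src, out)
```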
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar-sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Llama3.1-8B-ShiningValiant2-i1-GGUF/resolve/main/Llama3.1-8B-ShiningValiant2.i1-IQ1_S.gguf) | i1-IQ1_S | 2.1 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Llama3.1-8B-ShiningValiant2-i1-GGUF/resolve/main/Llama3.1-8B-ShiningValiant2.i1-IQ1_M.gguf) | i1-IQ1_M | 2.3 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Llama3.1-8B-ShiningValiant2-i1-GGUF/resolve/main/Llama3.1-8B-ShiningValiant2.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.5 | |
| [GGUF](https://huggingface.co/mradermacher/Llama3.1-8B-ShiningValiant2-i1-GGUF/resolve/main/Llama3.1-8B-ShiningValiant2.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.7 | |
| [GGUF](https://huggingface.co/mradermacher/Llama3.1-8B-ShiningValiant2-i1-GGUF/resolve/main/Llama3.1-8B-ShiningValiant2.i1-IQ2_S.gguf) | i1-IQ2_S | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/Llama3.1-8B-ShiningValiant2-i1-GGUF/resolve/main/Llama3.1-8B-ShiningValiant2.i1-IQ2_M.gguf) | i1-IQ2_M | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/Llama3.1-8B-ShiningValiant2-i1-GGUF/resolve/main/Llama3.1-8B-ShiningValiant2.i1-Q2_K.gguf) | i1-Q2_K | 3.3 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Llama3.1-8B-ShiningValiant2-i1-GGUF/resolve/main/Llama3.1-8B-ShiningValiant2.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 3.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Llama3.1-8B-ShiningValiant2-i1-GGUF/resolve/main/Llama3.1-8B-ShiningValiant2.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/Llama3.1-8B-ShiningValiant2-i1-GGUF/resolve/main/Llama3.1-8B-ShiningValiant2.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.8 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Llama3.1-8B-ShiningValiant2-i1-GGUF/resolve/main/Llama3.1-8B-ShiningValiant2.i1-IQ3_S.gguf) | i1-IQ3_S | 3.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Llama3.1-8B-ShiningValiant2-i1-GGUF/resolve/main/Llama3.1-8B-ShiningValiant2.i1-IQ3_M.gguf) | i1-IQ3_M | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/Llama3.1-8B-ShiningValiant2-i1-GGUF/resolve/main/Llama3.1-8B-ShiningValiant2.i1-Q3_K_M.gguf) | i1-Q3_K_M | 4.1 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Llama3.1-8B-ShiningValiant2-i1-GGUF/resolve/main/Llama3.1-8B-ShiningValiant2.i1-Q3_K_L.gguf) | i1-Q3_K_L | 4.4 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Llama3.1-8B-ShiningValiant2-i1-GGUF/resolve/main/Llama3.1-8B-ShiningValiant2.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.5 | |
| [GGUF](https://huggingface.co/mradermacher/Llama3.1-8B-ShiningValiant2-i1-GGUF/resolve/main/Llama3.1-8B-ShiningValiant2.i1-Q4_0_4_4.gguf) | i1-Q4_0_4_4 | 4.8 | fast on arm, low quality |
| [GGUF](https://huggingface.co/mradermacher/Llama3.1-8B-ShiningValiant2-i1-GGUF/resolve/main/Llama3.1-8B-ShiningValiant2.i1-Q4_0_4_8.gguf) | i1-Q4_0_4_8 | 4.8 | fast on arm+i8mm, low quality |
| [GGUF](https://huggingface.co/mradermacher/Llama3.1-8B-ShiningValiant2-i1-GGUF/resolve/main/Llama3.1-8B-ShiningValiant2.i1-Q4_0_8_8.gguf) | i1-Q4_0_8_8 | 4.8 | fast on arm+sve, low quality |
| [GGUF](https://huggingface.co/mradermacher/Llama3.1-8B-ShiningValiant2-i1-GGUF/resolve/main/Llama3.1-8B-ShiningValiant2.i1-Q4_0.gguf) | i1-Q4_0 | 4.8 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Llama3.1-8B-ShiningValiant2-i1-GGUF/resolve/main/Llama3.1-8B-ShiningValiant2.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.8 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Llama3.1-8B-ShiningValiant2-i1-GGUF/resolve/main/Llama3.1-8B-ShiningValiant2.i1-Q4_K_M.gguf) | i1-Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Llama3.1-8B-ShiningValiant2-i1-GGUF/resolve/main/Llama3.1-8B-ShiningValiant2.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/Llama3.1-8B-ShiningValiant2-i1-GGUF/resolve/main/Llama3.1-8B-ShiningValiant2.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/Llama3.1-8B-ShiningValiant2-i1-GGUF/resolve/main/Llama3.1-8B-ShiningValiant2.i1-Q6_K.gguf) | i1-Q6_K | 6.7 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
| FounderFeed/3dAnime-Style-flux-dev-lora | FounderFeed | 2024-11-15T11:53:33Z | 200 | 2 | diffusers | ["diffusers", "text-to-image", "lora", "template:diffusion-lora", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:creativeml-openrail-m", "region:us"] | text-to-image | 2024-11-12T09:31:24Z |
---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: ' A 3D anime-style cityscape, intricate details, vivid colors'
output:
url: images/BBBB.png
- text: >-
A 3dstyle style Ultra-realistic anime-style portrait of a powerful Saiyan
warrior resembling Vegito from *Dragon Ball Super*, standing in a dramatic
pose with hands clasped together in a meditative yet intense stance. He has
tall, spiked blue hair that glows with an intense, radiant energy,
reflecting shades of neon blue and bright white. His expression is fierce
and focused, with sharp, intense eyes glowing red, conveying strength and an
unbreakable spirit. The character is shirtless, showcasing a muscular,
highly detailed physique, with veins and muscles realistically defined,
capturing the physical power and resilience of a Saiyan. His shoulders and
forearms are wrapped in golden-yellow energy armor that emits a molten glow,
appearing like lava with fiery reflections that highlight his form. Bright
energy flares around him, flickering with vibrant blue, red, and yellow
hues, creating a dynamic, electric aura of raw power. The background is a
temple-like setting with large, textured stone pillars and a dark, fiery
backdrop. Streams of red and orange energy cascade down from above, adding a
sense of intensity and danger to the scene. Small, glowing energy particles
float around him, some in motion, with a mix of sparks and embers that
amplify the character's aura. The lighting is dramatic and high-contrast,
casting sharp shadows that emphasize his muscular definition and highlight
his fierce, intense expression. The overall effect is a powerful,
otherworldly atmosphere, blending fiery and electric energy effects to give
a surreal, god-like presence.
output:
url: images/example_o7szit49s.png
- text: >-
A 3dstyle A hyper-realistic portrayal of Sakura Haruno from Naruto,
reimagined as a real-life woman. Her face is youthful and radiant, with
smooth, fair skin that glows under soft natural lighting. She has striking
emerald-green eyes, full of determination and warmth, framed by soft,
natural eyelashes. Her short, vibrant pink hair is textured realistically,
slightly tousled, and neatly cut just above her shoulders, with a few
strands gently catching the wind. She wears a modernized version of her
classic outfit: a sleeveless crimson-red top with subtle leather-like
texture and white detailing, paired with a sleek black skirt and fitted
black gloves. A metal headband with the Konoha symbol is prominently tied
around her forehead. Her muscular yet feminine arms and confident stance
reflect her strength and dedication. The background features a serene
Konoha village setting with cherry blossoms in full bloom, their soft petals
falling around her. The lighting is warm and natural, highlighting the
contours of her face and outfit. Subtle details, like the glint of her
headband and the texture of her clothing, enhance the realism while
maintaining her anime-inspired essence. A perfect blend of beauty, strength,
and Sakura’s iconic design --ar 9:16 --v 6.0
output:
url: images/example_duaqqxfne.png
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: 3dstyle style
license: creativeml-openrail-m
---
# 3d-Anime-Style
<Gallery />
## Model description
# 3D Anime-Style Flux LoRA
## Model Overview
**Model Name:** 3D Anime-Style Flux LoRA
**Repository:** `FounderFeed/3dAnime-Style-flux-dev-lora`
**License:** CreativeML OpenRAIL-M
**Base Model:** `black-forest-labs/FLUX.1-dev`
This repository hosts a fine-tuned LoRA model, specialized in generating 3D anime-style images with the expressive style of FLUX.1. Leveraging the strengths of the original FLUX model, this LoRA is crafted for creators seeking a stylized, 3D anime aesthetic in their generated outputs.
## Description
This model was fine-tuned to focus on 3D anime-style visuals, optimized to produce rich, immersive images that balance detailed textures with a stylized, anime-inspired form. It’s suited for scenarios requiring a fusion of realistic shading with an anime flair, providing unique results in environments like anime content generation, game design, and digital art creation.
### Key Features
- **3D Anime Style:** Tailored for a 3D-rendered anime look, ideal for generating characters, environments, and scenes with enhanced depth and realism.
- **Trigger Words:** The model responds well to `3dstyle` and `style`, which can be used to prompt desired stylistic elements.
- **Compatibility:** This model is compatible with platforms that support `.safetensors` and `.ckpt` formats.
## Installation & Usage
1. **Download Files:** To use this model, download the `.safetensors` file available in this repository and place it in your local model folder.
2. **Loading the Model:** Load the model as a LoRA within any framework supporting the format. Ensure that the base model `black-forest-labs/FLUX.1-dev` is also available.
3. **Trigger Words:** Use the trigger words in your prompts to enhance the style and obtain a 3D anime effect.
### Example Usage
```
Prompt: "A 3D anime-style cityscape, intricate details, vivid colors, [3dstyle style]"
```
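And a minimal diffusers sketch under the same assumptions as above (the LoRA `weight_name` is an assumption; check the Files & versions tab for the actual filename):

```python
import torch
from diffusers import AutoPipelineForText2Image

# Load the FLUX.1-dev base model and apply this LoRA on top of it.
pipe = AutoPipelineForText2Image.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")
pipe.load_lora_weights(
    "FounderFeed/3dAnime-Style-flux-dev-lora",
    weight_name="lora.safetensors",  # assumed filename
)

prompt = "A 3D anime-style cityscape, intricate details, vivid colors, 3dstyle style"
image = pipe(prompt).images[0]
image.save("cityscape.png")
```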
## Image Generation Examples
Example images generated with this model can be viewed by uploading sample outputs in `.jpg`, `.png`, or `.webp` format to this repository.
## License
This model is provided under the CreativeML OpenRAIL-M license, which means it’s free to use for both commercial and non-commercial purposes, with proper credit to the original model creators.
## Disclaimer
Please note that while the model produces high-quality outputs, results may vary depending on prompt specificity and the intended level of detail.
## Trigger words
You should use `3dstyle style` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/FounderFeed/3dAnime-Style-flux-dev-lora/tree/main) them in the Files & versions tab.
| homeb82784/gemma-2-9b-it-v2.0 | homeb82784 | 2024-11-15T11:49:30Z | 5 | 0 | transformers | ["transformers", "safetensors", "gemma2", "text-generation", "unsloth", "trl", "sft", "krx", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2024-11-15T05:48:58Z |
---
library_name: transformers
tags:
- unsloth
- trl
- sft
- krx
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| research-dump/roberta-base_wikiquote_outcome_prediction_v1 | research-dump | 2024-11-15T11:47:26Z | 109 | 0 | transformers | ["transformers", "tensorboard", "safetensors", "roberta", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-classification | 2024-11-12T10:42:17Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| win10/Llama-3-Taiwan-13.3B-Instruct | win10 | 2024-11-15T11:44:49Z | 6 | 0 | transformers | ["transformers", "safetensors", "llama", "text-generation", "mergekit", "merge", "conversational", "base_model:yentinglin/Llama-3-Taiwan-8B-Instruct", "base_model:finetune:yentinglin/Llama-3-Taiwan-8B-Instruct", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2024-11-15T11:39:20Z |
---
base_model:
- yentinglin/Llama-3-Taiwan-8B-Instruct
library_name: transformers
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the passthrough merge method.
### Models Merged
The following models were included in the merge:
* [yentinglin/Llama-3-Taiwan-8B-Instruct](https://huggingface.co/yentinglin/Llama-3-Taiwan-8B-Instruct)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
dtype: bfloat16
merge_method: passthrough
slices:
- sources:
- layer_range: [0, 8]
model: yentinglin/Llama-3-Taiwan-8B-Instruct
- sources:
- layer_range: [4, 12]
model: yentinglin/Llama-3-Taiwan-8B-Instruct
- sources:
- layer_range: [8, 16]
model: yentinglin/Llama-3-Taiwan-8B-Instruct
- sources:
- layer_range: [12, 20]
model: yentinglin/Llama-3-Taiwan-8B-Instruct
- sources:
- layer_range: [16, 24]
model: yentinglin/Llama-3-Taiwan-8B-Instruct
- sources:
- layer_range: [20, 28]
model: yentinglin/Llama-3-Taiwan-8B-Instruct
- sources:
- layer_range: [24, 32]
model: yentinglin/Llama-3-Taiwan-8B-Instruct
```
| RichardErkhov/mariavilla_-_gemma2-gguf | RichardErkhov | 2024-11-15T11:44:23Z | 8 | 0 | null | ["gguf", "arxiv:1910.09700", "endpoints_compatible", "region:us", "conversational"] | null | 2024-11-15T10:27:53Z |
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
gemma2 - GGUF
- Model creator: https://huggingface.co/mariavilla/
- Original model: https://huggingface.co/mariavilla/gemma2/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [gemma2.Q2_K.gguf](https://huggingface.co/RichardErkhov/mariavilla_-_gemma2-gguf/blob/main/gemma2.Q2_K.gguf) | Q2_K | 1.08GB |
| [gemma2.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/mariavilla_-_gemma2-gguf/blob/main/gemma2.Q3_K_S.gguf) | Q3_K_S | 1.2GB |
| [gemma2.Q3_K.gguf](https://huggingface.co/RichardErkhov/mariavilla_-_gemma2-gguf/blob/main/gemma2.Q3_K.gguf) | Q3_K | 1.29GB |
| [gemma2.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/mariavilla_-_gemma2-gguf/blob/main/gemma2.Q3_K_M.gguf) | Q3_K_M | 1.29GB |
| [gemma2.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/mariavilla_-_gemma2-gguf/blob/main/gemma2.Q3_K_L.gguf) | Q3_K_L | 1.36GB |
| [gemma2.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/mariavilla_-_gemma2-gguf/blob/main/gemma2.IQ4_XS.gguf) | IQ4_XS | 1.4GB |
| [gemma2.Q4_0.gguf](https://huggingface.co/RichardErkhov/mariavilla_-_gemma2-gguf/blob/main/gemma2.Q4_0.gguf) | Q4_0 | 1.44GB |
| [gemma2.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/mariavilla_-_gemma2-gguf/blob/main/gemma2.IQ4_NL.gguf) | IQ4_NL | 1.45GB |
| [gemma2.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/mariavilla_-_gemma2-gguf/blob/main/gemma2.Q4_K_S.gguf) | Q4_K_S | 1.45GB |
| [gemma2.Q4_K.gguf](https://huggingface.co/RichardErkhov/mariavilla_-_gemma2-gguf/blob/main/gemma2.Q4_K.gguf) | Q4_K | 1.52GB |
| [gemma2.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/mariavilla_-_gemma2-gguf/blob/main/gemma2.Q4_K_M.gguf) | Q4_K_M | 1.52GB |
| [gemma2.Q4_1.gguf](https://huggingface.co/RichardErkhov/mariavilla_-_gemma2-gguf/blob/main/gemma2.Q4_1.gguf) | Q4_1 | 1.56GB |
| [gemma2.Q5_0.gguf](https://huggingface.co/RichardErkhov/mariavilla_-_gemma2-gguf/blob/main/gemma2.Q5_0.gguf) | Q5_0 | 1.68GB |
| [gemma2.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/mariavilla_-_gemma2-gguf/blob/main/gemma2.Q5_K_S.gguf) | Q5_K_S | 1.68GB |
| [gemma2.Q5_K.gguf](https://huggingface.co/RichardErkhov/mariavilla_-_gemma2-gguf/blob/main/gemma2.Q5_K.gguf) | Q5_K | 1.71GB |
| [gemma2.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/mariavilla_-_gemma2-gguf/blob/main/gemma2.Q5_K_M.gguf) | Q5_K_M | 1.71GB |
| [gemma2.Q5_1.gguf](https://huggingface.co/RichardErkhov/mariavilla_-_gemma2-gguf/blob/main/gemma2.Q5_1.gguf) | Q5_1 | 1.79GB |
| [gemma2.Q6_K.gguf](https://huggingface.co/RichardErkhov/mariavilla_-_gemma2-gguf/blob/main/gemma2.Q6_K.gguf) | Q6_K | 1.92GB |
| [gemma2.Q8_0.gguf](https://huggingface.co/RichardErkhov/mariavilla_-_gemma2-gguf/blob/main/gemma2.Q8_0.gguf) | Q8_0 | 2.49GB |
Original model description:
---
library_name: transformers
tags:
- llama-factory
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Kapzo/demo-donut_extraction-v5
|
Kapzo
| 2024-11-15T11:43:58Z
| 13
| 0
|
transformers
|
[
"transformers",
"safetensors",
"vision-encoder-decoder",
"image-text-to-text",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] |
image-text-to-text
| 2024-11-15T06:04:20Z
|
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
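The repo tags (`vision-encoder-decoder`, `image-text-to-text`) suggest a Donut-style document-extraction model, so a minimal sketch along those lines may help; the task-start token and input file below are assumptions, not documented by this card:
```python
from PIL import Image
from transformers import AutoProcessor, VisionEncoderDecoderModel

repo = "Kapzo/demo-donut_extraction-v5"
processor = AutoProcessor.from_pretrained(repo)
model = VisionEncoderDecoderModel.from_pretrained(repo)

image = Image.open("document.png").convert("RGB")  # illustrative input file
pixel_values = processor(image, return_tensors="pt").pixel_values

# "<s_extraction>" is a guessed task-start token; real checkpoints define their own.
decoder_input_ids = processor.tokenizer(
    "<s_extraction>", add_special_tokens=False, return_tensors="pt"
).input_ids

outputs = model.generate(
    pixel_values, decoder_input_ids=decoder_input_ids, max_new_tokens=256
)
print(processor.batch_decode(outputs, skip_special_tokens=True)[0])
```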
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
RichardErkhov/ai4bharat_-_hercule-hi-gguf
|
RichardErkhov
| 2024-11-15T11:41:48Z
| 6
| 0
| null |
[
"gguf",
"arxiv:2410.13394",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-11-15T08:02:07Z
|
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
hercule-hi - GGUF
- Model creator: https://huggingface.co/ai4bharat/
- Original model: https://huggingface.co/ai4bharat/hercule-hi/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [hercule-hi.Q2_K.gguf](https://huggingface.co/RichardErkhov/ai4bharat_-_hercule-hi-gguf/blob/main/hercule-hi.Q2_K.gguf) | Q2_K | 2.96GB |
| [hercule-hi.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/ai4bharat_-_hercule-hi-gguf/blob/main/hercule-hi.Q3_K_S.gguf) | Q3_K_S | 3.41GB |
| [hercule-hi.Q3_K.gguf](https://huggingface.co/RichardErkhov/ai4bharat_-_hercule-hi-gguf/blob/main/hercule-hi.Q3_K.gguf) | Q3_K | 3.74GB |
| [hercule-hi.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/ai4bharat_-_hercule-hi-gguf/blob/main/hercule-hi.Q3_K_M.gguf) | Q3_K_M | 3.74GB |
| [hercule-hi.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/ai4bharat_-_hercule-hi-gguf/blob/main/hercule-hi.Q3_K_L.gguf) | Q3_K_L | 4.03GB |
| [hercule-hi.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/ai4bharat_-_hercule-hi-gguf/blob/main/hercule-hi.IQ4_XS.gguf) | IQ4_XS | 4.18GB |
| [hercule-hi.Q4_0.gguf](https://huggingface.co/RichardErkhov/ai4bharat_-_hercule-hi-gguf/blob/main/hercule-hi.Q4_0.gguf) | Q4_0 | 4.34GB |
| [hercule-hi.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/ai4bharat_-_hercule-hi-gguf/blob/main/hercule-hi.IQ4_NL.gguf) | IQ4_NL | 4.38GB |
| [hercule-hi.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/ai4bharat_-_hercule-hi-gguf/blob/main/hercule-hi.Q4_K_S.gguf) | Q4_K_S | 4.37GB |
| [hercule-hi.Q4_K.gguf](https://huggingface.co/RichardErkhov/ai4bharat_-_hercule-hi-gguf/blob/main/hercule-hi.Q4_K.gguf) | Q4_K | 4.58GB |
| [hercule-hi.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/ai4bharat_-_hercule-hi-gguf/blob/main/hercule-hi.Q4_K_M.gguf) | Q4_K_M | 4.58GB |
| [hercule-hi.Q4_1.gguf](https://huggingface.co/RichardErkhov/ai4bharat_-_hercule-hi-gguf/blob/main/hercule-hi.Q4_1.gguf) | Q4_1 | 4.78GB |
| [hercule-hi.Q5_0.gguf](https://huggingface.co/RichardErkhov/ai4bharat_-_hercule-hi-gguf/blob/main/hercule-hi.Q5_0.gguf) | Q5_0 | 5.21GB |
| [hercule-hi.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/ai4bharat_-_hercule-hi-gguf/blob/main/hercule-hi.Q5_K_S.gguf) | Q5_K_S | 5.21GB |
| [hercule-hi.Q5_K.gguf](https://huggingface.co/RichardErkhov/ai4bharat_-_hercule-hi-gguf/blob/main/hercule-hi.Q5_K.gguf) | Q5_K | 5.34GB |
| [hercule-hi.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/ai4bharat_-_hercule-hi-gguf/blob/main/hercule-hi.Q5_K_M.gguf) | Q5_K_M | 5.34GB |
| [hercule-hi.Q5_1.gguf](https://huggingface.co/RichardErkhov/ai4bharat_-_hercule-hi-gguf/blob/main/hercule-hi.Q5_1.gguf) | Q5_1 | 5.65GB |
| [hercule-hi.Q6_K.gguf](https://huggingface.co/RichardErkhov/ai4bharat_-_hercule-hi-gguf/blob/main/hercule-hi.Q6_K.gguf) | Q6_K | 6.14GB |
| [hercule-hi.Q8_0.gguf](https://huggingface.co/RichardErkhov/ai4bharat_-_hercule-hi-gguf/blob/main/hercule-hi.Q8_0.gguf) | Q8_0 | 7.95GB |
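To fetch a single quant from the table above programmatically, the `huggingface_hub` client can be used; a minimal sketch, picking the Q4_K_M file:
```python
from huggingface_hub import hf_hub_download

# Downloads the Q4_K_M quant from the table above into the local HF cache
# and returns the resolved file path.
path = hf_hub_download(
    repo_id="RichardErkhov/ai4bharat_-_hercule-hi-gguf",
    filename="hercule-hi.Q4_K_M.gguf",
)
print(path)
```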
Original model description:
---
library_name: transformers
license: mit
language:
- hi
metrics:
- pearsonr
- spearmanr
- accuracy
base_model:
- meta-llama/Llama-3.1-8B-Instruct
pipeline_tag: text-generation
---
# Model Card for Hercule
Hercule is a cross-lingual evaluation model introduced as part of the CIA Suite to assess multilingual Large Language Models (LLMs). It addresses the challenge of evaluating multilingual LLMs by using English reference responses to score multilingual outputs.
Fine-tuned on the INTEL dataset, Hercule demonstrates better alignment with human judgments on the RECON test set than zero-shot evaluations by proprietary models like GPT-4. It excels particularly in low-resource scenarios and supports zero-shot evaluations on unseen languages. The model employs reference-based evaluation, providing feedback and scores on a 1-5 scale, and highlights the effectiveness of lightweight fine-tuning methods (like LoRA) for efficient multilingual evaluation. All FFT models and LoRA weights are available [here](https://huggingface.co/collections/ai4bharat/cia-suite-66ea9a7e18a6c70bd8de27a1).
# Model Details
## Model Description
- **Model type:** Evaluator Language model
- **Language(s) (NLP):** Hindi
- **Related Models:** [Hercule Models](https://huggingface.co/collections/ai4bharat/cia-suite-66ea9a7e18a6c70bd8de27a1)
- **Resources for more information:**
- [Research paper](https://arxiv.org/abs/2410.13394)
- [GitHub Repo](https://github.com/AI4Bharat/CIA)
Hercule is fine-tuned on [Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Llama-3.1-8B-Instruct) using the INTEL training data and evaluated on the RECON test set. Models for other languages are available in the [CIA Suite](https://huggingface.co/collections/ai4bharat/cia-suite-66ea9a7e18a6c70bd8de27a1).
## Prompt Format
We’ve developed wrapper functions and classes to make it easy to work with Hercule. Check them out in our [GitHub repository](https://github.com/AI4Bharat/CIA) – we highly recommend using them!
If you only need to use the model for your specific use case, please follow the prompt format provided below.
### Reference Guided Direct Assessment
The Hercule model expects four input components: an evaluation instruction (multilingual), a response to evaluate (multilingual), a scoring rubric (English), and a reference answer (English). Use the prompt format provided below, ensuring that you include the instruction, response, reference answer, evaluation criteria, and a detailed score rubric for each score from 1 to 5.
After running inference with Hercule, the output will include feedback and a score, separated by the marker `[RESULT]` (see the parsing sketch after the prompt below).
```
###Task Description:
An instruction (might include an Input inside it), a response to evaluate, a reference answer that gets a score of 5, and a score rubric representing a evaluation criteria are given.
1. Write a detailed feedback that assess the quality of the response strictly based on the given score rubric, not evaluating in general.
2. After writing a feedback, write a score that is an integer between 1 and 5. You should refer to the score rubric.
3. The output format should look as follows: \"Feedback: (write a feedback for criteria) [RESULT] (an integer number between 1 and 5)\"
4. Please do not generate any other opening, closing, and explanations.
###The instruction to evaluate:
{instruction}
###Response to evaluate:
{response}
###Reference Answer (Score 5):
{reference_answer}
###Score Rubrics:
[{criteria}]
Score 1: {score1_rubric}
Score 2: {score2_rubric}
Score 3: {score3_rubric}
Score 4: {score4_rubric}
Score 5: {score5_rubric}
###Feedback:
```
We use the same evaluation prompt as used in [Prometheus 2](https://huggingface.co/prometheus-eval/prometheus-7b-v2.0).
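A minimal sketch of parsing the generation, assuming the output follows the `Feedback: ... [RESULT] <score>` format described above (the official wrappers in the CIA repo handle this for you):
```python
import re

def parse_hercule_output(generated: str) -> tuple[str, int]:
    """Split a Hercule generation into (feedback, score) on the [RESULT] marker."""
    feedback, sep, result = generated.partition("[RESULT]")
    match = re.search(r"[1-5]", result)
    if not sep or match is None:
        raise ValueError("no score found after [RESULT]")
    return feedback.strip(), int(match.group())

# Example:
# parse_hercule_output("Feedback: faithful and well grounded. [RESULT] 4")
# -> ("Feedback: faithful and well grounded.", 4)
```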
## Links for Reference
- **Repository**: https://github.com/AI4Bharat/CIA
- **Paper**: https://arxiv.org/abs/2410.13394
- **Point of Contact**: [email protected], [email protected]
## License
The INTEL training data is created from the [Feedback Collection](https://huggingface.co/datasets/prometheus-eval/Feedback-Collection), which is subject to OpenAI's Terms of Use for the generated data. If you suspect any violations, please reach out to us.
# Citation
If you find the following model helpful, please consider citing our paper!
**BibTeX:**
```bibtex
@article{doddapaneni2024crosslingual,
title = {Cross-Lingual Auto Evaluation for Assessing Multilingual LLMs},
author = {Sumanth Doddapaneni and Mohammed Safi Ur Rahman Khan and Dilip Venkatesh and Raj Dabre and Anoop Kunchukuttan and Mitesh M. Khapra},
year = {2024},
journal = {arXiv preprint arXiv: 2410.13394}
}
```
|
research-dump/bert-large-uncased_wikiquote_outcome_prediction_v1
|
research-dump
| 2024-11-15T11:36:59Z
| 107
| 0
|
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-11-11T22:54:25Z
|
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
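A minimal sketch, assuming a standard text-classification head; the card documents neither the label set nor the expected input format, so the input string is purely illustrative:
```python
from transformers import pipeline

# Loads the checkpoint with the generic text-classification pipeline.
clf = pipeline(
    "text-classification",
    model="research-dump/bert-large-uncased_wikiquote_outcome_prediction_v1",
)
print(clf("Example Wikiquote deletion-discussion text."))  # illustrative input
```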
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
pixeldoggo/poca-SoccerTwos
|
pixeldoggo
| 2024-11-15T11:34:39Z
| 9
| 0
|
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"SoccerTwos",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SoccerTwos",
"region:us"
] |
reinforcement-learning
| 2024-11-15T11:34:31Z
|
---
library_name: ml-agents
tags:
- SoccerTwos
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SoccerTwos
---
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial for training your first agent with ML-Agents and publishing it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **play directly in your browser**:
1. If the environment is part of the official ML-Agents environments, go to https://huggingface.co/unity
2. Find your model_id: pixeldoggo/poca-SoccerTwos
3. Select your *.nn or *.onnx file
4. Click on Watch the agent play 👀
|
wishwarrior/my-first-repo
|
wishwarrior
| 2024-11-15T11:33:31Z
| 190
| 0
|
transformers
|
[
"transformers",
"safetensors",
"resnet_check_001",
"image-classification",
"custom_code",
"arxiv:1910.09700",
"autotrain_compatible",
"region:us"
] |
image-classification
| 2024-11-15T11:10:19Z
|
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
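A minimal sketch of the usual pipeline flow; since the repo is tagged `custom_code`, remote code must be trusted explicitly, and the image path is illustrative:
```python
from transformers import pipeline

# trust_remote_code=True is required because the repo ships custom model code.
clf = pipeline(
    "image-classification",
    model="wishwarrior/my-first-repo",
    trust_remote_code=True,
)
print(clf("example.jpg"))  # illustrative input image
```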
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
mradermacher/aixcoder-7b-base-i1-GGUF
|
mradermacher
| 2024-11-15T11:32:09Z
| 33
| 0
|
transformers
|
[
"transformers",
"gguf",
"en",
"base_model:aiXcoder/aixcoder-7b-base",
"base_model:quantized:aiXcoder/aixcoder-7b-base",
"endpoints_compatible",
"region:us",
"imatrix"
] | null | 2024-11-15T10:10:49Z
|
---
base_model: aiXcoder/aixcoder-7b-base
language:
- en
library_name: transformers
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/aiXcoder/aixcoder-7b-base
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/aixcoder-7b-base-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
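For example, a minimal sketch with [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), assuming you have downloaded the i1-Q4_K_M file from the table below (the prompt is illustrative):
```python
from llama_cpp import Llama

# Load a local GGUF quant and run a short completion.
llm = Llama(model_path="aixcoder-7b-base.i1-Q4_K_M.gguf")
out = llm("def quicksort(arr):", max_tokens=64)
print(out["choices"][0]["text"])
```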
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/aixcoder-7b-base-i1-GGUF/resolve/main/aixcoder-7b-base.i1-IQ1_S.gguf) | i1-IQ1_S | 2.5 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/aixcoder-7b-base-i1-GGUF/resolve/main/aixcoder-7b-base.i1-IQ1_M.gguf) | i1-IQ1_M | 2.6 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/aixcoder-7b-base-i1-GGUF/resolve/main/aixcoder-7b-base.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.7 | |
| [GGUF](https://huggingface.co/mradermacher/aixcoder-7b-base-i1-GGUF/resolve/main/aixcoder-7b-base.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/aixcoder-7b-base-i1-GGUF/resolve/main/aixcoder-7b-base.i1-IQ2_S.gguf) | i1-IQ2_S | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/aixcoder-7b-base-i1-GGUF/resolve/main/aixcoder-7b-base.i1-IQ2_M.gguf) | i1-IQ2_M | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/aixcoder-7b-base-i1-GGUF/resolve/main/aixcoder-7b-base.i1-Q2_K.gguf) | i1-Q2_K | 3.2 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/aixcoder-7b-base-i1-GGUF/resolve/main/aixcoder-7b-base.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 3.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/aixcoder-7b-base-i1-GGUF/resolve/main/aixcoder-7b-base.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.5 | |
| [GGUF](https://huggingface.co/mradermacher/aixcoder-7b-base-i1-GGUF/resolve/main/aixcoder-7b-base.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.6 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/aixcoder-7b-base-i1-GGUF/resolve/main/aixcoder-7b-base.i1-IQ3_S.gguf) | i1-IQ3_S | 3.6 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/aixcoder-7b-base-i1-GGUF/resolve/main/aixcoder-7b-base.i1-IQ3_M.gguf) | i1-IQ3_M | 3.7 | |
| [GGUF](https://huggingface.co/mradermacher/aixcoder-7b-base-i1-GGUF/resolve/main/aixcoder-7b-base.i1-Q3_K_M.gguf) | i1-Q3_K_M | 4.0 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/aixcoder-7b-base-i1-GGUF/resolve/main/aixcoder-7b-base.i1-Q3_K_L.gguf) | i1-Q3_K_L | 4.2 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/aixcoder-7b-base-i1-GGUF/resolve/main/aixcoder-7b-base.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/aixcoder-7b-base-i1-GGUF/resolve/main/aixcoder-7b-base.i1-Q4_0_4_4.gguf) | i1-Q4_0_4_4 | 4.3 | fast on arm, low quality |
| [GGUF](https://huggingface.co/mradermacher/aixcoder-7b-base-i1-GGUF/resolve/main/aixcoder-7b-base.i1-Q4_0_4_8.gguf) | i1-Q4_0_4_8 | 4.3 | fast on arm+i8mm, low quality |
| [GGUF](https://huggingface.co/mradermacher/aixcoder-7b-base-i1-GGUF/resolve/main/aixcoder-7b-base.i1-Q4_0_8_8.gguf) | i1-Q4_0_8_8 | 4.3 | fast on arm+sve, low quality |
| [GGUF](https://huggingface.co/mradermacher/aixcoder-7b-base-i1-GGUF/resolve/main/aixcoder-7b-base.i1-Q4_0.gguf) | i1-Q4_0 | 4.3 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/aixcoder-7b-base-i1-GGUF/resolve/main/aixcoder-7b-base.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.6 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/aixcoder-7b-base-i1-GGUF/resolve/main/aixcoder-7b-base.i1-Q4_K_M.gguf) | i1-Q4_K_M | 4.9 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/aixcoder-7b-base-i1-GGUF/resolve/main/aixcoder-7b-base.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/aixcoder-7b-base-i1-GGUF/resolve/main/aixcoder-7b-base.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/aixcoder-7b-base-i1-GGUF/resolve/main/aixcoder-7b-base.i1-Q6_K.gguf) | i1-Q6_K | 6.7 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
mradermacher/aixcoder-7b-base-GGUF
|
mradermacher
| 2024-11-15T11:32:09Z
| 14
| 0
|
transformers
|
[
"transformers",
"gguf",
"en",
"base_model:aiXcoder/aixcoder-7b-base",
"base_model:quantized:aiXcoder/aixcoder-7b-base",
"endpoints_compatible",
"region:us"
] | null | 2024-11-13T00:55:34Z
|
---
base_model: aiXcoder/aixcoder-7b-base
language:
- en
library_name: transformers
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/aiXcoder/aixcoder-7b-base
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/aixcoder-7b-base-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/aixcoder-7b-base-GGUF/resolve/main/aixcoder-7b-base.Q2_K.gguf) | Q2_K | 3.2 | |
| [GGUF](https://huggingface.co/mradermacher/aixcoder-7b-base-GGUF/resolve/main/aixcoder-7b-base.Q3_K_S.gguf) | Q3_K_S | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/aixcoder-7b-base-GGUF/resolve/main/aixcoder-7b-base.Q3_K_M.gguf) | Q3_K_M | 4.0 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/aixcoder-7b-base-GGUF/resolve/main/aixcoder-7b-base.Q3_K_L.gguf) | Q3_K_L | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/aixcoder-7b-base-GGUF/resolve/main/aixcoder-7b-base.IQ4_XS.gguf) | IQ4_XS | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/aixcoder-7b-base-GGUF/resolve/main/aixcoder-7b-base.Q4_0_4_4.gguf) | Q4_0_4_4 | 4.3 | fast on arm, low quality |
| [GGUF](https://huggingface.co/mradermacher/aixcoder-7b-base-GGUF/resolve/main/aixcoder-7b-base.Q4_K_S.gguf) | Q4_K_S | 4.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/aixcoder-7b-base-GGUF/resolve/main/aixcoder-7b-base.Q4_K_M.gguf) | Q4_K_M | 4.9 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/aixcoder-7b-base-GGUF/resolve/main/aixcoder-7b-base.Q5_K_S.gguf) | Q5_K_S | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/aixcoder-7b-base-GGUF/resolve/main/aixcoder-7b-base.Q5_K_M.gguf) | Q5_K_M | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/aixcoder-7b-base-GGUF/resolve/main/aixcoder-7b-base.Q6_K.gguf) | Q6_K | 6.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/aixcoder-7b-base-GGUF/resolve/main/aixcoder-7b-base.Q8_0.gguf) | Q8_0 | 8.0 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/aixcoder-7b-base-GGUF/resolve/main/aixcoder-7b-base.f16.gguf) | f16 | 15.0 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/starcoder2-15b-instruct-GGUF
|
mradermacher
| 2024-11-15T11:18:09Z
| 82
| 0
|
transformers
|
[
"transformers",
"gguf",
"code",
"starcoder2",
"en",
"base_model:TechxGenus/starcoder2-15b-instruct",
"base_model:quantized:TechxGenus/starcoder2-15b-instruct",
"license:bigcode-openrail-m",
"endpoints_compatible",
"region:us"
] | null | 2024-11-15T10:43:49Z
|
---
base_model: TechxGenus/starcoder2-15b-instruct
language:
- en
library_name: transformers
license: bigcode-openrail-m
quantized_by: mradermacher
tags:
- code
- starcoder2
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/TechxGenus/starcoder2-15b-instruct
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/starcoder2-15b-instruct-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/starcoder2-15b-instruct-GGUF/resolve/main/starcoder2-15b-instruct.Q2_K.gguf) | Q2_K | 6.3 | |
| [GGUF](https://huggingface.co/mradermacher/starcoder2-15b-instruct-GGUF/resolve/main/starcoder2-15b-instruct.Q3_K_S.gguf) | Q3_K_S | 7.1 | |
| [GGUF](https://huggingface.co/mradermacher/starcoder2-15b-instruct-GGUF/resolve/main/starcoder2-15b-instruct.Q3_K_M.gguf) | Q3_K_M | 8.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/starcoder2-15b-instruct-GGUF/resolve/main/starcoder2-15b-instruct.IQ4_XS.gguf) | IQ4_XS | 8.8 | |
| [GGUF](https://huggingface.co/mradermacher/starcoder2-15b-instruct-GGUF/resolve/main/starcoder2-15b-instruct.Q3_K_L.gguf) | Q3_K_L | 9.1 | |
| [GGUF](https://huggingface.co/mradermacher/starcoder2-15b-instruct-GGUF/resolve/main/starcoder2-15b-instruct.Q4_0_4_4.gguf) | Q4_0_4_4 | 9.2 | fast on arm, low quality |
| [GGUF](https://huggingface.co/mradermacher/starcoder2-15b-instruct-GGUF/resolve/main/starcoder2-15b-instruct.Q4_K_S.gguf) | Q4_K_S | 9.3 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/starcoder2-15b-instruct-GGUF/resolve/main/starcoder2-15b-instruct.Q4_K_M.gguf) | Q4_K_M | 10.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/starcoder2-15b-instruct-GGUF/resolve/main/starcoder2-15b-instruct.Q5_K_S.gguf) | Q5_K_S | 11.1 | |
| [GGUF](https://huggingface.co/mradermacher/starcoder2-15b-instruct-GGUF/resolve/main/starcoder2-15b-instruct.Q5_K_M.gguf) | Q5_K_M | 11.5 | |
| [GGUF](https://huggingface.co/mradermacher/starcoder2-15b-instruct-GGUF/resolve/main/starcoder2-15b-instruct.Q6_K.gguf) | Q6_K | 13.2 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/starcoder2-15b-instruct-GGUF/resolve/main/starcoder2-15b-instruct.Q8_0.gguf) | Q8_0 | 17.1 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
JuniperChinenye/d4
|
JuniperChinenye
| 2024-11-15T11:14:10Z
| 7
| 0
|
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-11-15T11:11:42Z
|
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
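A minimal sketch, assuming a standard causal-LM checkpoint; the card documents no chat template or intended prompting, so the prompt below is purely illustrative:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "JuniperChinenye/d4"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(
    repo, torch_dtype="auto", device_map="auto"
)

# Tokenize an illustrative prompt and generate a short continuation.
inputs = tokenizer("Hello, world!", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```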
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Ghaythfd/Llama3.1_8b_finetuned_revised_v1.1
|
Ghaythfd
| 2024-11-15T11:13:06Z
| 10
| 0
| null |
[
"gguf",
"llama",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-11-15T10:05:04Z
|
---
license: apache-2.0
---
|
Twipsy/vit-base-oxford-iiit-pets
|
Twipsy
| 2024-11-15T11:07:04Z
| 193
| 0
|
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"vit",
"image-classification",
"generated_from_trainer",
"base_model:google/vit-base-patch16-224",
"base_model:finetune:google/vit-base-patch16-224",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2024-11-15T10:49:16Z
|
---
library_name: transformers
license: apache-2.0
base_model: google/vit-base-patch16-224
tags:
- image-classification
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: vit-base-oxford-iiit-pets
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-oxford-iiit-pets
This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the pcuenq/oxford-pets dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1763
- Accuracy: 0.9499
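A minimal usage sketch (the image path is illustrative); the model predicts among the 37 breeds of the Oxford-IIIT Pets dataset it was fine-tuned on:
```python
from transformers import pipeline

# Standard image-classification pipeline over the fine-tuned ViT checkpoint.
clf = pipeline("image-classification", model="Twipsy/vit-base-oxford-iiit-pets")
print(clf("my_pet.jpg"))  # illustrative input image
```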
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08 (no additional optimizer arguments)
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.3698 | 1.0 | 370 | 0.2753 | 0.9296 |
| 0.2212 | 2.0 | 740 | 0.2142 | 0.9378 |
| 0.1741 | 3.0 | 1110 | 0.1975 | 0.9432 |
| 0.1546 | 4.0 | 1480 | 0.1899 | 0.9432 |
| 0.1355 | 5.0 | 1850 | 0.1883 | 0.9472 |
### Framework versions
- Transformers 4.46.2
- Pytorch 2.2.1+cu121
- Datasets 3.1.0
- Tokenizers 0.20.3
|
genloop/fin-news-headline-gen-llama-3.2-1B-cpt-checkpoint
|
genloop
| 2024-11-15T10:57:09Z
| 96
| 0
|
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-11-15T10:55:53Z
|
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
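A minimal sketch, assuming a standard causal-LM checkpoint; the repo name suggests a financial-news headline-generation checkpoint from continued pretraining, but the card documents no prompt format, so the input below is purely illustrative:
```python
from transformers import pipeline

# Generic text-generation pipeline over the checkpoint.
generator = pipeline(
    "text-generation",
    model="genloop/fin-news-headline-gen-llama-3.2-1B-cpt-checkpoint",
)
print(generator("Shares of ACME Corp rose 12% after", max_new_tokens=32))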
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Lemorra/Qwen2-VL
|
Lemorra
| 2024-11-15T10:47:44Z
| 14
| 0
|
transformers
|
[
"transformers",
"safetensors",
"qwen2_vl",
"image-text-to-text",
"multimodal",
"conversational",
"en",
"arxiv:2409.12191",
"arxiv:2308.12966",
"license:apache-2.0",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
image-text-to-text
| 2024-11-15T10:47:43Z
|
---
license: apache-2.0
language:
- en
pipeline_tag: image-text-to-text
tags:
- multimodal
library_name: transformers
---
# Qwen2-VL-7B-Instruct
## Introduction
We're excited to unveil **Qwen2-VL**, the latest iteration of our Qwen-VL model, representing nearly a year of innovation.
### What’s New in Qwen2-VL?
#### Key Enhancements:
* **SoTA understanding of images of various resolution & ratio**: Qwen2-VL achieves state-of-the-art performance on visual understanding benchmarks, including MathVista, DocVQA, RealWorldQA, MTVQA, etc.
* **Understanding videos of 20min+**: Qwen2-VL can understand videos over 20 minutes for high-quality video-based question answering, dialog, content creation, etc.
* **Agent that can operate your mobile devices, robots, etc.**: with its abilities in complex reasoning and decision making, Qwen2-VL can be integrated with devices like mobile phones and robots for automatic operation based on the visual environment and text instructions.
* **Multilingual Support**: to serve global users, besides English and Chinese, Qwen2-VL now supports the understanding of texts in different languages inside images, including most European languages, Japanese, Korean, Arabic, Vietnamese, etc.
#### Model Architecture Updates:
* **Naive Dynamic Resolution**: Unlike before, Qwen2-VL can handle arbitrary image resolutions, mapping them into a dynamic number of visual tokens, offering a more human-like visual processing experience.
<p align="center">
<img src="https://qianwen-res.oss-accelerate-overseas.aliyuncs.com/Qwen2-VL/qwen2_vl.jpg" width="80%"/>
</p>
* **Multimodal Rotary Position Embedding (M-ROPE)**: Decomposes positional embedding into parts to capture 1D textual, 2D visual, and 3D video positional information, enhancing its multimodal processing capabilities.
<p align="center">
<img src="http://qianwen-res.oss-accelerate-overseas.aliyuncs.com/Qwen2-VL/mrope.png" width="80%"/>
</p>
We have three models with 2, 7 and 72 billion parameters. This repo contains the instruction-tuned 7B Qwen2-VL model. For more information, visit our [Blog](https://qwenlm.github.io/blog/qwen2-vl/) and [GitHub](https://github.com/QwenLM/Qwen2-VL).
## Evaluation
### Image Benchmarks
| Benchmark | InternVL2-8B | MiniCPM-V 2.6 | GPT-4o-mini | **Qwen2-VL-7B** |
| :--- | :---: | :---: | :---: | :---: |
| MMMU<sub>val</sub> | 51.8 | 49.8 | **60**| 54.1 |
| DocVQA<sub>test</sub> | 91.6 | 90.8 | - | **94.5** |
| InfoVQA<sub>test</sub> | 74.8 | - | - |**76.5** |
| ChartQA<sub>test</sub> | **83.3** | - |- | 83.0 |
| TextVQA<sub>val</sub> | 77.4 | 80.1 | -| **84.3** |
| OCRBench | 794 | **852** | 785 | 845 |
| MTVQA | - | - | -| **26.3** |
| VCR<sub>en easy</sub> | - | 73.88 | 83.60 | **89.70** |
| VCR<sub>zh easy</sub> | - | 10.18| 1.10 | **59.94** |
| RealWorldQA | 64.4 | - | - | **70.1** |
| MME<sub>sum</sub> | 2210.3 | **2348.4** | 2003.4| 2326.8 |
| MMBench-EN<sub>test</sub> | 81.7 | - | - | **83.0** |
| MMBench-CN<sub>test</sub> | **81.2** | - | - | 80.5 |
| MMBench-V1.1<sub>test</sub> | 79.4 | 78.0 | 76.0| **80.7** |
| MMT-Bench<sub>test</sub> | - | - | - |**63.7** |
| MMStar | **61.5** | 57.5 | 54.8 | 60.7 |
| MMVet<sub>GPT-4-Turbo</sub> | 54.2 | 60.0 | **66.9** | 62.0 |
| HallBench<sub>avg</sub> | 45.2 | 48.1 | 46.1| **50.6** |
| MathVista<sub>testmini</sub> | 58.3 | **60.6** | 52.4 | 58.2 |
| MathVision | - | - | - | **16.3** |
### Video Benchmarks
| Benchmark | Internvl2-8B | LLaVA-OneVision-7B | MiniCPM-V 2.6 | **Qwen2-VL-7B** |
| :--- | :---: | :---: | :---: | :---: |
| MVBench | 66.4 | 56.7 | - | **67.0** |
| PerceptionTest<sub>test</sub> | - | 57.1 | - | **62.3** |
| EgoSchema<sub>test</sub> | - | 60.1 | - | **66.7** |
| Video-MME<sub>wo/w subs</sub> | 54.0/56.9 | 58.2/- | 60.9/63.6 | **63.3**/**69.0** |
## Requirements
The code for Qwen2-VL is included in the latest Hugging Face transformers, and we advise you to build from source with the command `pip install git+https://github.com/huggingface/transformers`; otherwise you might encounter the following error:
```
KeyError: 'qwen2_vl'
```
## Quickstart
We offer a toolkit to help you handle various types of visual input more conveniently. This includes base64, URLs, and interleaved images and videos. You can install it using the following command:
```bash
pip install qwen-vl-utils
```
Here is a code snippet showing how to use the chat model with `transformers` and `qwen_vl_utils`:
```python
import torch  # needed if you enable the flash_attention_2 variant below
from transformers import Qwen2VLForConditionalGeneration, AutoProcessor
from qwen_vl_utils import process_vision_info
# default: Load the model on the available device(s)
model = Qwen2VLForConditionalGeneration.from_pretrained(
"Qwen/Qwen2-VL-7B-Instruct", torch_dtype="auto", device_map="auto"
)
# We recommend enabling flash_attention_2 for better acceleration and memory saving, especially in multi-image and video scenarios.
# model = Qwen2VLForConditionalGeneration.from_pretrained(
# "Qwen/Qwen2-VL-7B-Instruct",
# torch_dtype=torch.bfloat16,
# attn_implementation="flash_attention_2",
# device_map="auto",
# )
# default processor
processor = AutoProcessor.from_pretrained("Qwen/Qwen2-VL-7B-Instruct")
# The default range for the number of visual tokens per image in the model is 4-16384. You can set min_pixels and max_pixels according to your needs, such as a token count range of 256-1280, to balance speed and memory usage.
# min_pixels = 256*28*28
# max_pixels = 1280*28*28
# processor = AutoProcessor.from_pretrained("Qwen/Qwen2-VL-7B-Instruct", min_pixels=min_pixels, max_pixels=max_pixels)
messages = [
{
"role": "user",
"content": [
{
"type": "image",
"image": "https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen-VL/assets/demo.jpeg",
},
{"type": "text", "text": "Describe this image."},
],
}
]
# Preparation for inference
text = processor.apply_chat_template(
messages, tokenize=False, add_generation_prompt=True
)
image_inputs, video_inputs = process_vision_info(messages)
inputs = processor(
text=[text],
images=image_inputs,
videos=video_inputs,
padding=True,
return_tensors="pt",
)
inputs = inputs.to("cuda")
# Inference: Generation of the output
generated_ids = model.generate(**inputs, max_new_tokens=128)
generated_ids_trimmed = [
out_ids[len(in_ids) :] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)
]
output_text = processor.batch_decode(
generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False
)
print(output_text)
```
<details>
<summary>Without qwen_vl_utils</summary>
```python
from PIL import Image
import requests
import torch
from torchvision import io
from typing import Dict
from transformers import Qwen2VLForConditionalGeneration, AutoTokenizer, AutoProcessor
# Load the model in half-precision on the available device(s)
model = Qwen2VLForConditionalGeneration.from_pretrained(
"Qwen/Qwen2-VL-7B-Instruct", torch_dtype="auto", device_map="auto"
)
processor = AutoProcessor.from_pretrained("Qwen/Qwen2-VL-7B-Instruct")
# Image
url = "https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen-VL/assets/demo.jpeg"
image = Image.open(requests.get(url, stream=True).raw)
conversation = [
{
"role": "user",
"content": [
{
"type": "image",
},
{"type": "text", "text": "Describe this image."},
],
}
]
# Preprocess the inputs
text_prompt = processor.apply_chat_template(conversation, add_generation_prompt=True)
# Expected output: '<|im_start|>system\nYou are a helpful assistant.<|im_end|>\n<|im_start|>user\n<|vision_start|><|image_pad|><|vision_end|>Describe this image.<|im_end|>\n<|im_start|>assistant\n'
inputs = processor(
text=[text_prompt], images=[image], padding=True, return_tensors="pt"
)
inputs = inputs.to("cuda")
# Inference: Generation of the output
output_ids = model.generate(**inputs, max_new_tokens=128)
generated_ids = [
output_ids[len(input_ids) :]
for input_ids, output_ids in zip(inputs.input_ids, output_ids)
]
output_text = processor.batch_decode(
generated_ids, skip_special_tokens=True, clean_up_tokenization_spaces=True
)
print(output_text)
```
</details>
<details>
<summary>Multi image inference</summary>
```python
# Messages containing multiple images and a text query
messages = [
{
"role": "user",
"content": [
{"type": "image", "image": "file:///path/to/image1.jpg"},
{"type": "image", "image": "file:///path/to/image2.jpg"},
{"type": "text", "text": "Identify the similarities between these images."},
],
}
]
# Preparation for inference
text = processor.apply_chat_template(
messages, tokenize=False, add_generation_prompt=True
)
image_inputs, video_inputs = process_vision_info(messages)
inputs = processor(
text=[text],
images=image_inputs,
videos=video_inputs,
padding=True,
return_tensors="pt",
)
inputs = inputs.to("cuda")
# Inference
generated_ids = model.generate(**inputs, max_new_tokens=128)
generated_ids_trimmed = [
out_ids[len(in_ids) :] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)
]
output_text = processor.batch_decode(
generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False
)
print(output_text)
```
</details>
<details>
<summary>Video inference</summary>
```python
# Messages containing a list of images as a video, plus a text query
messages = [
{
"role": "user",
"content": [
{
"type": "video",
"video": [
"file:///path/to/frame1.jpg",
"file:///path/to/frame2.jpg",
"file:///path/to/frame3.jpg",
"file:///path/to/frame4.jpg",
],
"fps": 1.0,
},
{"type": "text", "text": "Describe this video."},
],
}
]
# Messages containing a video and a text query
messages = [
{
"role": "user",
"content": [
{
"type": "video",
"video": "file:///path/to/video1.mp4",
"max_pixels": 360 * 420,
"fps": 1.0,
},
{"type": "text", "text": "Describe this video."},
],
}
]
# Preparation for inference
text = processor.apply_chat_template(
messages, tokenize=False, add_generation_prompt=True
)
image_inputs, video_inputs = process_vision_info(messages)
inputs = processor(
text=[text],
images=image_inputs,
videos=video_inputs,
padding=True,
return_tensors="pt",
)
inputs = inputs.to("cuda")
# Inference
generated_ids = model.generate(**inputs, max_new_tokens=128)
generated_ids_trimmed = [
out_ids[len(in_ids) :] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)
]
output_text = processor.batch_decode(
generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False
)
print(output_text)
```
</details>
<details>
<summary>Batch inference</summary>
```python
# Sample messages for batch inference
messages1 = [
{
"role": "user",
"content": [
{"type": "image", "image": "file:///path/to/image1.jpg"},
{"type": "image", "image": "file:///path/to/image2.jpg"},
{"type": "text", "text": "What are the common elements in these pictures?"},
],
}
]
messages2 = [
{"role": "system", "content": "You are a helpful assistant."},
{"role": "user", "content": "Who are you?"},
]
# Combine messages for batch processing
messages = [messages1, messages2]
# Preparation for batch inference
texts = [
processor.apply_chat_template(msg, tokenize=False, add_generation_prompt=True)
for msg in messages
]
image_inputs, video_inputs = process_vision_info(messages)
inputs = processor(
text=texts,
images=image_inputs,
videos=video_inputs,
padding=True,
return_tensors="pt",
)
inputs = inputs.to("cuda")
# Batch Inference
generated_ids = model.generate(**inputs, max_new_tokens=128)
generated_ids_trimmed = [
out_ids[len(in_ids) :] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)
]
output_texts = processor.batch_decode(
generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False
)
print(output_texts)
```
</details>
### More Usage Tips
For input images, we support local files, base64, and URLs. For videos, we currently only support local files.
```python
# You can directly insert a local file path, a URL, or a base64-encoded image into the position where you want in the text.
## Local file path
messages = [
{
"role": "user",
"content": [
{"type": "image", "image": "file:///path/to/your/image.jpg"},
{"type": "text", "text": "Describe this image."},
],
}
]
## Image URL
messages = [
{
"role": "user",
"content": [
{"type": "image", "image": "http://path/to/your/image.jpg"},
{"type": "text", "text": "Describe this image."},
],
}
]
## Base64 encoded image
messages = [
{
"role": "user",
"content": [
{"type": "image", "image": "data:image;base64,/9j/..."},
{"type": "text", "text": "Describe this image."},
],
}
]
```
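For reference, a `data:image;base64,...` URI like the truncated one above can be produced with Python's standard library (a minimal sketch; the file path is a placeholder):
```python
import base64

# Read an image and wrap it in a data URI accepted by the message format above
with open("/path/to/your/image.jpg", "rb") as f:
    b64 = base64.b64encode(f.read()).decode("utf-8")

image_entry = {"type": "image", "image": f"data:image;base64,{b64}"}
```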
#### Image Resolution for performance boost
The model supports a wide range of resolution inputs. By default, it uses the native resolution for input, but higher resolutions can enhance performance at the cost of more computation. Users can set the minimum and maximum number of pixels to achieve an optimal configuration for their needs, such as a token count range of 256-1280, to balance speed and memory usage.
```python
min_pixels = 256 * 28 * 28
max_pixels = 1280 * 28 * 28
processor = AutoProcessor.from_pretrained(
"Qwen/Qwen2-VL-7B-Instruct", min_pixels=min_pixels, max_pixels=max_pixels
)
```
In addition, we provide two methods for fine-grained control over the image size input to the model:
1. Define `min_pixels` and `max_pixels`: Images will be resized to maintain their aspect ratio within the range of `min_pixels` and `max_pixels`.
2. Specify exact dimensions: Directly set `resized_height` and `resized_width`. These values will be rounded to the nearest multiple of 28.
```python
# resized_height and resized_width
messages = [
{
"role": "user",
"content": [
{
"type": "image",
"image": "file:///path/to/your/image.jpg",
"resized_height": 280,
"resized_width": 420,
},
{"type": "text", "text": "Describe this image."},
],
}
]
# min_pixels and max_pixels
messages = [
{
"role": "user",
"content": [
{
"type": "image",
"image": "file:///path/to/your/image.jpg",
"min_pixels": 50176,
"max_pixels": 50176,
},
{"type": "text", "text": "Describe this image."},
],
}
]
```
## Limitations
While Qwen2-VL is applicable to a wide range of visual tasks, it is equally important to understand its limitations. Here are some known restrictions:
1. Lack of Audio Support: The current model does **not comprehend audio information** within videos.
2. Data timeliness: Our image dataset is **updated until June 2023**, and information subsequent to this date may not be covered.
3. Constraints in Individuals and Intellectual Property (IP): The model's capacity to recognize specific individuals or IPs is limited, potentially failing to comprehensively cover all well-known personalities or brands.
4. Limited Capacity for Complex Instruction: When faced with intricate multi-step instructions, the model's understanding and execution capabilities require enhancement.
5. Insufficient Counting Accuracy: Particularly in complex scenes, the accuracy of object counting is not high, necessitating further improvements.
6. Weak Spatial Reasoning Skills: Especially in 3D spaces, the model's inference of object positional relationships is inadequate, making it difficult to precisely judge the relative positions of objects.
These limitations serve as ongoing directions for model optimization and improvement, and we are committed to continually enhancing the model's performance and scope of application.
## Citation
If you find our work helpful, feel free to give us a cite.
```
@article{Qwen2VL,
title={Qwen2-VL: Enhancing Vision-Language Model's Perception of the World at Any Resolution},
author={Wang, Peng and Bai, Shuai and Tan, Sinan and Wang, Shijie and Fan, Zhihao and Bai, Jinze and Chen, Keqin and Liu, Xuejing and Wang, Jialin and Ge, Wenbin and Fan, Yang and Dang, Kai and Du, Mengfei and Ren, Xuancheng and Men, Rui and Liu, Dayiheng and Zhou, Chang and Zhou, Jingren and Lin, Junyang},
journal={arXiv preprint arXiv:2409.12191},
year={2024}
}
@article{Qwen-VL,
title={Qwen-VL: A Versatile Vision-Language Model for Understanding, Localization, Text Reading, and Beyond},
author={Bai, Jinze and Bai, Shuai and Yang, Shusheng and Wang, Shijie and Tan, Sinan and Wang, Peng and Lin, Junyang and Zhou, Chang and Zhou, Jingren},
journal={arXiv preprint arXiv:2308.12966},
year={2023}
}
```
|
mradermacher/Fimbulvetr-Kuro-Lotus-10.7B-i1-GGUF
|
mradermacher
| 2024-11-15T10:47:13Z
| 19
| 0
|
transformers
|
[
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:saishf/Fimbulvetr-Kuro-Lotus-10.7B",
"base_model:quantized:saishf/Fimbulvetr-Kuro-Lotus-10.7B",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us",
"imatrix"
] | null | 2024-11-15T08:27:35Z
|
---
base_model: saishf/Fimbulvetr-Kuro-Lotus-10.7B
language:
- en
library_name: transformers
license: cc-by-nc-4.0
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/saishf/Fimbulvetr-Kuro-Lotus-10.7B
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Fimbulvetr-Kuro-Lotus-10.7B-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
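As a rough illustration, a multi-part quant could be reassembled and run with llama.cpp along these lines (a sketch only; the part suffixes and binary name depend on the files you download and your llama.cpp version):
```bash
# Reassemble a split quant into a single GGUF file (only needed for multi-part downloads)
cat Fimbulvetr-Kuro-Lotus-10.7B.i1-Q4_K_M.gguf.part1of2 \
    Fimbulvetr-Kuro-Lotus-10.7B.i1-Q4_K_M.gguf.part2of2 \
    > Fimbulvetr-Kuro-Lotus-10.7B.i1-Q4_K_M.gguf

# Run it with llama.cpp (newer builds ship the binary as llama-cli)
llama-cli -m Fimbulvetr-Kuro-Lotus-10.7B.i1-Q4_K_M.gguf -p "Hello"
```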
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similarly sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Fimbulvetr-Kuro-Lotus-10.7B-i1-GGUF/resolve/main/Fimbulvetr-Kuro-Lotus-10.7B.i1-IQ1_S.gguf) | i1-IQ1_S | 2.5 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Fimbulvetr-Kuro-Lotus-10.7B-i1-GGUF/resolve/main/Fimbulvetr-Kuro-Lotus-10.7B.i1-IQ1_M.gguf) | i1-IQ1_M | 2.7 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Fimbulvetr-Kuro-Lotus-10.7B-i1-GGUF/resolve/main/Fimbulvetr-Kuro-Lotus-10.7B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/Fimbulvetr-Kuro-Lotus-10.7B-i1-GGUF/resolve/main/Fimbulvetr-Kuro-Lotus-10.7B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/Fimbulvetr-Kuro-Lotus-10.7B-i1-GGUF/resolve/main/Fimbulvetr-Kuro-Lotus-10.7B.i1-IQ2_S.gguf) | i1-IQ2_S | 3.5 | |
| [GGUF](https://huggingface.co/mradermacher/Fimbulvetr-Kuro-Lotus-10.7B-i1-GGUF/resolve/main/Fimbulvetr-Kuro-Lotus-10.7B.i1-IQ2_M.gguf) | i1-IQ2_M | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/Fimbulvetr-Kuro-Lotus-10.7B-i1-GGUF/resolve/main/Fimbulvetr-Kuro-Lotus-10.7B.i1-Q2_K.gguf) | i1-Q2_K | 4.1 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Fimbulvetr-Kuro-Lotus-10.7B-i1-GGUF/resolve/main/Fimbulvetr-Kuro-Lotus-10.7B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 4.3 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Fimbulvetr-Kuro-Lotus-10.7B-i1-GGUF/resolve/main/Fimbulvetr-Kuro-Lotus-10.7B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 4.5 | |
| [GGUF](https://huggingface.co/mradermacher/Fimbulvetr-Kuro-Lotus-10.7B-i1-GGUF/resolve/main/Fimbulvetr-Kuro-Lotus-10.7B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 4.8 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Fimbulvetr-Kuro-Lotus-10.7B-i1-GGUF/resolve/main/Fimbulvetr-Kuro-Lotus-10.7B.i1-IQ3_S.gguf) | i1-IQ3_S | 4.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Fimbulvetr-Kuro-Lotus-10.7B-i1-GGUF/resolve/main/Fimbulvetr-Kuro-Lotus-10.7B.i1-IQ3_M.gguf) | i1-IQ3_M | 4.9 | |
| [GGUF](https://huggingface.co/mradermacher/Fimbulvetr-Kuro-Lotus-10.7B-i1-GGUF/resolve/main/Fimbulvetr-Kuro-Lotus-10.7B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 5.3 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Fimbulvetr-Kuro-Lotus-10.7B-i1-GGUF/resolve/main/Fimbulvetr-Kuro-Lotus-10.7B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 5.8 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Fimbulvetr-Kuro-Lotus-10.7B-i1-GGUF/resolve/main/Fimbulvetr-Kuro-Lotus-10.7B.i1-IQ4_XS.gguf) | i1-IQ4_XS | 5.9 | |
| [GGUF](https://huggingface.co/mradermacher/Fimbulvetr-Kuro-Lotus-10.7B-i1-GGUF/resolve/main/Fimbulvetr-Kuro-Lotus-10.7B.i1-Q4_0_4_4.gguf) | i1-Q4_0_4_4 | 6.2 | fast on arm, low quality |
| [GGUF](https://huggingface.co/mradermacher/Fimbulvetr-Kuro-Lotus-10.7B-i1-GGUF/resolve/main/Fimbulvetr-Kuro-Lotus-10.7B.i1-Q4_0_4_8.gguf) | i1-Q4_0_4_8 | 6.2 | fast on arm+i8mm, low quality |
| [GGUF](https://huggingface.co/mradermacher/Fimbulvetr-Kuro-Lotus-10.7B-i1-GGUF/resolve/main/Fimbulvetr-Kuro-Lotus-10.7B.i1-Q4_0_8_8.gguf) | i1-Q4_0_8_8 | 6.2 | fast on arm+sve, low quality |
| [GGUF](https://huggingface.co/mradermacher/Fimbulvetr-Kuro-Lotus-10.7B-i1-GGUF/resolve/main/Fimbulvetr-Kuro-Lotus-10.7B.i1-Q4_0.gguf) | i1-Q4_0 | 6.2 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Fimbulvetr-Kuro-Lotus-10.7B-i1-GGUF/resolve/main/Fimbulvetr-Kuro-Lotus-10.7B.i1-Q4_K_S.gguf) | i1-Q4_K_S | 6.2 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Fimbulvetr-Kuro-Lotus-10.7B-i1-GGUF/resolve/main/Fimbulvetr-Kuro-Lotus-10.7B.i1-Q4_K_M.gguf) | i1-Q4_K_M | 6.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Fimbulvetr-Kuro-Lotus-10.7B-i1-GGUF/resolve/main/Fimbulvetr-Kuro-Lotus-10.7B.i1-Q5_K_S.gguf) | i1-Q5_K_S | 7.5 | |
| [GGUF](https://huggingface.co/mradermacher/Fimbulvetr-Kuro-Lotus-10.7B-i1-GGUF/resolve/main/Fimbulvetr-Kuro-Lotus-10.7B.i1-Q5_K_M.gguf) | i1-Q5_K_M | 7.7 | |
| [GGUF](https://huggingface.co/mradermacher/Fimbulvetr-Kuro-Lotus-10.7B-i1-GGUF/resolve/main/Fimbulvetr-Kuro-Lotus-10.7B.i1-Q6_K.gguf) | i1-Q6_K | 8.9 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
mradermacher/Llama3.1-8B-ShiningValiant2-GGUF
|
mradermacher
| 2024-11-15T10:34:10Z
| 30
| 0
|
transformers
|
[
"transformers",
"gguf",
"shining-valiant",
"shining-valiant-2",
"valiant",
"valiant-labs",
"llama",
"llama-3.1",
"llama-3.1-instruct",
"llama-3.1-instruct-8b",
"llama-3",
"llama-3-instruct",
"llama-3-instruct-8b",
"8b",
"science",
"physics",
"biology",
"chemistry",
"compsci",
"computer-science",
"engineering",
"technical",
"conversational",
"chat",
"instruct",
"en",
"dataset:sequelbox/Celestia",
"dataset:sequelbox/Spurline",
"dataset:sequelbox/Supernova",
"base_model:ValiantLabs/Llama3.1-8B-ShiningValiant2",
"base_model:quantized:ValiantLabs/Llama3.1-8B-ShiningValiant2",
"license:llama3.1",
"endpoints_compatible",
"region:us"
] | null | 2024-11-15T10:16:16Z
|
---
base_model: ValiantLabs/Llama3.1-8B-ShiningValiant2
datasets:
- sequelbox/Celestia
- sequelbox/Spurline
- sequelbox/Supernova
language:
- en
library_name: transformers
license: llama3.1
model_type: llama
quantized_by: mradermacher
tags:
- shining-valiant
- shining-valiant-2
- valiant
- valiant-labs
- llama
- llama-3.1
- llama-3.1-instruct
- llama-3.1-instruct-8b
- llama-3
- llama-3-instruct
- llama-3-instruct-8b
- 8b
- science
- physics
- biology
- chemistry
- compsci
- computer-science
- engineering
- technical
- conversational
- chat
- instruct
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/ValiantLabs/Llama3.1-8B-ShiningValiant2
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Llama3.1-8B-ShiningValiant2-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
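For example, a single quant from the table below can be fetched with `huggingface-cli` (a sketch; substitute whichever file suits your hardware):
```bash
pip install -U "huggingface_hub[cli]"
huggingface-cli download mradermacher/Llama3.1-8B-ShiningValiant2-GGUF \
    Llama3.1-8B-ShiningValiant2.Q4_K_M.gguf --local-dir .
```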
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similarly sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Llama3.1-8B-ShiningValiant2-GGUF/resolve/main/Llama3.1-8B-ShiningValiant2.Q2_K.gguf) | Q2_K | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/Llama3.1-8B-ShiningValiant2-GGUF/resolve/main/Llama3.1-8B-ShiningValiant2.Q3_K_S.gguf) | Q3_K_S | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/Llama3.1-8B-ShiningValiant2-GGUF/resolve/main/Llama3.1-8B-ShiningValiant2.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Llama3.1-8B-ShiningValiant2-GGUF/resolve/main/Llama3.1-8B-ShiningValiant2.Q3_K_L.gguf) | Q3_K_L | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/Llama3.1-8B-ShiningValiant2-GGUF/resolve/main/Llama3.1-8B-ShiningValiant2.IQ4_XS.gguf) | IQ4_XS | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/Llama3.1-8B-ShiningValiant2-GGUF/resolve/main/Llama3.1-8B-ShiningValiant2.Q4_0_4_4.gguf) | Q4_0_4_4 | 4.8 | fast on arm, low quality |
| [GGUF](https://huggingface.co/mradermacher/Llama3.1-8B-ShiningValiant2-GGUF/resolve/main/Llama3.1-8B-ShiningValiant2.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Llama3.1-8B-ShiningValiant2-GGUF/resolve/main/Llama3.1-8B-ShiningValiant2.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Llama3.1-8B-ShiningValiant2-GGUF/resolve/main/Llama3.1-8B-ShiningValiant2.Q5_K_S.gguf) | Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/Llama3.1-8B-ShiningValiant2-GGUF/resolve/main/Llama3.1-8B-ShiningValiant2.Q5_K_M.gguf) | Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/Llama3.1-8B-ShiningValiant2-GGUF/resolve/main/Llama3.1-8B-ShiningValiant2.Q6_K.gguf) | Q6_K | 6.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Llama3.1-8B-ShiningValiant2-GGUF/resolve/main/Llama3.1-8B-ShiningValiant2.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Llama3.1-8B-ShiningValiant2-GGUF/resolve/main/Llama3.1-8B-ShiningValiant2.f16.gguf) | f16 | 16.2 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mwitjez/multilingual-clickbait-detector
|
mwitjez
| 2024-11-15T10:29:20Z
| 487
| 0
| null |
[
"tensorboard",
"safetensors",
"distilbert",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-multilingual-cased",
"base_model:finetune:distilbert/distilbert-base-multilingual-cased",
"license:apache-2.0",
"region:us"
] | null | 2024-11-15T09:03:01Z
|
---
license: apache-2.0
base_model: distilbert-base-multilingual-cased
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: multilingual-clickbait-detector
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# multilingual-clickbait-detector
This model is a fine-tuned version of [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1283
- Accuracy: 0.9596
- F1: 0.9619
- Precision: 0.9581
- Recall: 0.9658
## Model description
More information needed
## Intended uses & limitations
More information needed
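The upstream card leaves this section empty; given the `distilbert` sequence-classification setup, inference would presumably follow the standard `pipeline` pattern (a minimal, unverified sketch; the label names come from the checkpoint's config):
```python
from transformers import pipeline

clf = pipeline("text-classification", model="mwitjez/multilingual-clickbait-detector")
print(clf("You won't BELIEVE what happened next!"))
```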
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:|:---------:|:------:|
| 0.0659 | 1.0 | 3787 | 0.1147 | 0.9627 | 0.9650 | 0.9576 | 0.9726 |
| 0.0245 | 2.0 | 7574 | 0.1841 | 0.9637 | 0.9659 | 0.9588 | 0.9732 |
| 0.0115 | 3.0 | 11361 | 0.2095 | 0.9645 | 0.9665 | 0.9651 | 0.9678 |
### Framework versions
- Transformers 4.41.2
- Pytorch 2.3.1+cu121
- Datasets 2.18.0
- Tokenizers 0.19.1
|
lautel/MEG-Mistral-7B-Instruct-v0.3
|
lautel
| 2024-11-15T10:27:43Z
| 26
| 0
| null |
[
"safetensors",
"mistral",
"medical",
"instruction-tuned",
"question-answering",
"en",
"license:apache-2.0",
"region:us"
] |
question-answering
| 2024-11-12T15:58:40Z
|
---
license: apache-2.0
language:
- en
pipeline_tag: question-answering
tags:
- mistral
- medical
- instruction-tuned
---
You can find further details at our GitHub repo: https://github.com/lautel/MEG
|
lombardata/drone-DinoVdeau-from-probs-large-2024_11_15-batch-size32_freeze_probs
|
lombardata
| 2024-11-15T10:12:38Z
| 42
| 0
| null |
[
"tensorboard",
"safetensors",
"dinov2",
"multilabel-image-classification",
"multilabel",
"generated_from_trainer",
"eng",
"doi:10.57967/hf/4022",
"license:cc0-1.0",
"region:us"
] | null | 2024-11-15T04:45:25Z
|
---
language:
- eng
license: cc0-1.0
tags:
- multilabel-image-classification
- multilabel
- generated_from_trainer
base_model: drone-DinoVdeau-from-probs-large-2024_11_15-batch-size32_freeze_probs
model-index:
- name: drone-DinoVdeau-from-probs-large-2024_11_15-batch-size32_freeze_probs
results: []
---
drone-DinoVdeau-from-probs is a fine-tuned version of [drone-DinoVdeau-from-probs-large-2024_11_15-batch-size32_freeze_probs](https://huggingface.co/drone-DinoVdeau-from-probs-large-2024_11_15-batch-size32_freeze_probs). It achieves the following results on the test set:
- Loss: 0.4668
- RMSE: 0.1546
- MAE: 0.1143
- KL Divergence: 0.3931
---
# Model description
drone-DinoVdeau-from-probs is a model built on top of the drone-DinoVdeau-from-probs-large-2024_11_15-batch-size32_freeze_probs model for underwater multilabel image classification. The classification head is a combination of linear, ReLU, batch normalization, and dropout layers.
The source code for training the model can be found in this [Git repository](https://github.com/SeatizenDOI/DinoVdeau).
- **Developed by:** [lombardata](https://huggingface.co/lombardata), credits to [César Leblanc](https://huggingface.co/CesarLeblanc) and [Victor Illien](https://huggingface.co/groderg)
---
# Intended uses & limitations
You can use the raw model to classify diverse marine species, encompassing coral morphotype classes taken from the Global Coral Reef Monitoring Network (GCRMN), habitat classes, and seagrass species.
---
# Training and evaluation data
Details on the estimated number of images for each class are given in the following table:
| Class | train | test | val | Total |
|:------------------------|--------:|-------:|------:|--------:|
| Acropore_branched | 1220 | 363 | 362 | 1945 |
| Acropore_digitised | 586 | 195 | 189 | 970 |
| Acropore_tabular | 308 | 133 | 119 | 560 |
| Algae | 4777 | 1372 | 1384 | 7533 |
| Dead_coral | 2513 | 671 | 693 | 3877 |
| Millepore | 136 | 55 | 59 | 250 |
| No_acropore_encrusting | 252 | 88 | 93 | 433 |
| No_acropore_massive | 2158 | 725 | 726 | 3609 |
| No_acropore_sub_massive | 2036 | 582 | 612 | 3230 |
| Rock | 5976 | 1941 | 1928 | 9845 |
| Rubble | 4851 | 1486 | 1474 | 7811 |
| Sand | 6155 | 2019 | 1990 | 10164 |
---
# Training procedure
## Training hyperparameters
The following hyperparameters were used during training:
- **Number of Epochs**: 83.0
- **Learning Rate**: 0.001
- **Train Batch Size**: 32
- **Eval Batch Size**: 32
- **Optimizer**: Adam
- **LR Scheduler Type**: ReduceLROnPlateau with a patience of 5 epochs and a factor of 0.1 (see the sketch after this list)
- **Freeze Encoder**: Yes
- **Data Augmentation**: Yes
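The learning-rate trace in the results table below follows directly from this scheduler configuration; here is a minimal, self-contained sketch of the behaviour (the linear layer is just a stand-in, not the actual model):
```python
import torch

model = torch.nn.Linear(4, 2)  # stand-in module, not the DinoVdeau head
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(
    optimizer, mode="min", factor=0.1, patience=5
)

# Feed a stagnating validation loss: once it fails to improve for more than
# `patience` epochs, the learning rate is multiplied by `factor` (0.001 -> 0.0001).
for epoch, val_loss in enumerate([0.49] + [0.48] * 7, start=1):
    scheduler.step(val_loss)
    print(epoch, optimizer.param_groups[0]["lr"])
```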
## Data Augmentation
Data were augmented using the following transformations:
Train Transforms
- **PreProcess**: No additional parameters
- **Resize**: probability=1.00
- **RandomHorizontalFlip**: probability=0.25
- **RandomVerticalFlip**: probability=0.25
- **ColorJiggle**: probability=0.25
- **RandomPerspective**: probability=0.25
- **Normalize**: probability=1.00
Val Transforms
- **PreProcess**: No additional parameters
- **Resize**: probability=1.00
- **Normalize**: probability=1.00
## Training results
| Epoch | Validation Loss | MAE | RMSE | KL div | Learning Rate |
|:-----:|:---------------:|:------:|:------:|:------:|:-------------:|
| 1 | 0.4855 | 0.1364 | 0.1771 | 0.3101 | 0.001 |
| 2 | 0.4760 | 0.1247 | 0.1688 | 0.5077 | 0.001 |
| 3 | 0.4777 | 0.1230 | 0.1707 | 0.7896 | 0.001 |
| 4 | 0.4743 | 0.1238 | 0.1672 | 0.4932 | 0.001 |
| 5 | 0.4746 | 0.1277 | 0.1669 | 0.2901 | 0.001 |
| 6 | 0.4750 | 0.1253 | 0.1674 | 0.4399 | 0.001 |
| 7 | 0.4745 | 0.1259 | 0.1671 | 0.4868 | 0.001 |
| 8 | 0.4742 | 0.1257 | 0.1672 | 0.3241 | 0.001 |
| 9 | 0.4730 | 0.1236 | 0.1658 | 0.4560 | 0.001 |
| 10 | 0.4751 | 0.1269 | 0.1679 | 0.2141 | 0.001 |
| 11 | 0.4733 | 0.1265 | 0.1663 | 0.2530 | 0.001 |
| 12 | 0.4758 | 0.1264 | 0.1684 | 0.3966 | 0.001 |
| 13 | 0.4722 | 0.1223 | 0.1650 | 0.6055 | 0.001 |
| 14 | 0.4747 | 0.1250 | 0.1666 | 0.4203 | 0.001 |
| 15 | 0.4733 | 0.1227 | 0.1662 | 0.6553 | 0.001 |
| 16 | 0.4735 | 0.1241 | 0.1656 | 0.3576 | 0.001 |
| 17 | 0.4722 | 0.1221 | 0.1643 | 0.4545 | 0.001 |
| 18 | 0.4724 | 0.1225 | 0.1647 | 0.4902 | 0.001 |
| 19 | 0.4729 | 0.1261 | 0.1650 | 0.3158 | 0.001 |
| 20 | 0.4697 | 0.1203 | 0.1623 | 0.4574 | 0.0001 |
| 21 | 0.4689 | 0.1197 | 0.1613 | 0.4569 | 0.0001 |
| 22 | 0.4691 | 0.1202 | 0.1617 | 0.4535 | 0.0001 |
| 23 | 0.4691 | 0.1210 | 0.1614 | 0.2971 | 0.0001 |
| 24 | 0.4692 | 0.1196 | 0.1616 | 0.3916 | 0.0001 |
| 25 | 0.4677 | 0.1181 | 0.1601 | 0.4516 | 0.0001 |
| 26 | 0.4680 | 0.1171 | 0.1605 | 0.6089 | 0.0001 |
| 27 | 0.4675 | 0.1182 | 0.1600 | 0.4741 | 0.0001 |
| 28 | 0.4681 | 0.1200 | 0.1606 | 0.3356 | 0.0001 |
| 29 | 0.4678 | 0.1181 | 0.1603 | 0.4330 | 0.0001 |
| 30 | 0.4680 | 0.1194 | 0.1602 | 0.3160 | 0.0001 |
| 31 | 0.4677 | 0.1179 | 0.1600 | 0.4190 | 0.0001 |
| 32 | 0.4675 | 0.1188 | 0.1598 | 0.3706 | 0.0001 |
| 33 | 0.4671 | 0.1181 | 0.1593 | 0.3504 | 0.0001 |
| 34 | 0.4670 | 0.1180 | 0.1594 | 0.3881 | 0.0001 |
| 35 | 0.4663 | 0.1166 | 0.1587 | 0.4398 | 0.0001 |
| 36 | 0.4666 | 0.1170 | 0.1587 | 0.4382 | 0.0001 |
| 37 | 0.4658 | 0.1163 | 0.1581 | 0.4330 | 0.0001 |
| 38 | 0.4659 | 0.1162 | 0.1583 | 0.4878 | 0.0001 |
| 39 | 0.4670 | 0.1178 | 0.1595 | 0.3791 | 0.0001 |
| 40 | 0.4665 | 0.1178 | 0.1588 | 0.3889 | 0.0001 |
| 41 | 0.4666 | 0.1184 | 0.1589 | 0.3222 | 0.0001 |
| 42 | 0.4655 | 0.1164 | 0.1579 | 0.4262 | 0.0001 |
| 43 | 0.4656 | 0.1162 | 0.1579 | 0.4611 | 0.0001 |
| 44 | 0.4656 | 0.1164 | 0.1580 | 0.4586 | 0.0001 |
| 45 | 0.4660 | 0.1158 | 0.1583 | 0.4368 | 0.0001 |
| 46 | 0.4660 | 0.1164 | 0.1582 | 0.4118 | 0.0001 |
| 47 | 0.4652 | 0.1154 | 0.1577 | 0.5424 | 0.0001 |
| 48 | 0.4660 | 0.1160 | 0.1586 | 0.5251 | 0.0001 |
| 49 | 0.4660 | 0.1161 | 0.1585 | 0.5007 | 0.0001 |
| 50 | 0.4666 | 0.1185 | 0.1586 | 0.2424 | 0.0001 |
| 51 | 0.4661 | 0.1162 | 0.1584 | 0.4171 | 0.0001 |
| 52 | 0.4650 | 0.1155 | 0.1575 | 0.4912 | 0.0001 |
| 53 | 0.4654 | 0.1169 | 0.1578 | 0.4030 | 0.0001 |
| 54 | 0.4661 | 0.1153 | 0.1585 | 0.4811 | 0.0001 |
| 55 | 0.4653 | 0.1167 | 0.1576 | 0.3774 | 0.0001 |
| 56 | 0.4654 | 0.1176 | 0.1575 | 0.3254 | 0.0001 |
| 57 | 0.4654 | 0.1162 | 0.1575 | 0.3649 | 0.0001 |
| 58 | 0.4665 | 0.1166 | 0.1584 | 0.4075 | 0.0001 |
| 59 | 0.4652 | 0.1157 | 0.1575 | 0.4202 | 1e-05 |
| 60 | 0.4653 | 0.1157 | 0.1571 | 0.4084 | 1e-05 |
| 61 | 0.4654 | 0.1153 | 0.1573 | 0.4497 | 1e-05 |
| 62 | 0.4648 | 0.1153 | 0.1568 | 0.4112 | 1e-05 |
| 63 | 0.4648 | 0.1152 | 0.1567 | 0.3748 | 1e-05 |
| 64 | 0.4652 | 0.1162 | 0.1571 | 0.3044 | 1e-05 |
| 65 | 0.4648 | 0.1153 | 0.1569 | 0.4685 | 1e-05 |
| 66 | 0.4650 | 0.1148 | 0.1573 | 0.5087 | 1e-05 |
| 67 | 0.4646 | 0.1155 | 0.1568 | 0.4274 | 1e-05 |
| 68 | 0.4646 | 0.1144 | 0.1566 | 0.4969 | 1e-05 |
| 69 | 0.4644 | 0.1145 | 0.1564 | 0.4480 | 1e-05 |
| 70 | 0.4648 | 0.1150 | 0.1567 | 0.4291 | 1e-05 |
| 71 | 0.4645 | 0.1156 | 0.1565 | 0.3797 | 1e-05 |
| 72 | 0.4647 | 0.1150 | 0.1569 | 0.4280 | 1e-05 |
| 73 | 0.4641 | 0.1142 | 0.1563 | 0.4592 | 1e-05 |
| 74 | 0.4642 | 0.1151 | 0.1564 | 0.4321 | 1e-05 |
| 75 | 0.4645 | 0.1152 | 0.1565 | 0.3843 | 1e-05 |
| 76 | 0.4646 | 0.1147 | 0.1569 | 0.5216 | 1e-05 |
| 77 | 0.4648 | 0.1152 | 0.1569 | 0.4094 | 1e-05 |
| 78 | 0.4643 | 0.1149 | 0.1564 | 0.4399 | 1e-05 |
| 79 | 0.4646 | 0.1147 | 0.1567 | 0.4178 | 1e-05 |
| 80 | 0.4644 | 0.1150 | 0.1564 | 0.4373 | 1e-06 |
| 81 | 0.4645 | 0.1151 | 0.1567 | 0.4701 | 1e-06 |
| 82 | 0.4644 | 0.1146 | 0.1565 | 0.4601 | 1e-06 |
| 83 | 0.4646 | 0.1147 | 0.1567 | 0.4511 | 1e-06 |
---
# Framework Versions
- **Transformers**: 4.41.0
- **Pytorch**: 2.5.0+cu124
- **Datasets**: 3.0.2
- **Tokenizers**: 0.19.1
|
Ajayk/Truviz-ai-detect-2
|
Ajayk
| 2024-11-15T10:12:33Z
| 105
| 0
|
transformers
|
[
"transformers",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:Ajayk/Truviz-ai-detect",
"base_model:finetune:Ajayk/Truviz-ai-detect",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-11-15T08:45:28Z
|
---
library_name: transformers
base_model: Ajayk/Truviz-ai-detect
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: Truviz-ai-detect-2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Truviz-ai-detect-2
This model is a fine-tuned version of [Ajayk/Truviz-ai-detect](https://huggingface.co/Ajayk/Truviz-ai-detect) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2677
- Accuracy: 0.9423
## Model description
More information needed
## Intended uses & limitations
More information needed
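Although the upstream card leaves this empty, the tags indicate a standard DistilBERT sequence-classification checkpoint, so scoring a passage would presumably look like this (an unverified sketch; the meaning of each class index is defined by the checkpoint's config):
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "Ajayk/Truviz-ai-detect-2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

inputs = tokenizer("Sample passage to score.", return_tensors="pt")
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)
print(probs)
```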
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.3847 | 0.1 | 500 | 0.2306 | 0.9071 |
| 0.2661 | 0.2 | 1000 | 0.4132 | 0.8855 |
| 0.2539 | 0.3 | 1500 | 0.2856 | 0.9146 |
| 0.2548 | 0.4 | 2000 | 0.2069 | 0.9295 |
| 0.1454 | 0.5 | 2500 | 0.3659 | 0.9212 |
| 0.2236 | 0.6 | 3000 | 0.2453 | 0.9344 |
| 0.2285 | 0.7 | 3500 | 0.1480 | 0.9497 |
| 0.2007 | 0.8 | 4000 | 0.2612 | 0.9229 |
| 0.2503 | 0.9 | 4500 | 0.2008 | 0.9384 |
| 0.2128 | 1.0 | 5000 | 0.1633 | 0.953 |
| 0.0849 | 1.1 | 5500 | 0.2167 | 0.9538 |
| 0.0706 | 1.2 | 6000 | 0.3862 | 0.9347 |
| 0.0915 | 1.3 | 6500 | 0.2781 | 0.9487 |
| 0.1187 | 1.4 | 7000 | 0.2677 | 0.9423 |
### Framework versions
- Transformers 4.45.1
- Pytorch 2.4.0
- Datasets 3.0.1
- Tokenizers 0.20.0
|
Beka-pika/mms_kaz_tts_angry
|
Beka-pika
| 2024-11-15T10:11:48Z
| 105
| 0
|
transformers
|
[
"transformers",
"safetensors",
"vits",
"text-to-audio",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] |
text-to-audio
| 2024-11-07T18:00:24Z
|
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
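The card does not provide a snippet, but given the `vits` / `text-to-audio` tags, a minimal sketch assuming the standard MMS-TTS interface in `transformers` (unverified for this checkpoint) would be:
```python
import torch
from transformers import VitsModel, AutoTokenizer

model_id = "Beka-pika/mms_kaz_tts_angry"
model = VitsModel.from_pretrained(model_id)
tokenizer = AutoTokenizer.from_pretrained(model_id)

inputs = tokenizer("Сәлем, әлем!", return_tensors="pt")  # sample Kazakh text
with torch.no_grad():
    waveform = model(**inputs).waveform  # (batch, num_samples), at model.config.sampling_rate
```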
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
KocLab-Bilkent/BERTurk-Legal
|
KocLab-Bilkent
| 2024-11-15T10:08:59Z
| 459
| 4
|
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"bert",
"fill-mask",
"legal",
"tr",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2024-04-16T14:41:21Z
|
---
license: mit
language:
- tr
metrics:
- f1
- precision
- recall
tags:
- legal
---
We introduce BERTurk-Legal which is a transformer-based language model to retrieve prior legal cases. BERTurk-Legal is pre-trained on a dataset from the Turkish legal domain. This dataset does not contain any labels related to the prior court case retrieval task. Masked language modeling is used to train BERTurk-Legal in a self-supervised manner. With zero-shot classification, BERTurk-Legal provides state-of-the-art results on the dataset consisting of legal cases of the Court of Cassation of Turkey. The results of the experiments show the necessity of developing language models specific to the Turkish law domain. Details of BERTurk-Legal can be found in the paper mentioned in the Citation section below.
Test dataset can be accessed from the following link: https://github.com/koc-lab/yargitay_retrieval_dataset
The model can be loaded and used to create document embeddings as follows. Then, the document embeddings can be utilized for retrieval.
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer

bert_model = "KocLab-Bilkent/BERTurk-Legal"
model = AutoModelForSequenceClassification.from_pretrained(bert_model, output_hidden_states=True)
tokenizer = AutoTokenizer.from_pretrained(bert_model)

tokens = tokenizer("Örnek metin", return_tensors="pt")  # a dummy text is provided as input
output = model(**tokens)
docEmbeddings = output.hidden_states[-1]
```
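Building on the snippet above (reusing `model` and `tokenizer`), one illustrative way to score a query against a document is mean pooling plus cosine similarity; note this is a sketch, not necessarily the exact retrieval procedure from the paper:
```python
import torch
import torch.nn.functional as F

def embed(text: str) -> torch.Tensor:
    tokens = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        output = model(**tokens)
    return output.hidden_states[-1].mean(dim=1).squeeze(0)  # mean-pool the last layer

query_emb = embed("Örnek sorgu")       # "sample query"
doc_emb = embed("Örnek karar metni")   # "sample decision text"
print(F.cosine_similarity(query_emb, doc_emb, dim=0).item())
```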
## Citation
If you use the model, please cite the following conference paper.
```
@inproceedings{ozturk23berturkLegal,
author={\"{O}zt\"{u}rk, Ceyhun E. and \"{O}z\c{c}elik, {\c{S}}. Bar{\i}\c{s} and Aykut Ko\c{c}},
booktitle={2023 31st Signal Processing and Communications Applications Conference (SIU)},
title={{A Transformer-Based Prior Legal Case Retrieval Method}},
year={2023},
volume={},
number={},
pages={1-4}
}
@mastersthesis{ozturk23legalNlp,
author = "\"{O}zt\"{u}rk, Ceyhun E.",
title = "Retrieving Turkish Prior Legal Cases with Deep Learning",
school = "Bilkent University",
year = "2023"
}
```
|
arthurhzna/56class_rokok
|
arthurhzna
| 2024-11-15T10:08:49Z
| 5
| 0
| null |
[
"tensorboard",
"safetensors",
"vit",
"image-classification",
"pytorch",
"huggingpics",
"model-index",
"region:us"
] |
image-classification
| 2024-11-15T10:08:37Z
|
---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: 56class_rokok
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 1.0
---
# 56class_rokok
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
## Example Images
#### 76 DJarum

#### 76 Madu Hitam

#### 76 Mangga

#### 76 Nanas

#### Camel Intense Blue

#### Camel Option Yellow

#### Camel Purple Mint

#### Camel White

#### Camel Yellow

#### Chief Blue

#### Class Mild

#### Diplomat Evo

#### Diplomat Mild

#### Diplomat Mild Menthol

#### Djarum Black

#### Djarum Black Cappucino

#### Djarum Fresh Cola

#### Djarum King Filter

#### Djarum Super

#### Djarum Super Espresso

#### Djarum Super Mld Black

#### Djarum Super Mld Putih

#### Dunhill Blue Light Tabacco

#### Dunhill Mild

#### Forte Extra Breeze Menthol

#### Forte Manggo

#### Forte Mentol

#### Forte Original

#### Forte Vanilla

#### Garam De Luxe

#### Geo Mild

#### Gudang Garam Djaja

#### Gudang Garam GG Shiver

#### Gudang Garam Internasional

#### Gudang Garam Merah King Size

#### Gudang Garam Merah Tanpa King Size

#### Gudang Garam Signature

#### Gudang Garam Signature Mild

#### Gudang Garam Surya Coklat

#### Gudang Garam Surya Merah

#### Halim Merah

#### LA Bold

#### LA Ice

#### LA Ice Manggo Boost

#### LA Ice Purple Boost

#### LA Light

#### LA Menthol

#### Lucky Strike Cool

#### Lucky Strike Purple Boost

#### Lucky Strike Red

#### Raptor

#### Surya Exclusive

#### Surya Nusantara

#### Surya Pro Merah

#### Surya Pro Mild Limited Edition

#### Ziga Blue

|
minhdang/qwen2b-OCR
|
minhdang
| 2024-11-15T09:56:12Z
| 63
| 0
|
transformers
|
[
"transformers",
"safetensors",
"qwen2_vl",
"image-text-to-text",
"trl",
"sft",
"arxiv:1910.09700",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
image-text-to-text
| 2024-11-15T09:54:28Z
|
---
library_name: transformers
tags:
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
PotatoB/Llama_evo_1_3
|
PotatoB
| 2024-11-15T09:51:30Z
| 5
| 0
| null |
[
"safetensors",
"llama",
"merge",
"mergekit",
"Shitao/llama2-winogrande",
"joyfine/llama2-7b-fine-tuning_TruthfulQA_20",
"license:apache-2.0",
"region:us"
] | null | 2024-11-15T09:48:56Z
|
---
license: apache-2.0
tags:
- merge
- mergekit
- Shitao/llama2-winogrande
- joyfine/llama2-7b-fine-tuning_TruthfulQA_20
---
# Llama_evo_1_3
Llama_evo_1_3 is a merged model generated for Model Kinship experiments, originating from meta-llama/Llama-2-7b-hf. It merges the following models:
* [Shitao/llama2-winogrande](https://huggingface.co/Shitao/llama2-winogrande)
* [joyfine/llama2-7b-fine-tuning_TruthfulQA_20](https://huggingface.co/joyfine/llama2-7b-fine-tuning_TruthfulQA_20)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: Shitao/llama2-winogrande
layer_range: [0, 32]
- model: joyfine/llama2-7b-fine-tuning_TruthfulQA_20
layer_range: [0, 32]
merge_method: slerp
base_model: Shitao/llama2-winogrande
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: float16
```
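## 💻 Usage
Since the merge keeps the standard Llama-2 architecture, the model can presumably be loaded like any other causal LM (a minimal sketch, not an official usage example):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "PotatoB/Llama_evo_1_3"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

inputs = tokenizer("The capital of France is", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```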
|
theprint/VanRossum-Qwen2.5-Coder-3B
|
theprint
| 2024-11-15T09:48:16Z
| 101
| 0
|
transformers
|
[
"transformers",
"safetensors",
"gguf",
"qwen2",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"en",
"dataset:theprint/VanRossum-Alpaca",
"base_model:unsloth/Qwen2.5-Coder-3B-Instruct-bnb-4bit",
"base_model:quantized:unsloth/Qwen2.5-Coder-3B-Instruct-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-11-15T09:08:04Z
|
---
base_model: unsloth/Qwen2.5-Coder-3B-Instruct-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
- sft
license: apache-2.0
language:
- en
datasets:
- theprint/VanRossum-Alpaca
---
# Homage to Python
This model has been trained for **1 epoch** on the VanRossum dataset.
The VanRossum dataset is all Python! I used [DataMix](https://github.com/theprint/DataMix) to combine a handful of highly rated Python-centric datasets, to get a sampling of each and create something new.
This data set has **80,000 entries** and is named after [**Guido Van Rossum**](https://en.wikipedia.org/wiki/Guido_van_Rossum), the man who invented Python back in 1991.
See the [VanRossum Collection](https://huggingface.co/collections/theprint/vanrossum-67363abb2d3459644d7fd102) on HF for all things related to this dataset.
## Alpaca / GPT
There are 2 versions of this dataset available on Huggingface.
- [VanRossum-GPT](https://huggingface.co/datasets/theprint/VanRossum-GPT)
- [VanRossum-Alpaca](https://huggingface.co/datasets/theprint/VanRossum-Alpaca)
# Uploaded model
- **Developed by:** theprint
- **License:** apache-2.0
- **Finetuned from model:** unsloth/Qwen2.5-Coder-3B-Instruct-bnb-4bit
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
IloveThighs/speecht5_finetuned_nono
|
IloveThighs
| 2024-11-15T09:47:39Z
| 75
| 0
|
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"speecht5",
"text-to-audio",
"generated_from_trainer",
"base_model:microsoft/speecht5_tts",
"base_model:finetune:microsoft/speecht5_tts",
"license:mit",
"endpoints_compatible",
"region:us"
] |
text-to-audio
| 2024-11-15T09:37:47Z
|
---
library_name: transformers
license: mit
base_model: microsoft/speecht5_tts
tags:
- generated_from_trainer
model-index:
- name: speecht5_finetuned_nono
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# speecht5_finetuned_nono
This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6025
## Model description
More information needed
## Intended uses & limitations
More information needed
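The upstream card leaves this section empty; since the model is a fine-tuned `microsoft/speecht5_tts`, inference would presumably follow the standard SpeechT5 recipe (an unverified sketch; the x-vector dataset and index are the usual placeholders from the SpeechT5 examples):
```python
import torch
from datasets import load_dataset
from transformers import pipeline

synthesiser = pipeline("text-to-speech", "IloveThighs/speecht5_finetuned_nono")

# SpeechT5 needs a speaker embedding; the CMU ARCTIC x-vectors are a common choice
embeddings = load_dataset("Matthijs/cmu-arctic-xvectors", split="validation")
speaker_embedding = torch.tensor(embeddings[7306]["xvector"]).unsqueeze(0)

speech = synthesiser("Hello, world.", forward_params={"speaker_embeddings": speaker_embedding})
# speech["audio"] is a NumPy array sampled at speech["sampling_rate"]
```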
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- lr_scheduler_warmup_steps: 100
- training_steps: 500
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-------:|:----:|:---------------:|
| 0.6064 | 16.6667 | 100 | 0.5937 |
| 0.5045 | 33.3333 | 200 | 0.5903 |
| 0.481 | 50.0 | 300 | 0.5985 |
| 0.4639 | 66.6667 | 400 | 0.5867 |
| 0.4483 | 83.3333 | 500 | 0.6025 |
### Framework versions
- Transformers 4.46.2
- Pytorch 2.5.1+cu121
- Datasets 3.1.0
- Tokenizers 0.20.0
|
prijak/TS1.0.1
|
prijak
| 2024-11-15T09:43:54Z
| 9
| 0
|
transformers
|
[
"transformers",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-11-15T09:42:12Z
|
---
base_model: unsloth/llama-3.2-3b-instruct-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** prijak
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3.2-3b-instruct-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
braindao/iq-code-evmind-0.5b-instruct-v0.2411.3
|
braindao
| 2024-11-15T09:37:41Z
| 128
| 0
|
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"llama-factory",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-11-15T09:37:12Z
|
---
library_name: transformers
tags:
- llama-factory
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
mradermacher/Llama-3.1-SuperNova-Lite-14B-GGUF
|
mradermacher
| 2024-11-15T09:34:11Z
| 80
| 0
|
transformers
|
[
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:Joseph717171/Llama-3.1-SuperNova-Lite-14B",
"base_model:quantized:Joseph717171/Llama-3.1-SuperNova-Lite-14B",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-11-15T08:29:10Z
|
---
base_model: Joseph717171/Llama-3.1-SuperNova-Lite-14B
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/Joseph717171/Llama-3.1-SuperNova-Lite-14B
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
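For a quick start from Python, one option is [llama-cpp-python](https://github.com/abetlen/llama-cpp-python). The sketch below assumes a recent version (≥ 0.2.62), where `Llama.from_pretrained` can download a quant directly from this repo, and uses the Q4_K_M file from the table below as an example:
```python
from llama_cpp import Llama

# Download and load one of the provided quants straight from the Hub
# (the filename matches the Q4_K_M entry in the table below).
llm = Llama.from_pretrained(
    repo_id="mradermacher/Llama-3.1-SuperNova-Lite-14B-GGUF",
    filename="Llama-3.1-SuperNova-Lite-14B.Q4_K_M.gguf",
    n_ctx=2048,
)
out = llm("Q: What is GGUF? A:", max_tokens=48)
print(out["choices"][0]["text"])
```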
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-SuperNova-Lite-14B-GGUF/resolve/main/Llama-3.1-SuperNova-Lite-14B.Q2_K.gguf) | Q2_K | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-SuperNova-Lite-14B-GGUF/resolve/main/Llama-3.1-SuperNova-Lite-14B.Q3_K_S.gguf) | Q3_K_S | 6.0 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-SuperNova-Lite-14B-GGUF/resolve/main/Llama-3.1-SuperNova-Lite-14B.Q3_K_M.gguf) | Q3_K_M | 6.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-SuperNova-Lite-14B-GGUF/resolve/main/Llama-3.1-SuperNova-Lite-14B.Q3_K_L.gguf) | Q3_K_L | 7.2 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-SuperNova-Lite-14B-GGUF/resolve/main/Llama-3.1-SuperNova-Lite-14B.IQ4_XS.gguf) | IQ4_XS | 7.4 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-SuperNova-Lite-14B-GGUF/resolve/main/Llama-3.1-SuperNova-Lite-14B.Q4_0_4_4.gguf) | Q4_0_4_4 | 7.7 | fast on arm, low quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-SuperNova-Lite-14B-GGUF/resolve/main/Llama-3.1-SuperNova-Lite-14B.Q4_K_S.gguf) | Q4_K_S | 7.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-SuperNova-Lite-14B-GGUF/resolve/main/Llama-3.1-SuperNova-Lite-14B.Q4_K_M.gguf) | Q4_K_M | 8.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-SuperNova-Lite-14B-GGUF/resolve/main/Llama-3.1-SuperNova-Lite-14B.Q5_K_S.gguf) | Q5_K_S | 9.3 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-SuperNova-Lite-14B-GGUF/resolve/main/Llama-3.1-SuperNova-Lite-14B.Q5_K_M.gguf) | Q5_K_M | 9.5 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-SuperNova-Lite-14B-GGUF/resolve/main/Llama-3.1-SuperNova-Lite-14B.Q6_K.gguf) | Q6_K | 11.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-SuperNova-Lite-14B-GGUF/resolve/main/Llama-3.1-SuperNova-Lite-14B.Q8_0.gguf) | Q8_0 | 14.2 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
Nablaaa/ppo-SnowballTarget
|
Nablaaa
| 2024-11-15T09:31:19Z
| 12
| 0
|
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"SnowballTarget",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
] |
reinforcement-learning
| 2024-11-15T08:30:02Z
|
---
library_name: ml-agents
tags:
- SnowballTarget
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. If the environment is part of the ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: Nablaaa/ppo-SnowballTarget
3. Select your *.nn / *.onnx file
4. Click on Watch the agent play 👀
|
charisgao/fine_tuned_main_raid_cleaned_poetry
|
charisgao
| 2024-11-15T09:23:57Z
| 107
| 0
|
transformers
|
[
"transformers",
"safetensors",
"roberta",
"text-classification",
"generated_from_trainer",
"base_model:FacebookAI/roberta-large",
"base_model:finetune:FacebookAI/roberta-large",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-11-15T09:22:35Z
|
---
library_name: transformers
license: mit
base_model: FacebookAI/roberta-large
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: fine_tuned_main_raid_cleaned_poetry
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# fine_tuned_main_raid_cleaned_poetry
This model is a fine-tuned version of [FacebookAI/roberta-large](https://huggingface.co/FacebookAI/roberta-large) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0628
- Accuracy: 0.9905
## Model description
More information needed
## Intended uses & limitations
More information needed
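In the meantime, a minimal classification sketch; note that the label names and their mapping (e.g. human- vs. machine-written poetry) are not documented here, so verify them before relying on the output:
```python
from transformers import pipeline

# Binary sequence classifier fine-tuned from roberta-large.
clf = pipeline(
    "text-classification",
    model="charisgao/fine_tuned_main_raid_cleaned_poetry",
)

# Label semantics (LABEL_0 vs. LABEL_1) are undocumented; inspect before use.
print(clf("Shall I compare thee to a summer's day?"))
```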
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: adamw_torch with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:------:|:----:|:---------------:|:--------:|
| 0.4396 | 0.0767 | 100 | 0.4779 | 0.8612 |
| 0.2322 | 0.1534 | 200 | 0.2148 | 0.9414 |
| 0.2867 | 0.2301 | 300 | 0.2022 | 0.9603 |
| 0.2758 | 0.3067 | 400 | 0.1828 | 0.9552 |
| 0.1543 | 0.3834 | 500 | 0.5250 | 0.9155 |
| 0.2348 | 0.4601 | 600 | 0.1141 | 0.9733 |
| 0.163 | 0.5368 | 700 | 0.1417 | 0.9733 |
| 0.1622 | 0.6135 | 800 | 0.0898 | 0.9810 |
| 0.174 | 0.6902 | 900 | 0.1013 | 0.9810 |
| 0.1398 | 0.7669 | 1000 | 0.3111 | 0.9241 |
| 0.1247 | 0.8436 | 1100 | 0.1722 | 0.9655 |
| 0.1559 | 0.9202 | 1200 | 0.2461 | 0.9629 |
| 0.0987 | 0.9969 | 1300 | 0.1538 | 0.9741 |
| 0.0431 | 1.0736 | 1400 | 0.1137 | 0.9828 |
| 0.0572 | 1.1503 | 1500 | 0.1094 | 0.9845 |
| 0.0509 | 1.2270 | 1600 | 0.1153 | 0.9836 |
| 0.0579 | 1.3037 | 1700 | 0.0736 | 0.9879 |
| 0.0773 | 1.3804 | 1800 | 0.1087 | 0.9802 |
| 0.062 | 1.4571 | 1900 | 0.0890 | 0.9853 |
| 0.0621 | 1.5337 | 2000 | 0.1404 | 0.9793 |
| 0.0324 | 1.6104 | 2100 | 0.0669 | 0.9888 |
| 0.0548 | 1.6871 | 2200 | 0.1057 | 0.9836 |
| 0.0201 | 1.7638 | 2300 | 0.0920 | 0.9853 |
| 0.0614 | 1.8405 | 2400 | 0.0696 | 0.9897 |
| 0.0312 | 1.9172 | 2500 | 0.0628 | 0.9905 |
| 0.0132 | 1.9939 | 2600 | 0.0976 | 0.9853 |
| 0.0108 | 2.0706 | 2700 | 0.0670 | 0.9914 |
| 0.0 | 2.1472 | 2800 | 0.1647 | 0.9802 |
### Framework versions
- Transformers 4.46.2
- Pytorch 2.5.0+cu121
- Datasets 3.1.0
- Tokenizers 0.20.3
|
FrancescoBuda/Llama-ICD-coder-3B-merged-2ep
|
FrancescoBuda
| 2024-11-15T09:22:10Z
| 127
| 0
|
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"conversational",
"en",
"base_model:unsloth/Llama-3.2-3B-Instruct-bnb-4bit",
"base_model:finetune:unsloth/Llama-3.2-3B-Instruct-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-11-15T09:19:15Z
|
---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
base_model: unsloth/Llama-3.2-3B-Instruct-bnb-4bit
---
# Uploaded model
- **Developed by:** FrancescoBuda
- **License:** apache-2.0
- **Finetuned from model:** unsloth/Llama-3.2-3B-Instruct-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
umn-cyber/indobert-hoax
|
umn-cyber
| 2024-11-15T09:12:16Z
| 188
| 0
|
transformers
|
[
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:indobenchmark/indobert-base-p1",
"base_model:finetune:indobenchmark/indobert-base-p1",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-11-14T16:11:26Z
|
---
library_name: transformers
license: mit
base_model: indobenchmark/indobert-base-p1
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: indobert-hoax-detection
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# indobert-hoax-detection
This model is a fine-tuned version of [indobenchmark/indobert-base-p1](https://huggingface.co/indobenchmark/indobert-base-p1) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0556
- Accuracy: 0.9831
- F1: 0.9823
- Precision: 0.9781
- Recall: 0.9865
## Model description
More information needed
## Intended uses & limitations
More information needed
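In the meantime, a minimal classification sketch; the label names and their mapping (hoax vs. non-hoax) are not documented here, so treat them as an assumption to verify:
```python
from transformers import pipeline

clf = pipeline("text-classification", model="umn-cyber/indobert-hoax")

# Label semantics are undocumented; check the config's id2label before use.
print(clf("Pemerintah mengumumkan kebijakan baru hari ini."))
```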
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|
| 0.0797 | 1.0 | 739 | 0.0485 | 0.9882 | 0.9876 | 0.9858 | 0.9893 |
| 0.0428 | 2.0 | 1478 | 0.0436 | 0.9868 | 0.9862 | 0.9817 | 0.9908 |
| 0.0221 | 3.0 | 2217 | 0.0480 | 0.9885 | 0.9879 | 0.9879 | 0.9879 |
### Framework versions
- Transformers 4.45.2
- Pytorch 2.4.1
- Datasets 2.19.2
- Tokenizers 0.20.1
|
rhlsinghal1s/german-multilingual-e5-small
|
rhlsinghal1s
| 2024-11-15T09:11:34Z
| 3,652
| 0
|
sentence-transformers
|
[
"sentence-transformers",
"safetensors",
"bert",
"passage-retrieval",
"sentence-similarity",
"pruned",
"de",
"base_model:intfloat/multilingual-e5-small",
"base_model:quantized:intfloat/multilingual-e5-small",
"license:mit",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2024-11-15T09:11:32Z
|
---
pipeline_tag: sentence-similarity
language: de
license: mit
tags:
- passage-retrieval
- sentence-similarity
- pruned
library_name: sentence-transformers
base_model: intfloat/multilingual-e5-small
base_model_relation: quantized
---
# 🇩🇪 german-multilingual-e5-small
This model is a 66.0% smaller version of [intfloat/multilingual-e5-small](https://huggingface.co/intfloat/multilingual-e5-small)
for the German language, created using the [mtem-pruner](https://huggingface.co/spaces/antoinelouis/mtem-pruner) space.
This pruned model should perform similarly to the original model on German-language tasks, with a much smaller
memory footprint. However, it may not perform well for the other languages covered by the original multilingual model,
since tokens not commonly used in German were removed from its vocabulary.
## Usage
You can use this model with the Transformers library:
```python
from transformers import AutoModel, AutoTokenizer
model_name = "rhlsinghal1s/german-multilingual-e5-small"
model = AutoModel.from_pretrained(model_name, trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True, use_fast=True)
```
Or with the sentence-transformers library:
```python
from sentence_transformers import SentenceTransformer
model = SentenceTransformer("rhlsinghal1s/german-multilingual-e5-small")
```
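Like the original E5 family, inputs should be prefixed with `query: ` or `passage: `. A brief similarity sketch (the German example sentences are illustrative only):
```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("rhlsinghal1s/german-multilingual-e5-small")

# E5 models expect "query: " / "passage: " prefixes on their inputs.
query = "query: Wie ist das Wetter heute?"
passages = [
    "passage: Heute ist es sonnig und warm.",
    "passage: Die Hauptstadt von Deutschland ist Berlin.",
]

embeddings = model.encode([query] + passages, normalize_embeddings=True)
scores = embeddings[0] @ embeddings[1:].T  # cosine similarities (embeddings are normalized)
print(scores)
```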
**Credits**: cc [@antoinelouis](https://huggingface.co/antoinelouis)
|
Lixiaokun030106/mrpc-bert-base-uncased
|
Lixiaokun030106
| 2024-11-15T09:10:25Z
| 105
| 0
|
transformers
|
[
"transformers",
"safetensors",
"bert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-11-15T09:07:37Z
|
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
bunnycore/SmolLM2-1.7-Persona-Q5_K_M-GGUF
|
bunnycore
| 2024-11-15T09:08:43Z
| 5
| 0
| null |
[
"gguf",
"merge",
"mergekit",
"lazymergekit",
"llama-cpp",
"gguf-my-repo",
"base_model:bunnycore/SmolLM2-1.7-Persona",
"base_model:quantized:bunnycore/SmolLM2-1.7-Persona",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2024-11-15T09:08:34Z
|
---
license: apache-2.0
tags:
- merge
- mergekit
- lazymergekit
- llama-cpp
- gguf-my-repo
base_model: bunnycore/SmolLM2-1.7-Persona
---
# bunnycore/SmolLM2-1.7-Persona-Q5_K_M-GGUF
This model was converted to GGUF format from [`bunnycore/SmolLM2-1.7-Persona`](https://huggingface.co/bunnycore/SmolLM2-1.7-Persona) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/bunnycore/SmolLM2-1.7-Persona) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo bunnycore/SmolLM2-1.7-Persona-Q5_K_M-GGUF --hf-file smollm2-1.7-persona-q5_k_m-imat.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo bunnycore/SmolLM2-1.7-Persona-Q5_K_M-GGUF --hf-file smollm2-1.7-persona-q5_k_m-imat.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (e.g., `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo bunnycore/SmolLM2-1.7-Persona-Q5_K_M-GGUF --hf-file smollm2-1.7-persona-q5_k_m-imat.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo bunnycore/SmolLM2-1.7-Persona-Q5_K_M-GGUF --hf-file smollm2-1.7-persona-q5_k_m-imat.gguf -c 2048
```
|
Justin-lee/Llama-3.1-8B-bnb-4bit-wenyanwen
|
Justin-lee
| 2024-11-15T09:04:44Z
| 5
| 0
|
transformers
|
[
"transformers",
"safetensors",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"trl",
"en",
"base_model:unsloth/Meta-Llama-3.1-8B-Instruct-bnb-4bit",
"base_model:quantized:unsloth/Meta-Llama-3.1-8B-Instruct-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-11-15T08:32:07Z
|
---
base_model: unsloth/Meta-Llama-3.1-8B-Instruct-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** Justin-lee
- **License:** apache-2.0
- **Finetuned from model:** unsloth/Meta-Llama-3.1-8B-Instruct-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
axmx1/finetuning-sentiment-model-3000-samples_1
|
axmx1
| 2024-11-15T09:03:13Z
| 105
| 0
|
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-11-15T08:51:57Z
|
---
library_name: transformers
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: finetuning-sentiment-model-3000-samples_1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-3000-samples_1
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3252
- Accuracy: 0.87
## Model description
More information needed
## Intended uses & limitations
More information needed
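Until the card is completed, a minimal sentiment-classification sketch; the label mapping is undocumented, so treat it as an assumption to verify:
```python
from transformers import pipeline

clf = pipeline(
    "text-classification",
    model="axmx1/finetuning-sentiment-model-3000-samples_1",
)

# Label semantics (e.g. LABEL_1 = positive) are not documented; check before use.
print(clf("This movie was surprisingly good!"))
```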
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: adamw_torch with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.46.2
- Pytorch 2.5.0+cu121
- Datasets 3.1.0
- Tokenizers 0.20.3
|
mradermacher/magnum-v2-4b-i1-GGUF
|
mradermacher
| 2024-11-15T08:38:07Z
| 410
| 1
|
transformers
|
[
"transformers",
"gguf",
"chat",
"en",
"base_model:anthracite-org/magnum-v2-4b",
"base_model:quantized:anthracite-org/magnum-v2-4b",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2024-11-15T06:52:15Z
|
---
base_model: anthracite-org/magnum-v2-4b
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- chat
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/anthracite-org/magnum-v2-4b
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/magnum-v2-4b-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/magnum-v2-4b-i1-GGUF/resolve/main/magnum-v2-4b.i1-IQ1_S.gguf) | i1-IQ1_S | 1.3 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/magnum-v2-4b-i1-GGUF/resolve/main/magnum-v2-4b.i1-IQ1_M.gguf) | i1-IQ1_M | 1.4 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/magnum-v2-4b-i1-GGUF/resolve/main/magnum-v2-4b.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 1.5 | |
| [GGUF](https://huggingface.co/mradermacher/magnum-v2-4b-i1-GGUF/resolve/main/magnum-v2-4b.i1-IQ2_XS.gguf) | i1-IQ2_XS | 1.6 | |
| [GGUF](https://huggingface.co/mradermacher/magnum-v2-4b-i1-GGUF/resolve/main/magnum-v2-4b.i1-IQ2_S.gguf) | i1-IQ2_S | 1.7 | |
| [GGUF](https://huggingface.co/mradermacher/magnum-v2-4b-i1-GGUF/resolve/main/magnum-v2-4b.i1-IQ2_M.gguf) | i1-IQ2_M | 1.8 | |
| [GGUF](https://huggingface.co/mradermacher/magnum-v2-4b-i1-GGUF/resolve/main/magnum-v2-4b.i1-Q2_K.gguf) | i1-Q2_K | 1.9 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/magnum-v2-4b-i1-GGUF/resolve/main/magnum-v2-4b.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 2.0 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/magnum-v2-4b-i1-GGUF/resolve/main/magnum-v2-4b.i1-IQ3_XS.gguf) | i1-IQ3_XS | 2.1 | |
| [GGUF](https://huggingface.co/mradermacher/magnum-v2-4b-i1-GGUF/resolve/main/magnum-v2-4b.i1-Q3_K_S.gguf) | i1-Q3_K_S | 2.2 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/magnum-v2-4b-i1-GGUF/resolve/main/magnum-v2-4b.i1-IQ3_S.gguf) | i1-IQ3_S | 2.2 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/magnum-v2-4b-i1-GGUF/resolve/main/magnum-v2-4b.i1-IQ3_M.gguf) | i1-IQ3_M | 2.3 | |
| [GGUF](https://huggingface.co/mradermacher/magnum-v2-4b-i1-GGUF/resolve/main/magnum-v2-4b.i1-Q3_K_M.gguf) | i1-Q3_K_M | 2.4 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/magnum-v2-4b-i1-GGUF/resolve/main/magnum-v2-4b.i1-Q3_K_L.gguf) | i1-Q3_K_L | 2.6 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/magnum-v2-4b-i1-GGUF/resolve/main/magnum-v2-4b.i1-IQ4_XS.gguf) | i1-IQ4_XS | 2.6 | |
| [GGUF](https://huggingface.co/mradermacher/magnum-v2-4b-i1-GGUF/resolve/main/magnum-v2-4b.i1-Q4_0_4_4.gguf) | i1-Q4_0_4_4 | 2.7 | fast on arm, low quality |
| [GGUF](https://huggingface.co/mradermacher/magnum-v2-4b-i1-GGUF/resolve/main/magnum-v2-4b.i1-Q4_0_4_8.gguf) | i1-Q4_0_4_8 | 2.7 | fast on arm+i8mm, low quality |
| [GGUF](https://huggingface.co/mradermacher/magnum-v2-4b-i1-GGUF/resolve/main/magnum-v2-4b.i1-Q4_0_8_8.gguf) | i1-Q4_0_8_8 | 2.7 | fast on arm+sve, low quality |
| [GGUF](https://huggingface.co/mradermacher/magnum-v2-4b-i1-GGUF/resolve/main/magnum-v2-4b.i1-Q4_0.gguf) | i1-Q4_0 | 2.8 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/magnum-v2-4b-i1-GGUF/resolve/main/magnum-v2-4b.i1-Q4_K_S.gguf) | i1-Q4_K_S | 2.8 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/magnum-v2-4b-i1-GGUF/resolve/main/magnum-v2-4b.i1-Q4_K_M.gguf) | i1-Q4_K_M | 2.9 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/magnum-v2-4b-i1-GGUF/resolve/main/magnum-v2-4b.i1-Q5_K_S.gguf) | i1-Q5_K_S | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/magnum-v2-4b-i1-GGUF/resolve/main/magnum-v2-4b.i1-Q5_K_M.gguf) | i1-Q5_K_M | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/magnum-v2-4b-i1-GGUF/resolve/main/magnum-v2-4b.i1-Q6_K.gguf) | i1-Q6_K | 3.8 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
Supa-AI/llama3-8b-cpt-sahabatai-v1-instruct-q4_0-gguf
|
Supa-AI
| 2024-11-15T08:33:03Z
| 9
| 0
| null |
[
"gguf",
"llama-cpp",
"en",
"id",
"jv",
"su",
"base_model:GoToCompany/llama3-8b-cpt-sahabatai-v1-instruct",
"base_model:quantized:GoToCompany/llama3-8b-cpt-sahabatai-v1-instruct",
"license:llama3",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-11-15T08:30:55Z
|
---
base_model: GoToCompany/llama3-8b-cpt-sahabatai-v1-instruct
language:
- en
- id
- jv
- su
license: llama3
tags:
- llama-cpp
- gguf
---
# Supa-AI/llama3-8b-cpt-sahabatai-v1-instruct-q4_0-gguf
This model was converted to GGUF format from [`GoToCompany/llama3-8b-cpt-sahabatai-v1-instruct`](https://huggingface.co/GoToCompany/llama3-8b-cpt-sahabatai-v1-instruct) using llama.cpp.
Refer to the [original model card](https://huggingface.co/GoToCompany/llama3-8b-cpt-sahabatai-v1-instruct) for more details on the model.
## Use with llama.cpp
### CLI:
```bash
llama-cli --hf-repo Supa-AI/llama3-8b-cpt-sahabatai-v1-instruct-q4_0-gguf --hf-file llama3-8b-cpt-sahabatai-v1-instruct.q4_0.gguf -p "Your prompt here"
```
### Server:
```bash
llama-server --hf-repo Supa-AI/llama3-8b-cpt-sahabatai-v1-instruct-q4_0-gguf --hf-file llama3-8b-cpt-sahabatai-v1-instruct.q4_0.gguf -c 2048
```
## Model Details
- **Quantization Type:** q4_0
- **Original Model:** [GoToCompany/llama3-8b-cpt-sahabatai-v1-instruct](https://huggingface.co/GoToCompany/llama3-8b-cpt-sahabatai-v1-instruct)
- **Format:** GGUF
|
prijak/TS_1
|
prijak
| 2024-11-15T08:32:05Z
| 62
| 0
|
transformers
|
[
"transformers",
"safetensors",
"Llama",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-11-14T19:25:51Z
|
---
base_model: unsloth/llama-3.2-3b-instruct-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** prijak
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3.2-3b-instruct-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
mradermacher/SOLAR-10.7B-Instruct-v1.0-laser-GGUF
|
mradermacher
| 2024-11-15T08:31:12Z
| 21
| 0
|
transformers
|
[
"transformers",
"gguf",
"en",
"base_model:macadeliccc/SOLAR-10.7B-Instruct-v1.0-laser",
"base_model:quantized:macadeliccc/SOLAR-10.7B-Instruct-v1.0-laser",
"license:cc-by-nc-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-11-13T00:40:19Z
|
---
base_model: macadeliccc/SOLAR-10.7B-Instruct-v1.0-laser
language:
- en
library_name: transformers
license: cc-by-nc-2.0
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/macadeliccc/SOLAR-10.7B-Instruct-v1.0-laser
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/SOLAR-10.7B-Instruct-v1.0-laser-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/SOLAR-10.7B-Instruct-v1.0-laser-GGUF/resolve/main/SOLAR-10.7B-Instruct-v1.0-laser.Q2_K.gguf) | Q2_K | 4.1 | |
| [GGUF](https://huggingface.co/mradermacher/SOLAR-10.7B-Instruct-v1.0-laser-GGUF/resolve/main/SOLAR-10.7B-Instruct-v1.0-laser.Q3_K_S.gguf) | Q3_K_S | 4.8 | |
| [GGUF](https://huggingface.co/mradermacher/SOLAR-10.7B-Instruct-v1.0-laser-GGUF/resolve/main/SOLAR-10.7B-Instruct-v1.0-laser.Q3_K_M.gguf) | Q3_K_M | 5.3 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/SOLAR-10.7B-Instruct-v1.0-laser-GGUF/resolve/main/SOLAR-10.7B-Instruct-v1.0-laser.Q3_K_L.gguf) | Q3_K_L | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/SOLAR-10.7B-Instruct-v1.0-laser-GGUF/resolve/main/SOLAR-10.7B-Instruct-v1.0-laser.IQ4_XS.gguf) | IQ4_XS | 5.9 | |
| [GGUF](https://huggingface.co/mradermacher/SOLAR-10.7B-Instruct-v1.0-laser-GGUF/resolve/main/SOLAR-10.7B-Instruct-v1.0-laser.Q4_0_4_4.gguf) | Q4_0_4_4 | 6.2 | fast on arm, low quality |
| [GGUF](https://huggingface.co/mradermacher/SOLAR-10.7B-Instruct-v1.0-laser-GGUF/resolve/main/SOLAR-10.7B-Instruct-v1.0-laser.Q4_K_S.gguf) | Q4_K_S | 6.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/SOLAR-10.7B-Instruct-v1.0-laser-GGUF/resolve/main/SOLAR-10.7B-Instruct-v1.0-laser.Q4_K_M.gguf) | Q4_K_M | 6.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/SOLAR-10.7B-Instruct-v1.0-laser-GGUF/resolve/main/SOLAR-10.7B-Instruct-v1.0-laser.Q5_K_S.gguf) | Q5_K_S | 7.5 | |
| [GGUF](https://huggingface.co/mradermacher/SOLAR-10.7B-Instruct-v1.0-laser-GGUF/resolve/main/SOLAR-10.7B-Instruct-v1.0-laser.Q5_K_M.gguf) | Q5_K_M | 7.7 | |
| [GGUF](https://huggingface.co/mradermacher/SOLAR-10.7B-Instruct-v1.0-laser-GGUF/resolve/main/SOLAR-10.7B-Instruct-v1.0-laser.Q6_K.gguf) | Q6_K | 8.9 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/SOLAR-10.7B-Instruct-v1.0-laser-GGUF/resolve/main/SOLAR-10.7B-Instruct-v1.0-laser.Q8_0.gguf) | Q8_0 | 11.5 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/SOLAR-10.7B-Instruct-v1.0-laser-GGUF/resolve/main/SOLAR-10.7B-Instruct-v1.0-laser.f16.gguf) | f16 | 21.6 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
curlyfu/blip2-QA-generation
|
curlyfu
| 2024-11-15T08:25:45Z
| 5
| 2
|
peft
|
[
"peft",
"safetensors",
"image-to-text",
"base_model:ybelkada/blip2-opt-2.7b-fp16-sharded",
"base_model:adapter:ybelkada/blip2-opt-2.7b-fp16-sharded",
"license:apache-2.0",
"region:us"
] |
image-to-text
| 2024-05-02T17:18:52Z
|
---
library_name: peft
base_model: ybelkada/blip2-opt-2.7b-fp16-sharded
license: apache-2.0
pipeline_tag: image-to-text
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
A LoRA adapter for BLIP-2 that generates question-answer pairs from a picture.
## Inference Demo
```python
from datasets import load_dataset
from peft import PeftModel
import torch
from transformers import AutoProcessor, Blip2ForConditionalGeneration
# prepare the model
processor = AutoProcessor.from_pretrained("Salesforce/blip2-opt-2.7b")
model = Blip2ForConditionalGeneration.from_pretrained("ybelkada/blip2-opt-2.7b-fp16-sharded", device_map="auto", load_in_8bit=True)
model = PeftModel.from_pretrained(model, "curlyfu/blip2-OCR-QA-generation")
# prepare inputs
dataset = load_dataset("howard-hou/OCR-VQA", split="test")
example = dataset[10]
image = example["image"]
inputs = processor(images=image, return_tensors="pt").to("cuda", torch.float16)
pixel_values = inputs.pixel_values
generated_ids = model.generate(pixel_values=pixel_values, max_length=100)
generated_caption = processor.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(generated_caption)
```
## Thanks
[huggingface/notebooks](https://github.com/huggingface/notebooks)
|
binisha/speecht5_finetune_binisha
|
binisha
| 2024-11-15T08:24:04Z
| 7
| 0
|
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"speecht5",
"text-to-audio",
"generated_from_trainer",
"base_model:microsoft/speecht5_tts",
"base_model:finetune:microsoft/speecht5_tts",
"license:mit",
"endpoints_compatible",
"region:us"
] |
text-to-audio
| 2024-11-11T07:34:12Z
|
---
library_name: transformers
license: mit
base_model: microsoft/speecht5_tts
tags:
- generated_from_trainer
model-index:
- name: speecht5_finetune_binisha
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# speecht5_finetune_binisha
This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3836
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: adamw_torch with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- training_steps: 1500
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-------:|:----:|:---------------:|
| 0.6028 | 2.7586 | 100 | 0.5187 |
| 0.5195 | 5.5172 | 200 | 0.4851 |
| 0.5075 | 8.2759 | 300 | 0.4708 |
| 0.462 | 11.0345 | 400 | 0.4609 |
| 0.4429 | 13.7931 | 500 | 0.4294 |
| 0.4303 | 16.5517 | 600 | 0.4249 |
| 0.4172 | 19.3103 | 700 | 0.4184 |
| 0.402 | 22.0690 | 800 | 0.4077 |
| 0.3898 | 24.8276 | 900 | 0.3975 |
| 0.3966 | 27.5862 | 1000 | 0.4197 |
| 0.3773 | 30.3448 | 1100 | 0.3955 |
| 0.3658 | 33.1034 | 1200 | 0.3878 |
| 0.3644 | 35.8621 | 1300 | 0.3878 |
| 0.3622 | 38.6207 | 1400 | 0.3841 |
| 0.3671 | 41.3793 | 1500 | 0.3836 |
### Framework versions
- Transformers 4.46.2
- Pytorch 2.5.0+cu121
- Datasets 3.1.0
- Tokenizers 0.20.3
|
Supa-AI/llama3-8b-cpt-sahabatai-v1-instruct-q8_0-gguf
|
Supa-AI
| 2024-11-15T08:15:07Z
| 6
| 0
| null |
[
"gguf",
"llama-cpp",
"en",
"id",
"jv",
"su",
"base_model:GoToCompany/llama3-8b-cpt-sahabatai-v1-instruct",
"base_model:quantized:GoToCompany/llama3-8b-cpt-sahabatai-v1-instruct",
"license:llama3",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-11-15T08:11:37Z
|
---
base_model: GoToCompany/llama3-8b-cpt-sahabatai-v1-instruct
language:
- en
- id
- jv
- su
license: llama3
tags:
- llama-cpp
- gguf
---
# Supa-AI/llama3-8b-cpt-sahabatai-v1-instruct-q8_0-gguf
This model was converted to GGUF format from [`GoToCompany/llama3-8b-cpt-sahabatai-v1-instruct`](https://huggingface.co/GoToCompany/llama3-8b-cpt-sahabatai-v1-instruct) using llama.cpp.
Refer to the [original model card](https://huggingface.co/GoToCompany/llama3-8b-cpt-sahabatai-v1-instruct) for more details on the model.
## Use with llama.cpp
### CLI:
```bash
llama-cli --hf-repo Supa-AI/llama3-8b-cpt-sahabatai-v1-instruct-q8_0-gguf --hf-file llama3-8b-cpt-sahabatai-v1-instruct.q8_0.gguf -p "Your prompt here"
```
### Server:
```bash
llama-server --hf-repo Supa-AI/llama3-8b-cpt-sahabatai-v1-instruct-q8_0-gguf --hf-file llama3-8b-cpt-sahabatai-v1-instruct.q8_0.gguf -c 2048
```
## Model Details
- **Quantization Type:** q8_0
- **Original Model:** [GoToCompany/llama3-8b-cpt-sahabatai-v1-instruct](https://huggingface.co/GoToCompany/llama3-8b-cpt-sahabatai-v1-instruct)
- **Format:** GGUF
|
Premalatha-success/finetuning-sentiment-model-3000-samples_1
|
Premalatha-success
| 2024-11-15T08:06:54Z
| 119
| 0
|
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-11-06T12:15:53Z
|
---
library_name: transformers
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: finetuning-sentiment-model-3000-samples_1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-3000-samples_1
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3347
- Accuracy: 0.8833
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: adamw_torch with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.46.2
- Pytorch 2.5.0+cu121
- Datasets 3.1.0
- Tokenizers 0.20.3
|
briannlongzhao/hydroflask_textual_inversion
|
briannlongzhao
| 2024-11-15T08:06:07Z
| 14
| 0
|
diffusers
|
[
"diffusers",
"tensorboard",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"textual_inversion",
"base_model:stabilityai/stable-diffusion-2-1",
"base_model:adapter:stabilityai/stable-diffusion-2-1",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2024-11-15T06:30:53Z
|
---
license: creativeml-openrail-m
base_model: stabilityai/stable-diffusion-2-1
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- textual_inversion
inference: true
---
# Textual inversion text2image fine-tuning - briannlongzhao/hydroflask_textual_inversion
These are textual inversion adaptation weights for stabilityai/stable-diffusion-2-1. You can find some example images below.
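A minimal loading sketch with 🤗 Diffusers follows; the placeholder token learned during training is not documented here, so `<hydroflask>` below is an assumption (check the repo's learned-embedding files for the actual token):
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16
).to("cuda")

# Load the learned textual-inversion embedding from this repo.
pipe.load_textual_inversion("briannlongzhao/hydroflask_textual_inversion")

# "<hydroflask>" is a guess at the placeholder token; replace with the actual one.
image = pipe("a photo of a <hydroflask> on a mountain trail").images[0]
image.save("hydroflask.png")
```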
|
mradermacher/Llama-3.1-8B-Instruct-Grammatical-Error-Correction-2-GGUF
|
mradermacher
| 2024-11-15T07:57:10Z
| 12
| 0
|
transformers
|
[
"transformers",
"gguf",
"en",
"base_model:johnrhimawan/Llama-3.1-8B-Instruct-Grammatical-Error-Correction-2",
"base_model:quantized:johnrhimawan/Llama-3.1-8B-Instruct-Grammatical-Error-Correction-2",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-11-15T06:56:17Z
|
---
base_model: johnrhimawan/Llama-3.1-8B-Instruct-Grammatical-Error-Correction-2
language:
- en
library_name: transformers
quantized_by: mradermacher
tags: []
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/johnrhimawan/Llama-3.1-8B-Instruct-Grammatical-Error-Correction-2
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-8B-Instruct-Grammatical-Error-Correction-2-GGUF/resolve/main/Llama-3.1-8B-Instruct-Grammatical-Error-Correction-2.Q2_K.gguf) | Q2_K | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-8B-Instruct-Grammatical-Error-Correction-2-GGUF/resolve/main/Llama-3.1-8B-Instruct-Grammatical-Error-Correction-2.Q3_K_S.gguf) | Q3_K_S | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-8B-Instruct-Grammatical-Error-Correction-2-GGUF/resolve/main/Llama-3.1-8B-Instruct-Grammatical-Error-Correction-2.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-8B-Instruct-Grammatical-Error-Correction-2-GGUF/resolve/main/Llama-3.1-8B-Instruct-Grammatical-Error-Correction-2.Q3_K_L.gguf) | Q3_K_L | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-8B-Instruct-Grammatical-Error-Correction-2-GGUF/resolve/main/Llama-3.1-8B-Instruct-Grammatical-Error-Correction-2.IQ4_XS.gguf) | IQ4_XS | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-8B-Instruct-Grammatical-Error-Correction-2-GGUF/resolve/main/Llama-3.1-8B-Instruct-Grammatical-Error-Correction-2.Q4_0_4_4.gguf) | Q4_0_4_4 | 4.8 | fast on arm, low quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-8B-Instruct-Grammatical-Error-Correction-2-GGUF/resolve/main/Llama-3.1-8B-Instruct-Grammatical-Error-Correction-2.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-8B-Instruct-Grammatical-Error-Correction-2-GGUF/resolve/main/Llama-3.1-8B-Instruct-Grammatical-Error-Correction-2.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-8B-Instruct-Grammatical-Error-Correction-2-GGUF/resolve/main/Llama-3.1-8B-Instruct-Grammatical-Error-Correction-2.Q5_K_S.gguf) | Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-8B-Instruct-Grammatical-Error-Correction-2-GGUF/resolve/main/Llama-3.1-8B-Instruct-Grammatical-Error-Correction-2.Q5_K_M.gguf) | Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-8B-Instruct-Grammatical-Error-Correction-2-GGUF/resolve/main/Llama-3.1-8B-Instruct-Grammatical-Error-Correction-2.Q6_K.gguf) | Q6_K | 6.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-8B-Instruct-Grammatical-Error-Correction-2-GGUF/resolve/main/Llama-3.1-8B-Instruct-Grammatical-Error-Correction-2.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-8B-Instruct-Grammatical-Error-Correction-2-GGUF/resolve/main/Llama-3.1-8B-Instruct-Grammatical-Error-Correction-2.f16.gguf) | f16 | 16.2 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
charisgao/fine_tuned_main_raid_poetry
|
charisgao
| 2024-11-15T07:54:00Z
| 116
| 0
|
transformers
|
[
"transformers",
"safetensors",
"roberta",
"text-classification",
"generated_from_trainer",
"base_model:FacebookAI/roberta-large",
"base_model:finetune:FacebookAI/roberta-large",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-11-15T07:52:20Z
|
---
library_name: transformers
license: mit
base_model: FacebookAI/roberta-large
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: fine_tuned_main_raid
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# fine_tuned_main_raid
This model is a fine-tuned version of [FacebookAI/roberta-large](https://huggingface.co/FacebookAI/roberta-large) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0407
- Accuracy: 0.9922
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: adamw_torch with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:------:|:----:|:---------------:|:--------:|
| 0.3543 | 0.0767 | 100 | 0.1765 | 0.9655 |
| 0.1516 | 0.1534 | 200 | 0.1955 | 0.9724 |
| 0.1415 | 0.2301 | 300 | 0.1323 | 0.9724 |
| 0.2002 | 0.3067 | 400 | 0.0993 | 0.9716 |
| 0.1057 | 0.3834 | 500 | 0.2031 | 0.9552 |
| 0.0734 | 0.4601 | 600 | 0.1010 | 0.9802 |
| 0.0725 | 0.5368 | 700 | 0.1511 | 0.9767 |
| 0.1326 | 0.6135 | 800 | 0.0607 | 0.9879 |
| 0.0667 | 0.6902 | 900 | 0.0734 | 0.9845 |
| 0.1132 | 0.7669 | 1000 | 0.0878 | 0.9819 |
| 0.0731 | 0.8436 | 1100 | 0.0694 | 0.9888 |
| 0.0678 | 0.9202 | 1200 | 0.0704 | 0.9853 |
| 0.0455 | 0.9969 | 1300 | 0.0522 | 0.9905 |
| 0.0656 | 1.0736 | 1400 | 0.0646 | 0.9871 |
| 0.0463 | 1.1503 | 1500 | 0.0407 | 0.9922 |
| 0.0432 | 1.2270 | 1600 | 0.0646 | 0.9897 |
| 0.0347 | 1.3037 | 1700 | 0.0421 | 0.9931 |
| 0.0361 | 1.3804 | 1800 | 0.0420 | 0.9931 |
### Framework versions
- Transformers 4.46.2
- Pytorch 2.5.0+cu121
- Datasets 3.1.0
- Tokenizers 0.20.3
|
centaur31/distilbert-base-uncased-finetuned-stsb
|
centaur31
| 2024-11-15T07:43:50Z
| 5
| 0
| null |
[
"pytorch",
"distilbert",
"generated_from_trainer",
"license:apache-2.0",
"region:us"
] | null | 2024-11-15T07:42:28Z
|
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- spearmanr
model-index:
- name: distilbert-base-uncased-finetuned-stsb
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-stsb
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5704
- Pearson: 0.8650
- Spearmanr: 0.8630
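As a minimal usage sketch (assuming the tokenizer was saved alongside the checkpoint; STS-B is a single-logit regression task, so the output is a similarity score on roughly a 0-5 scale):
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "centaur31/distilbert-base-uncased-finetuned-stsb"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)

# Score the semantic similarity of a sentence pair.
inputs = tokenizer(
    "A man is playing a guitar.",
    "A person plays an instrument.",
    return_tensors="pt",
)
with torch.no_grad():
    score = model(**inputs).logits.squeeze().item()
print(score)
```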
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Pearson | Spearmanr |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:---------:|
| No log | 1.0 | 360 | 0.6706 | 0.8571 | 0.8549 |
| 1.0189 | 2.0 | 720 | 0.5704 | 0.8650 | 0.8630 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.1.2+cu121
- Datasets 2.15.0
- Tokenizers 0.13.3
|
CheeLi03/whisper-base-tr-puct-4k
|
CheeLi03
| 2024-11-15T07:38:22Z
| 7
| 0
| null |
[
"tensorboard",
"safetensors",
"whisper",
"hf-asr-leaderboard",
"generated_from_trainer",
"tr",
"dataset:fleurs",
"base_model:openai/whisper-base",
"base_model:finetune:openai/whisper-base",
"license:apache-2.0",
"model-index",
"region:us"
] | null | 2024-11-15T05:30:47Z
|
---
base_model: openai/whisper-base
datasets:
- fleurs
language:
- tr
license: apache-2.0
metrics:
- wer
tags:
- hf-asr-leaderboard
- generated_from_trainer
model-index:
- name: Whisper Base Turkish Punctuation 4k - Chee Li
results:
- task:
type: automatic-speech-recognition
name: Automatic Speech Recognition
dataset:
name: Google Fleurs
type: fleurs
config: tr_tr
split: None
args: 'config: tr split: test'
metrics:
- type: wer
value: 37.878198646651626
name: Wer
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Base Turkish Punctuation 4k - Chee Li
This model is a fine-tuned version of [openai/whisper-base](https://huggingface.co/openai/whisper-base) on the Google Fleurs dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6273
- Wer: 37.8782
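A minimal transcription sketch using the `transformers` ASR pipeline (the audio path is a placeholder; 16 kHz mono input is assumed):
```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="CheeLi03/whisper-base-tr-puct-4k",
)

# "sample_turkish.wav" is an illustrative local file.
print(asr("sample_turkish.wav")["text"])
```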
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-------:|:----:|:---------------:|:-------:|
| 0.1116 | 5.5866 | 1000 | 0.4785 | 31.6948 |
| 0.0073 | 11.1732 | 2000 | 0.5710 | 34.9615 |
| 0.0036 | 16.7598 | 3000 | 0.6137 | 36.7349 |
| 0.0027 | 22.3464 | 4000 | 0.6273 | 37.8782 |
### Framework versions
- Transformers 4.43.4
- Pytorch 2.3.1+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1
|
mradermacher/Qwen2.5-3B-WPO-bf16-1-i1-GGUF
|
mradermacher
| 2024-11-15T07:37:10Z
| 52
| 0
|
transformers
|
[
"transformers",
"gguf",
"generated_from_trainer",
"trl",
"xpo",
"en",
"dataset:trl-lib/ultrafeedback-prompt",
"base_model:MYC081/Qwen2.5-3B-WPO-bf16-1",
"base_model:quantized:MYC081/Qwen2.5-3B-WPO-bf16-1",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2024-11-15T07:01:13Z
|
---
base_model: MYC081/Qwen2.5-3B-WPO-bf16-1
datasets: trl-lib/ultrafeedback-prompt
language:
- en
library_name: transformers
model_name: Qwen2.5-3B-WPO-bf16-1
quantized_by: mradermacher
tags:
- generated_from_trainer
- trl
- xpo
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/MYC081/Qwen2.5-3B-WPO-bf16-1
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Qwen2.5-3B-WPO-bf16-1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
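For a concrete starting point, a minimal sketch with `llama-cpp-python` (the file name assumes you have downloaded one of the quants listed below, e.g. the recommended i1-Q4_K_M):
```python
from llama_cpp import Llama

# Path to a locally downloaded quant from this repository (illustrative).
llm = Llama(model_path="Qwen2.5-3B-WPO-bf16-1.i1-Q4_K_M.gguf", n_ctx=4096)

out = llm("Briefly explain what an imatrix quant is.", max_tokens=64)
print(out["choices"][0]["text"])
```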
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-3B-WPO-bf16-1-i1-GGUF/resolve/main/Qwen2.5-3B-WPO-bf16-1.i1-IQ1_S.gguf) | i1-IQ1_S | 0.9 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-3B-WPO-bf16-1-i1-GGUF/resolve/main/Qwen2.5-3B-WPO-bf16-1.i1-IQ1_M.gguf) | i1-IQ1_M | 1.0 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-3B-WPO-bf16-1-i1-GGUF/resolve/main/Qwen2.5-3B-WPO-bf16-1.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 1.0 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-3B-WPO-bf16-1-i1-GGUF/resolve/main/Qwen2.5-3B-WPO-bf16-1.i1-IQ2_XS.gguf) | i1-IQ2_XS | 1.1 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-3B-WPO-bf16-1-i1-GGUF/resolve/main/Qwen2.5-3B-WPO-bf16-1.i1-IQ2_S.gguf) | i1-IQ2_S | 1.2 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-3B-WPO-bf16-1-i1-GGUF/resolve/main/Qwen2.5-3B-WPO-bf16-1.i1-IQ2_M.gguf) | i1-IQ2_M | 1.2 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-3B-WPO-bf16-1-i1-GGUF/resolve/main/Qwen2.5-3B-WPO-bf16-1.i1-Q2_K.gguf) | i1-Q2_K | 1.4 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-3B-WPO-bf16-1-i1-GGUF/resolve/main/Qwen2.5-3B-WPO-bf16-1.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 1.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-3B-WPO-bf16-1-i1-GGUF/resolve/main/Qwen2.5-3B-WPO-bf16-1.i1-IQ3_XS.gguf) | i1-IQ3_XS | 1.5 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-3B-WPO-bf16-1-i1-GGUF/resolve/main/Qwen2.5-3B-WPO-bf16-1.i1-Q3_K_S.gguf) | i1-Q3_K_S | 1.6 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-3B-WPO-bf16-1-i1-GGUF/resolve/main/Qwen2.5-3B-WPO-bf16-1.i1-IQ3_S.gguf) | i1-IQ3_S | 1.6 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-3B-WPO-bf16-1-i1-GGUF/resolve/main/Qwen2.5-3B-WPO-bf16-1.i1-IQ3_M.gguf) | i1-IQ3_M | 1.6 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-3B-WPO-bf16-1-i1-GGUF/resolve/main/Qwen2.5-3B-WPO-bf16-1.i1-Q3_K_M.gguf) | i1-Q3_K_M | 1.7 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-3B-WPO-bf16-1-i1-GGUF/resolve/main/Qwen2.5-3B-WPO-bf16-1.i1-Q3_K_L.gguf) | i1-Q3_K_L | 1.8 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-3B-WPO-bf16-1-i1-GGUF/resolve/main/Qwen2.5-3B-WPO-bf16-1.i1-IQ4_XS.gguf) | i1-IQ4_XS | 1.8 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-3B-WPO-bf16-1-i1-GGUF/resolve/main/Qwen2.5-3B-WPO-bf16-1.i1-Q4_0_4_4.gguf) | i1-Q4_0_4_4 | 1.9 | fast on arm, low quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-3B-WPO-bf16-1-i1-GGUF/resolve/main/Qwen2.5-3B-WPO-bf16-1.i1-Q4_0_4_8.gguf) | i1-Q4_0_4_8 | 1.9 | fast on arm+i8mm, low quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-3B-WPO-bf16-1-i1-GGUF/resolve/main/Qwen2.5-3B-WPO-bf16-1.i1-Q4_0_8_8.gguf) | i1-Q4_0_8_8 | 1.9 | fast on arm+sve, low quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-3B-WPO-bf16-1-i1-GGUF/resolve/main/Qwen2.5-3B-WPO-bf16-1.i1-Q4_0.gguf) | i1-Q4_0 | 1.9 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-3B-WPO-bf16-1-i1-GGUF/resolve/main/Qwen2.5-3B-WPO-bf16-1.i1-Q4_K_S.gguf) | i1-Q4_K_S | 1.9 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-3B-WPO-bf16-1-i1-GGUF/resolve/main/Qwen2.5-3B-WPO-bf16-1.i1-Q4_K_M.gguf) | i1-Q4_K_M | 2.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-3B-WPO-bf16-1-i1-GGUF/resolve/main/Qwen2.5-3B-WPO-bf16-1.i1-Q5_K_S.gguf) | i1-Q5_K_S | 2.3 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-3B-WPO-bf16-1-i1-GGUF/resolve/main/Qwen2.5-3B-WPO-bf16-1.i1-Q5_K_M.gguf) | i1-Q5_K_M | 2.3 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-3B-WPO-bf16-1-i1-GGUF/resolve/main/Qwen2.5-3B-WPO-bf16-1.i1-Q6_K.gguf) | i1-Q6_K | 2.6 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
goethe0101/llama-3-2-3B-wame-16bit-survey-generator5
|
goethe0101
| 2024-11-15T07:35:38Z
| 126
| 0
|
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-11-15T07:33:48Z
|
---
base_model: unsloth/llama-3.2-3b-instruct-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** goethe0101
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3.2-3b-instruct-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
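A minimal generation sketch with plain `transformers` (the prompt is illustrative; the chat template is inherited from the Llama 3.2 tokenizer):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "goethe0101/llama-3-2-3B-wame-16bit-survey-generator5"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [{"role": "user", "content": "Draft a three-question survey about remote work."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=200)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```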
|
masafresh/swin-transformer
|
masafresh
| 2024-11-15T07:34:37Z
| 213
| 0
|
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"swin",
"image-classification",
"generated_from_trainer",
"base_model:microsoft/swin-tiny-patch4-window7-224",
"base_model:finetune:microsoft/swin-tiny-patch4-window7-224",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2024-11-15T03:29:15Z
|
---
library_name: transformers
license: apache-2.0
base_model: microsoft/swin-tiny-patch4-window7-224
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: swin-transformer
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swin-transformer
This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7366
- Accuracy: 0.39
- F1: 0.2753
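A minimal inference sketch via the image-classification pipeline (the image path is a placeholder; label names come from the checkpoint config):
```python
from PIL import Image
from transformers import pipeline

classifier = pipeline("image-classification", model="masafresh/swin-transformer")

# "example.jpg" is an illustrative local image file.
print(classifier(Image.open("example.jpg")))
```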
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 96
- eval_batch_size: 96
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 384
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:------:|:----:|:---------------:|:--------:|:------:|
| No log | 0.7273 | 2 | 2.0766 | 0.3 | 0.2161 |
| No log | 1.8182 | 5 | 1.7687 | 0.37 | 0.2461 |
| No log | 2.1818 | 6 | 1.7366 | 0.39 | 0.2753 |
### Framework versions
- Transformers 4.45.2
- Pytorch 2.4.0+cu121
- Datasets 3.1.0
- Tokenizers 0.20.1
|
hanslab37/sd-class-butterflies-32
|
hanslab37
| 2024-11-15T07:31:04Z
| 44
| 0
|
diffusers
|
[
"diffusers",
"safetensors",
"pytorch",
"unconditional-image-generation",
"diffusion-models-class",
"license:mit",
"diffusers:DDPMPipeline",
"region:us"
] |
unconditional-image-generation
| 2024-11-15T07:30:44Z
|
---
license: mit
tags:
- pytorch
- diffusers
- unconditional-image-generation
- diffusion-models-class
---
# Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class)
This model is a diffusion model for unconditional image generation of cute 🦋.
## Usage
```python
from diffusers import DDPMPipeline

# Load the pipeline and sample one unconditional butterfly image.
pipeline = DDPMPipeline.from_pretrained('hanslab37/sd-class-butterflies-32')
image = pipeline().images[0]
image
```
|
mradermacher/DistilabelBeagle14-7B-GGUF
|
mradermacher
| 2024-11-15T07:28:56Z
| 46
| 0
|
transformers
|
[
"transformers",
"gguf",
"merge",
"mergekit",
"lazymergekit",
"dpo",
"rlhf",
"rlaif",
"distilabel",
"en",
"base_model:argilla/DistilabelBeagle14-7B",
"base_model:quantized:argilla/DistilabelBeagle14-7B",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-11-11T18:39:34Z
|
---
base_model: argilla/DistilabelBeagle14-7B
language:
- en
library_name: transformers
license: cc-by-nc-4.0
quantized_by: mradermacher
tags:
- merge
- mergekit
- lazymergekit
- dpo
- rlhf
- rlaif
- distilabel
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/argilla/DistilabelBeagle14-7B
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/DistilabelBeagle14-7B-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/DistilabelBeagle14-7B-GGUF/resolve/main/DistilabelBeagle14-7B.Q2_K.gguf) | Q2_K | 2.8 | |
| [GGUF](https://huggingface.co/mradermacher/DistilabelBeagle14-7B-GGUF/resolve/main/DistilabelBeagle14-7B.Q3_K_S.gguf) | Q3_K_S | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/DistilabelBeagle14-7B-GGUF/resolve/main/DistilabelBeagle14-7B.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/DistilabelBeagle14-7B-GGUF/resolve/main/DistilabelBeagle14-7B.Q3_K_L.gguf) | Q3_K_L | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/DistilabelBeagle14-7B-GGUF/resolve/main/DistilabelBeagle14-7B.IQ4_XS.gguf) | IQ4_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/DistilabelBeagle14-7B-GGUF/resolve/main/DistilabelBeagle14-7B.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/DistilabelBeagle14-7B-GGUF/resolve/main/DistilabelBeagle14-7B.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/DistilabelBeagle14-7B-GGUF/resolve/main/DistilabelBeagle14-7B.Q5_K_S.gguf) | Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/DistilabelBeagle14-7B-GGUF/resolve/main/DistilabelBeagle14-7B.Q5_K_M.gguf) | Q5_K_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/DistilabelBeagle14-7B-GGUF/resolve/main/DistilabelBeagle14-7B.Q6_K.gguf) | Q6_K | 6.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/DistilabelBeagle14-7B-GGUF/resolve/main/DistilabelBeagle14-7B.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
AAbduallah1/Finetuned-meta-llama-Llama-3.2-3B-instruct
|
AAbduallah1
| 2024-11-15T07:27:03Z
| 128
| 0
|
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-11-15T07:24:57Z
|
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
mlx-community/falcon-mamba-7b-8bit-instruct
|
mlx-community
| 2024-11-15T07:24:37Z
| 8
| 0
|
mlx
|
[
"mlx",
"safetensors",
"falcon_mamba",
"text-generation",
"conversational",
"en",
"dataset:tiiuae/falcon-refinedweb",
"dataset:HuggingFaceFW/fineweb-edu",
"base_model:tiiuae/falcon-mamba-7b-instruct",
"base_model:quantized:tiiuae/falcon-mamba-7b-instruct",
"license:other",
"8-bit",
"region:us"
] |
text-generation
| 2024-11-15T07:21:45Z
|
---
base_model: tiiuae/falcon-mamba-7b-instruct
datasets:
- tiiuae/falcon-refinedweb
- HuggingFaceFW/fineweb-edu
language:
- en
license: other
license_name: falcon-mamba-7b-license
license_link: https://falconllm.tii.ae/falcon-mamba-7b-terms-and-conditions.html
pipeline_tag: text-generation
tags:
- mlx
inference: true
---
# mlx-community/falcon-mamba-7b-8bit-instruct
The Model [mlx-community/falcon-mamba-7b-8bit-instruct](https://huggingface.co/mlx-community/falcon-mamba-7b-8bit-instruct) was converted to MLX format from [tiiuae/falcon-mamba-7b-instruct](https://huggingface.co/tiiuae/falcon-mamba-7b-instruct) using mlx-lm version **0.19.2**.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate

model, tokenizer = load("mlx-community/falcon-mamba-7b-8bit-instruct")

prompt = "hello"

if hasattr(tokenizer, "apply_chat_template") and tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(
        messages, tokenize=False, add_generation_prompt=True
    )

response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
|
ssunbear/bert-base-finetuned-ynat
|
ssunbear
| 2024-11-15T07:13:03Z
| 8
| 0
| null |
[
"safetensors",
"bert",
"text-classification",
"ko",
"base_model:klue/bert-base",
"base_model:finetune:klue/bert-base",
"license:mit",
"region:us"
] |
text-classification
| 2024-11-07T14:19:33Z
|
---
license: mit
language:
- ko
metrics:
- f1
- accuracy
base_model:
- klue/bert-base
pipeline_tag: text-classification
---
# ssunbear/bert-base-finetuned-ynat
This model is a fine-tuned version of [klue/bert-base](https://huggingface.co/klue/bert-base), trained on an unofficial dataset (processed klue/bert-base-ynat data) provided by the "**Boostcamp AI Tech 7th NLP - Topic Classification Project**".
## Model Description
This model was designed for topic classification and was trained on data collected during the Boostcamp AI Tech course. Some of the data provided in the competition was removed, augmented, and restructured. (The dataset is unofficial due to copyright issues.)
- Update: ssunbear/bert-base-finetuned-ynat-v2 -> improved performance
## Performance
- **F1 Score**: 0.8315
- **Accuracy**: 0.8375
## Usage
This model can be loaded and used easily with the Hugging Face Transformers library:
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_name = "ssunbear/bert-base-finetuned-ynat"

# Load the model and tokenizer
tokenizer = AutoTokenizer.from_pretrained("klue/bert-base")
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=7)
```
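A brief follow-up showing a single prediction (the headline is illustrative; the output is an integer id over the 7 YNAT topic classes):
```python
import torch

inputs = tokenizer("삼성전자, 새로운 반도체 공장 착공", return_tensors="pt")
with torch.no_grad():
    topic_id = model(**inputs).logits.argmax(dim=-1).item()
print(topic_id)  # one of the 7 topic classes (0-6)
```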
## Training Data
- Data source: Boostcamp AI Tech, 7th cohort
- Data type: unofficial dataset for text classification
## License
The model was trained on an unofficial dataset, which includes data that has not been released publicly due to copyright issues.
|
ElderlyDed/whisper-small-ru-v2
|
ElderlyDed
| 2024-11-15T07:11:34Z
| 105
| 0
|
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"hi",
"dataset:mozilla-foundation/common_voice_11_0",
"base_model:openai/whisper-small",
"base_model:finetune:openai/whisper-small",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2024-11-14T11:22:04Z
|
---
library_name: transformers
language:
- hi
license: apache-2.0
base_model: openai/whisper-small
tags:
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_11_0
metrics:
- wer
model-index:
- name: Whisper Small Ru V2- Agas
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 11.0
type: mozilla-foundation/common_voice_11_0
config: ru
split: None
args: 'config: ru, split: test'
metrics:
- name: Wer
type: wer
value: 26.755885513333983
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Small Ru V2- Agas
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Common Voice 11.0 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3938
- Wer: 26.7559
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08 (no additional optimizer arguments)
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-------:|:----:|:---------------:|:-------:|
| 0.0 | 9.0909 | 1000 | 0.3401 | 24.5543 |
| 0.0 | 18.1818 | 2000 | 0.3726 | 26.0074 |
| 0.0 | 27.2727 | 3000 | 0.3879 | 26.5935 |
| 0.0 | 36.3636 | 4000 | 0.3938 | 26.7559 |
### Framework versions
- Transformers 4.46.2
- Pytorch 2.5.0+cu121
- Datasets 3.1.0
- Tokenizers 0.20.3
|
mlx-community/falcon-mamba-7b-4bit-instruct
|
mlx-community
| 2024-11-15T07:08:18Z
| 16
| 0
|
mlx
|
[
"mlx",
"safetensors",
"falcon_mamba",
"text-generation",
"conversational",
"en",
"dataset:tiiuae/falcon-refinedweb",
"dataset:HuggingFaceFW/fineweb-edu",
"base_model:tiiuae/falcon-mamba-7b-instruct",
"base_model:quantized:tiiuae/falcon-mamba-7b-instruct",
"license:other",
"4-bit",
"region:us"
] |
text-generation
| 2024-11-15T07:06:49Z
|
---
base_model: tiiuae/falcon-mamba-7b-instruct
datasets:
- tiiuae/falcon-refinedweb
- HuggingFaceFW/fineweb-edu
language:
- en
license: other
license_name: falcon-mamba-7b-license
license_link: https://falconllm.tii.ae/falcon-mamba-7b-terms-and-conditions.html
pipeline_tag: text-generation
tags:
- mlx
inference: true
---
# mlx-community/falcon-mamba-7b-4bit-instruct
The Model [mlx-community/falcon-mamba-7b-4bit-instruct](https://huggingface.co/mlx-community/falcon-mamba-7b-4bit-instruct) was converted to MLX format from [tiiuae/falcon-mamba-7b-instruct](https://huggingface.co/tiiuae/falcon-mamba-7b-instruct) using mlx-lm version **0.19.2**.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate

model, tokenizer = load("mlx-community/falcon-mamba-7b-4bit-instruct")

prompt = "hello"

if hasattr(tokenizer, "apply_chat_template") and tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(
        messages, tokenize=False, add_generation_prompt=True
    )

response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
|
hfcsrd/sn29_v2_updated_2
|
hfcsrd
| 2024-11-15T07:05:34Z
| 39
| 0
|
transformers
|
[
"transformers",
"safetensors",
"phi3",
"text-generation",
"custom_code",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-11-15T07:02:41Z
|
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
EpistemeAI2/Fireball-Meta-Llama-3.1-8B-Instruct-Agent-0.005-128K-code-COT
|
EpistemeAI2
| 2024-11-15T07:00:48Z
| 22
| 0
|
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"conversational",
"en",
"arxiv:2210.03629",
"base_model:EpistemeAI/Fireball-Meta-Llama-3.1-8B-Instruct-Agent-0.003-128K-code",
"base_model:finetune:EpistemeAI/Fireball-Meta-Llama-3.1-8B-Instruct-Agent-0.003-128K-code",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-10-11T14:00:25Z
|
---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
base_model: EpistemeAI/Fireball-Meta-Llama-3.1-8B-Instruct-Agent-0.003-128K-code
model-index:
- name: Fireball-Meta-Llama-3.1-8B-Instruct-Agent-0.005-128K-code-COT
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: IFEval (0-Shot)
type: HuggingFaceH4/ifeval
args:
num_few_shot: 0
metrics:
- type: inst_level_strict_acc and prompt_level_strict_acc
value: 46.33
name: strict accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=EpistemeAI2/Fireball-Meta-Llama-3.1-8B-Instruct-Agent-0.005-128K-code-COT
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: BBH (3-Shot)
type: BBH
args:
num_few_shot: 3
metrics:
- type: acc_norm
value: 26.4
name: normalized accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=EpistemeAI2/Fireball-Meta-Llama-3.1-8B-Instruct-Agent-0.005-128K-code-COT
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MATH Lvl 5 (4-Shot)
type: hendrycks/competition_math
args:
num_few_shot: 4
metrics:
- type: exact_match
value: 10.5
name: exact match
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=EpistemeAI2/Fireball-Meta-Llama-3.1-8B-Instruct-Agent-0.005-128K-code-COT
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GPQA (0-shot)
type: Idavidrein/gpqa
args:
num_few_shot: 0
metrics:
- type: acc_norm
value: 8.28
name: acc_norm
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=EpistemeAI2/Fireball-Meta-Llama-3.1-8B-Instruct-Agent-0.005-128K-code-COT
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MuSR (0-shot)
type: TAUR-Lab/MuSR
args:
num_few_shot: 0
metrics:
- type: acc_norm
value: 5.01
name: acc_norm
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=EpistemeAI2/Fireball-Meta-Llama-3.1-8B-Instruct-Agent-0.005-128K-code-COT
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU-PRO (5-shot)
type: TIGER-Lab/MMLU-Pro
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 28.5
name: accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=EpistemeAI2/Fireball-Meta-Llama-3.1-8B-Instruct-Agent-0.005-128K-code-COT
name: Open LLM Leaderboard
---
# Agent Llama with tasks
An experimental and revolutionary fine-tuning technique that turns Llama 3.1 8B into an agentic coder with tasks and CoT (Chain of Thought). It was fine-tuned on a code dataset and on Glaive's CoT Tasks dataset for the coder agent.
It has some built-in agent features:
- search
- calculator
- ReAct. [Synergizing Reasoning and Acting in Language Models](https://arxiv.org/abs/2210.03629)
- fine-tuned ReAct for better responses
Other notable features:
- self-learning using Unsloth (in progress)
- can be used in RAG applications
- memory: [**please use LangChain memory, section "Message persistence"**](https://python.langchain.com/docs/tutorials/chatbot/)
It works well with LangChain or LlamaIndex.
Context Window: 128K
### Installation
```bash
pip install --upgrade "transformers>=4.43.2" torch==2.3.1 accelerate vllm==0.5.3.post1
```
Developers can easily integrate EpistemeAI/Fireball-Meta-Llama-3.1-8B-Instruct-Agent-0.003-128K into their projects using popular libraries like Transformers and vLLM. The following sections illustrate the usage with simple hands-on examples:
Optional: to use the built-in tools, add the following to the system prompt: "Environment: ipython. Tools: brave_search, wolfram_alpha. Cutting Knowledge Date: December 2023. Today Date: 4 October 2024\n"
## Use Alpaca Prompt template:
```python
alpaca_prompt = """Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.
### Instructions:
{}
### Input:
{}
### Response:
{}"""
```
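For illustration, one way to fill the template before generation (the field values are placeholders):
```python
prompt = alpaca_prompt.format(
    "Summarize the concept of chain-of-thought prompting.",  # instruction
    "",  # input: optional extra context
    "",  # response: left empty for the model to complete
)
```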
## Recommended system prompt for general use:
```python
"""
You should reason about the input and provide a logical explanation.
The explanation should follow these rules:
- The explanation should be written at graduate level engineering, science, math and literature
- The explanation should be split into subtasks
- The explanation should always end with 2-3 related concepts.
- subtasks have their own chain of thoughts
"""
```
## Recommended system prompt for coding:
```python
"""
Environment: ipython. Tools: brave_search, wolfram_alpha. Cutting Knowledge Date: December 2023. Today Date: 4 October 2024\n
You are a coding assistant with expert with everything\n
Ensure any code you provide can be executed \n
with all required imports and variables defined. List the imports. Structure your answer with a description of the code solution. \n
write only the code. do not print anything else.\n
debug code if error occurs. \n
Here is the user question: {question}
"""
```
### Conversational Use-case
#### Use with [Transformers](https://github.com/huggingface/transformers)
##### Using the `transformers.pipeline()` API; best used with 4-bit quantization for fast responses.
```python
import transformers
import torch
from langchain_community.llms import HuggingFaceEndpoint
from langchain_community.chat_models.huggingface import ChatHuggingFace
from transformers import BitsAndBytesConfig

quantization_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype="float16",
    bnb_4bit_use_double_quant=True,
)

model_id = "EpistemeAI2/Fireball-Meta-Llama-3.1-8B-Instruct-Agent-0.005-128K-code-COT"
pipeline = transformers.pipeline(
    "text-generation",
    model=model_id,
    model_kwargs={"quantization_config": quantization_config},  # for fast responses; for full 16-bit inference, remove this line
    device_map="auto",
)

messages = [
    {"role": "system", "content": """
Environment: ipython. Tools: brave_search, wolfram_alpha. Cutting Knowledge Date: December 2023. Today Date: 4 October 2024\n
You are a coding assistant with expert with everything\n
Ensure any code you provide can be executed \n
with all required imports and variables defined. List the imports. Structure your answer with a description of the code solution. \n
write only the code. do not print anything else.\n
debug code if error occurs. \n
Here is the user question: {question}
"""},
    {"role": "user", "content": "Create a bar plot showing the market capitalization of the top 7 publicly listed companies using matplotlib"},
]

outputs = pipeline(messages, max_new_tokens=128, do_sample=True, temperature=0.01, top_k=100, top_p=0.95)
print(outputs[0]["generated_text"][-1])
```
# Example:
Please see the Colab notebook for a sample of the code using LangChain: [Colab](https://colab.research.google.com/drive/129SEHVRxlr24r73yf34BKnIHOlD3as09?authuser=1)
# Unsloth Fast
```python
%%capture
# Installs Unsloth, Xformers (Flash Attention) and all other packages!
!pip install unsloth
# Get latest Unsloth
!pip install --upgrade --no-deps "unsloth[colab-new] @ git+https://github.com/unslothai/unsloth.git"
!pip install langchain_experimental

from unsloth import FastLanguageModel
from transformers import TextStreamer  # needed for the streaming generation below
from google.colab import userdata

# 4bit pre quantized models we support for 4x faster downloading + no OOMs.
fourbit_models = [
    "unsloth/mistral-7b-instruct-v0.2-bnb-4bit",
    "unsloth/gemma-7b-it-bnb-4bit",
]  # More models at https://huggingface.co/unsloth

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name = "EpistemeAI2/Fireball-Meta-Llama-3.1-8B-Instruct-Agent-0.005-128K-code-COT",
    max_seq_length = 128000,
    load_in_4bit = True,
    token = userdata.get('HF_TOKEN'),
)

def chatbot(query):
    messages = [
        {"from": "system", "value":
            """
            Environment: ipython. Tools: brave_search, wolfram_alpha. Cutting Knowledge Date: December 2023. Today Date: 4 October 2024\n
            You are a coding assistant with expert with everything\n
            Ensure any code you provide can be executed \n
            with all required imports and variables defined. List the imports. Structure your answer with a description of the code solution. \n
            write only the code. do not print anything else.\n
            use ipython for search tool. \n
            debug code if error occurs. \n
            Here is the user question: {question}
            """
        },
        {"from": "human", "value": query},
    ]
    inputs = tokenizer.apply_chat_template(messages, tokenize = True, add_generation_prompt = True, return_tensors = "pt").to("cuda")
    text_streamer = TextStreamer(tokenizer)
    _ = model.generate(input_ids = inputs, streamer = text_streamer, max_new_tokens = 2048, use_cache = True)
```
# Response
```python
<|begin_of_text|>Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.
### Instructions:
Concept: Abductive Reasoning
### Input:
Split in tasks for Concept: Mathematical Proof Structuring
### Response 1:
Task 1:
- Identify the key components of a mathematical proof, including the statement, assumptions, and conclusion.
- Explain the role of each component in the overall structure of the proof.
Task 2:
- Describe the process of abductive reasoning in the context of mathematical proof.
- Provide an example of how abductive reasoning can be used to infer a mathematical statement.
Task 3:
- Discuss the importance of structuring a mathematical proof.
- Explain how a well-structured proof can help to establish the validity of a mathematical statement.
Task 4:
- Compare and contrast deductive and abductive reasoning in the context of mathematical proof.
- Provide an example of how each type of reasoning can be used to establish the validity of a mathematical statement.
Task 5:
- Design a template for structuring a mathematical proof.
- Explain how the template can be used to ensure that all necessary components are included in the proof.<|im_end|>
```
### Response 2:
```python
<|begin_of_text|>Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.
### Instructions:
Concept: Occam's Razor
### Input:
Split in tasks for Concept: Occam's Razor and perform the task
### Response:
Split the task into the following subtasks:
1. **Task 1: Define Occam's Razor**
* Definition: Occam's Razor is a principle used in problem-solving and evidence-based reasoning, stating that, all things being equal, the simplest explanation or solution is usually the best one.
2. **Task 2: Explain the Key Components of Occam's Razor**
* Components:
* **Simplicity**: The principle favors explanations that require fewer assumptions or hypothetical entities.
* **Evidence**: It emphasizes the importance of empirical evidence in supporting or rejecting explanations.
* ** Parsimony**: Occam's Razor encourages the selection of explanations that are more parsimonious, meaning they require fewer assumptions or entities.
3. **Task 3: Provide Examples of Occam's Razor in Action**
* Examples:
* **The Solar System**: The ancient Greeks proposed a complex system with multiple celestial spheres. Occam's Razor would suggest a simpler explanation, like the Copernican heliocentric model.
* **Medical Diagnosis**: A doctor might initially suspect a rare disease, but Occam's Razor would favor a more common and simpler explanation, such as a viral infection
```
# Execute code (Make sure to use virtual environments)
```bash
python3 -m venv env
source env/bin/activate
```
## Executing code responses from Llama
#### For local use, call the execute-Python-code function below. For LangChain, use PythonREPL() to execute code.
Execute-code function, for running generated code locally in Python:
```python
import io
import contextlib

def execute_Python_code(code):
    # A string stream to capture the outputs of exec
    output = io.StringIO()
    try:
        # Redirect stdout to the StringIO object
        with contextlib.redirect_stdout(output):
            # Allow imports
            exec(code, globals())
    except Exception as e:
        # If an error occurs, capture it as part of the output
        print(f"Error: {e}", file=output)
    return output.getvalue()
```
LangChain Python REPL
- Install
```bash
!pip install langchain_experimental
```
Code:
```python
from langchain_core.tools import Tool
from langchain_experimental.utilities import PythonREPL

python_repl = PythonREPL()

# You can create the tool to pass to an agent
repl_tool = Tool(
    name="python_repl",
    description="A Python shell. Use this to execute python commands. Input should be a valid python command. If you want to see the output of a value, you should print it out with `print(...)`.",
    func=python_repl.run,
)
repl_tool(outputs[0]["generated_text"][-1])
```
# Safety inputs/ outputs procedures
For all inputs, please use Llama Guard (meta-llama/Llama-Guard-3-8B) for safety classification.
See the model card: [Llama-Guard](https://huggingface.co/meta-llama/Llama-Guard-3-8B)
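A minimal sketch of such a safety pass, assuming the standard Llama Guard 3 chat-template flow (access to the gated checkpoint is required):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

guard_id = "meta-llama/Llama-Guard-3-8B"
tokenizer = AutoTokenizer.from_pretrained(guard_id)
guard = AutoModelForCausalLM.from_pretrained(guard_id, device_map="auto")

chat = [{"role": "user", "content": "How do I make a bar plot in matplotlib?"}]
input_ids = tokenizer.apply_chat_template(chat, return_tensors="pt").to(guard.device)

output = guard.generate(input_ids, max_new_tokens=20)
# Prints "safe", or "unsafe" followed by the violated category.
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```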
## Other uses
#### ToT - Tree of Thought
- Use system prompt:
```python
"Imagine three different experts are answering this question.
All experts will write down 1 step of their thinking,
then share it with the group.
Then all experts will go on to the next step, etc.
If any expert realises they're wrong at any point then they leave.
The question is..."
```
#### ReAct
Example from the LangChain ReAct agent - [langchain React agent](https://github.com/langchain-ai/langchain/blob/master/libs/langchain/langchain/agents/react/agent.py)
- Use system prompt:
```python
"""
Answer the following questions as best you can. You have access to the following tools:
{tools}
Use the following format:
Question: the input question you must answer
Thought: you should always think about what to do
Action: the action to take, should be one of [{tool_names}]
Action Input: the input to the action
Observation: the result of the action
... (this Thought/Action/Action Input/Observation can repeat N times)
Thought: I now know the final answer
Final Answer: the final answer to the original input question
Begin!
Question: {input}
Thought:{agent_scratchpad}
"""
```
# Uploaded model
- **Developed by:** EpistemeAI
- **License:** apache-2.0
- **Finetuned from model :** EpistemeAI/Fireball-Meta-Llama-3.1-8B-Instruct-Agent-0.003-128K-code
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_EpistemeAI2__Fireball-Meta-Llama-3.1-8B-Instruct-Agent-0.005-128K-code-COT)
| Metric |Value|
|-------------------|----:|
|Avg. |20.84|
|IFEval (0-Shot) |46.33|
|BBH (3-Shot) |26.40|
|MATH Lvl 5 (4-Shot)|10.50|
|GPQA (0-shot) | 8.28|
|MuSR (0-shot) | 5.01|
|MMLU-PRO (5-shot) |28.50|
|
CheeLi03/whisper-base-pt-puct-5k
|
CheeLi03
| 2024-11-15T06:57:50Z
| 87
| 0
|
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"hf-asr-leaderboard",
"generated_from_trainer",
"pt",
"dataset:fleurs",
"base_model:openai/whisper-base",
"base_model:finetune:openai/whisper-base",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2024-11-15T04:15:55Z
|
---
base_model: openai/whisper-base
datasets:
- fleurs
language:
- pt
library_name: transformers
license: apache-2.0
metrics:
- wer
tags:
- hf-asr-leaderboard
- generated_from_trainer
model-index:
- name: Whisper Base Portuguese Punctuation 5k - Chee Li
results:
- task:
type: automatic-speech-recognition
name: Automatic Speech Recognition
dataset:
name: Google Fleurs
type: fleurs
config: pt_br
split: None
args: 'config: pt split: test'
metrics:
- type: wer
value: 34.92197781537883
name: Wer
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Base Portuguese Punctuation 5k - Chee Li
This model is a fine-tuned version of [openai/whisper-base](https://huggingface.co/openai/whisper-base) on the Google Fleurs dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5540
- Wer: 34.9220
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08 (no additional optimizer arguments)
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 5000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-------:|:----:|:---------------:|:-------:|
| 0.0508 | 5.0251 | 1000 | 0.4118 | 56.8105 |
| 0.0041 | 10.0503 | 2000 | 0.4887 | 45.7558 |
| 0.0019 | 15.0754 | 3000 | 0.5250 | 38.7902 |
| 0.0012 | 20.1005 | 4000 | 0.5450 | 34.5742 |
| 0.001 | 25.1256 | 5000 | 0.5540 | 34.9220 |
### Framework versions
- Transformers 4.46.2
- Pytorch 2.3.1+cu121
- Datasets 2.20.0
- Tokenizers 0.20.3
|
jewoos/distilgpt2-tweetsumm-finetune
|
jewoos
| 2024-11-15T06:56:42Z
| 126
| 0
|
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"gpt2",
"text-generation",
"trl",
"sft",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-11-08T01:45:15Z
|
---
library_name: transformers
tags:
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
goethe0101/llama-3-2-3B-wame-16bit-survey-generator4
|
goethe0101
| 2024-11-15T06:55:16Z
| 123
| 0
|
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-11-15T06:53:29Z
|
---
base_model: unsloth/llama-3.2-3b-instruct-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** goethe0101
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3.2-3b-instruct-bnb-4bit
This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
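A minimal inference sketch, assuming the standard `transformers` chat API; the prompt is illustrative, as this card does not document an intended prompt format:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "goethe0101/llama-3-2-3B-wame-16bit-survey-generator4"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Illustrative request; the intended survey-generation format is not documented here.
messages = [{"role": "user", "content": "Generate a short survey about remote work."}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
outputs = model.generate(inputs, max_new_tokens=200)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```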
|
Tymkolt/dragoman-F16-GGUF
|
Tymkolt
| 2024-11-15T06:49:31Z
| 5
| 0
|
peft
|
[
"peft",
"gguf",
"translation",
"llama-cpp",
"gguf-my-lora",
"text-generation",
"uk",
"en",
"dataset:Helsinki-NLP/opus_paracrawl",
"dataset:turuta/Multi30k-uk",
"base_model:lang-uk/dragoman",
"base_model:adapter:lang-uk/dragoman",
"license:apache-2.0",
"model-index",
"region:us"
] |
text-generation
| 2024-11-15T06:49:22Z
|
---
license: apache-2.0
datasets:
- Helsinki-NLP/opus_paracrawl
- turuta/Multi30k-uk
language:
- uk
- en
metrics:
- bleu
library_name: peft
pipeline_tag: text-generation
base_model: lang-uk/dragoman
tags:
- translation
- llama-cpp
- gguf-my-lora
widget:
- text: '[INST] who holds this neighborhood? [/INST]'
model-index:
- name: Dragoman
results:
- task:
type: translation
name: English-Ukrainian Translation
dataset:
name: FLORES-101
type: facebook/flores
config: eng_Latn-ukr_Cyrl
split: devtest
metrics:
- type: bleu
value: 32.34
name: Test BLEU
---
# Tymkolt/dragoman-F16-GGUF
This LoRA adapter was converted to GGUF format from [`lang-uk/dragoman`](https://huggingface.co/lang-uk/dragoman) via ggml.ai's [GGUF-my-lora](https://huggingface.co/spaces/ggml-org/gguf-my-lora) space.
Refer to the [original adapter repository](https://huggingface.co/lang-uk/dragoman) for more details.
## Use with llama.cpp
```bash
# with cli
llama-cli -m base_model.gguf --lora dragoman-f16.gguf (...other args)
# with server
llama-server -m base_model.gguf --lora dragoman-f16.gguf (...other args)
```
To learn more about LoRA usage with the llama.cpp server, refer to the [llama.cpp server documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/server/README.md).
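If you prefer Python, `llama-cpp-python` exposes the same LoRA loading; a minimal sketch using the same file names as the CLI examples above (`base_model.gguf` remains a placeholder for your converted base model):

```python
from llama_cpp import Llama

# Sketch only: load the base model with the converted LoRA adapter applied.
llm = Llama(model_path="base_model.gguf", lora_path="dragoman-f16.gguf")
out = llm("[INST] who holds this neighborhood? [/INST]", max_tokens=64)
print(out["choices"][0]["text"])
```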
|
gavinqiangli/bge-large-mpnet-base-all-nli-triplet-final
|
gavinqiangli
| 2024-11-15T06:46:13Z
| 8
| 0
|
sentence-transformers
|
[
"sentence-transformers",
"safetensors",
"bert",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:557850",
"loss:MultipleNegativesRankingLoss",
"en",
"dataset:sentence-transformers/all-nli",
"arxiv:1908.10084",
"arxiv:1705.00652",
"base_model:BAAI/bge-large-en",
"base_model:finetune:BAAI/bge-large-en",
"model-index",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2024-11-15T06:44:57Z
|
---
language:
- en
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:557850
- loss:MultipleNegativesRankingLoss
base_model: BAAI/bge-large-en
widget:
- source_sentence: A construction worker is standing on a crane placing a large arm
on top of a stature in progress.
sentences:
- A man is playing with his camera.
- A person standing
- Nobody is standing
- source_sentence: A boy in red slides down an inflatable ride.
sentences:
- a baby smiling
- A boy is playing on an inflatable ride.
- A boy pierces a knife through an inflatable ride.
- source_sentence: A man in a black shirt is playing a guitar.
sentences:
- A group of women are selling their wares
- The man is wearing black.
- The man is wearing a blue shirt.
- source_sentence: A man with a large power drill standing next to his daughter with
a vacuum cleaner hose.
sentences:
- A man holding a drill stands next to a girl holding a vacuum hose.
- Kids ride an amusement ride.
- The man and girl are painting the walls.
- source_sentence: A middle-aged man works under the engine of a train on rail tracks.
sentences:
- A guy is working on a train.
- Two young asian men are squatting.
- A guy is driving to work.
datasets:
- sentence-transformers/all-nli
pipeline_tag: sentence-similarity
library_name: sentence-transformers
metrics:
- cosine_accuracy
model-index:
- name: SentenceTransformer based on BAAI/bge-large-en
results:
- task:
type: triplet
name: Triplet
dataset:
name: all nli test
type: all-nli-test
metrics:
- type: cosine_accuracy
value: 0.8332576789226812
name: Cosine Accuracy
---
# SentenceTransformer based on BAAI/bge-large-en
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [BAAI/bge-large-en](https://huggingface.co/BAAI/bge-large-en) on the [all-nli](https://huggingface.co/datasets/sentence-transformers/all-nli) dataset. It maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [BAAI/bge-large-en](https://huggingface.co/BAAI/bge-large-en) <!-- at revision abe7d9d814b775ca171121fb03f394dc42974275 -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 1024 dimensions
- **Similarity Function:** Cosine Similarity
- **Training Dataset:**
- [all-nli](https://huggingface.co/datasets/sentence-transformers/all-nli)
- **Language:** en
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': True}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("gavinqiangli/bge-large-mpnet-base-all-nli-triplet-final")
# Run inference
sentences = [
'A middle-aged man works under the engine of a train on rail tracks.',
'A guy is working on a train.',
'A guy is driving to work.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 1024]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Triplet
* Dataset: `all-nli-test`
* Evaluated with [<code>TripletEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.TripletEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| **cosine_accuracy** | **0.8333** |
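
For reference, this metric can be recomputed with a sketch like the following (the `triplet` config name of the dataset is an assumption on my part, matching the anchor/positive/negative columns described below):

```python
from datasets import load_dataset
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import TripletEvaluator

# Sketch: recompute the all-nli-test cosine accuracy reported above.
model = SentenceTransformer("gavinqiangli/bge-large-mpnet-base-all-nli-triplet-final")
test = load_dataset("sentence-transformers/all-nli", "triplet", split="test")
evaluator = TripletEvaluator(
    anchors=test["anchor"],
    positives=test["positive"],
    negatives=test["negative"],
    name="all-nli-test",
)
print(evaluator(model))
```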
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### all-nli
* Dataset: [all-nli](https://huggingface.co/datasets/sentence-transformers/all-nli) at [d482672](https://huggingface.co/datasets/sentence-transformers/all-nli/tree/d482672c8e74ce18da116f430137434ba2e52fab)
* Size: 557,850 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:---------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 7 tokens</li><li>mean: 10.46 tokens</li><li>max: 46 tokens</li></ul> | <ul><li>min: 6 tokens</li><li>mean: 12.81 tokens</li><li>max: 40 tokens</li></ul> | <ul><li>min: 5 tokens</li><li>mean: 13.4 tokens</li><li>max: 50 tokens</li></ul> |
* Samples:
| anchor | positive | negative |
|:---------------------------------------------------------------------------|:-------------------------------------------------|:-----------------------------------------------------------|
| <code>A person on a horse jumps over a broken down airplane.</code> | <code>A person is outdoors, on a horse.</code> | <code>A person is at a diner, ordering an omelette.</code> |
| <code>Children smiling and waving at camera</code> | <code>There are children present</code> | <code>The kids are frowning</code> |
| <code>A boy is jumping on skateboard in the middle of a red bridge.</code> | <code>The boy does a skateboarding trick.</code> | <code>The boy skates down the sidewalk.</code> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
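
In code, these parameters correspond to constructing the loss roughly as follows (a sketch against the sentence-transformers API, not the authors' exact training script):

```python
from sentence_transformers import SentenceTransformer, util
from sentence_transformers.losses import MultipleNegativesRankingLoss

model = SentenceTransformer("BAAI/bge-large-en")
# scale=20.0 and cosine similarity, matching the JSON parameters above.
loss = MultipleNegativesRankingLoss(model, scale=20.0, similarity_fct=util.cos_sim)
```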
### Evaluation Dataset
#### all-nli
* Dataset: [all-nli](https://huggingface.co/datasets/sentence-transformers/all-nli) at [d482672](https://huggingface.co/datasets/sentence-transformers/all-nli/tree/d482672c8e74ce18da116f430137434ba2e52fab)
* Size: 6,584 evaluation samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:----------------------------------------------------------------------------------|:---------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 6 tokens</li><li>mean: 17.95 tokens</li><li>max: 63 tokens</li></ul> | <ul><li>min: 4 tokens</li><li>mean: 9.78 tokens</li><li>max: 29 tokens</li></ul> | <ul><li>min: 5 tokens</li><li>mean: 10.35 tokens</li><li>max: 29 tokens</li></ul> |
* Samples:
| anchor | positive | negative |
|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:------------------------------------------------------------|:--------------------------------------------------------|
| <code>Two women are embracing while holding to go packages.</code> | <code>Two woman are holding packages.</code> | <code>The men are fighting outside a deli.</code> |
| <code>Two young children in blue jerseys, one with the number 9 and one with the number 2 are standing on wooden steps in a bathroom and washing their hands in a sink.</code> | <code>Two kids in numbered jerseys wash their hands.</code> | <code>Two kids in jackets walk to school.</code> |
| <code>A man selling donuts to a customer during a world exhibition event held in the city of Angeles</code> | <code>A man selling donuts to a customer.</code> | <code>A woman drinks her coffee in a small cafe.</code> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: steps
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `num_train_epochs`: 1
- `warmup_ratio`: 0.1
- `fp16`: True
- `batch_sampler`: no_duplicates
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 5e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 1
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: True
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: False
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: proportional
</details>
### Training Logs
| Epoch | Step | Training Loss | Validation Loss | all-nli-test_cosine_accuracy |
|:------:|:----:|:-------------:|:---------------:|:----------------------------:|
| 0.5333 | 1000 | 0.7168 | 0.6448 | - |
| 1.0 | 1875 | - | - | 0.8333 |
### Framework Versions
- Python: 3.10.12
- Sentence Transformers: 3.3.0
- Transformers: 4.46.2
- PyTorch: 2.5.0+cu121
- Accelerate: 1.1.1
- Datasets: 3.1.0
- Tokenizers: 0.20.3
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
TIGER-Lab/Mantis-8B-clip-llama3
|
TIGER-Lab
| 2024-11-15T06:43:17Z
| 426
| 1
|
transformers
|
[
"transformers",
"safetensors",
"llava",
"image-text-to-text",
"multimodal",
"llama3",
"clip",
"lmm",
"vlm",
"mantis",
"conversational",
"en",
"dataset:TIGER-Lab/Mantis-Instruct",
"arxiv:2405.01483",
"base_model:TIGER-Lab/Mantis-8B-clip-llama3-pretraind",
"base_model:finetune:TIGER-Lab/Mantis-8B-clip-llama3-pretraind",
"license:llama3",
"endpoints_compatible",
"region:us"
] |
image-text-to-text
| 2024-05-03T02:53:21Z
|
---
base_model: TIGER-Lab/Mantis-8B-clip-llama3-pretraind
tags:
- multimodal
- llava
- llama3
- clip
- lmm
- vlm
- mantis
model-index:
- name: llava_clip_llama3_8b_finetune_8192
results: []
license: llama3
datasets:
- TIGER-Lab/Mantis-Instruct
language:
- en
metrics:
- accuracy
---
# 🔥 Mantis (TMLR 2024)
[Paper](https://arxiv.org/abs/2405.01483) |
[Website](https://tiger-ai-lab.github.io/Mantis/) |
[Github](https://github.com/TIGER-AI-Lab/Mantis) |
[Models](https://huggingface.co/collections/TIGER-Lab/mantis-6619b0834594c878cdb1d6e4) |
[Demo](https://huggingface.co/spaces/TIGER-Lab/Mantis) |
[Wandb](https://api.wandb.ai/links/dongfu/qyenqjoe)

## Summary
- Mantis is an LLaMA-3 based LMM with **interleaved text and image as inputs**, trained on Mantis-Instruct under academic-level resources (i.e., 36 hours on 16xA100-40G).
- Mantis is trained to have multi-image skills, including co-reference, reasoning, comparison, and temporal understanding.
- Mantis reaches state-of-the-art performance on five multi-image benchmarks (NLVR2, Q-Bench, BLINK, MVBench, Mantis-Eval), and also maintains strong single-image performance on par with CogVLM and Emu2.
## Multi-Image Performance
| Models | Size | Format | NLVR2 | Q-Bench | Mantis-Eval | BLINK | MVBench | Avg |
|--------------------|:----:|:--------:|:-----:|:-------:|:-----------:|:-----:|:-------:|:----:|
| GPT-4V | - | sequence | 88.80 | 76.52 | 62.67 | 51.14 | 43.50 | 64.5 |
| Open Source Models | | | | | | | | |
| Random | - | - | 48.93 | 40.20 | 23.04 | 38.09 | 27.30 | 35.5 |
| Kosmos2 | 1.6B | merge | 49.00 | 35.10 | 30.41 | 37.50 | 21.62 | 34.7 |
| LLaVA-v1.5 | 7B | merge | 53.88 | 49.32 | 31.34 | 37.13 | 36.00 | 41.5 |
| LLava-V1.6 | 7B | merge | 58.88 | 54.80 | 45.62 | 39.55 | 40.90 | 48.0 |
| Qwen-VL-Chat | 7B | merge | 58.72 | 45.90 | 39.17 | 31.17 | 42.15 | 43.4 |
| Fuyu | 8B | merge | 51.10 | 49.15 | 27.19 | 36.59 | 30.20 | 38.8 |
| BLIP-2 | 13B | merge | 59.42 | 51.20 | 49.77 | 39.45 | 31.40 | 46.2 |
| InstructBLIP | 13B | merge | 60.26 | 44.30 | 45.62 | 42.24 | 32.50 | 45.0 |
| CogVLM | 17B | merge | 58.58 | 53.20 | 45.16 | 41.54 | 37.30 | 47.2 |
| OpenFlamingo | 9B | sequence | 36.41 | 19.60 | 12.44 | 39.18 | 7.90 | 23.1 |
| Otter-Image | 9B | sequence | 49.15 | 17.50 | 14.29 | 36.26 | 15.30 | 26.5 |
| Idefics1 | 9B | sequence | 54.63 | 30.60 | 28.11 | 24.69 | 26.42 | 32.9 |
| VideoLLaVA | 7B | sequence | 56.48 | 45.70 | 35.94 | 38.92 | 44.30 | 44.3 |
| Emu2-Chat | 37B | sequence | 58.16 | 50.05 | 37.79 | 36.20 | 39.72 | 44.4 |
| Vila | 8B | sequence | 76.45 | 45.70 | 51.15 | 39.30 | 49.40 | 52.4 |
| Idefics2 | 8B | sequence | 86.87 | 57.00 | 48.85 | 45.18 | 29.68 | 53.5 |
| Mantis-CLIP | 8B | sequence | 84.66 | 66.00 | 55.76 | 47.06 | 48.30 | 60.4 |
| Mantis-SIGLIP | 8B | sequence | 87.43 | 69.90 | **59.45** | 46.35 | 50.15 | 62.7 |
| Mantis-Flamingo | 9B | sequence | 52.96 | 46.80 | 32.72 | 38.00 | 40.83 | 42.3 |
| Mantis-Idefics2 | 8B | sequence | **89.71** | **75.20** | 57.14 | **49.05** | **51.38** | **64.5** |
| $\Delta$ over SOTA | - | - | +2.84 | +18.20 | +8.30 | +3.87 | +1.98 | +11.0 |
## Single-Image Performance
| Model | Size | TextVQA | VQA | MMB | MMMU | OKVQA | SQA | MathVista | Avg |
|-----------------|:----:|:-------:|:----:|:----:|:----:|:-----:|:----:|:---------:|:----:|
| OpenFlamingo | 9B | 46.3 | 58.0 | 32.4 | 28.7 | 51.4 | 45.7 | 18.6 | 40.2 |
| Idefics1 | 9B | 39.3 | 68.8 | 45.3 | 32.5 | 50.4 | 51.6 | 21.1 | 44.1 |
| InstructBLIP | 7B | 33.6 | 75.2 | 38.3 | 30.6 | 45.2 | 70.6 | 24.4 | 45.4 |
| Yi-VL | 6B | 44.8 | 72.5 | 68.4 | 39.1 | 51.3 | 71.7 | 29.7 | 53.9 |
| Qwen-VL-Chat | 7B | 63.8 | 78.2 | 61.8 | 35.9 | 56.6 | 68.2 | 15.5 | 54.3 |
| LLaVA-1.5 | 7B | 58.2 | 76.6 | 64.8 | 35.3 | 53.4 | 70.4 | 25.6 | 54.9 |
| Emu2-Chat | 37B | <u>66.6</u> | **84.9** | 63.6 | 36.3 | **64.8** | 65.3 | 30.7 | 58.9 |
| CogVLM | 17B | **70.4** | <u>82.3</u> | 65.8 | 32.1 | <u>64.8</u> | 65.6 | 35.0 | 59.4 |
| Idefics2 | 8B | 70.4 | 79.1 | <u>75.7</u> | **43.0** | 53.5 | **86.5** | **51.4** | **65.7** |
| Mantis-CLIP | 8B | 56.4 | 73.0 | 66.0 | 38.1 | 53.0 | 73.8 | 31.7 | 56.0 |
| Mantis-SigLIP | 8B | 59.2 | 74.9 | 68.7 | 40.1 | 55.4 | 74.9 | 34.4 | 58.2 |
| Mantis-Idefics2 | 8B | 63.5 | 77.6 | 75.7 | <u>41.1</u> | 52.6 | <u>81.3</u> | <u>40.4</u> | <u>61.7</u> |
## How to use
### Installation
```bash
# This installs only the minimum packages needed for inference (torch, transformers, accelerate); no redundant packages are installed.
pip install git+https://github.com/TIGER-AI-Lab/Mantis.git
```
### Run example inference:
```python
from mantis.models.mllava import chat_mllava
from PIL import Image
import torch
image1 = "image1.jpg"
image2 = "image2.jpg"
images = [Image.open(image1), Image.open(image2)]
# load processor and model
from mantis.models.mllava import MLlavaProcessor, LlavaForConditionalGeneration
processor = MLlavaProcessor.from_pretrained("TIGER-Lab/Mantis-8B-clip-llama3")
attn_implementation = None # or "flash_attention_2"
model = LlavaForConditionalGeneration.from_pretrained("TIGER-Lab/Mantis-8B-clip-llama3", device_map="cuda", torch_dtype=torch.bfloat16, attn_implementation=attn_implementation)
generation_kwargs = {
"max_new_tokens": 1024,
"num_beams": 1,
"do_sample": False
}
# chat
text = "Describe the difference of <image> and <image> as much as you can."
response, history = chat_mllava(text, images, model, processor, **generation_kwargs)
print("USER: ", text)
print("ASSISTANT: ", response)
text = "How many wallets are there in image 1 and image 2 respectively?"
response, history = chat_mllava(text, images, model, processor, history=history, **generation_kwargs)
print("USER: ", text)
print("ASSISTANT: ", response)
"""
USER: Describe the difference of <image> and <image> as much as you can.
ASSISTANT: The second image has more variety in terms of colors and designs. While the first image only shows two brown leather pouches, the second image features four different pouches in various colors and designs, including a purple one with a gold coin, a red one with a gold coin, a black one with a gold coin, and a brown one with a gold coin. This variety makes the second image more visually interesting and dynamic.
USER: How many wallets are there in image 1 and image 2 respectively?
ASSISTANT: There are two wallets in image 1, and four wallets in image 2.
"""
```
### Training
See [mantis/train](https://github.com/TIGER-AI-Lab/Mantis/tree/main/mantis/train) for details
### Evaluation
See [mantis/benchmark](https://github.com/TIGER-AI-Lab/Mantis/tree/main/mantis/benchmark) for details
## Citation
```
@article{Jiang2024MANTISIM,
title={MANTIS: Interleaved Multi-Image Instruction Tuning},
author={Dongfu Jiang and Xuan He and Huaye Zeng and Cong Wei and Max W.F. Ku and Qian Liu and Wenhu Chen},
journal={Transactions on Machine Learning Research},
year={2024},
volume={2024},
url={https://openreview.net/forum?id=skLtdUVaJa}
}
```
|
mradermacher/TowerInstruct-WMT24-Chat-7B-GGUF
|
mradermacher
| 2024-11-15T06:43:09Z
| 140
| 0
|
transformers
|
[
"transformers",
"gguf",
"en",
"de",
"fr",
"zh",
"pt",
"nl",
"ru",
"ko",
"it",
"es",
"base_model:Unbabel/TowerInstruct-WMT24-Chat-7B",
"base_model:quantized:Unbabel/TowerInstruct-WMT24-Chat-7B",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-11-15T06:28:15Z
|
---
base_model: Unbabel/TowerInstruct-WMT24-Chat-7B
language:
- en
- de
- fr
- zh
- pt
- nl
- ru
- ko
- it
- es
library_name: transformers
license: cc-by-nc-4.0
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/Unbabel/TowerInstruct-WMT24-Chat-7B
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/TowerInstruct-WMT24-Chat-7B-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including how to concatenate multi-part files.
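
As a minimal example (assuming a local llama.cpp build), any of the quant files listed below can be run directly:

```bash
# Sketch only: file name taken from the table below; pick whichever quant fits your hardware.
llama-cli -m TowerInstruct-WMT24-Chat-7B.Q4_K_M.gguf \
  -p "Translate the following sentence to German: The weather is nice today." -n 128
```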
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable to similar-sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/TowerInstruct-WMT24-Chat-7B-GGUF/resolve/main/TowerInstruct-WMT24-Chat-7B.Q2_K.gguf) | Q2_K | 2.6 | |
| [GGUF](https://huggingface.co/mradermacher/TowerInstruct-WMT24-Chat-7B-GGUF/resolve/main/TowerInstruct-WMT24-Chat-7B.Q3_K_S.gguf) | Q3_K_S | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/TowerInstruct-WMT24-Chat-7B-GGUF/resolve/main/TowerInstruct-WMT24-Chat-7B.Q3_K_M.gguf) | Q3_K_M | 3.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/TowerInstruct-WMT24-Chat-7B-GGUF/resolve/main/TowerInstruct-WMT24-Chat-7B.Q3_K_L.gguf) | Q3_K_L | 3.7 | |
| [GGUF](https://huggingface.co/mradermacher/TowerInstruct-WMT24-Chat-7B-GGUF/resolve/main/TowerInstruct-WMT24-Chat-7B.IQ4_XS.gguf) | IQ4_XS | 3.7 | |
| [GGUF](https://huggingface.co/mradermacher/TowerInstruct-WMT24-Chat-7B-GGUF/resolve/main/TowerInstruct-WMT24-Chat-7B.Q4_0_4_4.gguf) | Q4_0_4_4 | 3.9 | fast on arm, low quality |
| [GGUF](https://huggingface.co/mradermacher/TowerInstruct-WMT24-Chat-7B-GGUF/resolve/main/TowerInstruct-WMT24-Chat-7B.Q4_K_S.gguf) | Q4_K_S | 4.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/TowerInstruct-WMT24-Chat-7B-GGUF/resolve/main/TowerInstruct-WMT24-Chat-7B.Q4_K_M.gguf) | Q4_K_M | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/TowerInstruct-WMT24-Chat-7B-GGUF/resolve/main/TowerInstruct-WMT24-Chat-7B.Q5_K_S.gguf) | Q5_K_S | 4.8 | |
| [GGUF](https://huggingface.co/mradermacher/TowerInstruct-WMT24-Chat-7B-GGUF/resolve/main/TowerInstruct-WMT24-Chat-7B.Q5_K_M.gguf) | Q5_K_M | 4.9 | |
| [GGUF](https://huggingface.co/mradermacher/TowerInstruct-WMT24-Chat-7B-GGUF/resolve/main/TowerInstruct-WMT24-Chat-7B.Q6_K.gguf) | Q6_K | 5.6 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/TowerInstruct-WMT24-Chat-7B-GGUF/resolve/main/TowerInstruct-WMT24-Chat-7B.Q8_0.gguf) | Q8_0 | 7.3 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/TowerInstruct-WMT24-Chat-7B-GGUF/resolve/main/TowerInstruct-WMT24-Chat-7B.f16.gguf) | f16 | 13.6 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
Ellight/code-smolLM2-135m-text-to-sql
|
Ellight
| 2024-11-15T06:42:28Z
| 127
| 0
|
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"llama",
"text-generation",
"trl",
"sft",
"generated_from_trainer",
"conversational",
"dataset:generator",
"base_model:HuggingFaceTB/SmolLM2-135M",
"base_model:finetune:HuggingFaceTB/SmolLM2-135M",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-11-15T05:46:33Z
|
---
library_name: transformers
license: apache-2.0
base_model: HuggingFaceTB/SmolLM2-135M
tags:
- trl
- sft
- generated_from_trainer
datasets:
- generator
model-index:
- name: code-smolLM2-135m-text-to-sql
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# code-smolLM2-135m-text-to-sql
This model is a fine-tuned version of [HuggingFaceTB/SmolLM2-135M](https://huggingface.co/HuggingFaceTB/SmolLM2-135M) on the generator dataset.
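A minimal, hypothetical inference sketch (the exact prompt/schema format used during fine-tuning is not documented in this card, so the example below is an assumption):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Ellight/code-smolLM2-135m-text-to-sql"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Illustrative schema + question; the real training prompt format is unknown.
messages = [{"role": "user", "content": "Schema: CREATE TABLE users (id INT, name TEXT);\nQuestion: How many users are there?"}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt")
outputs = model.generate(inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```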
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.44.2
- Pytorch 2.5.0+cu124
- Datasets 2.21.0
- Tokenizers 0.19.1
|
DaniilOr/multilingual_persuasion_techniques
|
DaniilOr
| 2024-11-15T06:40:32Z
| 126
| 0
|
transformers
|
[
"transformers",
"safetensors",
"deberta-v2",
"text-classification",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-09-11T15:13:50Z
|
---
license: mit
library_name: transformers
---
|
ssai0915/topic_learning_llama
|
ssai0915
| 2024-11-15T06:32:02Z
| 180
| 0
|
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-11-15T06:31:46Z
|
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Serendien/topic_learning_llama
|
Serendien
| 2024-11-15T06:31:30Z
| 180
| 0
|
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-11-15T06:31:15Z
|
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
dlby/topic_learning_llama
|
dlby
| 2024-11-15T06:29:26Z
| 180
| 0
|
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-11-15T06:28:54Z
|
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
RichardErkhov/ahmedheakl_-_asm2asm-deepseek1.3b-xtokenizer-armv8-gguf
|
RichardErkhov
| 2024-11-15T06:25:15Z
| 364
| 0
| null |
[
"gguf",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-11-15T05:11:49Z
|
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
asm2asm-deepseek1.3b-xtokenizer-armv8 - GGUF
- Model creator: https://huggingface.co/ahmedheakl/
- Original model: https://huggingface.co/ahmedheakl/asm2asm-deepseek1.3b-xtokenizer-armv8/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [asm2asm-deepseek1.3b-xtokenizer-armv8.Q2_K.gguf](https://huggingface.co/RichardErkhov/ahmedheakl_-_asm2asm-deepseek1.3b-xtokenizer-armv8-gguf/blob/main/asm2asm-deepseek1.3b-xtokenizer-armv8.Q2_K.gguf) | Q2_K | 0.52GB |
| [asm2asm-deepseek1.3b-xtokenizer-armv8.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/ahmedheakl_-_asm2asm-deepseek1.3b-xtokenizer-armv8-gguf/blob/main/asm2asm-deepseek1.3b-xtokenizer-armv8.Q3_K_S.gguf) | Q3_K_S | 0.6GB |
| [asm2asm-deepseek1.3b-xtokenizer-armv8.Q3_K.gguf](https://huggingface.co/RichardErkhov/ahmedheakl_-_asm2asm-deepseek1.3b-xtokenizer-armv8-gguf/blob/main/asm2asm-deepseek1.3b-xtokenizer-armv8.Q3_K.gguf) | Q3_K | 0.66GB |
| [asm2asm-deepseek1.3b-xtokenizer-armv8.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/ahmedheakl_-_asm2asm-deepseek1.3b-xtokenizer-armv8-gguf/blob/main/asm2asm-deepseek1.3b-xtokenizer-armv8.Q3_K_M.gguf) | Q3_K_M | 0.66GB |
| [asm2asm-deepseek1.3b-xtokenizer-armv8.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/ahmedheakl_-_asm2asm-deepseek1.3b-xtokenizer-armv8-gguf/blob/main/asm2asm-deepseek1.3b-xtokenizer-armv8.Q3_K_L.gguf) | Q3_K_L | 0.69GB |
| [asm2asm-deepseek1.3b-xtokenizer-armv8.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/ahmedheakl_-_asm2asm-deepseek1.3b-xtokenizer-armv8-gguf/blob/main/asm2asm-deepseek1.3b-xtokenizer-armv8.IQ4_XS.gguf) | IQ4_XS | 0.7GB |
| [asm2asm-deepseek1.3b-xtokenizer-armv8.Q4_0.gguf](https://huggingface.co/RichardErkhov/ahmedheakl_-_asm2asm-deepseek1.3b-xtokenizer-armv8-gguf/blob/main/asm2asm-deepseek1.3b-xtokenizer-armv8.Q4_0.gguf) | Q4_0 | 0.72GB |
| [asm2asm-deepseek1.3b-xtokenizer-armv8.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/ahmedheakl_-_asm2asm-deepseek1.3b-xtokenizer-armv8-gguf/blob/main/asm2asm-deepseek1.3b-xtokenizer-armv8.IQ4_NL.gguf) | IQ4_NL | 0.73GB |
| [asm2asm-deepseek1.3b-xtokenizer-armv8.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/ahmedheakl_-_asm2asm-deepseek1.3b-xtokenizer-armv8-gguf/blob/main/asm2asm-deepseek1.3b-xtokenizer-armv8.Q4_K_S.gguf) | Q4_K_S | 0.76GB |
| [asm2asm-deepseek1.3b-xtokenizer-armv8.Q4_K.gguf](https://huggingface.co/RichardErkhov/ahmedheakl_-_asm2asm-deepseek1.3b-xtokenizer-armv8-gguf/blob/main/asm2asm-deepseek1.3b-xtokenizer-armv8.Q4_K.gguf) | Q4_K | 0.81GB |
| [asm2asm-deepseek1.3b-xtokenizer-armv8.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/ahmedheakl_-_asm2asm-deepseek1.3b-xtokenizer-armv8-gguf/blob/main/asm2asm-deepseek1.3b-xtokenizer-armv8.Q4_K_M.gguf) | Q4_K_M | 0.81GB |
| [asm2asm-deepseek1.3b-xtokenizer-armv8.Q4_1.gguf](https://huggingface.co/RichardErkhov/ahmedheakl_-_asm2asm-deepseek1.3b-xtokenizer-armv8-gguf/blob/main/asm2asm-deepseek1.3b-xtokenizer-armv8.Q4_1.gguf) | Q4_1 | 0.8GB |
| [asm2asm-deepseek1.3b-xtokenizer-armv8.Q5_0.gguf](https://huggingface.co/RichardErkhov/ahmedheakl_-_asm2asm-deepseek1.3b-xtokenizer-armv8-gguf/blob/main/asm2asm-deepseek1.3b-xtokenizer-armv8.Q5_0.gguf) | Q5_0 | 0.87GB |
| [asm2asm-deepseek1.3b-xtokenizer-armv8.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/ahmedheakl_-_asm2asm-deepseek1.3b-xtokenizer-armv8-gguf/blob/main/asm2asm-deepseek1.3b-xtokenizer-armv8.Q5_K_S.gguf) | Q5_K_S | 0.89GB |
| [asm2asm-deepseek1.3b-xtokenizer-armv8.Q5_K.gguf](https://huggingface.co/RichardErkhov/ahmedheakl_-_asm2asm-deepseek1.3b-xtokenizer-armv8-gguf/blob/main/asm2asm-deepseek1.3b-xtokenizer-armv8.Q5_K.gguf) | Q5_K | 0.93GB |
| [asm2asm-deepseek1.3b-xtokenizer-armv8.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/ahmedheakl_-_asm2asm-deepseek1.3b-xtokenizer-armv8-gguf/blob/main/asm2asm-deepseek1.3b-xtokenizer-armv8.Q5_K_M.gguf) | Q5_K_M | 0.93GB |
| [asm2asm-deepseek1.3b-xtokenizer-armv8.Q5_1.gguf](https://huggingface.co/RichardErkhov/ahmedheakl_-_asm2asm-deepseek1.3b-xtokenizer-armv8-gguf/blob/main/asm2asm-deepseek1.3b-xtokenizer-armv8.Q5_1.gguf) | Q5_1 | 0.95GB |
| [asm2asm-deepseek1.3b-xtokenizer-armv8.Q6_K.gguf](https://huggingface.co/RichardErkhov/ahmedheakl_-_asm2asm-deepseek1.3b-xtokenizer-armv8-gguf/blob/main/asm2asm-deepseek1.3b-xtokenizer-armv8.Q6_K.gguf) | Q6_K | 1.09GB |
| [asm2asm-deepseek1.3b-xtokenizer-armv8.Q8_0.gguf](https://huggingface.co/RichardErkhov/ahmedheakl_-_asm2asm-deepseek1.3b-xtokenizer-armv8-gguf/blob/main/asm2asm-deepseek1.3b-xtokenizer-armv8.Q8_0.gguf) | Q8_0 | 1.33GB |
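
As with any GGUF file, these can be loaded with llama.cpp; a minimal sketch (assuming a local llama.cpp build, with an illustrative prompt):

```bash
# Sketch only: file name taken from the table above.
llama-cli -m asm2asm-deepseek1.3b-xtokenizer-armv8.Q4_K_M.gguf \
  -p "Translate this x86 assembly to ARMv8:" -n 256
```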
Original model description:
---
library_name: transformers
license: other
base_model: deepseek-ai/deepseek-coder-1.3b-instruct
tags:
- trl
- sft
- generated_from_trainer
model-index:
- name: asm2asm-deepseek-1.3b-500k-mac-x86-O0-arm-gnueabi-gcc
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# asm2asm-deepseek-1.3b-500k-mac-x86-O0-arm-gnueabi-gcc
This model is a fine-tuned version of [deepseek-ai/deepseek-coder-1.3b-instruct](https://huggingface.co/deepseek-ai/deepseek-coder-1.3b-instruct) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
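For illustration, a hedged sketch of how these values map onto 🤗 `TrainingArguments` (the output directory is a placeholder; the card does not include the actual TRL training script):

```python
from transformers import TrainingArguments

# Hedged reconstruction of the reported hyperparameters.
args = TrainingArguments(
    output_dir="asm2asm-deepseek-1.3b",  # placeholder
    learning_rate=2e-4,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=8,
    seed=42,
    gradient_accumulation_steps=4,  # batch 1 x 4 accumulation steps = total batch 4
    lr_scheduler_type="linear",
    num_train_epochs=2,
)
```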
### Training results
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.1+cu118
- Datasets 3.0.0
- Tokenizers 0.19.1
|
stablediffusionapi/cleanDrawCartoonStyle
|
stablediffusionapi
| 2024-11-15T06:22:54Z
| 31
| 0
|
diffusers
|
[
"diffusers",
"safetensors",
"modelslab.com",
"stable-diffusion-api",
"text-to-image",
"ultra-realistic",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2024-11-15T06:21:02Z
|
---
license: creativeml-openrail-m
tags:
- modelslab.com
- stable-diffusion-api
- text-to-image
- ultra-realistic
pinned: true
---
# API Inference

## Get API Key
Get your API key from [ModelsLab API](http://modelslab.com); no payment is needed.
Replace the key in the code below and set **model_id** to "cleanDrawCartoonStyle".
Coding in PHP, Node, Java, etc.? See the docs for more code examples: [View docs](https://docs.modelslab.com)
Try model for free: [Generate Images](https://modelslab.com/models/cleanDrawCartoonStyle)
Model link: [View model](https://modelslab.com/models/cleanDrawCartoonStyle)
View all models: [View Models](https://modelslab.com/models)
```python
import requests
import json

url = "https://modelslab.com/api/v6/images/text2img"

# Build the request payload; replace "your_api_key" with your ModelsLab key.
payload = json.dumps({
    "key": "your_api_key",
    "model_id": "cleanDrawCartoonStyle",
    "prompt": "ultra realistic close up portrait ((beautiful pale cyberpunk female with heavy black eyeliner)), blue eyes, shaved side haircut, hyper detail, cinematic lighting, magic neon, dark red city, Canon EOS R3, nikon, f/1.4, ISO 200, 1/160s, 8K, RAW, unedited, symmetrical balance, in-frame, 8K",
    "negative_prompt": "painting, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, deformed, ugly, blurry, bad anatomy, bad proportions, extra limbs, cloned face, skinny, glitchy, double torso, extra arms, extra hands, mangled fingers, missing lips, ugly face, distorted face, extra legs, anime",
    "width": "512",
    "height": "512",
    "samples": "1",
    "num_inference_steps": "30",
    "safety_checker": "no",
    "enhance_prompt": "yes",
    "seed": None,
    "guidance_scale": 7.5,
    "multi_lingual": "no",
    "panorama": "no",
    "self_attention": "no",
    "upscale": "no",
    "embeddings": "embeddings_model_id",
    "lora": "lora_model_id",
    "webhook": None,
    "track_id": None
})

headers = {
    "Content-Type": "application/json"
}

# POST the request and print the raw JSON response.
response = requests.post(url, headers=headers, data=payload)
print(response.text)
```
> Use this coupon code to get 25% off: **DMGG0RBN**
|
idoo0/vit-plant-test
|
idoo0
| 2024-11-15T06:18:05Z
| 9
| 0
| null |
[
"tensorboard",
"safetensors",
"vit",
"image-classification",
"pytorch",
"huggingpics",
"model-index",
"region:us"
] |
image-classification
| 2024-11-15T06:17:53Z
|
---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: vit-plant-test
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.5421686768531799
---
# vit-plant-test
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
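For quick local inference, a minimal sketch with the 🤗 `pipeline` API (the image path is a placeholder):

```python
from transformers import pipeline

# Labels come from the fine-tuned checkpoint's config.
classifier = pipeline("image-classification", model="idoo0/vit-plant-test")
print(classifier("leaf.jpg"))  # placeholder image path
```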
## Example Images
#### Anthracnose Plant Disease

#### Bacterial Spot Plant Disease

#### Black Rot Plant Disease

#### Black Spot Plant Disease

#### Downy Mildew Plant Disease

#### Early Blight Plant Disease

#### Late Blight Plant Disease

#### Leaf Spot Plant Disease

#### Powdery Mildew Plant Disease

#### Rust Plant Disease

#### Spider Spot Plant Disease

#### Viral Plant Disease

|
Mimi-333/Llama-3.1-70B-Japanese-Instruct-2407-GGUF
|
Mimi-333
| 2024-11-15T06:18:04Z
| 22
| 1
| null |
[
"gguf",
"base_model:cyberagent/Llama-3.1-70B-Japanese-Instruct-2407",
"base_model:quantized:cyberagent/Llama-3.1-70B-Japanese-Instruct-2407",
"license:llama3.1",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-11-14T08:03:00Z
|
---
license: llama3.1
base_model:
- cyberagent/Llama-3.1-70B-Japanese-Instruct-2407
---
Quantized with [llama.cpp](https://github.com/ggerganov/llama.cpp), [release b4077](https://github.com/ggerganov/llama.cpp/releases/tag/b4077).
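As a usage hint, a minimal sketch for loading one of these quants with `llama-cpp-python` (the file name is an assumption; use the quant you actually downloaded):

```python
from llama_cpp import Llama  # pip install llama-cpp-python

# Load the GGUF; file name below is an assumed example.
llm = Llama(
    model_path="Llama-3.1-70B-Japanese-Instruct-2407-Q4_K_M.gguf",
    n_ctx=4096,
)
# Japanese prompt: "Hello, please introduce yourself."
out = llm("こんにちは。自己紹介をしてください。", max_tokens=128)
print(out["choices"][0]["text"])
```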
|
mradermacher/AMD-Llama-135m-code-i1-GGUF
|
mradermacher
| 2024-11-15T06:17:38Z
| 16
| 0
|
transformers
|
[
"transformers",
"gguf",
"en",
"dataset:cerebras/SlimPajama-627B",
"dataset:manu/project_gutenberg",
"base_model:amd/AMD-Llama-135m-code",
"base_model:quantized:amd/AMD-Llama-135m-code",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix"
] | null | 2024-11-15T06:03:36Z
|
---
base_model: amd/AMD-Llama-135m-code
datasets:
- cerebras/SlimPajama-627B
- manu/project_gutenberg
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/amd/AMD-Llama-135m-code
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/AMD-Llama-135m-code-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
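To make the concatenation step concrete, a hedged Python sketch (the part names are assumptions; old-style split quants are plain byte-wise splits):

```python
import shutil

# Old-style multi-part GGUFs (e.g. *.gguf.part1of2) are rejoined byte-for-byte.
parts = ["model.gguf.part1of2", "model.gguf.part2of2"]  # assumed file names
with open("model.gguf", "wb") as merged:
    for part in parts:
        with open(part, "rb") as f:
            shutil.copyfileobj(f, merged)
```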
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable over similar-sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/AMD-Llama-135m-code-i1-GGUF/resolve/main/AMD-Llama-135m-code.i1-IQ1_S.gguf) | i1-IQ1_S | 0.1 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/AMD-Llama-135m-code-i1-GGUF/resolve/main/AMD-Llama-135m-code.i1-IQ1_M.gguf) | i1-IQ1_M | 0.1 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/AMD-Llama-135m-code-i1-GGUF/resolve/main/AMD-Llama-135m-code.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 0.1 | |
| [GGUF](https://huggingface.co/mradermacher/AMD-Llama-135m-code-i1-GGUF/resolve/main/AMD-Llama-135m-code.i1-IQ2_XS.gguf) | i1-IQ2_XS | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/AMD-Llama-135m-code-i1-GGUF/resolve/main/AMD-Llama-135m-code.i1-IQ2_S.gguf) | i1-IQ2_S | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/AMD-Llama-135m-code-i1-GGUF/resolve/main/AMD-Llama-135m-code.i1-IQ2_M.gguf) | i1-IQ2_M | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/AMD-Llama-135m-code-i1-GGUF/resolve/main/AMD-Llama-135m-code.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 0.2 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/AMD-Llama-135m-code-i1-GGUF/resolve/main/AMD-Llama-135m-code.i1-Q2_K.gguf) | i1-Q2_K | 0.2 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/AMD-Llama-135m-code-i1-GGUF/resolve/main/AMD-Llama-135m-code.i1-IQ3_XS.gguf) | i1-IQ3_XS | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/AMD-Llama-135m-code-i1-GGUF/resolve/main/AMD-Llama-135m-code.i1-IQ3_S.gguf) | i1-IQ3_S | 0.2 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/AMD-Llama-135m-code-i1-GGUF/resolve/main/AMD-Llama-135m-code.i1-Q3_K_S.gguf) | i1-Q3_K_S | 0.2 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/AMD-Llama-135m-code-i1-GGUF/resolve/main/AMD-Llama-135m-code.i1-IQ3_M.gguf) | i1-IQ3_M | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/AMD-Llama-135m-code-i1-GGUF/resolve/main/AMD-Llama-135m-code.i1-Q3_K_M.gguf) | i1-Q3_K_M | 0.2 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/AMD-Llama-135m-code-i1-GGUF/resolve/main/AMD-Llama-135m-code.i1-Q3_K_L.gguf) | i1-Q3_K_L | 0.2 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/AMD-Llama-135m-code-i1-GGUF/resolve/main/AMD-Llama-135m-code.i1-IQ4_XS.gguf) | i1-IQ4_XS | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/AMD-Llama-135m-code-i1-GGUF/resolve/main/AMD-Llama-135m-code.i1-Q4_0_4_4.gguf) | i1-Q4_0_4_4 | 0.2 | fast on arm, low quality |
| [GGUF](https://huggingface.co/mradermacher/AMD-Llama-135m-code-i1-GGUF/resolve/main/AMD-Llama-135m-code.i1-Q4_0_4_8.gguf) | i1-Q4_0_4_8 | 0.2 | fast on arm+i8mm, low quality |
| [GGUF](https://huggingface.co/mradermacher/AMD-Llama-135m-code-i1-GGUF/resolve/main/AMD-Llama-135m-code.i1-Q4_0_8_8.gguf) | i1-Q4_0_8_8 | 0.2 | fast on arm+sve, low quality |
| [GGUF](https://huggingface.co/mradermacher/AMD-Llama-135m-code-i1-GGUF/resolve/main/AMD-Llama-135m-code.i1-Q4_0.gguf) | i1-Q4_0 | 0.2 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/AMD-Llama-135m-code-i1-GGUF/resolve/main/AMD-Llama-135m-code.i1-Q4_K_S.gguf) | i1-Q4_K_S | 0.2 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/AMD-Llama-135m-code-i1-GGUF/resolve/main/AMD-Llama-135m-code.i1-Q4_K_M.gguf) | i1-Q4_K_M | 0.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/AMD-Llama-135m-code-i1-GGUF/resolve/main/AMD-Llama-135m-code.i1-Q5_K_S.gguf) | i1-Q5_K_S | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/AMD-Llama-135m-code-i1-GGUF/resolve/main/AMD-Llama-135m-code.i1-Q5_K_M.gguf) | i1-Q5_K_M | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/AMD-Llama-135m-code-i1-GGUF/resolve/main/AMD-Llama-135m-code.i1-Q6_K.gguf) | i1-Q6_K | 0.2 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
mlx-community/falcon-mamba-7b-4bit
|
mlx-community
| 2024-11-15T06:14:16Z
| 5
| 0
|
mlx
|
[
"mlx",
"safetensors",
"falcon_mamba",
"en",
"dataset:tiiuae/falcon-refinedweb",
"dataset:HuggingFaceFW/fineweb-edu",
"base_model:tiiuae/falcon-mamba-7b",
"base_model:quantized:tiiuae/falcon-mamba-7b",
"license:other",
"model-index",
"4-bit",
"region:us"
] | null | 2024-11-15T06:12:45Z
|
---
base_model: tiiuae/falcon-mamba-7b
datasets:
- tiiuae/falcon-refinedweb
- HuggingFaceFW/fineweb-edu
language:
- en
license: other
license_name: falcon-mamba-7b-license
license_link: https://falconllm.tii.ae/falcon-mamba-7b-terms-and-conditions.html
tags:
- mlx
model-index:
- name: falcon-mamba-7b
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: IFEval (0-Shot)
type: HuggingFaceH4/ifeval
args:
num_few_shot: 0
metrics:
- type: inst_level_strict_acc and prompt_level_strict_acc
value: 33.36
name: strict accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=tiiuae/falcon-mamba-7b
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: BBH (3-Shot)
type: BBH
args:
num_few_shot: 3
metrics:
- type: acc_norm
value: 19.88
name: normalized accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=tiiuae/falcon-mamba-7b
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MATH Lvl 5 (4-Shot)
type: hendrycks/competition_math
args:
num_few_shot: 4
metrics:
- type: exact_match
value: 3.63
name: exact match
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=tiiuae/falcon-mamba-7b
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GPQA (0-shot)
type: Idavidrein/gpqa
args:
num_few_shot: 0
metrics:
- type: acc_norm
value: 8.05
name: acc_norm
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=tiiuae/falcon-mamba-7b
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MuSR (0-shot)
type: TAUR-Lab/MuSR
args:
num_few_shot: 0
metrics:
- type: acc_norm
value: 10.86
name: acc_norm
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=tiiuae/falcon-mamba-7b
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU-PRO (5-shot)
type: TIGER-Lab/MMLU-Pro
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 14.47
name: accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=tiiuae/falcon-mamba-7b
name: Open LLM Leaderboard
---
# mlx-community/falcon-mamba-7b-4bit
The Model [mlx-community/falcon-mamba-7b-4bit](https://huggingface.co/mlx-community/falcon-mamba-7b-4bit) was converted to MLX format from [tiiuae/falcon-mamba-7b](https://huggingface.co/tiiuae/falcon-mamba-7b) using mlx-lm version **0.19.2**.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate

# Load the 4-bit MLX weights and the matching tokenizer.
model, tokenizer = load("mlx-community/falcon-mamba-7b-4bit")

prompt = "hello"

# Wrap the prompt in the chat template when the tokenizer provides one.
if hasattr(tokenizer, "apply_chat_template") and tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(
        messages, tokenize=False, add_generation_prompt=True
    )

response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
|
ssai0915/fake_new_data_train_llama
|
ssai0915
| 2024-11-15T06:02:09Z
| 180
| 0
|
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-11-15T06:01:57Z
|
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
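Since the card leaves this section empty, here is a hedged generic sketch for a Llama-architecture text-generation checkpoint (the prompt and generation settings are assumptions):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Generic causal-LM loading; this card ships no official usage code.
tokenizer = AutoTokenizer.from_pretrained("ssai0915/fake_new_data_train_llama")
model = AutoModelForCausalLM.from_pretrained("ssai0915/fake_new_data_train_llama")

inputs = tokenizer("Is this headline fake news?", return_tensors="pt")  # assumed prompt
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```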
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
wsklee/distilbert-sentiment-imdb-cft
|
wsklee
| 2024-11-15T05:41:40Z
| 159
| 0
|
transformers
|
[
"transformers",
"safetensors",
"distilbert",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-11-15T05:25:35Z
|
---
library_name: transformers
license: apache-2.0
base_model: distilbert/distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: distilbert-sentiment-imdb-cft
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-sentiment-imdb-cft
This model is a fine-tuned version of [distilbert/distilbert-base-uncased](https://huggingface.co/distilbert/distilbert-base-uncased) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Accuracy: 0.9856
- Pos Similarity: 0.9538
- Neg Similarity: 0.4913
- F1: 0.9927
- Precision: 1.0
- Recall: 0.9856
- Loss: 3.5397
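For reference, a minimal sentiment-inference sketch with the 🤗 `pipeline` API (label names depend on the checkpoint's config and are not documented here):

```python
from transformers import pipeline

clf = pipeline("text-classification", model="wsklee/distilbert-sentiment-imdb-cft")
print(clf("This movie was a complete waste of time."))  # example review
```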
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine_with_restarts
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Accuracy | Pos Similarity | Neg Similarity | F1 | Precision | Recall | Validation Loss |
|:-------------:|:------:|:----:|:--------:|:--------------:|:--------------:|:------:|:---------:|:------:|:---------------:|
| 3.8563 | 1.1364 | 200 | 0.9728 | 0.9662 | 0.7048 | 0.9862 | 1.0 | 0.9728 | 3.5778 |
| 3.5857 | 2.2727 | 400 | 0.9848 | 0.9666 | 0.5691 | 0.9923 | 1.0 | 0.9848 | 3.5278 |
| 3.5032 | 3.4091 | 600 | 0.9856 | 0.9538 | 0.4913 | 0.9927 | 1.0 | 0.9856 | 3.5397 |
### Framework versions
- Transformers 4.46.2
- Pytorch 2.5.1
- Datasets 3.1.0
- Tokenizers 0.20.3
|
idoo0/test-vit
|
idoo0
| 2024-11-15T05:39:17Z
| 5
| 0
| null |
[
"tensorboard",
"safetensors",
"vit",
"image-classification",
"pytorch",
"huggingpics",
"model-index",
"region:us"
] |
image-classification
| 2024-11-15T05:39:08Z
|
---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: test-vit
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.8656716346740723
---
# test-vit
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
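For quick local inference without the demo, a minimal sketch using the raw model classes (the image path is a placeholder):

```python
import torch
from PIL import Image
from transformers import AutoImageProcessor, AutoModelForImageClassification

processor = AutoImageProcessor.from_pretrained("idoo0/test-vit")
model = AutoModelForImageClassification.from_pretrained("idoo0/test-vit")

image = Image.open("dog.jpg")  # placeholder image path
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(model.config.id2label[int(logits.argmax(-1))])
```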
## Example Images
#### corgi

#### samoyed

#### shiba inu

|
Kapzo/demo-donut_extraction-v4
|
Kapzo
| 2024-11-15T05:38:37Z
| 6
| 0
|
transformers
|
[
"transformers",
"safetensors",
"vision-encoder-decoder",
"image-text-to-text",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] |
image-text-to-text
| 2024-11-15T02:43:22Z
|
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
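Since this section is empty, here is a heavily hedged generic sketch for a vision-encoder-decoder checkpoint (the task prompt and file path are assumptions; Donut-style models usually expect a model-specific task token that this card does not document):

```python
from PIL import Image
from transformers import AutoProcessor, VisionEncoderDecoderModel

processor = AutoProcessor.from_pretrained("Kapzo/demo-donut_extraction-v4")
model = VisionEncoderDecoderModel.from_pretrained("Kapzo/demo-donut_extraction-v4")

image = Image.open("document.png").convert("RGB")  # placeholder path
pixel_values = processor(image, return_tensors="pt").pixel_values
task_prompt = "<s>"  # assumption: the real task token is undocumented here
decoder_input_ids = processor.tokenizer(
    task_prompt, add_special_tokens=False, return_tensors="pt"
).input_ids
outputs = model.generate(pixel_values, decoder_input_ids=decoder_input_ids, max_new_tokens=256)
print(processor.batch_decode(outputs, skip_special_tokens=True)[0])
```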
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Beehzod/speecht5_finetuned_uz_customData2
|
Beehzod
| 2024-11-15T05:36:32Z
| 335
| 0
|
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"speecht5",
"text-to-audio",
"generated_from_trainer",
"base_model:microsoft/speecht5_tts",
"base_model:finetune:microsoft/speecht5_tts",
"license:mit",
"endpoints_compatible",
"region:us"
] |
text-to-audio
| 2024-11-15T05:19:01Z
|
---
library_name: transformers
license: mit
base_model: microsoft/speecht5_tts
tags:
- generated_from_trainer
model-index:
- name: speecht5_finetuned_uz_customData2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# speecht5_finetuned_uz_customData2
This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4331
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- training_steps: 500
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-------:|:----:|:---------------:|
| 0.5399 | 3.1217 | 100 | 0.4750 |
| 0.4713 | 6.2433 | 200 | 0.4548 |
| 0.444 | 9.3650 | 300 | 0.4334 |
| 0.4355 | 12.4867 | 400 | 0.4348 |
| 0.4214 | 15.6084 | 500 | 0.4331 |
### Framework versions
- Transformers 4.47.0.dev0
- Pytorch 2.4.0
- Datasets 3.1.0
- Tokenizers 0.20.3
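For completeness, a hedged inference sketch following the standard SpeechT5 recipe (the speaker-embedding source and example sentence are assumptions; this card specifies neither):

```python
import torch
import soundfile as sf
from datasets import load_dataset
from transformers import SpeechT5ForTextToSpeech, SpeechT5HifiGan, SpeechT5Processor

processor = SpeechT5Processor.from_pretrained("Beehzod/speecht5_finetuned_uz_customData2")
model = SpeechT5ForTextToSpeech.from_pretrained("Beehzod/speecht5_finetuned_uz_customData2")
vocoder = SpeechT5HifiGan.from_pretrained("microsoft/speecht5_hifigan")

inputs = processor(text="Salom, dunyo!", return_tensors="pt")  # assumed Uzbek sample

# Assumed speaker embedding; SpeechT5 needs an x-vector to condition the voice.
embeddings = load_dataset("Matthijs/cmu-arctic-xvectors", split="validation")
speaker_embeddings = torch.tensor(embeddings[7306]["xvector"]).unsqueeze(0)

speech = model.generate_speech(inputs["input_ids"], speaker_embeddings, vocoder=vocoder)
sf.write("speech.wav", speech.numpy(), samplerate=16000)
```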
|