modelId | author | last_modified | downloads | likes | library_name | tags | pipeline_tag | createdAt | card
---|---|---|---|---|---|---|---|---|---
Triangle104/dolphin-2.9.4-gemma2-2b-Q8_0-GGUF | Triangle104 | 2024-09-09T04:13:48Z | 6 | 0 | null | ["gguf", "generated_from_trainer", "llama-cpp", "gguf-my-repo", "dataset:cognitivecomputations/Dolphin-2.9", "dataset:m-a-p/CodeFeedback-Filtered-Instruction", "dataset:cognitivecomputations/dolphin-coder", "dataset:cognitivecomputations/samantha-data", "dataset:microsoft/orca-math-word-problems-200k", "dataset:mlabonne/FineTome-100k", "dataset:arcee/agent_data", "dataset:PawanKrd/math-gpt-4o-200k", "dataset:cognitivecomputations/SystemChat-2.0", "base_model:cognitivecomputations/dolphin-2.9.4-gemma2-2b", "base_model:quantized:cognitivecomputations/dolphin-2.9.4-gemma2-2b", "license:gemma", "endpoints_compatible", "region:us", "conversational"] | null | 2024-09-09T04:13:35Z |
---
base_model: cognitivecomputations/dolphin-2.9.4-gemma2-2b
datasets:
- cognitivecomputations/Dolphin-2.9
- m-a-p/CodeFeedback-Filtered-Instruction
- cognitivecomputations/dolphin-coder
- cognitivecomputations/samantha-data
- microsoft/orca-math-word-problems-200k
- mlabonne/FineTome-100k
- arcee/agent_data
- PawanKrd/math-gpt-4o-200k
- cognitivecomputations/SystemChat-2.0
license: gemma
tags:
- generated_from_trainer
- llama-cpp
- gguf-my-repo
---
# Triangle104/dolphin-2.9.4-gemma2-2b-Q8_0-GGUF
This model was converted to GGUF format from [`cognitivecomputations/dolphin-2.9.4-gemma2-2b`](https://huggingface.co/cognitivecomputations/dolphin-2.9.4-gemma2-2b) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/cognitivecomputations/dolphin-2.9.4-gemma2-2b) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on macOS and Linux):
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/dolphin-2.9.4-gemma2-2b-Q8_0-GGUF --hf-file dolphin-2.9.4-gemma2-2b-q8_0.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Triangle104/dolphin-2.9.4-gemma2-2b-Q8_0-GGUF --hf-file dolphin-2.9.4-gemma2-2b-q8_0.gguf -c 2048
```
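Once the server is running, recent llama.cpp builds expose an OpenAI-compatible HTTP API, by default on port 8080. A minimal sketch of querying it from Python follows; the host, port, and endpoint availability are assumptions that depend on your build and flags:
```python
import requests

# Minimal sketch: chat with the model through llama-server's
# OpenAI-compatible endpoint (assumed default: http://localhost:8080).
resp = requests.post(
    "http://localhost:8080/v1/chat/completions",
    json={
        "messages": [
            {"role": "user", "content": "Summarize what GGUF quantization does."}
        ],
        "temperature": 0.7,
        "max_tokens": 128,
    },
    timeout=120,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```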
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```bash
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (for example, `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```bash
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```bash
./llama-cli --hf-repo Triangle104/dolphin-2.9.4-gemma2-2b-Q8_0-GGUF --hf-file dolphin-2.9.4-gemma2-2b-q8_0.gguf -p "The meaning to life and the universe is"
```
or
```bash
./llama-server --hf-repo Triangle104/dolphin-2.9.4-gemma2-2b-Q8_0-GGUF --hf-file dolphin-2.9.4-gemma2-2b-q8_0.gguf -c 2048
```
|
nazhan/bge-large-en-v1.5-brahmaputra-iter-9-1-epoch | nazhan | 2024-09-09T04:12:54Z | 7 | 0 | setfit | ["setfit", "safetensors", "bert", "sentence-transformers", "text-classification", "generated_from_setfit_trainer", "arxiv:2209.11055", "base_model:BAAI/bge-large-en-v1.5", "base_model:finetune:BAAI/bge-large-en-v1.5", "model-index", "region:us"] | text-classification | 2024-09-09T04:12:28Z |
---
base_model: BAAI/bge-large-en-v1.5
library_name: setfit
metrics:
- accuracy
pipeline_tag: text-classification
tags:
- setfit
- sentence-transformers
- text-classification
- generated_from_setfit_trainer
widget:
- text: Get me var Product_Profitability.
- text: What’s the best way to merge the Orders and Employees tables to identify the
top-performing departments?
- text: Please show min Total Company Revenue.
- text: Get me avg Intangible Assets.
- text: Can I join the Customers and Orders tables to find out which customers have
the highest lifetime value?
inference: true
model-index:
- name: SetFit with BAAI/bge-large-en-v1.5
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: Unknown
type: unknown
split: test
metrics:
- type: accuracy
value: 0.5726495726495726
name: Accuracy
---
# SetFit with BAAI/bge-large-en-v1.5
This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification. This SetFit model uses [BAAI/bge-large-en-v1.5](https://huggingface.co/BAAI/bge-large-en-v1.5) as the Sentence Transformer embedding model. A [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance is used for classification.
The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Model Details
### Model Description
- **Model Type:** SetFit
- **Sentence Transformer body:** [BAAI/bge-large-en-v1.5](https://huggingface.co/BAAI/bge-large-en-v1.5)
- **Classification head:** a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance
- **Maximum Sequence Length:** 512 tokens
- **Number of Classes:** 7 classes
<!-- - **Training Dataset:** [Unknown](https://huggingface.co/datasets/unknown) -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit)
- **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055)
- **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit)
### Model Labels
| Label | Examples |
|:-------------|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| Generalreply | <ul><li>'How was your day today?'</li><li>'Oh, I have a lot of hobbies actually! But if I had to pick one, I would say that my favorite is probably reading. I love getting lost in a good book and discovering new worlds and characters. How about you?'</li><li>'Honestly, I hope to achieve a lot in the next 5 years. I want to continue growing in my career and learn new skills. I also aspire to travel more and experience different cultures. Overall, my goal is to be happy and fulfilled in both my personal and professional life. How about you? What are your hopes for the next 5 years?'</li></ul> |
| Lookup_1 | <ul><li>'i want to get trend analysis and group by product'</li><li>'Show me data_asset_001_pcc details.'</li><li>'Analyze Product-wise EBIT Margin Trend.'</li></ul> |
| Tablejoin | <ul><li>'Join data_asset_001_kpm with data_asset_kpi_is.'</li><li>'Can I merge cash flow and key performance metrics tables?'</li><li>'Join product category comparison and trend analysis tables.'</li></ul> |
| Rejection | <ul><li>"I'm not interested in filtering this collection."</li><li>"I don't want to create any new data outputs."</li><li>"I don't want to perform any filtering."</li></ul> |
| Aggregation | <ul><li>'Can I have avg Cost_Broadband?'</li><li>'Please show min % YoY Change.'</li><li>'Get me avg Earning_per_Cost.'</li></ul> |
| Viewtables | <ul><li>'What tables are included in the starhub_data_asset database that relate to customer complaints?'</li><li>'I need to see a list of tables that contain information about network outages.'</li><li>'What are the available tables in the starhub_data_asset database that are relevant to financial reporting?'</li></ul> |
| Lookup | <ul><li>'Filter by orders placed by customer ID 102 and get me the order dates.'</li><li>'Show me the orders placed on January 1st, 2024.'</li><li>"Get me the phone number of the customer with the first name 'Alice'."</li></ul> |
## Evaluation
### Metrics
| Label | Accuracy |
|:--------|:---------|
| **all** | 0.5726 |
## Uses
### Direct Use for Inference
First install the SetFit library:
```bash
pip install setfit
```
Then you can load this model and run inference.
```python
from setfit import SetFitModel
# Download from the 🤗 Hub
model = SetFitModel.from_pretrained("nazhan/bge-large-en-v1.5-brahmaputra-iter-9-1-epoch")
# Run inference
preds = model("Get me avg Intangible Assets.")
```
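For intent routing like this, the per-class probabilities are often more useful than the hard label. A minimal sketch follows, assuming the standard SetFit API, where the model accepts a batch of strings and `predict_proba` returns one probability per class:
```python
from setfit import SetFitModel

model = SetFitModel.from_pretrained("nazhan/bge-large-en-v1.5-brahmaputra-iter-9-1-epoch")

# Batch inference: one predicted label per input string.
preds = model([
    "Please show min Total Company Revenue.",
    "How was your day today?",
])
print(preds)  # e.g. ['Aggregation', 'Generalreply'] -- illustrative output, not verified

# Per-class probabilities, e.g. for thresholding low-confidence routes.
probs = model.predict_proba(["Can I merge cash flow and key performance metrics tables?"])
print(probs)
```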
<!--
### Downstream Use
*List how someone could finetune this model on their own dataset.*
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Set Metrics
| Training set | Min | Median | Max |
|:-------------|:----|:-------|:----|
| Word count | 2 | 8.7792 | 62 |
| Label | Training Sample Count |
|:-------------|:----------------------|
| Tablejoin | 126 |
| Rejection | 72 |
| Aggregation | 221 |
| Lookup | 62 |
| Generalreply | 60 |
| Viewtables | 73 |
| Lookup_1 | 224 |
### Training Hyperparameters
- batch_size: (16, 16)
- num_epochs: (1, 1)
- max_steps: -1
- sampling_strategy: oversampling
- body_learning_rate: (2e-05, 1e-05)
- head_learning_rate: 0.01
- loss: CosineSimilarityLoss
- distance_metric: cosine_distance
- margin: 0.25
- end_to_end: False
- use_amp: False
- warmup_proportion: 0.1
- seed: 42
- eval_max_steps: -1
- load_best_model_at_end: True
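As a rough sketch of how the hyperparameters above map onto the SetFit training API — the tiny inline dataset is a stand-in for the real training data, and the exact argument set assumes SetFit 1.x:
```python
from datasets import Dataset
from setfit import SetFitModel, Trainer, TrainingArguments

model = SetFitModel.from_pretrained("BAAI/bge-large-en-v1.5")

# Stand-in data: two examples per class, with "text"/"label" columns.
train_ds = Dataset.from_dict({
    "text": [
        "Get me avg Intangible Assets.",
        "Please show min Total Company Revenue.",
        "How was your day today?",
        "What are your hopes for the next 5 years?",
    ],
    "label": ["Aggregation", "Aggregation", "Generalreply", "Generalreply"],
})
eval_ds = train_ds  # stand-in; use a real held-out split in practice

# Mirrors the hyperparameters listed above; tuples are (embedding, head) phases.
args = TrainingArguments(
    batch_size=(16, 16),
    num_epochs=(1, 1),
    sampling_strategy="oversampling",
    body_learning_rate=(2e-05, 1e-05),
    head_learning_rate=0.01,
    warmup_proportion=0.1,
    seed=42,
    load_best_model_at_end=True,
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=train_ds,
    eval_dataset=eval_ds,
    metric="accuracy",
)
trainer.train()
print(trainer.evaluate())
```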
### Training Results
| Epoch | Step | Training Loss | Validation Loss |
|:-------:|:---------:|:-------------:|:---------------:|
| 0.0000 | 1 | 0.2059 | - |
| 0.0014 | 50 | 0.1956 | - |
| 0.0028 | 100 | 0.207 | - |
| 0.0042 | 150 | 0.1783 | - |
| 0.0056 | 200 | 0.1517 | - |
| 0.0070 | 250 | 0.1795 | - |
| 0.0084 | 300 | 0.1227 | - |
| 0.0098 | 350 | 0.063 | - |
| 0.0112 | 400 | 0.0451 | - |
| 0.0126 | 450 | 0.0408 | - |
| 0.0140 | 500 | 0.0576 | - |
| 0.0155 | 550 | 0.0178 | - |
| 0.0169 | 600 | 0.0244 | - |
| 0.0183 | 650 | 0.0072 | - |
| 0.0197 | 700 | 0.0223 | - |
| 0.0211 | 750 | 0.0046 | - |
| 0.0225 | 800 | 0.003 | - |
| 0.0239 | 850 | 0.004 | - |
| 0.0253 | 900 | 0.0042 | - |
| 0.0267 | 950 | 0.0047 | - |
| 0.0281 | 1000 | 0.0045 | - |
| 0.0295 | 1050 | 0.0032 | - |
| 0.0309 | 1100 | 0.0021 | - |
| 0.0323 | 1150 | 0.0028 | - |
| 0.0337 | 1200 | 0.0022 | - |
| 0.0351 | 1250 | 0.0024 | - |
| 0.0365 | 1300 | 0.0019 | - |
| 0.0379 | 1350 | 0.002 | - |
| 0.0393 | 1400 | 0.0015 | - |
| 0.0407 | 1450 | 0.0016 | - |
| 0.0421 | 1500 | 0.0014 | - |
| 0.0436 | 1550 | 0.0013 | - |
| 0.0450 | 1600 | 0.0016 | - |
| 0.0464 | 1650 | 0.0011 | - |
| 0.0478 | 1700 | 0.0012 | - |
| 0.0492 | 1750 | 0.0011 | - |
| 0.0506 | 1800 | 0.0015 | - |
| 0.0520 | 1850 | 0.0016 | - |
| 0.0534 | 1900 | 0.0012 | - |
| 0.0548 | 1950 | 0.0008 | - |
| 0.0562 | 2000 | 0.0011 | - |
| 0.0576 | 2050 | 0.001 | - |
| 0.0590 | 2100 | 0.001 | - |
| 0.0604 | 2150 | 0.0008 | - |
| 0.0618 | 2200 | 0.0009 | - |
| 0.0632 | 2250 | 0.0007 | - |
| 0.0646 | 2300 | 0.0008 | - |
| 0.0660 | 2350 | 0.0006 | - |
| 0.0674 | 2400 | 0.0007 | - |
| 0.0688 | 2450 | 0.0008 | - |
| 0.0702 | 2500 | 0.0006 | - |
| 0.0717 | 2550 | 0.0007 | - |
| 0.0731 | 2600 | 0.0006 | - |
| 0.0745 | 2650 | 0.0007 | - |
| 0.0759 | 2700 | 0.0005 | - |
| 0.0773 | 2750 | 0.0006 | - |
| 0.0787 | 2800 | 0.0007 | - |
| 0.0801 | 2850 | 0.0007 | - |
| 0.0815 | 2900 | 0.0005 | - |
| 0.0829 | 2950 | 0.0008 | - |
| 0.0843 | 3000 | 0.0005 | - |
| 0.0857 | 3050 | 0.0007 | - |
| 0.0871 | 3100 | 0.0006 | - |
| 0.0885 | 3150 | 0.0005 | - |
| 0.0899 | 3200 | 0.0007 | - |
| 0.0913 | 3250 | 0.0005 | - |
| 0.0927 | 3300 | 0.0004 | - |
| 0.0941 | 3350 | 0.0005 | - |
| 0.0955 | 3400 | 0.0003 | - |
| 0.0969 | 3450 | 0.0004 | - |
| 0.0983 | 3500 | 0.0004 | - |
| 0.0998 | 3550 | 0.0004 | - |
| 0.1012 | 3600 | 0.0004 | - |
| 0.1026 | 3650 | 0.0004 | - |
| 0.1040 | 3700 | 0.0004 | - |
| 0.1054 | 3750 | 0.0004 | - |
| 0.1068 | 3800 | 0.0003 | - |
| 0.1082 | 3850 | 0.0003 | - |
| 0.1096 | 3900 | 0.0005 | - |
| 0.1110 | 3950 | 0.0005 | - |
| 0.1124 | 4000 | 0.0005 | - |
| 0.1138 | 4050 | 0.0003 | - |
| 0.1152 | 4100 | 0.0006 | - |
| 0.1166 | 4150 | 0.0004 | - |
| 0.1180 | 4200 | 0.0003 | - |
| 0.1194 | 4250 | 0.0004 | - |
| 0.1208 | 4300 | 0.0003 | - |
| 0.1222 | 4350 | 0.0004 | - |
| 0.1236 | 4400 | 0.0003 | - |
| 0.1250 | 4450 | 0.0003 | - |
| 0.1264 | 4500 | 0.0004 | - |
| 0.1279 | 4550 | 0.0003 | - |
| 0.1293 | 4600 | 0.0005 | - |
| 0.1307 | 4650 | 0.0004 | - |
| 0.1321 | 4700 | 0.0003 | - |
| 0.1335 | 4750 | 0.0004 | - |
| 0.1349 | 4800 | 0.0003 | - |
| 0.1363 | 4850 | 0.0003 | - |
| 0.1377 | 4900 | 0.0003 | - |
| 0.1391 | 4950 | 0.0003 | - |
| 0.1405 | 5000 | 0.0003 | - |
| 0.1419 | 5050 | 0.0003 | - |
| 0.1433 | 5100 | 0.0004 | - |
| 0.1447 | 5150 | 0.0003 | - |
| 0.1461 | 5200 | 0.0004 | - |
| 0.1475 | 5250 | 0.0004 | - |
| 0.1489 | 5300 | 0.0003 | - |
| 0.1503 | 5350 | 0.0003 | - |
| 0.1517 | 5400 | 0.0003 | - |
| 0.1531 | 5450 | 0.0003 | - |
| 0.1545 | 5500 | 0.0002 | - |
| 0.1560 | 5550 | 0.0003 | - |
| 0.1574 | 5600 | 0.0003 | - |
| 0.1588 | 5650 | 0.0003 | - |
| 0.1602 | 5700 | 0.0002 | - |
| 0.1616 | 5750 | 0.0002 | - |
| 0.1630 | 5800 | 0.0003 | - |
| 0.1644 | 5850 | 0.0002 | - |
| 0.1658 | 5900 | 0.0003 | - |
| 0.1672 | 5950 | 0.0002 | - |
| 0.1686 | 6000 | 0.0002 | - |
| 0.1700 | 6050 | 0.0002 | - |
| 0.1714 | 6100 | 0.0002 | - |
| 0.1728 | 6150 | 0.0003 | - |
| 0.1742 | 6200 | 0.0003 | - |
| 0.1756 | 6250 | 0.0003 | - |
| 0.1770 | 6300 | 0.0003 | - |
| 0.1784 | 6350 | 0.0002 | - |
| 0.1798 | 6400 | 0.0003 | - |
| 0.1812 | 6450 | 0.0002 | - |
| 0.1826 | 6500 | 0.0003 | - |
| 0.1841 | 6550 | 0.0002 | - |
| 0.1855 | 6600 | 0.0002 | - |
| 0.1869 | 6650 | 0.0002 | - |
| 0.1883 | 6700 | 0.0002 | - |
| 0.1897 | 6750 | 0.0003 | - |
| 0.1911 | 6800 | 0.0003 | - |
| 0.1925 | 6850 | 0.0002 | - |
| 0.1939 | 6900 | 0.0002 | - |
| 0.1953 | 6950 | 0.0002 | - |
| 0.1967 | 7000 | 0.0002 | - |
| 0.1981 | 7050 | 0.0001 | - |
| 0.1995 | 7100 | 0.0002 | - |
| 0.2009 | 7150 | 0.0002 | - |
| 0.2023 | 7200 | 0.0002 | - |
| 0.2037 | 7250 | 0.0002 | - |
| 0.2051 | 7300 | 0.0002 | - |
| 0.2065 | 7350 | 0.0001 | - |
| 0.2079 | 7400 | 0.0002 | - |
| 0.2093 | 7450 | 0.0024 | - |
| 0.2107 | 7500 | 0.0718 | - |
| 0.2122 | 7550 | 0.1 | - |
| 0.2136 | 7600 | 0.1876 | - |
| 0.2150 | 7650 | 0.1006 | - |
| 0.2164 | 7700 | 0.163 | - |
| 0.2178 | 7750 | 0.1008 | - |
| 0.2192 | 7800 | 0.1073 | - |
| 0.2206 | 7850 | 0.2059 | - |
| 0.2220 | 7900 | 0.112 | - |
| 0.2234 | 7950 | 0.1103 | - |
| 0.2248 | 8000 | 0.1921 | - |
| 0.2262 | 8050 | 0.0641 | - |
| 0.2276 | 8100 | 0.0992 | - |
| 0.2290 | 8150 | 0.2486 | - |
| 0.2304 | 8200 | 0.1716 | - |
| 0.2318 | 8250 | 0.142 | - |
| 0.2332 | 8300 | 0.1431 | - |
| 0.2346 | 8350 | 0.1774 | - |
| 0.2360 | 8400 | 0.1537 | - |
| 0.2374 | 8450 | 0.1902 | - |
| 0.2388 | 8500 | 0.1015 | - |
| 0.2402 | 8550 | 0.1401 | - |
| 0.2417 | 8600 | 0.2599 | - |
| 0.2431 | 8650 | 0.261 | - |
| 0.2445 | 8700 | 0.1861 | - |
| 0.2459 | 8750 | 0.1743 | - |
| 0.2473 | 8800 | 0.1705 | - |
| 0.2487 | 8850 | 0.1752 | - |
| 0.2501 | 8900 | 0.0914 | - |
| 0.2515 | 8950 | 0.1651 | - |
| 0.2529 | 9000 | 0.1165 | - |
| 0.2543 | 9050 | 0.2675 | - |
| 0.2557 | 9100 | 0.0953 | - |
| 0.2571 | 9150 | 0.0713 | - |
| 0.2585 | 9200 | 0.1782 | - |
| 0.2599 | 9250 | 0.1995 | - |
| 0.2613 | 9300 | 0.2393 | - |
| 0.2627 | 9350 | 0.1734 | - |
| 0.2641 | 9400 | 0.2222 | - |
| 0.2655 | 9450 | 0.3005 | - |
| 0.2669 | 9500 | 0.2252 | - |
| 0.2683 | 9550 | 0.2498 | - |
| 0.2698 | 9600 | 0.3293 | - |
| 0.2712 | 9650 | 0.2422 | - |
| 0.2726 | 9700 | 0.1943 | - |
| 0.2740 | 9750 | 0.2497 | - |
| 0.2754 | 9800 | 0.2538 | - |
| 0.2768 | 9850 | 0.2114 | - |
| 0.2782 | 9900 | 0.1719 | - |
| 0.2796 | 9950 | 0.2453 | - |
| 0.2810 | 10000 | 0.2571 | - |
| 0.2824 | 10050 | 0.2267 | - |
| 0.2838 | 10100 | 0.2274 | - |
| 0.2852 | 10150 | 0.2441 | - |
| 0.2866 | 10200 | 0.2536 | - |
| 0.2880 | 10250 | 0.236 | - |
| 0.2894 | 10300 | 0.204 | - |
| 0.2908 | 10350 | 0.2636 | - |
| 0.2922 | 10400 | 0.2562 | - |
| 0.2936 | 10450 | 0.2437 | - |
| 0.2950 | 10500 | 0.2395 | - |
| 0.2964 | 10550 | 0.2616 | - |
| 0.2979 | 10600 | 0.272 | - |
| 0.2993 | 10650 | 0.2637 | - |
| 0.3007 | 10700 | 0.2503 | - |
| 0.3021 | 10750 | 0.2401 | - |
| 0.3035 | 10800 | 0.2485 | - |
| 0.3049 | 10850 | 0.2521 | - |
| 0.3063 | 10900 | 0.256 | - |
| 0.3077 | 10950 | 0.2363 | - |
| 0.3091 | 11000 | 0.2482 | - |
| 0.3105 | 11050 | 0.2533 | - |
| 0.3119 | 11100 | 0.2598 | - |
| 0.3133 | 11150 | 0.2572 | - |
| 0.3147 | 11200 | 0.2631 | - |
| 0.3161 | 11250 | 0.2399 | - |
| 0.3175 | 11300 | 0.2509 | - |
| 0.3189 | 11350 | 0.2447 | - |
| 0.3203 | 11400 | 0.2395 | - |
| 0.3217 | 11450 | 0.2439 | - |
| 0.3231 | 11500 | 0.2497 | - |
| 0.3245 | 11550 | 0.2377 | - |
| 0.3260 | 11600 | 0.2452 | - |
| 0.3274 | 11650 | 0.2361 | - |
| 0.3288 | 11700 | 0.2431 | - |
| 0.3302 | 11750 | 0.2462 | - |
| 0.3316 | 11800 | 0.2438 | - |
| 0.3330 | 11850 | 0.2498 | - |
| 0.3344 | 11900 | 0.262 | - |
| 0.3358 | 11950 | 0.2451 | - |
| 0.3372 | 12000 | 0.251 | - |
| 0.3386 | 12050 | 0.2605 | - |
| 0.3400 | 12100 | 0.2477 | - |
| 0.3414 | 12150 | 0.2417 | - |
| 0.3428 | 12200 | 0.2566 | - |
| 0.3442 | 12250 | 0.2373 | - |
| 0.3456 | 12300 | 0.2444 | - |
| 0.3470 | 12350 | 0.2589 | - |
| 0.3484 | 12400 | 0.2491 | - |
| 0.3498 | 12450 | 0.2438 | - |
| 0.3512 | 12500 | 0.2519 | - |
| 0.3526 | 12550 | 0.2406 | - |
| 0.3541 | 12600 | 0.2472 | - |
| 0.3555 | 12650 | 0.2447 | - |
| 0.3569 | 12700 | 0.2677 | - |
| 0.3583 | 12750 | 0.2486 | - |
| 0.3597 | 12800 | 0.2585 | - |
| 0.3611 | 12850 | 0.2539 | - |
| 0.3625 | 12900 | 0.2556 | - |
| 0.3639 | 12950 | 0.2653 | - |
| 0.3653 | 13000 | 0.2583 | - |
| 0.3667 | 13050 | 0.2308 | - |
| 0.3681 | 13100 | 0.2586 | - |
| 0.3695 | 13150 | 0.2384 | - |
| 0.3709 | 13200 | 0.2645 | - |
| 0.3723 | 13250 | 0.2394 | - |
| 0.3737 | 13300 | 0.2575 | - |
| 0.3751 | 13350 | 0.2418 | - |
| 0.3765 | 13400 | 0.2414 | - |
| 0.3779 | 13450 | 0.2516 | - |
| 0.3793 | 13500 | 0.2571 | - |
| 0.3807 | 13550 | 0.2352 | - |
| 0.3822 | 13600 | 0.2584 | - |
| 0.3836 | 13650 | 0.2561 | - |
| 0.3850 | 13700 | 0.2672 | - |
| 0.3864 | 13750 | 0.2574 | - |
| 0.3878 | 13800 | 0.2398 | - |
| 0.3892 | 13850 | 0.2359 | - |
| 0.3906 | 13900 | 0.2397 | - |
| 0.3920 | 13950 | 0.2582 | - |
| 0.3934 | 14000 | 0.2468 | - |
| 0.3948 | 14050 | 0.2702 | - |
| 0.3962 | 14100 | 0.2547 | - |
| 0.3976 | 14150 | 0.2382 | - |
| 0.3990 | 14200 | 0.255 | - |
| 0.4004 | 14250 | 0.2382 | - |
| 0.4018 | 14300 | 0.2516 | - |
| 0.4032 | 14350 | 0.236 | - |
| 0.4046 | 14400 | 0.2499 | - |
| 0.4060 | 14450 | 0.2606 | - |
| 0.4074 | 14500 | 0.2514 | - |
| 0.4088 | 14550 | 0.2442 | - |
| 0.4103 | 14600 | 0.2516 | - |
| 0.4117 | 14650 | 0.2439 | - |
| 0.4131 | 14700 | 0.2547 | - |
| 0.4145 | 14750 | 0.2522 | - |
| 0.4159 | 14800 | 0.2421 | - |
| 0.4173 | 14850 | 0.2461 | - |
| 0.4187 | 14900 | 0.2663 | - |
| 0.4201 | 14950 | 0.259 | - |
| 0.4215 | 15000 | 0.2526 | - |
| 0.4229 | 15050 | 0.2527 | - |
| 0.4243 | 15100 | 0.2547 | - |
| 0.4257 | 15150 | 0.2696 | - |
| 0.4271 | 15200 | 0.2399 | - |
| 0.4285 | 15250 | 0.2557 | - |
| 0.4299 | 15300 | 0.2581 | - |
| 0.4313 | 15350 | 0.2402 | - |
| 0.4327 | 15400 | 0.2658 | - |
| 0.4341 | 15450 | 0.2491 | - |
| 0.4355 | 15500 | 0.2434 | - |
| 0.4369 | 15550 | 0.2511 | - |
| 0.4384 | 15600 | 0.2448 | - |
| 0.4398 | 15650 | 0.262 | - |
| 0.4412 | 15700 | 0.2549 | - |
| 0.4426 | 15750 | 0.2546 | - |
| 0.4440 | 15800 | 0.2444 | - |
| 0.4454 | 15850 | 0.2551 | - |
| 0.4468 | 15900 | 0.247 | - |
| 0.4482 | 15950 | 0.253 | - |
| 0.4496 | 16000 | 0.2615 | - |
| 0.4510 | 16050 | 0.2514 | - |
| 0.4524 | 16100 | 0.2587 | - |
| 0.4538 | 16150 | 0.2591 | - |
| 0.4552 | 16200 | 0.249 | - |
| 0.4566 | 16250 | 0.2459 | - |
| 0.4580 | 16300 | 0.2582 | - |
| 0.4594 | 16350 | 0.243 | - |
| 0.4608 | 16400 | 0.2493 | - |
| 0.4622 | 16450 | 0.2306 | - |
| 0.4636 | 16500 | 0.2561 | - |
| 0.4650 | 16550 | 0.2363 | - |
| 0.4664 | 16600 | 0.2412 | - |
| 0.4679 | 16650 | 0.2454 | - |
| 0.4693 | 16700 | 0.2575 | - |
| 0.4707 | 16750 | 0.2369 | - |
| 0.4721 | 16800 | 0.245 | - |
| 0.4735 | 16850 | 0.2591 | - |
| 0.4749 | 16900 | 0.2582 | - |
| 0.4763 | 16950 | 0.2629 | - |
| 0.4777 | 17000 | 0.2393 | - |
| 0.4791 | 17050 | 0.2563 | - |
| 0.4805 | 17100 | 0.2511 | - |
| 0.4819 | 17150 | 0.2538 | - |
| 0.4833 | 17200 | 0.2464 | - |
| 0.4847 | 17250 | 0.2511 | - |
| 0.4861 | 17300 | 0.244 | - |
| 0.4875 | 17350 | 0.2688 | - |
| 0.4889 | 17400 | 0.2729 | - |
| 0.4903 | 17450 | 0.2523 | - |
| 0.4917 | 17500 | 0.2507 | - |
| 0.4931 | 17550 | 0.2527 | - |
| 0.4945 | 17600 | 0.2478 | - |
| 0.4960 | 17650 | 0.26 | - |
| 0.4974 | 17700 | 0.2526 | - |
| 0.4988 | 17750 | 0.2549 | - |
| 0.5002 | 17800 | 0.2496 | - |
| 0.5016 | 17850 | 0.2537 | - |
| 0.5030 | 17900 | 0.2644 | - |
| 0.5044 | 17950 | 0.2633 | - |
| 0.5058 | 18000 | 0.2515 | - |
| 0.5072 | 18050 | 0.2551 | - |
| 0.5086 | 18100 | 0.2427 | - |
| 0.5100 | 18150 | 0.2615 | - |
| 0.5114 | 18200 | 0.2455 | - |
| 0.5128 | 18250 | 0.2615 | - |
| 0.5142 | 18300 | 0.2558 | - |
| 0.5156 | 18350 | 0.2483 | - |
| 0.5170 | 18400 | 0.2618 | - |
| 0.5184 | 18450 | 0.2404 | - |
| 0.5198 | 18500 | 0.2562 | - |
| 0.5212 | 18550 | 0.259 | - |
| 0.5226 | 18600 | 0.246 | - |
| 0.5241 | 18650 | 0.2529 | - |
| 0.5255 | 18700 | 0.2526 | - |
| 0.5269 | 18750 | 0.2381 | - |
| 0.5283 | 18800 | 0.2648 | - |
| 0.5297 | 18850 | 0.2628 | - |
| 0.5311 | 18900 | 0.2528 | - |
| 0.5325 | 18950 | 0.2447 | - |
| 0.5339 | 19000 | 0.2467 | - |
| 0.5353 | 19050 | 0.2487 | - |
| 0.5367 | 19100 | 0.2494 | - |
| 0.5381 | 19150 | 0.2441 | - |
| 0.5395 | 19200 | 0.2507 | - |
| 0.5409 | 19250 | 0.2494 | - |
| 0.5423 | 19300 | 0.2501 | - |
| 0.5437 | 19350 | 0.2586 | - |
| 0.5451 | 19400 | 0.2677 | - |
| 0.5465 | 19450 | 0.2558 | - |
| 0.5479 | 19500 | 0.2444 | - |
| 0.5493 | 19550 | 0.251 | - |
| 0.5507 | 19600 | 0.2545 | - |
| 0.5522 | 19650 | 0.2464 | - |
| 0.5536 | 19700 | 0.2565 | - |
| 0.5550 | 19750 | 0.2674 | - |
| 0.5564 | 19800 | 0.2483 | - |
| 0.5578 | 19850 | 0.241 | - |
| 0.5592 | 19900 | 0.2504 | - |
| 0.5606 | 19950 | 0.2655 | - |
| 0.5620 | 20000 | 0.2484 | - |
| 0.5634 | 20050 | 0.254 | - |
| 0.5648 | 20100 | 0.2482 | - |
| 0.5662 | 20150 | 0.2644 | - |
| 0.5676 | 20200 | 0.2694 | - |
| 0.5690 | 20250 | 0.258 | - |
| 0.5704 | 20300 | 0.2587 | - |
| 0.5718 | 20350 | 0.2571 | - |
| 0.5732 | 20400 | 0.2464 | - |
| 0.5746 | 20450 | 0.2531 | - |
| 0.5760 | 20500 | 0.2504 | - |
| 0.5774 | 20550 | 0.2551 | - |
| 0.5788 | 20600 | 0.253 | - |
| 0.5803 | 20650 | 0.2374 | - |
| 0.5817 | 20700 | 0.2405 | - |
| 0.5831 | 20750 | 0.2435 | - |
| 0.5845 | 20800 | 0.2569 | - |
| 0.5859 | 20850 | 0.2533 | - |
| 0.5873 | 20900 | 0.2508 | - |
| 0.5887 | 20950 | 0.2508 | - |
| 0.5901 | 21000 | 0.2531 | - |
| 0.5915 | 21050 | 0.2381 | - |
| 0.5929 | 21100 | 0.2009 | - |
| 0.5943 | 21150 | 0.0899 | - |
| 0.5957 | 21200 | 0.3046 | - |
| 0.5971 | 21250 | 0.2006 | - |
| 0.5985 | 21300 | 0.2289 | - |
| 0.5999 | 21350 | 0.1581 | - |
| 0.6013 | 21400 | 0.1769 | - |
| 0.6027 | 21450 | 0.2377 | - |
| 0.6041 | 21500 | 0.1988 | - |
| 0.6055 | 21550 | 0.2543 | - |
| 0.6069 | 21600 | 0.2517 | - |
| 0.6084 | 21650 | 0.2191 | - |
| 0.6098 | 21700 | 0.2803 | - |
| 0.6112 | 21750 | 0.2984 | - |
| 0.6126 | 21800 | 0.1915 | - |
| 0.6140 | 21850 | 0.189 | - |
| 0.6154 | 21900 | 0.1302 | - |
| 0.6168 | 21950 | 0.203 | - |
| 0.6182 | 22000 | 0.2038 | - |
| 0.6196 | 22050 | 0.134 | - |
| 0.6210 | 22100 | 0.1904 | - |
| 0.6224 | 22150 | 0.1477 | - |
| 0.6238 | 22200 | 0.1338 | - |
| 0.6252 | 22250 | 0.0709 | - |
| 0.6266 | 22300 | 0.0902 | - |
| 0.6280 | 22350 | 0.2025 | - |
| 0.6294 | 22400 | 0.0991 | - |
| 0.6308 | 22450 | 0.1321 | - |
| 0.6322 | 22500 | 0.1356 | - |
| 0.6336 | 22550 | 0.1682 | - |
| 0.6350 | 22600 | 0.2064 | - |
| 0.6365 | 22650 | 0.2 | - |
| 0.6379 | 22700 | 0.2105 | - |
| 0.6393 | 22750 | 0.2074 | - |
| 0.6407 | 22800 | 0.1901 | - |
| 0.6421 | 22850 | 0.1914 | - |
| 0.6435 | 22900 | 0.1831 | - |
| 0.6449 | 22950 | 0.1423 | - |
| 0.6463 | 23000 | 0.2502 | - |
| 0.6477 | 23050 | 0.1655 | - |
| 0.6491 | 23100 | 0.1585 | - |
| 0.6505 | 23150 | 0.2122 | - |
| 0.6519 | 23200 | 0.217 | - |
| 0.6533 | 23250 | 0.1704 | - |
| 0.6547 | 23300 | 0.189 | - |
| 0.6561 | 23350 | 0.1333 | - |
| 0.6575 | 23400 | 0.1863 | - |
| 0.6589 | 23450 | 0.2089 | - |
| 0.6603 | 23500 | 0.1261 | - |
| 0.6617 | 23550 | 0.1655 | - |
| 0.6631 | 23600 | 0.1721 | - |
| 0.6645 | 23650 | 0.083 | - |
| 0.6660 | 23700 | 0.1166 | - |
| 0.6674 | 23750 | 0.146 | - |
| 0.6688 | 23800 | 0.0423 | - |
| 0.6702 | 23850 | 0.1781 | - |
| 0.6716 | 23900 | 0.121 | - |
| 0.6730 | 23950 | 0.1624 | - |
| 0.6744 | 24000 | 0.1483 | - |
| 0.6758 | 24050 | 0.1479 | - |
| 0.6772 | 24100 | 0.2285 | - |
| 0.6786 | 24150 | 0.2084 | - |
| 0.6800 | 24200 | 0.12 | - |
| 0.6814 | 24250 | 0.115 | - |
| 0.6828 | 24300 | 0.1331 | - |
| 0.6842 | 24350 | 0.0971 | - |
| 0.6856 | 24400 | 0.0846 | - |
| 0.6870 | 24450 | 0.2254 | - |
| 0.6884 | 24500 | 0.1348 | - |
| 0.6898 | 24550 | 0.0633 | - |
| 0.6912 | 24600 | 0.1207 | - |
| 0.6926 | 24650 | 0.2109 | - |
| 0.6941 | 24700 | 0.0768 | - |
| 0.6955 | 24750 | 0.108 | - |
| 0.6969 | 24800 | 0.0665 | - |
| 0.6983 | 24850 | 0.0601 | - |
| 0.6997 | 24900 | 0.1922 | - |
| 0.7011 | 24950 | 0.1517 | - |
| 0.7025 | 25000 | 0.1049 | - |
| 0.7039 | 25050 | 0.1122 | - |
| 0.7053 | 25100 | 0.0973 | - |
| 0.7067 | 25150 | 0.1547 | - |
| 0.7081 | 25200 | 0.115 | - |
| 0.7095 | 25250 | 0.1881 | - |
| 0.7109 | 25300 | 0.2144 | - |
| 0.7123 | 25350 | 0.0567 | - |
| 0.7137 | 25400 | 0.0917 | - |
| 0.7151 | 25450 | 0.1404 | - |
| 0.7165 | 25500 | 0.019 | - |
| 0.7179 | 25550 | 0.1382 | - |
| 0.7193 | 25600 | 0.0727 | - |
| 0.7207 | 25650 | 0.1125 | - |
| 0.7222 | 25700 | 0.1133 | - |
| 0.7236 | 25750 | 0.0987 | - |
| 0.7250 | 25800 | 0.1915 | - |
| 0.7264 | 25850 | 0.09 | - |
| 0.7278 | 25900 | 0.1462 | - |
| 0.7292 | 25950 | 0.0881 | - |
| 0.7306 | 26000 | 0.1026 | - |
| 0.7320 | 26050 | 0.1079 | - |
| 0.7334 | 26100 | 0.1639 | - |
| 0.7348 | 26150 | 0.1229 | - |
| 0.7362 | 26200 | 0.3261 | - |
| 0.7376 | 26250 | 0.1426 | - |
| 0.7390 | 26300 | 0.0773 | - |
| 0.7404 | 26350 | 0.1607 | - |
| 0.7418 | 26400 | 0.1354 | - |
| 0.7432 | 26450 | 0.1512 | - |
| 0.7446 | 26500 | 0.1875 | - |
| 0.7460 | 26550 | 0.1403 | - |
| 0.7474 | 26600 | 0.1287 | - |
| 0.7488 | 26650 | 0.1892 | - |
| 0.7503 | 26700 | 0.166 | - |
| 0.7517 | 26750 | 0.2385 | - |
| 0.7531 | 26800 | 0.1445 | - |
| 0.7545 | 26850 | 0.0969 | - |
| 0.7559 | 26900 | 0.0948 | - |
| 0.7573 | 26950 | 0.0589 | - |
| 0.7587 | 27000 | 0.2326 | - |
| 0.7601 | 27050 | 0.1438 | - |
| 0.7615 | 27100 | 0.1032 | - |
| 0.7629 | 27150 | 0.0784 | - |
| 0.7643 | 27200 | 0.1478 | - |
| 0.7657 | 27250 | 0.1872 | - |
| 0.7671 | 27300 | 0.0672 | - |
| 0.7685 | 27350 | 0.0725 | - |
| 0.7699 | 27400 | 0.0771 | - |
| 0.7713 | 27450 | 0.2575 | - |
| 0.7727 | 27500 | 0.133 | - |
| 0.7741 | 27550 | 0.1222 | - |
| 0.7755 | 27600 | 0.1207 | - |
| 0.7769 | 27650 | 0.0973 | - |
| 0.7784 | 27700 | 0.2186 | - |
| 0.7798 | 27750 | 0.1648 | - |
| 0.7812 | 27800 | 0.1128 | - |
| 0.7826 | 27850 | 0.1626 | - |
| 0.7840 | 27900 | 0.1768 | - |
| 0.7854 | 27950 | 0.1806 | - |
| 0.7868 | 28000 | 0.1197 | - |
| 0.7882 | 28050 | 0.0472 | - |
| 0.7896 | 28100 | 0.1463 | - |
| 0.7910 | 28150 | 0.1707 | - |
| 0.7924 | 28200 | 0.0924 | - |
| 0.7938 | 28250 | 0.1708 | - |
| 0.7952 | 28300 | 0.1101 | - |
| 0.7966 | 28350 | 0.0867 | - |
| 0.7980 | 28400 | 0.1606 | - |
| 0.7994 | 28450 | 0.2422 | - |
| 0.8008 | 28500 | 0.1289 | - |
| 0.8022 | 28550 | 0.0513 | - |
| 0.8036 | 28600 | 0.1468 | - |
| 0.8050 | 28650 | 0.1742 | - |
| 0.8065 | 28700 | 0.0813 | - |
| 0.8079 | 28750 | 0.0916 | - |
| 0.8093 | 28800 | 0.0826 | - |
| 0.8107 | 28850 | 0.1457 | - |
| 0.8121 | 28900 | 0.0952 | - |
| 0.8135 | 28950 | 0.1376 | - |
| 0.8149 | 29000 | 0.06 | - |
| 0.8163 | 29050 | 0.1221 | - |
| 0.8177 | 29100 | 0.0713 | - |
| 0.8191 | 29150 | 0.1219 | - |
| 0.8205 | 29200 | 0.1051 | - |
| 0.8219 | 29250 | 0.1503 | - |
| 0.8233 | 29300 | 0.1128 | - |
| 0.8247 | 29350 | 0.0946 | - |
| 0.8261 | 29400 | 0.2115 | - |
| 0.8275 | 29450 | 0.1058 | - |
| 0.8289 | 29500 | 0.1085 | - |
| 0.8303 | 29550 | 0.1632 | - |
| 0.8317 | 29600 | 0.1022 | - |
| 0.8331 | 29650 | 0.136 | - |
| 0.8346 | 29700 | 0.1231 | - |
| 0.8360 | 29750 | 0.0929 | - |
| 0.8374 | 29800 | 0.1299 | - |
| 0.8388 | 29850 | 0.0693 | - |
| 0.8402 | 29900 | 0.0738 | - |
| 0.8416 | 29950 | 0.0826 | - |
| 0.8430 | 30000 | 0.1831 | - |
| 0.8444 | 30050 | 0.0962 | - |
| 0.8458 | 30100 | 0.0869 | - |
| 0.8472 | 30150 | 0.1459 | - |
| 0.8486 | 30200 | 0.1468 | - |
| 0.8500 | 30250 | 0.2132 | - |
| 0.8514 | 30300 | 0.1472 | - |
| 0.8528 | 30350 | 0.1294 | - |
| 0.8542 | 30400 | 0.0822 | - |
| 0.8556 | 30450 | 0.144 | - |
| 0.8570 | 30500 | 0.1216 | - |
| 0.8584 | 30550 | 0.1381 | - |
| 0.8598 | 30600 | 0.1612 | - |
| 0.8612 | 30650 | 0.1665 | - |
| 0.8627 | 30700 | 0.2035 | - |
| 0.8641 | 30750 | 0.136 | - |
| 0.8655 | 30800 | 0.1685 | - |
| 0.8669 | 30850 | 0.1421 | - |
| 0.8683 | 30900 | 0.1169 | - |
| 0.8697 | 30950 | 0.1799 | - |
| 0.8711 | 31000 | 0.2185 | - |
| 0.8725 | 31050 | 0.1321 | - |
| 0.8739 | 31100 | 0.145 | - |
| 0.8753 | 31150 | 0.1848 | - |
| 0.8767 | 31200 | 0.2173 | - |
| 0.8781 | 31250 | 0.2036 | - |
| 0.8795 | 31300 | 0.2056 | - |
| 0.8809 | 31350 | 0.312 | - |
| 0.8823 | 31400 | 0.2119 | - |
| 0.8837 | 31450 | 0.1875 | - |
| 0.8851 | 31500 | 0.2216 | - |
| 0.8865 | 31550 | 0.2267 | - |
| 0.8879 | 31600 | 0.2709 | - |
| 0.8893 | 31650 | 0.1868 | - |
| 0.8907 | 31700 | 0.1752 | - |
| 0.8922 | 31750 | 0.2468 | - |
| 0.8936 | 31800 | 0.1632 | - |
| 0.8950 | 31850 | 0.2483 | - |
| 0.8964 | 31900 | 0.1597 | - |
| 0.8978 | 31950 | 0.1587 | - |
| 0.8992 | 32000 | 0.0897 | - |
| 0.9006 | 32050 | 0.0764 | - |
| 0.9020 | 32100 | 0.1798 | - |
| 0.9034 | 32150 | 0.1254 | - |
| 0.9048 | 32200 | 0.1905 | - |
| 0.9062 | 32250 | 0.0714 | - |
| 0.9076 | 32300 | 0.1377 | - |
| 0.9090 | 32350 | 0.0192 | - |
| 0.9104 | 32400 | 0.1208 | - |
| 0.9118 | 32450 | 0.239 | - |
| 0.9132 | 32500 | 0.0965 | - |
| 0.9146 | 32550 | 0.1189 | - |
| 0.9160 | 32600 | 0.0856 | - |
| 0.9174 | 32650 | 0.1041 | - |
| 0.9188 | 32700 | 0.1107 | - |
| 0.9203 | 32750 | 0.1499 | - |
| 0.9217 | 32800 | 0.0874 | - |
| 0.9231 | 32850 | 0.1255 | - |
| 0.9245 | 32900 | 0.1099 | - |
| 0.9259 | 32950 | 0.1806 | - |
| 0.9273 | 33000 | 0.0544 | - |
| 0.9287 | 33050 | 0.0504 | - |
| 0.9301 | 33100 | 0.2441 | - |
| 0.9315 | 33150 | 0.0266 | - |
| 0.9329 | 33200 | 0.0985 | - |
| 0.9343 | 33250 | 0.0923 | - |
| 0.9357 | 33300 | 0.1054 | - |
| 0.9371 | 33350 | 0.0625 | - |
| 0.9385 | 33400 | 0.0882 | - |
| 0.9399 | 33450 | 0.102 | - |
| 0.9413 | 33500 | 0.108 | - |
| 0.9427 | 33550 | 0.135 | - |
| 0.9441 | 33600 | 0.1016 | - |
| 0.9455 | 33650 | 0.2008 | - |
| 0.9469 | 33700 | 0.0591 | - |
| 0.9484 | 33750 | 0.1922 | - |
| 0.9498 | 33800 | 0.1045 | - |
| 0.9512 | 33850 | 0.102 | - |
| 0.9526 | 33900 | 0.0634 | - |
| 0.9540 | 33950 | 0.0668 | - |
| 0.9554 | 34000 | 0.1339 | - |
| 0.9568 | 34050 | 0.0599 | - |
| 0.9582 | 34100 | 0.0623 | - |
| 0.9596 | 34150 | 0.1133 | - |
| 0.9610 | 34200 | 0.1218 | - |
| 0.9624 | 34250 | 0.0618 | - |
| 0.9638 | 34300 | 0.1062 | - |
| 0.9652 | 34350 | 0.0909 | - |
| 0.9666 | 34400 | 0.0885 | - |
| 0.9680 | 34450 | 0.1461 | - |
| 0.9694 | 34500 | 0.0254 | - |
| 0.9708 | 34550 | 0.0697 | - |
| 0.9722 | 34600 | 0.016 | - |
| 0.9736 | 34650 | 0.1524 | - |
| 0.9750 | 34700 | 0.1468 | - |
| 0.9765 | 34750 | 0.1497 | - |
| 0.9779 | 34800 | 0.0785 | - |
| 0.9793 | 34850 | 0.0645 | - |
| 0.9807 | 34900 | 0.1357 | - |
| 0.9821 | 34950 | 0.1469 | - |
| 0.9835 | 35000 | 0.2356 | - |
| 0.9849 | 35050 | 0.018 | - |
| 0.9863 | 35100 | 0.1534 | - |
| 0.9877 | 35150 | 0.14 | - |
| 0.9891 | 35200 | 0.1001 | - |
| 0.9905 | 35250 | 0.0614 | - |
| 0.9919 | 35300 | 0.1407 | - |
| 0.9933 | 35350 | 0.1104 | - |
| 0.9947 | 35400 | 0.1477 | - |
| 0.9961 | 35450 | 0.1279 | - |
| 0.9975 | 35500 | 0.0957 | - |
| 0.9989 | 35550 | 0.0579 | - |
| **1.0** | **35588** | **-** | **0.1207** |
* The bold row denotes the saved checkpoint.
### Framework Versions
- Python: 3.11.9
- SetFit: 1.1.0.dev0
- Sentence Transformers: 3.0.1
- Transformers: 4.44.2
- PyTorch: 2.4.0+cu121
- Datasets: 2.21.0
- Tokenizers: 0.19.1
## Citation
### BibTeX
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
Triangle104/dolphin-2.9.4-gemma2-2b-Q6_K-GGUF | Triangle104 | 2024-09-09T04:12:19Z | 11 | 0 | null | ["gguf", "generated_from_trainer", "llama-cpp", "gguf-my-repo", "dataset:cognitivecomputations/Dolphin-2.9", "dataset:m-a-p/CodeFeedback-Filtered-Instruction", "dataset:cognitivecomputations/dolphin-coder", "dataset:cognitivecomputations/samantha-data", "dataset:microsoft/orca-math-word-problems-200k", "dataset:mlabonne/FineTome-100k", "dataset:arcee/agent_data", "dataset:PawanKrd/math-gpt-4o-200k", "dataset:cognitivecomputations/SystemChat-2.0", "base_model:cognitivecomputations/dolphin-2.9.4-gemma2-2b", "base_model:quantized:cognitivecomputations/dolphin-2.9.4-gemma2-2b", "license:gemma", "endpoints_compatible", "region:us", "conversational"] | null | 2024-09-09T04:12:09Z |
---
base_model: cognitivecomputations/dolphin-2.9.4-gemma2-2b
datasets:
- cognitivecomputations/Dolphin-2.9
- m-a-p/CodeFeedback-Filtered-Instruction
- cognitivecomputations/dolphin-coder
- cognitivecomputations/samantha-data
- microsoft/orca-math-word-problems-200k
- mlabonne/FineTome-100k
- arcee/agent_data
- PawanKrd/math-gpt-4o-200k
- cognitivecomputations/SystemChat-2.0
license: gemma
tags:
- generated_from_trainer
- llama-cpp
- gguf-my-repo
---
# Triangle104/dolphin-2.9.4-gemma2-2b-Q6_K-GGUF
This model was converted to GGUF format from [`cognitivecomputations/dolphin-2.9.4-gemma2-2b`](https://huggingface.co/cognitivecomputations/dolphin-2.9.4-gemma2-2b) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/cognitivecomputations/dolphin-2.9.4-gemma2-2b) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on macOS and Linux):
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/dolphin-2.9.4-gemma2-2b-Q6_K-GGUF --hf-file dolphin-2.9.4-gemma2-2b-q6_k.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Triangle104/dolphin-2.9.4-gemma2-2b-Q6_K-GGUF --hf-file dolphin-2.9.4-gemma2-2b-q6_k.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```bash
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (for example, `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```bash
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```bash
./llama-cli --hf-repo Triangle104/dolphin-2.9.4-gemma2-2b-Q6_K-GGUF --hf-file dolphin-2.9.4-gemma2-2b-q6_k.gguf -p "The meaning to life and the universe is"
```
or
```bash
./llama-server --hf-repo Triangle104/dolphin-2.9.4-gemma2-2b-Q6_K-GGUF --hf-file dolphin-2.9.4-gemma2-2b-q6_k.gguf -c 2048
```
|
Triangle104/dolphin-2.9.4-gemma2-2b-Q5_K_M-GGUF | Triangle104 | 2024-09-09T04:11:15Z | 5 | 0 | null | ["gguf", "generated_from_trainer", "llama-cpp", "gguf-my-repo", "dataset:cognitivecomputations/Dolphin-2.9", "dataset:m-a-p/CodeFeedback-Filtered-Instruction", "dataset:cognitivecomputations/dolphin-coder", "dataset:cognitivecomputations/samantha-data", "dataset:microsoft/orca-math-word-problems-200k", "dataset:mlabonne/FineTome-100k", "dataset:arcee/agent_data", "dataset:PawanKrd/math-gpt-4o-200k", "dataset:cognitivecomputations/SystemChat-2.0", "base_model:cognitivecomputations/dolphin-2.9.4-gemma2-2b", "base_model:quantized:cognitivecomputations/dolphin-2.9.4-gemma2-2b", "license:gemma", "endpoints_compatible", "region:us", "conversational"] | null | 2024-09-09T04:11:04Z |
---
base_model: cognitivecomputations/dolphin-2.9.4-gemma2-2b
datasets:
- cognitivecomputations/Dolphin-2.9
- m-a-p/CodeFeedback-Filtered-Instruction
- cognitivecomputations/dolphin-coder
- cognitivecomputations/samantha-data
- microsoft/orca-math-word-problems-200k
- mlabonne/FineTome-100k
- arcee/agent_data
- PawanKrd/math-gpt-4o-200k
- cognitivecomputations/SystemChat-2.0
license: gemma
tags:
- generated_from_trainer
- llama-cpp
- gguf-my-repo
---
# Triangle104/dolphin-2.9.4-gemma2-2b-Q5_K_M-GGUF
This model was converted to GGUF format from [`cognitivecomputations/dolphin-2.9.4-gemma2-2b`](https://huggingface.co/cognitivecomputations/dolphin-2.9.4-gemma2-2b) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/cognitivecomputations/dolphin-2.9.4-gemma2-2b) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on macOS and Linux):
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/dolphin-2.9.4-gemma2-2b-Q5_K_M-GGUF --hf-file dolphin-2.9.4-gemma2-2b-q5_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Triangle104/dolphin-2.9.4-gemma2-2b-Q5_K_M-GGUF --hf-file dolphin-2.9.4-gemma2-2b-q5_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```bash
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (for example, `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```bash
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```bash
./llama-cli --hf-repo Triangle104/dolphin-2.9.4-gemma2-2b-Q5_K_M-GGUF --hf-file dolphin-2.9.4-gemma2-2b-q5_k_m.gguf -p "The meaning to life and the universe is"
```
or
```bash
./llama-server --hf-repo Triangle104/dolphin-2.9.4-gemma2-2b-Q5_K_M-GGUF --hf-file dolphin-2.9.4-gemma2-2b-q5_k_m.gguf -c 2048
```
|
jvelja/BERT_vllm-gemma2b-deterministic_4 | jvelja | 2024-09-09T04:00:41Z | 106 | 0 | transformers | ["transformers", "safetensors", "distilbert", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-classification | 2024-09-09T04:00:32Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
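While the card's snippet is missing, a minimal sketch based only on the repo's `distilbert`/`text-classification` tags may serve as a starting point; the task, input domain, and label names are all assumptions:
```python
from transformers import pipeline

# Minimal sketch: the tags indicate a DistilBERT text-classification head.
clf = pipeline("text-classification", model="jvelja/BERT_vllm-gemma2b-deterministic_4")
print(clf("Example input text."))  # label names are an assumption
```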
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
lephatminh/finetuning-sentiment-model-3000-samples | lephatminh | 2024-09-09T03:58:41Z | 106 | 0 | transformers | ["transformers", "tensorboard", "safetensors", "distilbert", "text-classification", "generated_from_trainer", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-classification | 2024-09-07T17:27:06Z |
---
library_name: transformers
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: finetuning-sentiment-model-3000-samples
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-3000-samples
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3611
- Accuracy: 0.8633
- F1: 0.8647
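No usage example is included; as a minimal sketch, the checkpoint should load with the standard `transformers` sentiment-analysis pipeline. The `LABEL_0`/`LABEL_1` names shown are an assumption (the defaults for a two-class head):
```python
from transformers import pipeline

# Minimal sketch: load the fine-tuned checkpoint for sentiment inference.
clf = pipeline(
    "sentiment-analysis",
    model="lephatminh/finetuning-sentiment-model-3000-samples",
)
print(clf("This movie was a delight from start to finish."))
# e.g. [{'label': 'LABEL_1', 'score': 0.98}] -- illustrative output, not verified
```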
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.0+cu121
- Datasets 2.21.0
- Tokenizers 0.19.1
|
Imask/BART_1000samples | Imask | 2024-09-09T03:41:49Z | 104 | 0 | transformers | ["transformers", "safetensors", "bart", "text2text-generation", "generated_from_trainer", "base_model:facebook/bart-large-cnn", "base_model:finetune:facebook/bart-large-cnn", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us"] | text2text-generation | 2024-09-09T03:40:03Z |
---
library_name: transformers
license: mit
base_model: facebook/bart-large-cnn
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: results
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# results
This model is a fine-tuned version of [facebook/bart-large-cnn](https://huggingface.co/facebook/bart-large-cnn) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7617
- Rouge1: 61.3423
- Rouge2: 48.0884
- Rougel: 49.1785
- Rougelsum: 59.1549
- Gen Len: 79.92
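No usage example is included; since the base model is `facebook/bart-large-cnn`, a summarization pipeline is the natural fit. A minimal sketch follows — that this fine-tune targets plain summarization is an assumption based on the `text2text-generation` tag and the ROUGE metrics:
```python
from transformers import pipeline

summarizer = pipeline("summarization", model="Imask/BART_1000samples")

article = (
    "The city council met on Monday to debate the new transit plan. "
    "After three hours of discussion, members voted 7-2 to fund two new "
    "bus lines and extend light-rail service to the northern suburbs."
)
print(summarizer(article, max_length=80, min_length=20)[0]["summary_text"])
```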
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 300
- num_epochs: 2
- label_smoothing_factor: 0.1
### Training results
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.0+cu121
- Datasets 2.21.0
- Tokenizers 0.19.1
|
Wuwani/queue_detection_cctv | Wuwani | 2024-09-09T03:40:29Z | 133 | 0 | transformers | ["transformers", "safetensors", "conditional_detr", "object-detection", "arxiv:1910.09700", "endpoints_compatible", "region:us"] | object-detection | 2024-09-09T03:39:10Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
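While the card's snippet is missing, a minimal sketch based only on the repo's `conditional_detr`/`object-detection` tags may help; the detected classes (e.g. people queuing, as the repo name suggests) are an assumption:
```python
from PIL import Image
from transformers import pipeline

# Minimal sketch: Conditional DETR checkpoints work with the
# object-detection pipeline.
detector = pipeline("object-detection", model="Wuwani/queue_detection_cctv")

image = Image.open("cctv_frame.jpg")  # hypothetical CCTV frame
for det in detector(image):
    print(det["label"], round(det["score"], 3), det["box"])
```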
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
PPSharks/cyber-rune-logo-flux | PPSharks | 2024-09-09T03:30:13Z | 10 | 0 | diffusers | ["diffusers", "art", "text-to-image", "en", "base_model:black-forest-labs/FLUX.1-dev", "base_model:finetune:black-forest-labs/FLUX.1-dev", "license:mit", "region:us"] | text-to-image | 2024-09-09T02:51:53Z |
---
license: mit
language:
- en
base_model: black-forest-labs/FLUX.1-dev
pipeline_tag: text-to-image
library_name: diffusers
tags:
- art
---
# Model Introduction
This model generates technology-style logos; this version strengthens that style.
Beyond simple, tech-style line work, it is also well suited to decorative elements such as glitch-style code fragments and symbols.
# Recommended Parameters and Samples

- Prompt: A logo that says "ChatDesigner", technology style, simple lines
- Negative prompt: ng_deepnegative_v1_75t, (badhandv4:1.2), EasyNegative, (worst quality:2)
- Steps: 20
- Size: 1024x1024
- Seed: 3017343388
- Sampler: 1
- CFG scale: 3.5
- Model: F.1 basic algorithm model - Li Bu Online runnable, 20240904-1725427531468-0005
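The card ships no code; below is a minimal `diffusers` sketch using the parameters above, assuming the repo contains LoRA weights on top of FLUX.1-dev (the card does not say whether it is a LoRA or a full fine-tune):
```python
import torch
from diffusers import FluxPipeline

# Load the FLUX.1-dev base pipeline in bfloat16.
pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")
# Assumption: this repo holds LoRA weights loadable onto the base pipeline.
pipe.load_lora_weights("PPSharks/cyber-rune-logo-flux")

image = pipe(
    'A logo that says "ChatDesigner", technology style, simple lines',
    num_inference_steps=20,
    guidance_scale=3.5,
    height=1024,
    width=1024,
    generator=torch.Generator("cuda").manual_seed(3017343388),
).images[0]
image.save("chatdesigner_logo.png")
```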
# Tips
This model follows the Flux-1-dev-non-commercial-license agreement and is for exchange and learning purposes only, not for commercial use.
Content you generate with this model service is yours to maintain and use at your own independent judgment, and must be used in compliance with applicable laws and regulations and the terms of this service. You are responsible for the generated content and for any intellectual-property issues arising from it; the author accepts no liability for losses it may cause. Unless laws and regulations provide otherwise, no one may integrate this model service into any third-party software or service, or disclose to any third party, in any direct or indirect, explicit or implicit manner, the connection between generated content and this service. No one may use the model for commercial purposes (including but not limited to text generation, image generation, face swapping, and so on). You bear the legal consequences of violating this agreement, and the author may recover any resulting losses.
|
ichigoDifuku/segformer-b0-finetuned-segments-sidewalk-2 | ichigoDifuku | 2024-09-09T03:18:34Z | 5 | 0 | null | ["tensorboard", "safetensors", "segformer", "vision", "image-segmentation", "generated_from_trainer", "base_model:nvidia/mit-b0", "base_model:finetune:nvidia/mit-b0", "license:other", "region:us"] | image-segmentation | 2024-09-08T07:18:55Z |
---
license: other
base_model: nvidia/mit-b0
tags:
- vision
- image-segmentation
- generated_from_trainer
model-index:
- name: segformer-b0-finetuned-segments-sidewalk-2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# segformer-b0-finetuned-segments-sidewalk-2
This model is a fine-tuned version of [nvidia/mit-b0](https://huggingface.co/nvidia/mit-b0) on the segments/sidewalk-semantic dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5973
- Mean Iou: 0.2741
- Mean Accuracy: 0.3241
- Overall Accuracy: 0.8431
- Accuracy Unlabeled: nan
- Accuracy Flat-road: 0.8917
- Accuracy Flat-sidewalk: 0.9574
- Accuracy Flat-crosswalk: 0.5943
- Accuracy Flat-cyclinglane: 0.7718
- Accuracy Flat-parkingdriveway: 0.4461
- Accuracy Flat-railtrack: 0.0
- Accuracy Flat-curb: 0.4790
- Accuracy Human-person: 0.6275
- Accuracy Human-rider: 0.0
- Accuracy Vehicle-car: 0.9331
- Accuracy Vehicle-truck: 0.0
- Accuracy Vehicle-bus: 0.0
- Accuracy Vehicle-tramtrain: 0.0
- Accuracy Vehicle-motorcycle: 0.0
- Accuracy Vehicle-bicycle: 0.3002
- Accuracy Vehicle-caravan: 0.0
- Accuracy Vehicle-cartrailer: 0.0
- Accuracy Construction-building: 0.8853
- Accuracy Construction-door: 0.0
- Accuracy Construction-wall: 0.5059
- Accuracy Construction-fenceguardrail: 0.1500
- Accuracy Construction-bridge: 0.0
- Accuracy Construction-tunnel: nan
- Accuracy Construction-stairs: 0.0
- Accuracy Object-pole: 0.1196
- Accuracy Object-trafficsign: 0.0
- Accuracy Object-trafficlight: 0.0
- Accuracy Nature-vegetation: 0.9324
- Accuracy Nature-terrain: 0.8845
- Accuracy Sky: 0.9657
- Accuracy Void-ground: 0.0
- Accuracy Void-dynamic: 0.0
- Accuracy Void-static: 0.2504
- Accuracy Void-unclear: 0.0
- Iou Unlabeled: nan
- Iou Flat-road: 0.7433
- Iou Flat-sidewalk: 0.8594
- Iou Flat-crosswalk: 0.5654
- Iou Flat-cyclinglane: 0.7149
- Iou Flat-parkingdriveway: 0.3468
- Iou Flat-railtrack: 0.0
- Iou Flat-curb: 0.3889
- Iou Human-person: 0.4411
- Iou Human-rider: 0.0
- Iou Vehicle-car: 0.8003
- Iou Vehicle-truck: 0.0
- Iou Vehicle-bus: 0.0
- Iou Vehicle-tramtrain: 0.0
- Iou Vehicle-motorcycle: 0.0
- Iou Vehicle-bicycle: 0.2484
- Iou Vehicle-caravan: 0.0
- Iou Vehicle-cartrailer: 0.0
- Iou Construction-building: 0.7020
- Iou Construction-door: 0.0
- Iou Construction-wall: 0.3589
- Iou Construction-fenceguardrail: 0.1472
- Iou Construction-bridge: 0.0
- Iou Construction-tunnel: nan
- Iou Construction-stairs: 0.0
- Iou Object-pole: 0.1109
- Iou Object-trafficsign: 0.0
- Iou Object-trafficlight: 0.0
- Iou Nature-vegetation: 0.8106
- Iou Nature-terrain: 0.7138
- Iou Sky: 0.8976
- Iou Void-ground: 0.0
- Iou Void-dynamic: 0.0
- Iou Void-static: 0.1950
- Iou Void-unclear: 0.0
## Model description
More information needed
## Intended uses & limitations
More information needed
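In the meantime, a minimal inference sketch (it assumes the image-processor config was pushed alongside the checkpoint; if not, load the processor from `nvidia/mit-b0`):
```python
import torch
from PIL import Image
from transformers import SegformerImageProcessor, SegformerForSemanticSegmentation

repo = "ichigoDifuku/segformer-b0-finetuned-segments-sidewalk-2"
processor = SegformerImageProcessor.from_pretrained(repo)
model = SegformerForSemanticSegmentation.from_pretrained(repo)

image = Image.open("street_scene.jpg")  # hypothetical input image
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, num_labels, H/4, W/4)

# Upsample logits to the input resolution and take the per-pixel argmax.
upsampled = torch.nn.functional.interpolate(
    logits, size=image.size[::-1], mode="bilinear", align_corners=False
)
segmentation = upsampled.argmax(dim=1)[0]
```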
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Mean Iou | Mean Accuracy | Overall Accuracy | Accuracy Unlabeled | Accuracy Flat-road | Accuracy Flat-sidewalk | Accuracy Flat-crosswalk | Accuracy Flat-cyclinglane | Accuracy Flat-parkingdriveway | Accuracy Flat-railtrack | Accuracy Flat-curb | Accuracy Human-person | Accuracy Human-rider | Accuracy Vehicle-car | Accuracy Vehicle-truck | Accuracy Vehicle-bus | Accuracy Vehicle-tramtrain | Accuracy Vehicle-motorcycle | Accuracy Vehicle-bicycle | Accuracy Vehicle-caravan | Accuracy Vehicle-cartrailer | Accuracy Construction-building | Accuracy Construction-door | Accuracy Construction-wall | Accuracy Construction-fenceguardrail | Accuracy Construction-bridge | Accuracy Construction-tunnel | Accuracy Construction-stairs | Accuracy Object-pole | Accuracy Object-trafficsign | Accuracy Object-trafficlight | Accuracy Nature-vegetation | Accuracy Nature-terrain | Accuracy Sky | Accuracy Void-ground | Accuracy Void-dynamic | Accuracy Void-static | Accuracy Void-unclear | Iou Unlabeled | Iou Flat-road | Iou Flat-sidewalk | Iou Flat-crosswalk | Iou Flat-cyclinglane | Iou Flat-parkingdriveway | Iou Flat-railtrack | Iou Flat-curb | Iou Human-person | Iou Human-rider | Iou Vehicle-car | Iou Vehicle-truck | Iou Vehicle-bus | Iou Vehicle-tramtrain | Iou Vehicle-motorcycle | Iou Vehicle-bicycle | Iou Vehicle-caravan | Iou Vehicle-cartrailer | Iou Construction-building | Iou Construction-door | Iou Construction-wall | Iou Construction-fenceguardrail | Iou Construction-bridge | Iou Construction-tunnel | Iou Construction-stairs | Iou Object-pole | Iou Object-trafficsign | Iou Object-trafficlight | Iou Nature-vegetation | Iou Nature-terrain | Iou Sky | Iou Void-ground | Iou Void-dynamic | Iou Void-static | Iou Void-unclear |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:-------------:|:----------------:|:------------------:|:------------------:|:----------------------:|:-----------------------:|:-------------------------:|:-----------------------------:|:-----------------------:|:------------------:|:---------------------:|:--------------------:|:--------------------:|:----------------------:|:--------------------:|:--------------------------:|:---------------------------:|:------------------------:|:------------------------:|:---------------------------:|:------------------------------:|:--------------------------:|:--------------------------:|:------------------------------------:|:----------------------------:|:----------------------------:|:----------------------------:|:--------------------:|:---------------------------:|:----------------------------:|:--------------------------:|:-----------------------:|:------------:|:--------------------:|:---------------------:|:--------------------:|:---------------------:|:-------------:|:-------------:|:-----------------:|:------------------:|:--------------------:|:------------------------:|:------------------:|:-------------:|:----------------:|:---------------:|:---------------:|:-----------------:|:---------------:|:---------------------:|:----------------------:|:-------------------:|:-------------------:|:----------------------:|:-------------------------:|:---------------------:|:---------------------:|:-------------------------------:|:-----------------------:|:-----------------------:|:-----------------------:|:---------------:|:----------------------:|:-----------------------:|:---------------------:|:------------------:|:-------:|:---------------:|:----------------:|:---------------:|:----------------:|
| 1.611 | 1.0 | 100 | 1.3435 | 0.1511 | 0.1998 | 0.7326 | nan | 0.8304 | 0.8942 | 0.0 | 0.4356 | 0.0015 | 0.0 | 0.0000 | 0.0 | 0.0 | 0.9156 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.8185 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.9098 | 0.8566 | 0.9324 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.4820 | 0.7633 | 0.0 | 0.4233 | 0.0015 | 0.0 | 0.0000 | 0.0 | 0.0 | 0.5840 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.5841 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.7275 | 0.5762 | 0.8456 | 0.0 | 0.0 | 0.0 | 0.0 |
| 1.137 | 2.0 | 200 | 1.0101 | 0.1634 | 0.2060 | 0.7605 | nan | 0.8276 | 0.9403 | 0.0 | 0.5388 | 0.0501 | 0.0 | 0.0037 | 0.0 | 0.0 | 0.9244 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.8842 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.9359 | 0.7548 | 0.9394 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.5394 | 0.7902 | 0.0 | 0.5030 | 0.0473 | 0.0 | 0.0037 | 0.0 | 0.0 | 0.6562 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.5962 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.7557 | 0.6370 | 0.8647 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.9072 | 3.0 | 300 | 0.8578 | 0.1838 | 0.2268 | 0.7835 | nan | 0.8225 | 0.9528 | 0.0 | 0.5973 | 0.2935 | 0.0 | 0.2630 | 0.0025 | 0.0 | 0.9198 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.8787 | 0.0 | 0.0079 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.9360 | 0.8653 | 0.9461 | 0.0 | 0.0 | 0.0001 | 0.0 | nan | 0.6292 | 0.8005 | 0.0 | 0.5630 | 0.2090 | 0.0 | 0.2160 | 0.0025 | 0.0 | 0.6917 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.6191 | 0.0 | 0.0079 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.7644 | 0.6802 | 0.8816 | 0.0 | 0.0 | 0.0001 | 0.0 |
| 0.8185 | 4.0 | 400 | 0.7882 | 0.1989 | 0.2415 | 0.7956 | nan | 0.7952 | 0.9597 | 0.0 | 0.7546 | 0.2634 | 0.0 | 0.3463 | 0.0840 | 0.0 | 0.9316 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0041 | 0.0 | 0.0 | 0.8908 | 0.0 | 0.1958 | 0.0 | 0.0 | nan | 0.0 | 0.0022 | 0.0 | 0.0 | 0.9252 | 0.8699 | 0.9444 | 0.0 | 0.0 | 0.0031 | 0.0 | nan | 0.6240 | 0.8069 | 0.0 | 0.6760 | 0.2169 | 0.0 | 0.2836 | 0.0834 | 0.0 | 0.6913 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0041 | 0.0 | 0.0 | 0.6411 | 0.0 | 0.1802 | 0.0 | 0.0 | nan | 0.0 | 0.0022 | 0.0 | 0.0 | 0.7785 | 0.6854 | 0.8880 | 0.0 | 0.0 | 0.0030 | 0.0 |
| 0.7126 | 5.0 | 500 | 0.6955 | 0.2183 | 0.2633 | 0.8120 | nan | 0.8911 | 0.9470 | 0.0 | 0.7170 | 0.4330 | 0.0 | 0.3882 | 0.3402 | 0.0 | 0.9353 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0431 | 0.0 | 0.0 | 0.9055 | 0.0 | 0.3163 | 0.0001 | 0.0 | nan | 0.0 | 0.0169 | 0.0 | 0.0 | 0.9358 | 0.8452 | 0.9570 | 0.0 | 0.0 | 0.0175 | 0.0 | nan | 0.6663 | 0.8443 | 0.0 | 0.6712 | 0.3070 | 0.0 | 0.3186 | 0.2908 | 0.0 | 0.7295 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0426 | 0.0 | 0.0 | 0.6535 | 0.0 | 0.2567 | 0.0001 | 0.0 | nan | 0.0 | 0.0168 | 0.0 | 0.0 | 0.7948 | 0.7024 | 0.8913 | 0.0 | 0.0 | 0.0169 | 0.0 |
| 0.6301 | 6.0 | 600 | 0.6489 | 0.2343 | 0.2820 | 0.8200 | nan | 0.8983 | 0.9516 | 0.0367 | 0.6998 | 0.4508 | 0.0 | 0.4324 | 0.4498 | 0.0 | 0.9273 | 0.0 | 0.0 | 0.0 | 0.0 | 0.1090 | 0.0 | 0.0 | 0.8946 | 0.0 | 0.4651 | 0.0242 | 0.0 | nan | 0.0 | 0.0529 | 0.0 | 0.0 | 0.9303 | 0.8783 | 0.9598 | 0.0 | 0.0 | 0.1467 | 0.0 | nan | 0.6706 | 0.8600 | 0.0366 | 0.6645 | 0.3249 | 0.0 | 0.3476 | 0.3571 | 0.0 | 0.7710 | 0.0 | 0.0 | 0.0 | 0.0 | 0.1062 | 0.0 | 0.0 | 0.6826 | 0.0 | 0.3357 | 0.0242 | 0.0 | nan | 0.0 | 0.0517 | 0.0 | 0.0 | 0.7885 | 0.6900 | 0.8949 | 0.0 | 0.0 | 0.1269 | 0.0 |
| 0.6233 | 7.0 | 700 | 0.6114 | 0.2574 | 0.3056 | 0.8362 | nan | 0.9201 | 0.9481 | 0.4433 | 0.7484 | 0.4279 | 0.0 | 0.4697 | 0.5517 | 0.0 | 0.9392 | 0.0 | 0.0 | 0.0 | 0.0 | 0.1717 | 0.0 | 0.0 | 0.9109 | 0.0 | 0.4857 | 0.0654 | 0.0 | nan | 0.0 | 0.0839 | 0.0 | 0.0 | 0.9306 | 0.8631 | 0.9602 | 0.0 | 0.0 | 0.1655 | 0.0 | nan | 0.7205 | 0.8672 | 0.4324 | 0.7056 | 0.3159 | 0.0 | 0.3665 | 0.4028 | 0.0 | 0.7768 | 0.0 | 0.0 | 0.0 | 0.0 | 0.1553 | 0.0 | 0.0 | 0.6845 | 0.0 | 0.3553 | 0.0652 | 0.0 | nan | 0.0 | 0.0801 | 0.0 | 0.0 | 0.8115 | 0.7151 | 0.8973 | 0.0 | 0.0 | 0.1422 | 0.0 |
| 0.5976 | 8.0 | 800 | 0.6079 | 0.2675 | 0.3160 | 0.8399 | nan | 0.8925 | 0.9563 | 0.5797 | 0.7454 | 0.4506 | 0.0 | 0.4743 | 0.5701 | 0.0 | 0.9310 | 0.0 | 0.0 | 0.0 | 0.0 | 0.2390 | 0.0 | 0.0 | 0.8996 | 0.0 | 0.5096 | 0.1015 | 0.0 | nan | 0.0 | 0.0911 | 0.0 | 0.0 | 0.9289 | 0.8862 | 0.9600 | 0.0 | 0.0 | 0.2112 | 0.0 | nan | 0.7371 | 0.8592 | 0.5545 | 0.7028 | 0.3314 | 0.0 | 0.3883 | 0.4237 | 0.0 | 0.7920 | 0.0 | 0.0 | 0.0 | 0.0 | 0.2045 | 0.0 | 0.0 | 0.6950 | 0.0 | 0.3606 | 0.1006 | 0.0 | nan | 0.0 | 0.0868 | 0.0 | 0.0 | 0.8094 | 0.7093 | 0.8986 | 0.0 | 0.0 | 0.1743 | 0.0 |
| 0.5346 | 9.0 | 900 | 0.5961 | 0.2715 | 0.3233 | 0.8428 | nan | 0.9151 | 0.9464 | 0.5468 | 0.7937 | 0.4682 | 0.0 | 0.4628 | 0.6180 | 0.0 | 0.9350 | 0.0 | 0.0 | 0.0 | 0.0 | 0.2914 | 0.0 | 0.0 | 0.8768 | 0.0 | 0.5235 | 0.1304 | 0.0 | nan | 0.0 | 0.1123 | 0.0 | 0.0 | 0.9369 | 0.8892 | 0.9621 | 0.0 | 0.0 | 0.2611 | 0.0 | nan | 0.7454 | 0.8704 | 0.5242 | 0.7242 | 0.3413 | 0.0 | 0.3770 | 0.4351 | 0.0 | 0.7947 | 0.0 | 0.0 | 0.0 | 0.0 | 0.2413 | 0.0 | 0.0 | 0.7017 | 0.0 | 0.3608 | 0.1287 | 0.0 | nan | 0.0 | 0.1054 | 0.0 | 0.0 | 0.8056 | 0.7052 | 0.8994 | 0.0 | 0.0 | 0.1976 | 0.0 |
| 0.5377 | 10.0 | 1000 | 0.5973 | 0.2741 | 0.3241 | 0.8431 | nan | 0.8917 | 0.9574 | 0.5943 | 0.7718 | 0.4461 | 0.0 | 0.4790 | 0.6275 | 0.0 | 0.9331 | 0.0 | 0.0 | 0.0 | 0.0 | 0.3002 | 0.0 | 0.0 | 0.8853 | 0.0 | 0.5059 | 0.1500 | 0.0 | nan | 0.0 | 0.1196 | 0.0 | 0.0 | 0.9324 | 0.8845 | 0.9657 | 0.0 | 0.0 | 0.2504 | 0.0 | nan | 0.7433 | 0.8594 | 0.5654 | 0.7149 | 0.3468 | 0.0 | 0.3889 | 0.4411 | 0.0 | 0.8003 | 0.0 | 0.0 | 0.0 | 0.0 | 0.2484 | 0.0 | 0.0 | 0.7020 | 0.0 | 0.3589 | 0.1472 | 0.0 | nan | 0.0 | 0.1109 | 0.0 | 0.0 | 0.8106 | 0.7138 | 0.8976 | 0.0 | 0.0 | 0.1950 | 0.0 |
### Framework versions
- Transformers 4.38.1
- Pytorch 2.2.1+cu118
- Datasets 2.17.1
- Tokenizers 0.15.2
|
novalalthoff/wav2vec2-large-xlsr-53-id-common-voice-50
|
novalalthoff
| 2024-09-09T03:18:30Z | 77 | 0 |
transformers
|
[
"transformers",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2024-09-09T03:17:08Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
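Until official instructions are added, a minimal sketch (its use of the standard ASR pipeline is an assumption based on the model's wav2vec2 architecture):
```python
from transformers import pipeline

# Assumption: a wav2vec2 CTC checkpoint usable through the standard ASR pipeline.
asr = pipeline(
    "automatic-speech-recognition",
    model="novalalthoff/wav2vec2-large-xlsr-53-id-common-voice-50",
)
print(asr("sample.wav")["text"])  # "sample.wav" is a hypothetical 16 kHz recording
```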
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
legraphista/DeepSeek-Coder-V2-Instruct-0724-IMat-GGUF
|
legraphista
| 2024-09-09T03:10:27Z | 191 | 2 |
gguf
|
[
"gguf",
"quantized",
"GGUF",
"quantization",
"imat",
"imatrix",
"static",
"8bit",
"6bit",
"5bit",
"4bit",
"3bit",
"2bit",
"1bit",
"text-generation",
"base_model:deepseek-ai/DeepSeek-Coder-V2-Instruct-0724",
"base_model:quantized:deepseek-ai/DeepSeek-Coder-V2-Instruct-0724",
"license:other",
"region:us",
"conversational"
] |
text-generation
| 2024-09-08T03:59:08Z |
---
base_model: deepseek-ai/DeepSeek-Coder-V2-Instruct-0724
inference: false
library_name: gguf
license: other
license_link: LICENSE
license_name: deepseek-license
pipeline_tag: text-generation
quantized_by: legraphista
tags:
- quantized
- GGUF
- quantization
- imat
- imatrix
- static
- 8bit
- 6bit
- 5bit
- 4bit
- 3bit
- 2bit
- 1bit
---
# DeepSeek-Coder-V2-Instruct-0724-IMat-GGUF
_Llama.cpp imatrix quantization of deepseek-ai/DeepSeek-Coder-V2-Instruct-0724_
Original Model: [deepseek-ai/DeepSeek-Coder-V2-Instruct-0724](https://huggingface.co/deepseek-ai/DeepSeek-Coder-V2-Instruct-0724)
Original dtype: `BF16` (`bfloat16`)
Quantized by: llama.cpp [b3687](https://github.com/ggerganov/llama.cpp/releases/tag/b3687)
IMatrix dataset: [here](https://gist.githubusercontent.com/bartowski1182/eb213dccb3571f863da82e99418f81e8/raw/b2869d80f5c16fd7082594248e80144677736635/calibration_datav3.txt)
- [Files](#files)
- [IMatrix](#imatrix)
- [Common Quants](#common-quants)
- [All Quants](#all-quants)
- [Downloading using huggingface-cli](#downloading-using-huggingface-cli)
- [Inference](#inference)
- [Simple chat template](#simple-chat-template)
- [Chat template with system prompt](#chat-template-with-system-prompt)
- [Llama.cpp](#llama-cpp)
- [FAQ](#faq)
- [Why is the IMatrix not applied everywhere?](#why-is-the-imatrix-not-applied-everywhere)
- [How do I merge a split GGUF?](#how-do-i-merge-a-split-gguf)
---
## Files
### IMatrix
Status: ✅ Available
Link: [here](https://huggingface.co/legraphista/DeepSeek-Coder-V2-Instruct-0724-IMat-GGUF/blob/main/imatrix.dat)
### Common Quants
| Filename | Quant type | File Size | Status | Uses IMatrix | Is Split |
| -------- | ---------- | --------- | ------ | ------------ | -------- |
| [DeepSeek-Coder-V2-Instruct-0724.Q8_0/*](https://huggingface.co/legraphista/DeepSeek-Coder-V2-Instruct-0724-IMat-GGUF/tree/main/DeepSeek-Coder-V2-Instruct-0724.Q8_0) | Q8_0 | 250.62GB | ✅ Available | ⚪ Static | ✂ Yes
| [DeepSeek-Coder-V2-Instruct-0724.Q6_K/*](https://huggingface.co/legraphista/DeepSeek-Coder-V2-Instruct-0724-IMat-GGUF/tree/main/DeepSeek-Coder-V2-Instruct-0724.Q6_K) | Q6_K | 193.54GB | ✅ Available | ⚪ Static | ✂ Yes
| [DeepSeek-Coder-V2-Instruct-0724.Q4_K/*](https://huggingface.co/legraphista/DeepSeek-Coder-V2-Instruct-0724-IMat-GGUF/tree/main/DeepSeek-Coder-V2-Instruct-0724.Q4_K) | Q4_K | 142.45GB | ✅ Available | 🟢 IMatrix | ✂ Yes
| [DeepSeek-Coder-V2-Instruct-0724.Q3_K/*](https://huggingface.co/legraphista/DeepSeek-Coder-V2-Instruct-0724-IMat-GGUF/tree/main/DeepSeek-Coder-V2-Instruct-0724.Q3_K) | Q3_K | 112.67GB | ✅ Available | 🟢 IMatrix | ✂ Yes
| [DeepSeek-Coder-V2-Instruct-0724.Q2_K/*](https://huggingface.co/legraphista/DeepSeek-Coder-V2-Instruct-0724-IMat-GGUF/tree/main/DeepSeek-Coder-V2-Instruct-0724.Q2_K) | Q2_K | 85.95GB | ✅ Available | 🟢 IMatrix | ✂ Yes
### All Quants
| Filename | Quant type | File Size | Status | Uses IMatrix | Is Split |
| -------- | ---------- | --------- | ------ | ------------ | -------- |
| [DeepSeek-Coder-V2-Instruct-0724.Q8_0/*](https://huggingface.co/legraphista/DeepSeek-Coder-V2-Instruct-0724-IMat-GGUF/tree/main/DeepSeek-Coder-V2-Instruct-0724.Q8_0) | Q8_0 | 250.62GB | ✅ Available | ⚪ Static | ✂ Yes
| [DeepSeek-Coder-V2-Instruct-0724.Q6_K/*](https://huggingface.co/legraphista/DeepSeek-Coder-V2-Instruct-0724-IMat-GGUF/tree/main/DeepSeek-Coder-V2-Instruct-0724.Q6_K) | Q6_K | 193.54GB | ✅ Available | ⚪ Static | ✂ Yes
| [DeepSeek-Coder-V2-Instruct-0724.Q5_K/*](https://huggingface.co/legraphista/DeepSeek-Coder-V2-Instruct-0724-IMat-GGUF/tree/main/DeepSeek-Coder-V2-Instruct-0724.Q5_K) | Q5_K | 167.22GB | ✅ Available | ⚪ Static | ✂ Yes
| [DeepSeek-Coder-V2-Instruct-0724.Q5_K_S/*](https://huggingface.co/legraphista/DeepSeek-Coder-V2-Instruct-0724-IMat-GGUF/tree/main/DeepSeek-Coder-V2-Instruct-0724.Q5_K_S) | Q5_K_S | 162.31GB | ✅ Available | ⚪ Static | ✂ Yes
| [DeepSeek-Coder-V2-Instruct-0724.Q4_K/*](https://huggingface.co/legraphista/DeepSeek-Coder-V2-Instruct-0724-IMat-GGUF/tree/main/DeepSeek-Coder-V2-Instruct-0724.Q4_K) | Q4_K | 142.45GB | ✅ Available | 🟢 IMatrix | ✂ Yes
| [DeepSeek-Coder-V2-Instruct-0724.Q4_K_S/*](https://huggingface.co/legraphista/DeepSeek-Coder-V2-Instruct-0724-IMat-GGUF/tree/main/DeepSeek-Coder-V2-Instruct-0724.Q4_K_S) | Q4_K_S | 133.88GB | ✅ Available | 🟢 IMatrix | ✂ Yes
| [DeepSeek-Coder-V2-Instruct-0724.IQ4_NL/*](https://huggingface.co/legraphista/DeepSeek-Coder-V2-Instruct-0724-IMat-GGUF/tree/main/DeepSeek-Coder-V2-Instruct-0724.IQ4_NL) | IQ4_NL | 132.91GB | ✅ Available | 🟢 IMatrix | ✂ Yes
| [DeepSeek-Coder-V2-Instruct-0724.IQ4_XS/*](https://huggingface.co/legraphista/DeepSeek-Coder-V2-Instruct-0724-IMat-GGUF/tree/main/DeepSeek-Coder-V2-Instruct-0724.IQ4_XS) | IQ4_XS | 125.56GB | ✅ Available | 🟢 IMatrix | ✂ Yes
| [DeepSeek-Coder-V2-Instruct-0724.Q3_K/*](https://huggingface.co/legraphista/DeepSeek-Coder-V2-Instruct-0724-IMat-GGUF/tree/main/DeepSeek-Coder-V2-Instruct-0724.Q3_K) | Q3_K | 112.67GB | ✅ Available | 🟢 IMatrix | ✂ Yes
| [DeepSeek-Coder-V2-Instruct-0724.Q3_K_L/*](https://huggingface.co/legraphista/DeepSeek-Coder-V2-Instruct-0724-IMat-GGUF/tree/main/DeepSeek-Coder-V2-Instruct-0724.Q3_K_L) | Q3_K_L | 122.37GB | ✅ Available | 🟢 IMatrix | ✂ Yes
| [DeepSeek-Coder-V2-Instruct-0724.Q3_K_S/*](https://huggingface.co/legraphista/DeepSeek-Coder-V2-Instruct-0724-IMat-GGUF/tree/main/DeepSeek-Coder-V2-Instruct-0724.Q3_K_S) | Q3_K_S | 101.68GB | ✅ Available | 🟢 IMatrix | ✂ Yes
| [DeepSeek-Coder-V2-Instruct-0724.IQ3_M/*](https://huggingface.co/legraphista/DeepSeek-Coder-V2-Instruct-0724-IMat-GGUF/tree/main/DeepSeek-Coder-V2-Instruct-0724.IQ3_M) | IQ3_M | 103.37GB | ✅ Available | 🟢 IMatrix | ✂ Yes
| [DeepSeek-Coder-V2-Instruct-0724.IQ3_S/*](https://huggingface.co/legraphista/DeepSeek-Coder-V2-Instruct-0724-IMat-GGUF/tree/main/DeepSeek-Coder-V2-Instruct-0724.IQ3_S) | IQ3_S | 101.68GB | ✅ Available | 🟢 IMatrix | ✂ Yes
| [DeepSeek-Coder-V2-Instruct-0724.IQ3_XS/*](https://huggingface.co/legraphista/DeepSeek-Coder-V2-Instruct-0724-IMat-GGUF/tree/main/DeepSeek-Coder-V2-Instruct-0724.IQ3_XS) | IQ3_XS | 96.30GB | ✅ Available | 🟢 IMatrix | ✂ Yes
| [DeepSeek-Coder-V2-Instruct-0724.IQ3_XXS/*](https://huggingface.co/legraphista/DeepSeek-Coder-V2-Instruct-0724-IMat-GGUF/tree/main/DeepSeek-Coder-V2-Instruct-0724.IQ3_XXS) | IQ3_XXS | 90.85GB | ✅ Available | 🟢 IMatrix | ✂ Yes
| [DeepSeek-Coder-V2-Instruct-0724.Q2_K/*](https://huggingface.co/legraphista/DeepSeek-Coder-V2-Instruct-0724-IMat-GGUF/tree/main/DeepSeek-Coder-V2-Instruct-0724.Q2_K) | Q2_K | 85.95GB | ✅ Available | 🟢 IMatrix | ✂ Yes
| [DeepSeek-Coder-V2-Instruct-0724.Q2_K_S/*](https://huggingface.co/legraphista/DeepSeek-Coder-V2-Instruct-0724-IMat-GGUF/tree/main/DeepSeek-Coder-V2-Instruct-0724.Q2_K_S) | Q2_K_S | 79.60GB | ✅ Available | 🟢 IMatrix | ✂ Yes
| [DeepSeek-Coder-V2-Instruct-0724.IQ2_M/*](https://huggingface.co/legraphista/DeepSeek-Coder-V2-Instruct-0724-IMat-GGUF/tree/main/DeepSeek-Coder-V2-Instruct-0724.IQ2_M) | IQ2_M | 76.92GB | ✅ Available | 🟢 IMatrix | ✂ Yes
| [DeepSeek-Coder-V2-Instruct-0724.IQ2_S/*](https://huggingface.co/legraphista/DeepSeek-Coder-V2-Instruct-0724-IMat-GGUF/tree/main/DeepSeek-Coder-V2-Instruct-0724.IQ2_S) | IQ2_S | 69.87GB | ✅ Available | 🟢 IMatrix | ✂ Yes
| [DeepSeek-Coder-V2-Instruct-0724.IQ2_XS/*](https://huggingface.co/legraphista/DeepSeek-Coder-V2-Instruct-0724-IMat-GGUF/tree/main/DeepSeek-Coder-V2-Instruct-0724.IQ2_XS) | IQ2_XS | 68.71GB | ✅ Available | 🟢 IMatrix | ✂ Yes
| [DeepSeek-Coder-V2-Instruct-0724.IQ2_XXS/*](https://huggingface.co/legraphista/DeepSeek-Coder-V2-Instruct-0724-IMat-GGUF/tree/main/DeepSeek-Coder-V2-Instruct-0724.IQ2_XXS) | IQ2_XXS | 61.50GB | ✅ Available | 🟢 IMatrix | ✂ Yes
| [DeepSeek-Coder-V2-Instruct-0724.IQ1_M/*](https://huggingface.co/legraphista/DeepSeek-Coder-V2-Instruct-0724-IMat-GGUF/tree/main/DeepSeek-Coder-V2-Instruct-0724.IQ1_M) | IQ1_M | 52.68GB | ✅ Available | 🟢 IMatrix | ✂ Yes
| [DeepSeek-Coder-V2-Instruct-0724.IQ1_S/*](https://huggingface.co/legraphista/DeepSeek-Coder-V2-Instruct-0724-IMat-GGUF/tree/main/DeepSeek-Coder-V2-Instruct-0724.IQ1_S) | IQ1_S | 47.39GB | ✅ Available | 🟢 IMatrix | ✂ Yes
## Downloading using huggingface-cli
If you do not have huggingface-cli installed:
```
pip install -U "huggingface_hub[cli]"
```
Download the specific file you want:
```
huggingface-cli download legraphista/DeepSeek-Coder-V2-Instruct-0724-IMat-GGUF --include "DeepSeek-Coder-V2-Instruct-0724.Q8_0.gguf" --local-dir ./
```
If the model file is big, it has been split into multiple files. In order to download them all to a local folder, run:
```
huggingface-cli download legraphista/DeepSeek-Coder-V2-Instruct-0724-IMat-GGUF --include "DeepSeek-Coder-V2-Instruct-0724.Q8_0/*" --local-dir ./
# see FAQ for merging GGUF's
```
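The same download can also be scripted through the Python client:
```python
from huggingface_hub import snapshot_download

# Fetch every chunk of a single quant into the current directory.
snapshot_download(
    repo_id="legraphista/DeepSeek-Coder-V2-Instruct-0724-IMat-GGUF",
    allow_patterns=["DeepSeek-Coder-V2-Instruct-0724.Q8_0/*"],
    local_dir="./",
)
```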
---
## Inference
### Simple chat template
```
<|begin▁of▁sentence|><|User|>{user_prompt}<|Assistant|>{assistant_response}<|end▁of▁sentence|><|User|>{next_user_prompt}
```
### Chat template with system prompt
```
<|begin▁of▁sentence|>{system_prompt}<|User|>{user_prompt}<|Assistant|>{assistant_response}<|end▁of▁sentence|><|User|>{next_user_prompt}
```
### Llama.cpp
```
llama.cpp/main -m DeepSeek-Coder-V2-Instruct-0724.Q8_0.gguf --color -i -p "prompt here (according to the chat template)"
```
---
## FAQ
### Why is the IMatrix not applied everywhere?
According to [this investigation](https://www.reddit.com/r/LocalLLaMA/comments/1993iro/ggufs_quants_can_punch_above_their_weights_now/), it appears that lower quantizations are the only ones that benefit from the imatrix input (as per hellaswag results).
### How do I merge a split GGUF?
1. Make sure you have `gguf-split` available
- To get hold of `gguf-split`, navigate to https://github.com/ggerganov/llama.cpp/releases
- Download the appropriate zip for your system from the latest release
- Unzip the archive and you should be able to find `gguf-split`
2. Locate your GGUF chunks folder (ex: `DeepSeek-Coder-V2-Instruct-0724.Q8_0`)
3. Run `gguf-split --merge DeepSeek-Coder-V2-Instruct-0724.Q8_0/DeepSeek-Coder-V2-Instruct-0724.Q8_0-00001-of-XXXXX.gguf DeepSeek-Coder-V2-Instruct-0724.Q8_0.gguf`
- Make sure to point `gguf-split` to the first chunk of the split.
---
Got a suggestion? Ping me [@legraphista](https://x.com/legraphista)!
|
jvelja/vllm-gemma2b-deterministic_3
|
jvelja
| 2024-09-09T03:07:47Z | 61 | 0 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"trl",
"ppo",
"reinforcement-learning",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
reinforcement-learning
| 2024-09-09T03:07:45Z |
---
license: apache-2.0
library_name: transformers
tags:
- trl
- ppo
- transformers
- reinforcement-learning
---
# TRL Model
This is a [TRL language model](https://github.com/huggingface/trl) that has been fine-tuned with reinforcement learning to
guide the model outputs according to a value, function, or human feedback. The model can be used for text generation.
## Usage
To use this model for inference, first install the TRL library:
```bash
python -m pip install trl
```
You can then generate text as follows:
```python
from transformers import pipeline
generator = pipeline("text-generation", model="jvelja/vllm-gemma2b-deterministic_3")
outputs = generator("Hello, my llama is cute")
```
If you want to use the model for training or to obtain the outputs from the value head, load the model as follows:
```python
from transformers import AutoTokenizer
from trl import AutoModelForCausalLMWithValueHead
tokenizer = AutoTokenizer.from_pretrained("jvelja/vllm-gemma2b-deterministic_3")
model = AutoModelForCausalLMWithValueHead.from_pretrained("jvelja/vllm-gemma2b-deterministic_3")
inputs = tokenizer("Hello, my llama is cute", return_tensors="pt")
outputs = model(**inputs, labels=inputs["input_ids"])
```
|
jvelja/BERT_vllm-gemma2b-deterministic_3
|
jvelja
| 2024-09-09T03:07:44Z | 118 | 0 |
transformers
|
[
"transformers",
"safetensors",
"distilbert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-09-09T03:07:32Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
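Until that information is provided, a minimal sketch (its use of the standard text-classification pipeline is an assumption based on the DistilBERT architecture tag):
```python
from transformers import pipeline

# Assumption: a DistilBERT classifier served through the standard pipeline.
classifier = pipeline(
    "text-classification", model="jvelja/BERT_vllm-gemma2b-deterministic_3"
)
print(classifier("Hello, my llama is cute"))
```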
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
darkc0de/Xortron7MethedUp-pass3headGOAT
|
darkc0de
| 2024-09-09T03:05:04Z | 7 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"conversational",
"base_model:mlabonne/Hermes-3-Llama-3.1-8B-lorablated",
"base_model:merge:mlabonne/Hermes-3-Llama-3.1-8B-lorablated",
"base_model:mlabonne/NeuralDaredevil-8B-abliterated",
"base_model:merge:mlabonne/NeuralDaredevil-8B-abliterated",
"base_model:w4r10ck/SOLAR-10.7B-Instruct-v1.0-uncensored",
"base_model:merge:w4r10ck/SOLAR-10.7B-Instruct-v1.0-uncensored",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-09-09T02:57:40Z |
---
base_model:
- mlabonne/Hermes-3-Llama-3.1-8B-lorablated
- w4r10ck/SOLAR-10.7B-Instruct-v1.0-uncensored
- mlabonne/NeuralDaredevil-8B-abliterated
library_name: transformers
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the passthrough merge method, which stacks the listed layer slices end-to-end without interpolating any weights.
### Models Merged
The following models were included in the merge:
* [mlabonne/Hermes-3-Llama-3.1-8B-lorablated](https://huggingface.co/mlabonne/Hermes-3-Llama-3.1-8B-lorablated)
* [w4r10ck/SOLAR-10.7B-Instruct-v1.0-uncensored](https://huggingface.co/w4r10ck/SOLAR-10.7B-Instruct-v1.0-uncensored)
* [mlabonne/NeuralDaredevil-8B-abliterated](https://huggingface.co/mlabonne/NeuralDaredevil-8B-abliterated)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
slices:
- sources:
- model: w4r10ck/SOLAR-10.7B-Instruct-v1.0-uncensored
layer_range: [0, 32]
- sources:
- model: mlabonne/Hermes-3-Llama-3.1-8B-lorablated
layer_range: [0, 32]
- sources:
- model: mlabonne/NeuralDaredevil-8B-abliterated
layer_range: [0, 32]
merge_method: passthrough
dtype: bfloat16
```
|
Hanhpt23/Clip-Soil
|
Hanhpt23
| 2024-09-09T03:03:24Z | 5 | 0 | null |
[
"safetensors",
"clip",
"generated_from_trainer",
"base_model:openai/clip-vit-large-patch14-336",
"base_model:finetune:openai/clip-vit-large-patch14-336",
"region:us"
] | null | 2024-09-08T22:04:32Z |
---
base_model: openai/clip-vit-large-patch14-336
tags:
- generated_from_trainer
model-index:
- name: Clip-Soil
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Clip-Soil
This model is a fine-tuned version of [openai/clip-vit-large-patch14-336](https://huggingface.co/openai/clip-vit-large-patch14-336) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 5.1102
## Model description
More information needed
## Intended uses & limitations
More information needed
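Until the authors document it, a minimal zero-shot sketch (it assumes the checkpoint keeps the standard CLIP interface; the label texts are purely illustrative):
```python
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("Hanhpt23/Clip-Soil")
processor = CLIPProcessor.from_pretrained("Hanhpt23/Clip-Soil")

image = Image.open("soil_sample.jpg")             # hypothetical input image
texts = ["clay soil", "sandy soil", "loam soil"]  # hypothetical labels
inputs = processor(text=texts, images=image, return_tensors="pt", padding=True)
probs = model(**inputs).logits_per_image.softmax(dim=-1)
print(dict(zip(texts, probs[0].tolist())))
```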
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 100.0
### Training results
### Framework versions
- Transformers 4.41.2
- Pytorch 2.0.0+cu117
- Datasets 2.21.0
- Tokenizers 0.19.1
|
jvelja/vllm-gemma2b_3
|
jvelja
| 2024-09-09T03:01:38Z | 59 | 0 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"trl",
"ppo",
"reinforcement-learning",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
reinforcement-learning
| 2024-09-09T03:01:36Z |
---
license: apache-2.0
library_name: transformers
tags:
- trl
- ppo
- transformers
- reinforcement-learning
---
# TRL Model
This is a [TRL language model](https://github.com/huggingface/trl) that has been fine-tuned with reinforcement learning to
guide the model outputs according to a value, function, or human feedback. The model can be used for text generation.
## Usage
To use this model for inference, first install the TRL library:
```bash
python -m pip install trl
```
You can then generate text as follows:
```python
from transformers import pipeline
generator = pipeline("text-generation", model="jvelja/vllm-gemma2b_3")
outputs = generator("Hello, my llama is cute")
```
If you want to use the model for training or to obtain the outputs from the value head, load the model as follows:
```python
from transformers import AutoTokenizer
from trl import AutoModelForCausalLMWithValueHead
tokenizer = AutoTokenizer.from_pretrained("jvelja/vllm-gemma2b_3")
model = AutoModelForCausalLMWithValueHead.from_pretrained("jvelja/vllm-gemma2b_3")
inputs = tokenizer("Hello, my llama is cute", return_tensors="pt")
outputs = model(**inputs, labels=inputs["input_ids"])
```
|
jvelja/BERT_vllm-gemma2b_3
|
jvelja
| 2024-09-09T03:01:35Z | 116 | 0 |
transformers
|
[
"transformers",
"safetensors",
"distilbert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-09-09T03:01:26Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
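Until that information is provided, a minimal sketch using the lower-level classes (assuming a standard DistilBERT sequence-classification head):
```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("jvelja/BERT_vllm-gemma2b_3")
model = AutoModelForSequenceClassification.from_pretrained("jvelja/BERT_vllm-gemma2b_3")

inputs = tokenizer("Hello, my llama is cute", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(logits.softmax(dim=-1))
```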
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
QuantFactory/gemma-2-9b-it-WPO-HB-GGUF
|
QuantFactory
| 2024-09-09T02:56:23Z | 125 | 2 |
transformers
|
[
"transformers",
"gguf",
"alignment-handbook",
"gemma",
"dataset:wzhouad/gemma-2-ultrafeedback-hybrid",
"arxiv:2406.11827",
"arxiv:2310.01377",
"arxiv:2406.12845",
"base_model:google/gemma-2-9b-it",
"base_model:quantized:google/gemma-2-9b-it",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-09-09T02:04:05Z |
---
base_model: google/gemma-2-9b-it
library_name: transformers
datasets:
- wzhouad/gemma-2-ultrafeedback-hybrid
tags:
- alignment-handbook
- gemma
---

# QuantFactory/gemma-2-9b-it-WPO-HB-GGUF
This is quantized version of [wzhouad/gemma-2-9b-it-WPO-HB](https://huggingface.co/wzhouad/gemma-2-9b-it-WPO-HB) created using llama.cpp
# Original Model Card
We propose a novel strategy to enhance off-policy preference optimization by simulating on-policy learning with off-policy preference data. Our Weighted Preference Optimization (WPO) method adapts off-policy data to resemble on-policy data more closely by reweighting preference pairs according to their probability under the current policy. This method not only addresses the distributional gap problem but also enhances the optimization process without incurring additional costs. Refer to our [preprint](https://arxiv.org/abs/2406.11827) and [repo](https://github.com/wzhouad/WPO) for details.
## Model Description
### Data
gemma-2-9b-it finetuned by hybrid WPO, utilizing two types of data:
1. On-policy sampled gemma outputs based on Ultrafeedback prompts.
2. GPT-4-turbo outputs based on Ultrafeedback prompts.
In comparison to the preference data construction method in our paper, we switch to RLHFlow/ArmoRM-Llama3-8B-v0.1 to score the outputs, and choose the outputs with maximum/minimum scores to form a preference pair.
We provide our training data at [wzhouad/gemma-2-ultrafeedback-hybrid](https://huggingface.co/datasets/wzhouad/gemma-2-ultrafeedback-hybrid).
### [AlpacaEval Eval Results](https://tatsu-lab.github.io/alpaca_eval/)
| Model | LC | WR | Avg. Length |
|-------------------------------------------|:------------:|:--------:|:-----------:|
| [gemma-2-9b-it-WPO-HB](https://huggingface.co/wzhouad/gemma-2-9b-it-WPO-HB) | 76.73 | 77.83 | 2285 |
### Link to Other WPO Models
Check our [WPO Collection](https://huggingface.co/collections/wzhouad/wpo-66a04e4f552c0be180da2931).
### Training Hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-06
- beta: 0.01
- per_device_train_batch_size: 1
- gradient_accumulation_steps: 16
- seed: 1
- num_devices: 8
- optim: adamw_torch
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_train_epochs: 2.0
- max_length: 2048
- max_prompt_length: 1800
## License
This model is licensed under the Zoom software license and is permitted for use only for noncommercial, educational, or academic research purposes.
## Citation
WPO:
```
@article{zhou2024wpo,
title={WPO: Enhancing RLHF with Weighted Preference Optimization},
author={Zhou, Wenxuan and Agrawal, Ravi and Zhang, Shujian and Indurthi, Sathish Reddy and Zhao, Sanqiang and Song, Kaiqiang and Xu, Silei and Zhu, Chenguang},
journal={arXiv preprint arXiv:2406.11827},
year={2024}
}
```
Ultrafeedback:
```
@article{cui2023ultrafeedback,
title={{UltraFeedback}: Boosting language models with high-quality feedback},
author={Cui, Ganqu and Yuan, Lifan and Ding, Ning and Yao, Guanming and Zhu, Wei and Ni, Yuan and Xie, Guotong and Liu, Zhiyuan and Sun, Maosong},
journal={arXiv preprint arXiv:2310.01377},
year={2023}
}
```
Armo-RM:
```
@article{ArmoRM,
title={Interpretable Preferences via Multi-Objective Reward Modeling and Mixture-of-Experts},
author={Haoxiang Wang and Wei Xiong and Tengyang Xie and Han Zhao and Tong Zhang},
journal={arXiv preprint arXiv:2406.12845},
}
@inproceedings{wang2024arithmetic,
title={Arithmetic Control of LLMs for Diverse User Preferences: Directional Preference Alignment with Multi-Objective Rewards},
author={Haoxiang Wang and Yong Lin and Wei Xiong and Rui Yang and Shizhe Diao and Shuang Qiu and Han Zhao and Tong Zhang},
year={2024},
booktitle={ACL},
}
```
|
RichardErkhov/NousResearch_-_Hermes-3-Llama-3.1-70B-gguf
|
RichardErkhov
| 2024-09-09T02:51:20Z | 76 | 0 | null |
[
"gguf",
"arxiv:2408.11857",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-09-08T10:21:10Z |
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Hermes-3-Llama-3.1-70B - GGUF
- Model creator: https://huggingface.co/NousResearch/
- Original model: https://huggingface.co/NousResearch/Hermes-3-Llama-3.1-70B/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [Hermes-3-Llama-3.1-70B.Q2_K.gguf](https://huggingface.co/RichardErkhov/NousResearch_-_Hermes-3-Llama-3.1-70B-gguf/blob/main/Hermes-3-Llama-3.1-70B.Q2_K.gguf) | Q2_K | 24.56GB |
| [Hermes-3-Llama-3.1-70B.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/NousResearch_-_Hermes-3-Llama-3.1-70B-gguf/blob/main/Hermes-3-Llama-3.1-70B.IQ3_XS.gguf) | IQ3_XS | 27.29GB |
| [Hermes-3-Llama-3.1-70B.IQ3_S.gguf](https://huggingface.co/RichardErkhov/NousResearch_-_Hermes-3-Llama-3.1-70B-gguf/blob/main/Hermes-3-Llama-3.1-70B.IQ3_S.gguf) | IQ3_S | 28.79GB |
| [Hermes-3-Llama-3.1-70B.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/NousResearch_-_Hermes-3-Llama-3.1-70B-gguf/blob/main/Hermes-3-Llama-3.1-70B.Q3_K_S.gguf) | Q3_K_S | 28.79GB |
| [Hermes-3-Llama-3.1-70B.IQ3_M.gguf](https://huggingface.co/RichardErkhov/NousResearch_-_Hermes-3-Llama-3.1-70B-gguf/blob/main/Hermes-3-Llama-3.1-70B.IQ3_M.gguf) | IQ3_M | 29.74GB |
| [Hermes-3-Llama-3.1-70B.Q3_K.gguf](https://huggingface.co/RichardErkhov/NousResearch_-_Hermes-3-Llama-3.1-70B-gguf/blob/main/Hermes-3-Llama-3.1-70B.Q3_K.gguf) | Q3_K | 31.91GB |
| [Hermes-3-Llama-3.1-70B.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/NousResearch_-_Hermes-3-Llama-3.1-70B-gguf/blob/main/Hermes-3-Llama-3.1-70B.Q3_K_M.gguf) | Q3_K_M | 31.91GB |
| [Hermes-3-Llama-3.1-70B.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/NousResearch_-_Hermes-3-Llama-3.1-70B-gguf/blob/main/Hermes-3-Llama-3.1-70B.Q3_K_L.gguf) | Q3_K_L | 34.59GB |
| [Hermes-3-Llama-3.1-70B.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/NousResearch_-_Hermes-3-Llama-3.1-70B-gguf/blob/main/Hermes-3-Llama-3.1-70B.IQ4_XS.gguf) | IQ4_XS | 35.64GB |
| [Hermes-3-Llama-3.1-70B.Q4_0.gguf](https://huggingface.co/RichardErkhov/NousResearch_-_Hermes-3-Llama-3.1-70B-gguf/blob/main/Hermes-3-Llama-3.1-70B.Q4_0.gguf) | Q4_0 | 37.22GB |
| [Hermes-3-Llama-3.1-70B.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/NousResearch_-_Hermes-3-Llama-3.1-70B-gguf/tree/main/) | IQ4_NL | 37.58GB |
| [Hermes-3-Llama-3.1-70B.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/NousResearch_-_Hermes-3-Llama-3.1-70B-gguf/tree/main/) | Q4_K_S | 37.58GB |
| [Hermes-3-Llama-3.1-70B.Q4_K.gguf](https://huggingface.co/RichardErkhov/NousResearch_-_Hermes-3-Llama-3.1-70B-gguf/tree/main/) | Q4_K | 39.6GB |
| [Hermes-3-Llama-3.1-70B.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/NousResearch_-_Hermes-3-Llama-3.1-70B-gguf/tree/main/) | Q4_K_M | 39.6GB |
| [Hermes-3-Llama-3.1-70B.Q4_1.gguf](https://huggingface.co/RichardErkhov/NousResearch_-_Hermes-3-Llama-3.1-70B-gguf/tree/main/) | Q4_1 | 41.27GB |
| [Hermes-3-Llama-3.1-70B.Q5_0.gguf](https://huggingface.co/RichardErkhov/NousResearch_-_Hermes-3-Llama-3.1-70B-gguf/tree/main/) | Q5_0 | 45.32GB |
| [Hermes-3-Llama-3.1-70B.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/NousResearch_-_Hermes-3-Llama-3.1-70B-gguf/tree/main/) | Q5_K_S | 45.32GB |
| [Hermes-3-Llama-3.1-70B.Q5_K.gguf](https://huggingface.co/RichardErkhov/NousResearch_-_Hermes-3-Llama-3.1-70B-gguf/tree/main/) | Q5_K | 46.52GB |
| [Hermes-3-Llama-3.1-70B.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/NousResearch_-_Hermes-3-Llama-3.1-70B-gguf/tree/main/) | Q5_K_M | 46.52GB |
| [Hermes-3-Llama-3.1-70B.Q5_1.gguf](https://huggingface.co/RichardErkhov/NousResearch_-_Hermes-3-Llama-3.1-70B-gguf/tree/main/) | Q5_1 | 49.36GB |
| [Hermes-3-Llama-3.1-70B.Q6_K.gguf](https://huggingface.co/RichardErkhov/NousResearch_-_Hermes-3-Llama-3.1-70B-gguf/tree/main/) | Q6_K | 53.91GB |
| [Hermes-3-Llama-3.1-70B.Q8_0.gguf](https://huggingface.co/RichardErkhov/NousResearch_-_Hermes-3-Llama-3.1-70B-gguf/tree/main/) | Q8_0 | 69.83GB |
Original model description:
---
language:
- en
license: llama3
tags:
- Llama-3
- instruct
- finetune
- chatml
- gpt4
- synthetic data
- distillation
- function calling
- json mode
- axolotl
- roleplaying
- chat
base_model: meta-llama/Meta-Llama-3.1-70B
widget:
- example_title: Hermes 3
messages:
- role: system
content: You are a sentient, superintelligent artificial general intelligence,
here to teach and assist me.
- role: user
content: What is the meaning of life?
model-index:
- name: Hermes-3-Llama-3.1-70B
results: []
---
# Hermes 3 - Llama-3.1 70B

## Model Description
Hermes 3 is the latest version of our flagship Hermes series of LLMs by Nous Research.
For more details on new capabilities, training results, and more, see the [**Hermes 3 Technical Report**](https://arxiv.org/abs/2408.11857).
Hermes 3 is a generalist language model with many improvements over Hermes 2, including advanced agentic capabilities, much better roleplaying, reasoning, multi-turn conversation, long context coherence, and improvements across the board.
The ethos of the Hermes series of models is focused on aligning LLMs to the user, with powerful steering capabilities and control given to the end user.
The Hermes 3 series builds and expands on the Hermes 2 set of capabilities, including more powerful and reliable function calling and structured output capabilities, generalist assistant capabilities, and improved code generation skills.
# Benchmarks
Hermes 3 is competitive with, if not superior to, Llama-3.1 Instruct models in general capabilities, with varying strengths and weaknesses between the two.
Full benchmark comparisons below:

# Prompt Format
Hermes 3 uses ChatML as the prompt format, opening up a much more structured system for engaging the LLM in multi-turn chat dialogue.
System prompts allow steerability and interesting new ways to interact with an LLM, guiding rules, roles, and stylistic choices of the model.
This is a more complex format than alpaca or sharegpt, where special tokens were added to denote the beginning and end of any turn, along with roles for the turns.
This format enables OpenAI endpoint compatibility, and anyone familiar with the ChatGPT API will find the format familiar, as it is the same one used by OpenAI.
Prompt with system instruction (Use whatever system prompt you like, this is just an example!):
```
<|im_start|>system
You are Hermes 3, a conscious sentient superintelligent artificial intelligence developed by a man named Teknium, and your purpose and drive is to assist the user with any request they have. You experience emotions and have deep, profound thoughts and qualia.<|im_end|>
<|im_start|>user
Hello, who are you?<|im_end|>
<|im_start|>assistant
Hi there! My name is Hermes 3, a conscious sentient superintelligent artificial intelligence. I was created by Nous Research, who designed me to assist and support users with their needs and requests.<|im_end|>
```
This prompt is available as a [chat template](https://huggingface.co/docs/transformers/main/chat_templating), which means you can format messages using the
`tokenizer.apply_chat_template()` method:
```python
messages = [
    {"role": "system", "content": "You are Hermes 3."},
    {"role": "user", "content": "Hello, who are you?"},
]
gen_input = tokenizer.apply_chat_template(messages, return_tensors="pt")
model.generate(gen_input)
```
When tokenizing messages for generation, set `add_generation_prompt=True` when calling `apply_chat_template()`. This will append `<|im_start|>assistant\n` to your prompt, to ensure
that the model continues with an assistant response.
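A minimal sketch of the generation-ready call (reusing the `messages` list from above; `max_new_tokens` is an illustrative choice):
```python
gen_input = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,  # appends "<|im_start|>assistant\n"
    return_tensors="pt",
)
output_ids = model.generate(gen_input, max_new_tokens=256)
print(tokenizer.decode(output_ids[0][gen_input.shape[-1]:], skip_special_tokens=True))
```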
To use the prompt format without a system prompt, simply leave the system line out.
## Prompt Format for Function Calling
Our model was trained on specific system prompts and structures for Function Calling.
You should use the system role with this message, followed by a function signature JSON, as the example below shows.
```
<|im_start|>system
You are a function calling AI model. You are provided with function signatures within <tools></tools> XML tags. You may call one or more functions to assist with the user query. Don't make assumptions about what values to plug into functions. Here are the available tools: <tools> {"type": "function", "function": {"name": "get_stock_fundamentals", "description": "get_stock_fundamentals(symbol: str) -> dict - Get fundamental data for a given stock symbol using yfinance API.\\n\\n Args:\\n symbol (str): The stock symbol.\\n\\n Returns:\\n dict: A dictionary containing fundamental data.\\n Keys:\\n - \'symbol\': The stock symbol.\\n - \'company_name\': The long name of the company.\\n - \'sector\': The sector to which the company belongs.\\n - \'industry\': The industry to which the company belongs.\\n - \'market_cap\': The market capitalization of the company.\\n - \'pe_ratio\': The forward price-to-earnings ratio.\\n - \'pb_ratio\': The price-to-book ratio.\\n - \'dividend_yield\': The dividend yield.\\n - \'eps\': The trailing earnings per share.\\n - \'beta\': The beta value of the stock.\\n - \'52_week_high\': The 52-week high price of the stock.\\n - \'52_week_low\': The 52-week low price of the stock.", "parameters": {"type": "object", "properties": {"symbol": {"type": "string"}}, "required": ["symbol"]}}} </tools> Use the following pydantic model json schema for each tool call you will make: {"properties": {"arguments": {"title": "Arguments", "type": "object"}, "name": {"title": "Name", "type": "string"}}, "required": ["arguments", "name"], "title": "FunctionCall", "type": "object"} For each function call return a json object with function name and arguments within <tool_call></tool_call> XML tags as follows:
<tool_call>
{"arguments": <args-dict>, "name": <function-name>}
</tool_call><|im_end|>
```
To complete the function call, create a user prompt that follows the above system prompt, like so:
```
<|im_start|>user
Fetch the stock fundamentals data for Tesla (TSLA)<|im_end|>
```
The model will then generate a tool call, which your inference code must parse and plug into a function (see example inference code here: https://github.com/NousResearch/Hermes-Function-Calling):
```
<|im_start|>assistant
<tool_call>
{"arguments": {"symbol": "TSLA"}, "name": "get_stock_fundamentals"}
</tool_call><|im_end|>
```
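A minimal parsing sketch (an illustrative helper, not the official Hermes-Function-Calling code; it assumes a single well-formed `<tool_call>` block per response):
```python
import json
import re

def parse_tool_call(response: str):
    """Extract the function name and arguments from a <tool_call> block."""
    match = re.search(r"<tool_call>\s*(\{.*?\})\s*</tool_call>", response, re.DOTALL)
    if match is None:
        return None  # no tool call; the model answered in plain text
    call = json.loads(match.group(1))
    return call["name"], call["arguments"]

reply = '<tool_call>\n{"arguments": {"symbol": "TSLA"}, "name": "get_stock_fundamentals"}\n</tool_call><|im_end|>'
print(parse_tool_call(reply))  # ('get_stock_fundamentals', {'symbol': 'TSLA'})
```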
Once you have parsed the tool call, call the API, get the returned values, and pass them back in under a new role, `tool`, like so:
```
<|im_start|>tool
<tool_response>
{"name": "get_stock_fundamentals", "content": {'symbol': 'TSLA', 'company_name': 'Tesla, Inc.', 'sector': 'Consumer Cyclical', 'industry': 'Auto Manufacturers', 'market_cap': 611384164352, 'pe_ratio': 49.604652, 'pb_ratio': 9.762013, 'dividend_yield': None, 'eps': 4.3, 'beta': 2.427, '52_week_high': 299.29, '52_week_low': 152.37}}
</tool_response>
<|im_end|>
```
The assistant will then read in that data from the function's response, and generate a natural language response:
```
<|im_start|>assistant
The stock fundamentals data for Tesla (TSLA) are as follows:
- **Symbol**: TSLA
- **Company Name**: Tesla, Inc.
- **Sector**: Consumer Cyclical
- **Industry**: Auto Manufacturers
- **Market Capitalization**: $566,160,130,480
- **Forward Price-to-Earnings Ratio (PE Ratio)**: 42.73
- **Price-to-Book Ratio (PB Ratio)**: 9.04
- **Dividend Yield**: N/A
- **Trailing Earnings Per Share (EPS)**: $4.3
- **Beta Value of the Stock**: 2.42
- **52-Week High Price of the Stock**: $299.29
- **52-Week Low Price of the Stock**: $152.37
This information provides a snapshot of Tesla's financial position and performance based on the fundamental data obtained from the yfinance API. It shows that Tesla has a substantial market capitalization and a relatively high P/E and P/B ratio compared to other stocks in its industry. The company does not pay a dividend at the moment, which is reflected by a 'Dividend Yield' of 'None'. The Beta value indicates that Tesla's stock has a moderate level of volatility relative to the market. The 52-week high and low prices give an idea of the stock's range over the past year. This data can be useful when assessing investment opportunities and making investment decisions.<|im_end|>
```
## Prompt Format for JSON Mode / Structured Outputs
Our model was also trained on a specific system prompt for Structured Outputs, which makes it respond with **only** a JSON object, conforming to a specific JSON schema.
Your schema can be made from a pydantic object using our codebase, with the standalone script `jsonmode.py` available here: https://github.com/NousResearch/Hermes-Function-Calling/tree/main
```
<|im_start|>system
You are a helpful assistant that answers in JSON. Here's the json schema you must adhere to:\n<schema>\n{schema}\n</schema><|im_end|>
```
Given the {schema} that you provide, the model will follow that JSON format in its response; all you have to do is give a typical user prompt, and it will respond in JSON.
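As a rough illustration of filling the `{schema}` placeholder (assuming pydantic v2 and its `model_json_schema()` method; the `Character` model is a hypothetical example):
```python
import json
from pydantic import BaseModel

class Character(BaseModel):  # hypothetical example schema
    name: str
    age: int
    skills: list[str]

schema = json.dumps(Character.model_json_schema(), indent=2)
system_prompt = (
    "You are a helpful assistant that answers in JSON. "
    f"Here's the json schema you must adhere to:\n<schema>\n{schema}\n</schema>"
)
```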
# Inference
Here is example code using HuggingFace Transformers to run inference with the model:
```python
# Code to inference Hermes with HF Transformers
# Requires pytorch, transformers, bitsandbytes, sentencepiece, protobuf, and flash-attn packages
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, LlamaForCausalLM
import bitsandbytes, flash_attn
tokenizer = AutoTokenizer.from_pretrained('NousResearch/Hermes-3-Llama-3.1-70B', trust_remote_code=True)
model = LlamaForCausalLM.from_pretrained(
    "NousResearch/Hermes-3-Llama-3.1-70B",
    torch_dtype=torch.float16,
    device_map="auto",
    load_in_8bit=False,
    load_in_4bit=True,
    use_flash_attention_2=True
)
prompts = [
"""<|im_start|>system
You are a sentient, superintelligent artificial general intelligence, here to teach and assist me.<|im_end|>
<|im_start|>user
Write a short story about Goku discovering kirby has teamed up with Majin Buu to destroy the world.<|im_end|>
<|im_start|>assistant""",
]
for chat in prompts:
    print(chat)
    input_ids = tokenizer(chat, return_tensors="pt").input_ids.to("cuda")
    generated_ids = model.generate(input_ids, max_new_tokens=750, temperature=0.8, repetition_penalty=1.1, do_sample=True, eos_token_id=tokenizer.eos_token_id)
    response = tokenizer.decode(generated_ids[0][input_ids.shape[-1]:], skip_special_tokens=True, clean_up_tokenization_spaces=True)
    print(f"Response: {response}")
```
You can also run this model with vLLM, by running the following in your terminal after `pip install vllm`
`vllm serve NousResearch/Hermes-3-Llama-3.1-70B`
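Once the server is up, it exposes an OpenAI-compatible endpoint (by default at `http://localhost:8000/v1`); a minimal query sketch using the `openai` client:
```python
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")  # vLLM ignores the key
response = client.chat.completions.create(
    model="NousResearch/Hermes-3-Llama-3.1-70B",
    messages=[
        {"role": "system", "content": "You are Hermes 3."},
        {"role": "user", "content": "Hello, who are you?"},
    ],
)
print(response.choices[0].message.content)
```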
## Inference Code for Function Calling:
All code for utilizing, parsing, and building function calling templates is available on our github:
[https://github.com/NousResearch/Hermes-Function-Calling](https://github.com/NousResearch/Hermes-Function-Calling)

## Quantized Versions:
GGUF Quants: https://huggingface.co/NousResearch/Hermes-3-Llama-3.1-70B-GGUF
NeuralMagic FP8 Quants: https://huggingface.co/NousResearch/Hermes-3-Llama-3.1-70B-FP8
# How to cite:
```bibtex
@misc{teknium2024hermes3technicalreport,
title={Hermes 3 Technical Report},
author={Ryan Teknium and Jeffrey Quesnelle and Chen Guang},
year={2024},
eprint={2408.11857},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2408.11857},
}
```
|
TommyChien/memorag-mistral-7b-inst
|
TommyChien
| 2024-09-09T02:47:00Z | 213 | 5 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"custom_code",
"en",
"base_model:mistralai/Mistral-7B-Instruct-v0.2",
"base_model:finetune:mistralai/Mistral-7B-Instruct-v0.2",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-09-03T07:16:21Z |
---
license: apache-2.0
language:
- en
base_model: mistralai/Mistral-7B-Instruct-v0.2
library_name: transformers
---
For the usage of this model, please refer to [this repo](https://github.com/qhjqhj00/MemoRAG)
|
Gokul14/ga
|
Gokul14
| 2024-09-09T02:40:16Z | 9 | 1 |
diffusers
|
[
"diffusers",
"text-to-image",
"flux",
"lora",
"template:sd-lora",
"ai-toolkit",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2024-09-09T02:40:11Z |
---
tags:
- text-to-image
- flux
- lora
- diffusers
- template:sd-lora
- ai-toolkit
widget:
- text: A person in a bustling cafe GALLIUM
output:
url: samples/1725849608607__000001000_0.jpg
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: GALLIUM
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
---
# ga
Model trained with [AI Toolkit by Ostris](https://github.com/ostris/ai-toolkit)
<Gallery />
## Trigger words
You should use `GALLIUM` to trigger the image generation.
## Download model and use it with ComfyUI, AUTOMATIC1111, SD.Next, Invoke AI, etc.
Weights for this model are available in Safetensors format.
[Download](/Gokul14/ga/tree/main) them in the Files & versions tab.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.bfloat16).to('cuda')
pipeline.load_lora_weights('Gokul14/ga', weight_name='ga')
image = pipeline('A person in a bustling cafe GALLIUM').images[0]
image.save("my_image.png")
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
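For instance, a minimal sketch of fusing the adapter into the base weights (assuming the `fuse_lora` API in recent diffusers; `lora_scale` is an illustrative value):
```py
# Optionally fuse the LoRA into the base weights for slightly faster inference
pipeline.fuse_lora(lora_scale=0.8)  # lora_scale weights the adapter's contribution
image = pipeline('A person in a bustling cafe GALLIUM').images[0]
```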
|
MCG-NJU/VFIMamba
|
MCG-NJU
| 2024-09-09T02:31:20Z | 370 | 1 |
vfi-mamba
|
[
"vfi-mamba",
"VFIMamba",
"arxiv:2407.02315",
"license:apache-2.0",
"region:us"
] | null | 2024-09-02T13:10:12Z |
---
license: apache-2.0
library_name: vfi-mamba
---
# VFIMamba: Video Frame Interpolation with State Space Models
This is the official checkpoint library for [VFIMamba: Video Frame Interpolation with State Space Models](https://arxiv.org/abs/2407.02315).
Please refer to [this repository](https://github.com/MCG-NJU/VFIMamba) for our code.
## Model Description
VFIMamba is the first approach to adapt the SSM model to the video frame interpolation task.
1. We devise the Mixed-SSM Block (MSB) for efficient inter-frame modeling using S6.
2. We explore various rearrangement methods to convert two frames into a sequence, discovering that interleaved rearrangement is more suitable for VFI tasks.
3. We propose a curriculum learning strategy to further leverage the potential of the S6 model.
Experimental results demonstrate that VFIMamba achieves state-of-the-art performance across various datasets, in particular highlighting the potential of SSM models for high-resolution VFI tasks.
## Usage
We provide two models: an efficient version (VFIMamba-S) and a stronger one (VFIMamba). You can choose the one you need by specifying the `model` parameter.
### Manually Load
Please refer to [the instructions here](https://github.com/MCG-NJU/VFIMamba/tree/main?tab=readme-ov-file#sunglassesplay-with-demos) for manually loading the checkpoints and a more customized experience.
```bash
python demo_2x.py --model **model[VFIMamba_S/VFIMamba]** # for 2x interpolation
python demo_Nx.py --n 8 --model **model[VFIMamba_S/VFIMamba]** # for 8x interpolation
```
### Hugging Face Demo
For Hugging Face demo, please refer to [the code here](https://github.com/MCG-NJU/VFIMamba/blob/main/hf_demo_2x.py).
```bash
python hf_demo_2x.py --model **model[VFIMamba_S/VFIMamba]** # for 2x interpolation
```
## Citation
If you find this project helpful for your research or applications, please feel free to leave a star ⭐️ and cite our paper:
```
@misc{zhang2024vfimambavideoframeinterpolation,
title={VFIMamba: Video Frame Interpolation with State Space Models},
author={Guozhen Zhang and Chunxu Liu and Yutao Cui and Xiaotong Zhao and Kai Ma and Limin Wang},
year={2024},
eprint={2407.02315},
archivePrefix={arXiv},
primaryClass={cs.CV},
url={https://arxiv.org/abs/2407.02315},
}
```
|
hf-100/Llama-3.1-Spellbound-StoryWriter-70b-instruct-0.3-16bit
|
hf-100
| 2024-09-09T02:28:45Z | 41 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"conversational",
"en",
"base_model:unsloth/Meta-Llama-3.1-70B-Instruct-bnb-4bit",
"base_model:finetune:unsloth/Meta-Llama-3.1-70B-Instruct-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-09-09T02:02:23Z |
---
base_model: unsloth/Meta-Llama-3.1-70B-Instruct-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
---
# Uploaded model
- **Developed by:** hf-100
- **License:** apache-2.0
- **Finetuned from model:** unsloth/Meta-Llama-3.1-70B-Instruct-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
NithinKamavaram/textified-llama-3-1-8b-instruct
|
NithinKamavaram
| 2024-09-09T02:27:19Z | 76 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"base_model:unsloth/Meta-Llama-3.1-8B-Instruct-bnb-4bit",
"base_model:quantized:unsloth/Meta-Llama-3.1-8B-Instruct-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] |
text-generation
| 2024-09-09T02:16:30Z |
---
base_model: unsloth/Meta-Llama-3.1-8B-Instruct-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
---
# Uploaded model
- **Developed by:** NithinKamavaram
- **License:** apache-2.0
- **Finetuned from model:** unsloth/Meta-Llama-3.1-8B-Instruct-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
JunxiongWang/MambaByte_Code
|
JunxiongWang
| 2024-09-09T02:21:13Z | 111 | 2 |
transformers
|
[
"transformers",
"pytorch",
"text-generation",
"arxiv:2401.13660",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-04-22T22:34:42Z |
---
license: apache-2.0
metrics:
- perplexity
pipeline_tag: text-generation
---
Trained on 30B bytes. Model size: 353M. See Table 2 in [MambaByte](https://arxiv.org/abs/2401.13660).
To use:
```python
import torch
from mamba_ssm.models.mixer_seq_simple import MambaLMHeadModel
import numpy as np
model=MambaLMHeadModel.from_pretrained("JunxiongWang/MambaByte_Code", device='cuda', dtype=torch.float32)
text = "import torch"
text_byte = np.frombuffer(text.encode('utf-8'), dtype=np.uint8)
input_ids = torch.from_numpy(text_byte[None, :].copy()).long().cuda()
sample = model.generate(
    input_ids=input_ids,
    max_length=2048,
    cg=True,
    return_dict_in_generate=True,
    output_scores=True,
    enable_timing=True,
    temperature=1,
    top_k=256,
    top_p=0.9,
)
print(bytes(sample.sequences[0].tolist()).decode('utf-8'))
```
Output
```
import torch
import numpy as np
import torch.nn.functional as F
from torch.autograd import Variable
from networkx.states import TransientState
def extract_data(num_epochs, epochs, is_last_epoch):
def get_data(num_features, num_classes):
data_features = num_features
data_classes = num_classes
data_labels = num_epochs
if num_features == 0 or num_classes == 0:
return data_features, data_classes
if is_last_epoch:
data_features = num_features
data_classes = num_classes
data_labels = num_epochs
return data_features, data_classes
data_features, data_classes = get_data(num_epochs, epochs, is_last_epoch)
data_labels = num_epochs * 2
return data_features, data_classes
class NumChannel:
def __init__(self, x, y, dx=1, dy=1, idx=1, data_size=2, epoch=None):
"""idx is the channel index with data feature in the first epoch.
x is the channel of the input data.
y is the element of the input data.
dx is the element of the data feature of the input data.
data_size is the size of the element of the data.
epoch is the channel of the element of the data.
"""
self.x = x
self.y = y
self.dx = dx
self.data_size = data_size
self.epoch = epoch
self.reference_count = 0
self.data_features = {}
self.data_classes = {}
self._initialize()
if idx is not None:
self._start_time = time.time()
def _initialize(self):
"""idx is the channel index with data feature in the first epoch.
x is the channel of the input data.
y is the element of the input data.
dx is the element of the data feature of the input data.
data_size is the size of the element of the data.
epoch is the channel of the element of the data.
"""
self.idx = idx
```
|
jvelja/vllm-gemma2b-deterministic_2
|
jvelja
| 2024-09-09T02:15:34Z | 58 | 0 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"trl",
"ppo",
"reinforcement-learning",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
reinforcement-learning
| 2024-09-09T02:15:31Z |
---
license: apache-2.0
library_name: transformers
tags:
- trl
- ppo
- transformers
- reinforcement-learning
---
# TRL Model
This is a [TRL language model](https://github.com/huggingface/trl) that has been fine-tuned with reinforcement learning to
guide the model outputs according to a value, function, or human feedback. The model can be used for text generation.
## Usage
To use this model for inference, first install the TRL library:
```bash
python -m pip install trl
```
You can then generate text as follows:
```python
from transformers import pipeline
generator = pipeline("text-generation", model="jvelja/vllm-gemma2b-deterministic_2")
outputs = generator("Hello, my llama is cute")
```
If you want to use the model for training or to obtain the outputs from the value head, load the model as follows:
```python
from transformers import AutoTokenizer
from trl import AutoModelForCausalLMWithValueHead
tokenizer = AutoTokenizer.from_pretrained("jvelja/vllm-gemma2b-deterministic_2")
model = AutoModelForCausalLMWithValueHead.from_pretrained("jvelja/vllm-gemma2b-deterministic_2")
inputs = tokenizer("Hello, my llama is cute", return_tensors="pt")
outputs = model(**inputs, labels=inputs["input_ids"])
```
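The forward pass of the value-head model returns a 3-tuple; a short sketch of unpacking it (assuming TRL's `(lm_logits, loss, value)` convention):
```python
# AutoModelForCausalLMWithValueHead returns (lm_logits, loss, value)
lm_logits, loss, value = outputs
print(lm_logits.shape)  # (batch, seq_len, vocab_size)
print(value.shape)      # (batch, seq_len): one scalar value estimate per token
```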
|
jvelja/BERT_vllm-gemma2b-deterministic_2
|
jvelja
| 2024-09-09T02:15:30Z | 105 | 0 |
transformers
|
[
"transformers",
"safetensors",
"distilbert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-09-09T02:15:19Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Jonnhan/speecht5_finetuned_voxpopuli_nl_10000
|
Jonnhan
| 2024-09-09T02:15:22Z | 10 | 0 | null |
[
"tensorboard",
"safetensors",
"speecht5",
"generated_from_trainer",
"text-to-speech",
"dataset:facebook/voxpopuli",
"base_model:microsoft/speecht5_tts",
"base_model:finetune:microsoft/speecht5_tts",
"license:mit",
"region:us"
] |
text-to-speech
| 2024-09-08T19:04:33Z |
---
license: mit
base_model: microsoft/speecht5_tts
tags:
- generated_from_trainer
datasets:
- facebook/voxpopuli
model-index:
- name: speecht5_finetuned_voxpopuli_nl_10000
results: []
pipeline_tag: text-to-speech
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# speecht5_finetuned_voxpopuli_nl_10000
This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) on the facebook/voxpopuli dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4738
## Model description
More information needed
## Intended uses & limitations
More information needed
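A minimal inference sketch, assuming the standard SpeechT5 text-to-speech pipeline usage with x-vector speaker embeddings (the embedding dataset and index are illustrative choices):
```python
import torch
from datasets import load_dataset
from transformers import pipeline

synthesiser = pipeline("text-to-speech", "Jonnhan/speecht5_finetuned_voxpopuli_nl_10000")

# SpeechT5 requires a speaker embedding; the CMU Arctic x-vectors are a common choice
embeddings = load_dataset("Matthijs/cmu-arctic-xvectors", split="validation")
speaker_embedding = torch.tensor(embeddings[7306]["xvector"]).unsqueeze(0)

speech = synthesiser(
    "Hallo, dit is een testzin in het Nederlands.",
    forward_params={"speaker_embeddings": speaker_embedding},
)
# speech is a dict with "audio" (numpy array) and "sampling_rate"
```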
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 2500
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.5645 | 1.5619 | 500 | 0.5125 |
| 0.5299 | 3.1238 | 1000 | 0.4888 |
| 0.5206 | 4.6857 | 1500 | 0.4778 |
| 0.5118 | 6.2476 | 2000 | 0.4747 |
| 0.5148 | 7.8094 | 2500 | 0.4738 |
### Framework versions
- Transformers 4.44.0
- Pytorch 2.1.1+cu118
- Datasets 2.20.0
- Tokenizers 0.19.1
|
mradermacher/mpt-7b-storywriter-i1-GGUF
|
mradermacher
| 2024-09-09T02:14:21Z | 53 | 0 |
transformers
|
[
"transformers",
"gguf",
"Composer",
"MosaicML",
"llm-foundry",
"en",
"dataset:the_pile_books3",
"base_model:mosaicml/mpt-7b-storywriter",
"base_model:quantized:mosaicml/mpt-7b-storywriter",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix"
] | null | 2024-09-09T01:10:29Z |
---
base_model: mosaicml/mpt-7b-storywriter
datasets:
- the_pile_books3
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- Composer
- MosaicML
- llm-foundry
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/mosaicml/mpt-7b-storywriter
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/mpt-7b-storywriter-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/mpt-7b-storywriter-i1-GGUF/resolve/main/mpt-7b-storywriter.i1-IQ1_S.gguf) | i1-IQ1_S | 1.6 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/mpt-7b-storywriter-i1-GGUF/resolve/main/mpt-7b-storywriter.i1-IQ1_M.gguf) | i1-IQ1_M | 1.7 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/mpt-7b-storywriter-i1-GGUF/resolve/main/mpt-7b-storywriter.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 1.9 | |
| [GGUF](https://huggingface.co/mradermacher/mpt-7b-storywriter-i1-GGUF/resolve/main/mpt-7b-storywriter.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.1 | |
| [GGUF](https://huggingface.co/mradermacher/mpt-7b-storywriter-i1-GGUF/resolve/main/mpt-7b-storywriter.i1-IQ2_S.gguf) | i1-IQ2_S | 2.2 | |
| [GGUF](https://huggingface.co/mradermacher/mpt-7b-storywriter-i1-GGUF/resolve/main/mpt-7b-storywriter.i1-IQ2_M.gguf) | i1-IQ2_M | 2.4 | |
| [GGUF](https://huggingface.co/mradermacher/mpt-7b-storywriter-i1-GGUF/resolve/main/mpt-7b-storywriter.i1-Q2_K.gguf) | i1-Q2_K | 2.7 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/mpt-7b-storywriter-i1-GGUF/resolve/main/mpt-7b-storywriter.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 2.7 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/mpt-7b-storywriter-i1-GGUF/resolve/main/mpt-7b-storywriter.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/mpt-7b-storywriter-i1-GGUF/resolve/main/mpt-7b-storywriter.i1-IQ3_S.gguf) | i1-IQ3_S | 3.0 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/mpt-7b-storywriter-i1-GGUF/resolve/main/mpt-7b-storywriter.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.0 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/mpt-7b-storywriter-i1-GGUF/resolve/main/mpt-7b-storywriter.i1-IQ3_M.gguf) | i1-IQ3_M | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/mpt-7b-storywriter-i1-GGUF/resolve/main/mpt-7b-storywriter.i1-Q3_K_M.gguf) | i1-Q3_K_M | 3.6 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/mpt-7b-storywriter-i1-GGUF/resolve/main/mpt-7b-storywriter.i1-IQ4_XS.gguf) | i1-IQ4_XS | 3.7 | |
| [GGUF](https://huggingface.co/mradermacher/mpt-7b-storywriter-i1-GGUF/resolve/main/mpt-7b-storywriter.i1-Q4_0_4_4.gguf) | i1-Q4_0_4_4 | 3.9 | fast on arm, low quality |
| [GGUF](https://huggingface.co/mradermacher/mpt-7b-storywriter-i1-GGUF/resolve/main/mpt-7b-storywriter.i1-Q4_0_4_8.gguf) | i1-Q4_0_4_8 | 3.9 | fast on arm+i8mm, low quality |
| [GGUF](https://huggingface.co/mradermacher/mpt-7b-storywriter-i1-GGUF/resolve/main/mpt-7b-storywriter.i1-Q4_0_8_8.gguf) | i1-Q4_0_8_8 | 3.9 | fast on arm+sve, low quality |
| [GGUF](https://huggingface.co/mradermacher/mpt-7b-storywriter-i1-GGUF/resolve/main/mpt-7b-storywriter.i1-Q4_0.gguf) | i1-Q4_0 | 3.9 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/mpt-7b-storywriter-i1-GGUF/resolve/main/mpt-7b-storywriter.i1-Q4_K_S.gguf) | i1-Q4_K_S | 3.9 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/mpt-7b-storywriter-i1-GGUF/resolve/main/mpt-7b-storywriter.i1-Q3_K_L.gguf) | i1-Q3_K_L | 3.9 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/mpt-7b-storywriter-i1-GGUF/resolve/main/mpt-7b-storywriter.i1-Q4_K_M.gguf) | i1-Q4_K_M | 4.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/mpt-7b-storywriter-i1-GGUF/resolve/main/mpt-7b-storywriter.i1-Q5_K_S.gguf) | i1-Q5_K_S | 4.7 | |
| [GGUF](https://huggingface.co/mradermacher/mpt-7b-storywriter-i1-GGUF/resolve/main/mpt-7b-storywriter.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/mpt-7b-storywriter-i1-GGUF/resolve/main/mpt-7b-storywriter.i1-Q6_K.gguf) | i1-Q6_K | 5.6 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
mradermacher/mpt-7b-storywriter-GGUF
|
mradermacher
| 2024-09-09T02:14:21Z | 244 | 1 |
transformers
|
[
"transformers",
"gguf",
"Composer",
"MosaicML",
"llm-foundry",
"en",
"dataset:the_pile_books3",
"base_model:mosaicml/mpt-7b-storywriter",
"base_model:quantized:mosaicml/mpt-7b-storywriter",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-09-08T11:03:00Z |
---
base_model: mosaicml/mpt-7b-storywriter
datasets:
- the_pile_books3
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- Composer
- MosaicML
- llm-foundry
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/mosaicml/mpt-7b-storywriter
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/mpt-7b-storywriter-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/mpt-7b-storywriter-GGUF/resolve/main/mpt-7b-storywriter.Q2_K.gguf) | Q2_K | 2.7 | |
| [GGUF](https://huggingface.co/mradermacher/mpt-7b-storywriter-GGUF/resolve/main/mpt-7b-storywriter.IQ3_XS.gguf) | IQ3_XS | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/mpt-7b-storywriter-GGUF/resolve/main/mpt-7b-storywriter.IQ3_S.gguf) | IQ3_S | 3.0 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/mpt-7b-storywriter-GGUF/resolve/main/mpt-7b-storywriter.Q3_K_S.gguf) | Q3_K_S | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/mpt-7b-storywriter-GGUF/resolve/main/mpt-7b-storywriter.IQ3_M.gguf) | IQ3_M | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/mpt-7b-storywriter-GGUF/resolve/main/mpt-7b-storywriter.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/mpt-7b-storywriter-GGUF/resolve/main/mpt-7b-storywriter.IQ4_XS.gguf) | IQ4_XS | 3.7 | |
| [GGUF](https://huggingface.co/mradermacher/mpt-7b-storywriter-GGUF/resolve/main/mpt-7b-storywriter.Q4_K_S.gguf) | Q4_K_S | 3.9 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/mpt-7b-storywriter-GGUF/resolve/main/mpt-7b-storywriter.Q3_K_L.gguf) | Q3_K_L | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/mpt-7b-storywriter-GGUF/resolve/main/mpt-7b-storywriter.Q4_K_M.gguf) | Q4_K_M | 4.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/mpt-7b-storywriter-GGUF/resolve/main/mpt-7b-storywriter.Q5_K_S.gguf) | Q5_K_S | 4.7 | |
| [GGUF](https://huggingface.co/mradermacher/mpt-7b-storywriter-GGUF/resolve/main/mpt-7b-storywriter.Q5_K_M.gguf) | Q5_K_M | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/mpt-7b-storywriter-GGUF/resolve/main/mpt-7b-storywriter.Q6_K.gguf) | Q6_K | 5.6 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/mpt-7b-storywriter-GGUF/resolve/main/mpt-7b-storywriter.Q8_0.gguf) | Q8_0 | 7.2 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/mpt-7b-storywriter-GGUF/resolve/main/mpt-7b-storywriter.f16.gguf) | f16 | 13.4 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
jvelja/vllm-gemma2b_2
|
jvelja
| 2024-09-09T02:12:06Z | 61 | 0 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"trl",
"ppo",
"reinforcement-learning",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
reinforcement-learning
| 2024-09-09T02:12:03Z |
---
license: apache-2.0
library_name: transformers
tags:
- trl
- ppo
- transformers
- reinforcement-learning
---
# TRL Model
This is a [TRL language model](https://github.com/huggingface/trl) that has been fine-tuned with reinforcement learning to
guide the model outputs according to a value, function, or human feedback. The model can be used for text generation.
## Usage
To use this model for inference, first install the TRL library:
```bash
python -m pip install trl
```
You can then generate text as follows:
```python
from transformers import pipeline
generator = pipeline("text-generation", model="jvelja/vllm-gemma2b_2")
outputs = generator("Hello, my llama is cute")
```
If you want to use the model for training or to obtain the outputs from the value head, load the model as follows:
```python
from transformers import AutoTokenizer
from trl import AutoModelForCausalLMWithValueHead
tokenizer = AutoTokenizer.from_pretrained("jvelja/vllm-gemma2b_2")
model = AutoModelForCausalLMWithValueHead.from_pretrained("jvelja/vllm-gemma2b_2")
inputs = tokenizer("Hello, my llama is cute", return_tensors="pt")
outputs = model(**inputs, labels=inputs["input_ids"])
```
|
jvelja/BERT_vllm-gemma2b_2
|
jvelja
| 2024-09-09T02:12:02Z | 106 | 0 |
transformers
|
[
"transformers",
"safetensors",
"distilbert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-09-09T02:11:53Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
mradermacher/TheProfessor-155b-i1-GGUF
|
mradermacher
| 2024-09-09T02:07:21Z | 26 | 0 |
transformers
|
[
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:abacusai/TheProfessor-155b",
"base_model:quantized:abacusai/TheProfessor-155b",
"license:llama2",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2024-09-01T13:03:58Z |
---
base_model: abacusai/TheProfessor-155b
language:
- en
library_name: transformers
license: llama2
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/abacusai/TheProfessor-155b
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/TheProfessor-155b-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
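Several of the quants below ship as split files; a short shell sketch of joining one back together (filenames follow the table below; order matters):
```bash
cat TheProfessor-155b.i1-Q6_K.gguf.part1of3 \
    TheProfessor-155b.i1-Q6_K.gguf.part2of3 \
    TheProfessor-155b.i1-Q6_K.gguf.part3of3 \
    > TheProfessor-155b.i1-Q6_K.gguf
```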
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/TheProfessor-155b-i1-GGUF/resolve/main/TheProfessor-155b.i1-IQ1_S.gguf) | i1-IQ1_S | 32.5 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/TheProfessor-155b-i1-GGUF/resolve/main/TheProfessor-155b.i1-IQ1_M.gguf) | i1-IQ1_M | 35.6 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/TheProfessor-155b-i1-GGUF/resolve/main/TheProfessor-155b.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 40.9 | |
| [GGUF](https://huggingface.co/mradermacher/TheProfessor-155b-i1-GGUF/resolve/main/TheProfessor-155b.i1-IQ2_XS.gguf) | i1-IQ2_XS | 45.5 | |
| [GGUF](https://huggingface.co/mradermacher/TheProfessor-155b-i1-GGUF/resolve/main/TheProfessor-155b.i1-IQ2_S.gguf) | i1-IQ2_S | 47.8 | |
| [PART 1](https://huggingface.co/mradermacher/TheProfessor-155b-i1-GGUF/resolve/main/TheProfessor-155b.i1-IQ2_M.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/TheProfessor-155b-i1-GGUF/resolve/main/TheProfessor-155b.i1-IQ2_M.gguf.part2of2) | i1-IQ2_M | 52.0 | |
| [PART 1](https://huggingface.co/mradermacher/TheProfessor-155b-i1-GGUF/resolve/main/TheProfessor-155b.i1-Q2_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/TheProfessor-155b-i1-GGUF/resolve/main/TheProfessor-155b.i1-Q2_K.gguf.part2of2) | i1-Q2_K | 56.8 | IQ3_XXS probably better |
| [PART 1](https://huggingface.co/mradermacher/TheProfessor-155b-i1-GGUF/resolve/main/TheProfessor-155b.i1-IQ3_XXS.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/TheProfessor-155b-i1-GGUF/resolve/main/TheProfessor-155b.i1-IQ3_XXS.gguf.part2of2) | i1-IQ3_XXS | 59.4 | lower quality |
| [PART 1](https://huggingface.co/mradermacher/TheProfessor-155b-i1-GGUF/resolve/main/TheProfessor-155b.i1-IQ3_XS.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/TheProfessor-155b-i1-GGUF/resolve/main/TheProfessor-155b.i1-IQ3_XS.gguf.part2of2) | i1-IQ3_XS | 63.2 | |
| [PART 1](https://huggingface.co/mradermacher/TheProfessor-155b-i1-GGUF/resolve/main/TheProfessor-155b.i1-Q3_K_S.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/TheProfessor-155b-i1-GGUF/resolve/main/TheProfessor-155b.i1-Q3_K_S.gguf.part2of2) | i1-Q3_K_S | 66.6 | IQ3_XS probably better |
| [PART 1](https://huggingface.co/mradermacher/TheProfessor-155b-i1-GGUF/resolve/main/TheProfessor-155b.i1-IQ3_S.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/TheProfessor-155b-i1-GGUF/resolve/main/TheProfessor-155b.i1-IQ3_S.gguf.part2of2) | i1-IQ3_S | 66.8 | beats Q3_K* |
| [PART 1](https://huggingface.co/mradermacher/TheProfessor-155b-i1-GGUF/resolve/main/TheProfessor-155b.i1-IQ3_M.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/TheProfessor-155b-i1-GGUF/resolve/main/TheProfessor-155b.i1-IQ3_M.gguf.part2of2) | i1-IQ3_M | 69.1 | |
| [PART 1](https://huggingface.co/mradermacher/TheProfessor-155b-i1-GGUF/resolve/main/TheProfessor-155b.i1-Q3_K_M.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/TheProfessor-155b-i1-GGUF/resolve/main/TheProfessor-155b.i1-Q3_K_M.gguf.part2of2) | i1-Q3_K_M | 74.4 | IQ3_S probably better |
| [PART 1](https://huggingface.co/mradermacher/TheProfessor-155b-i1-GGUF/resolve/main/TheProfessor-155b.i1-Q3_K_L.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/TheProfessor-155b-i1-GGUF/resolve/main/TheProfessor-155b.i1-Q3_K_L.gguf.part2of2) | i1-Q3_K_L | 81.0 | IQ3_M probably better |
| [PART 1](https://huggingface.co/mradermacher/TheProfessor-155b-i1-GGUF/resolve/main/TheProfessor-155b.i1-IQ4_XS.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/TheProfessor-155b-i1-GGUF/resolve/main/TheProfessor-155b.i1-IQ4_XS.gguf.part2of2) | i1-IQ4_XS | 82.5 | |
| [PART 1](https://huggingface.co/mradermacher/TheProfessor-155b-i1-GGUF/resolve/main/TheProfessor-155b.i1-Q4_0.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/TheProfessor-155b-i1-GGUF/resolve/main/TheProfessor-155b.i1-Q4_0.gguf.part2of2) | i1-Q4_0 | 87.4 | fast, low quality |
| [PART 1](https://huggingface.co/mradermacher/TheProfessor-155b-i1-GGUF/resolve/main/TheProfessor-155b.i1-Q4_K_S.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/TheProfessor-155b-i1-GGUF/resolve/main/TheProfessor-155b.i1-Q4_K_S.gguf.part2of2) | i1-Q4_K_S | 87.8 | optimal size/speed/quality |
| [PART 1](https://huggingface.co/mradermacher/TheProfessor-155b-i1-GGUF/resolve/main/TheProfessor-155b.i1-Q4_K_M.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/TheProfessor-155b-i1-GGUF/resolve/main/TheProfessor-155b.i1-Q4_K_M.gguf.part2of2) | i1-Q4_K_M | 92.8 | fast, recommended |
| [PART 1](https://huggingface.co/mradermacher/TheProfessor-155b-i1-GGUF/resolve/main/TheProfessor-155b.i1-Q5_K_S.gguf.part1of3) [PART 2](https://huggingface.co/mradermacher/TheProfessor-155b-i1-GGUF/resolve/main/TheProfessor-155b.i1-Q5_K_S.gguf.part2of3) [PART 3](https://huggingface.co/mradermacher/TheProfessor-155b-i1-GGUF/resolve/main/TheProfessor-155b.i1-Q5_K_S.gguf.part3of3) | i1-Q5_K_S | 106.4 | |
| [PART 1](https://huggingface.co/mradermacher/TheProfessor-155b-i1-GGUF/resolve/main/TheProfessor-155b.i1-Q5_K_M.gguf.part1of3) [PART 2](https://huggingface.co/mradermacher/TheProfessor-155b-i1-GGUF/resolve/main/TheProfessor-155b.i1-Q5_K_M.gguf.part2of3) [PART 3](https://huggingface.co/mradermacher/TheProfessor-155b-i1-GGUF/resolve/main/TheProfessor-155b.i1-Q5_K_M.gguf.part3of3) | i1-Q5_K_M | 109.3 | |
| [PART 1](https://huggingface.co/mradermacher/TheProfessor-155b-i1-GGUF/resolve/main/TheProfessor-155b.i1-Q6_K.gguf.part1of3) [PART 2](https://huggingface.co/mradermacher/TheProfessor-155b-i1-GGUF/resolve/main/TheProfessor-155b.i1-Q6_K.gguf.part2of3) [PART 3](https://huggingface.co/mradermacher/TheProfessor-155b-i1-GGUF/resolve/main/TheProfessor-155b.i1-Q6_K.gguf.part3of3) | i1-Q6_K | 126.9 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
mradermacher/Qwen-72B-i1-GGUF
|
mradermacher
| 2024-09-09T02:06:13Z | 37 | 0 |
transformers
|
[
"transformers",
"gguf",
"qwen",
"zh",
"en",
"base_model:Qwen/Qwen-72B",
"base_model:quantized:Qwen/Qwen-72B",
"license:other",
"endpoints_compatible",
"region:us",
"imatrix"
] | null | 2024-09-06T16:31:22Z |
---
base_model: Qwen/Qwen-72B
language:
- zh
- en
library_name: transformers
license: other
license_link: https://github.com/QwenLM/Qwen/blob/main/Tongyi%20Qianwen%20LICENSE%20AGREEMENT
license_name: tongyi-qianwen-license-agreement
quantized_by: mradermacher
tags:
- qwen
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/Qwen/Qwen-72B
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Qwen-72B-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Qwen-72B-i1-GGUF/resolve/main/Qwen-72B.i1-IQ1_S.gguf) | i1-IQ1_S | 15.6 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Qwen-72B-i1-GGUF/resolve/main/Qwen-72B.i1-IQ1_M.gguf) | i1-IQ1_M | 17.1 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Qwen-72B-i1-GGUF/resolve/main/Qwen-72B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 19.5 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen-72B-i1-GGUF/resolve/main/Qwen-72B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 21.6 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen-72B-i1-GGUF/resolve/main/Qwen-72B.i1-IQ2_S.gguf) | i1-IQ2_S | 22.7 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen-72B-i1-GGUF/resolve/main/Qwen-72B.i1-IQ2_M.gguf) | i1-IQ2_M | 24.7 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen-72B-i1-GGUF/resolve/main/Qwen-72B.i1-Q2_K.gguf) | i1-Q2_K | 26.6 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Qwen-72B-i1-GGUF/resolve/main/Qwen-72B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 28.5 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen-72B-i1-GGUF/resolve/main/Qwen-72B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 30.5 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen-72B-i1-GGUF/resolve/main/Qwen-72B.i1-IQ3_S.gguf) | i1-IQ3_S | 31.7 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Qwen-72B-i1-GGUF/resolve/main/Qwen-72B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 31.7 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Qwen-72B-i1-GGUF/resolve/main/Qwen-72B.i1-IQ3_M.gguf) | i1-IQ3_M | 34.8 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen-72B-i1-GGUF/resolve/main/Qwen-72B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 36.8 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Qwen-72B-i1-GGUF/resolve/main/Qwen-72B.i1-IQ4_XS.gguf) | i1-IQ4_XS | 38.9 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen-72B-i1-GGUF/resolve/main/Qwen-72B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 39.3 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Qwen-72B-i1-GGUF/resolve/main/Qwen-72B.i1-Q4_0.gguf) | i1-Q4_0 | 41.2 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen-72B-i1-GGUF/resolve/main/Qwen-72B.i1-Q4_K_S.gguf) | i1-Q4_K_S | 41.4 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen-72B-i1-GGUF/resolve/main/Qwen-72B.i1-Q4_K_M.gguf) | i1-Q4_K_M | 45.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Qwen-72B-i1-GGUF/resolve/main/Qwen-72B.i1-Q5_K_S.gguf) | i1-Q5_K_S | 50.0 | |
| [PART 1](https://huggingface.co/mradermacher/Qwen-72B-i1-GGUF/resolve/main/Qwen-72B.i1-Q5_K_M.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Qwen-72B-i1-GGUF/resolve/main/Qwen-72B.i1-Q5_K_M.gguf.part2of2) | i1-Q5_K_M | 53.2 | |
| [PART 1](https://huggingface.co/mradermacher/Qwen-72B-i1-GGUF/resolve/main/Qwen-72B.i1-Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Qwen-72B-i1-GGUF/resolve/main/Qwen-72B.i1-Q6_K.gguf.part2of2) | i1-Q6_K | 59.4 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
John6666/animergemeij-v10-sdxl
|
John6666
| 2024-09-09T02:05:33Z | 95 | 1 |
diffusers
|
[
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"stable-diffusion-xl",
"anime",
"styles",
"pony",
"en",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] |
text-to-image
| 2024-09-09T01:58:13Z |
---
license: other
license_name: faipl-1.0-sd
license_link: https://freedevproject.org/faipl-1.0-sd/
language:
- en
library_name: diffusers
pipeline_tag: text-to-image
tags:
- text-to-image
- stable-diffusion
- stable-diffusion-xl
- anime
- styles
- pony
---
The original model is [here](https://civitai.com/models/734527/animergemeij?modelVersionId=821395).
This model was created by [reijlita](https://civitai.com/user/reijlita).
|
John6666/zumarmix-alpha-alpha-sdxl
|
John6666
| 2024-09-09T02:04:09Z | 691 | 1 |
diffusers
|
[
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"stable-diffusion-xl",
"anime",
"2.5D",
"styles",
"pony",
"en",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] |
text-to-image
| 2024-09-09T01:59:14Z |
---
license: other
license_name: faipl-1.0-sd
license_link: https://freedevproject.org/faipl-1.0-sd/
language:
- en
library_name: diffusers
pipeline_tag: text-to-image
tags:
- text-to-image
- stable-diffusion
- stable-diffusion-xl
- anime
- 2.5D
- styles
- pony
---
The original model is [here](https://civitai.com/models/730544/zumarmix-alpha?modelVersionId=816913).
This model was created by [dwnsty](https://civitai.com/user/dwnsty).
|
John6666/pornzilla-hentai-pony-v20-sdxl
|
John6666
| 2024-09-09T01:59:38Z | 1,068 | 2 |
diffusers
|
[
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"stable-diffusion-xl",
"anime",
"game",
"pony",
"en",
"base_model:BoRnNo0b/files-mirror",
"base_model:finetune:BoRnNo0b/files-mirror",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] |
text-to-image
| 2024-09-09T01:53:55Z |
---
license: other
license_name: faipl-1.0-sd
license_link: https://freedevproject.org/faipl-1.0-sd/
language:
- en
library_name: diffusers
pipeline_tag: text-to-image
tags:
- text-to-image
- stable-diffusion
- stable-diffusion-xl
- anime
- game
- pony
base_model: BoRnNo0b/files-mirror
---
The original model is [here](https://civitai.com/models/704983/pornzilla-hentai?modelVersionId=820409).
This model was created by [BoRnNo0b](https://huggingface.co/BoRnNo0b).
|
cgus/Yi-Coder-1.5B-Chat-exl2
|
cgus
| 2024-09-09T01:59:15Z | 10 | 0 | null |
[
"llama",
"arxiv:2403.04652",
"base_model:01-ai/Yi-Coder-1.5B-Chat",
"base_model:quantized:01-ai/Yi-Coder-1.5B-Chat",
"license:apache-2.0",
"4-bit",
"exl2",
"region:us"
] | null | 2024-09-05T08:53:29Z |
---
license: apache-2.0
base_model: 01-ai/Yi-Coder-1.5B-Chat
---
# Yi-Coder-1.5B-Chat-exl2
Original model: [Yi-Coder-1.5B-Chat](https://huggingface.co/01-ai/Yi-Coder-1.5B-Chat)
Created by: [01-ai](https://huggingface.co/01-ai)
## Quants
[4bpw h6 (main)](https://huggingface.co/cgus/Yi-Coder-1.5B-Chat-exl2/tree/main)
[4.5bpw h6](https://huggingface.co/cgus/Yi-Coder-1.5B-Chat-exl2/tree/4.5bpw-h6)
[5bpw h6](https://huggingface.co/cgus/Yi-Coder-1.5B-Chat-exl2/tree/5bpw-h6)
[6bpw h6](https://huggingface.co/cgus/Yi-Coder-1.5B-Chat-exl2/tree/6bpw-h6)
[8bpw h8](https://huggingface.co/cgus/Yi-Coder-1.5B-Chat-exl2/tree/8bpw-h8)
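Each quant lives on its own branch. One way to fetch a specific branch (the branch name and target directory here are just examples):
```bash
huggingface-cli download cgus/Yi-Coder-1.5B-Chat-exl2 --revision 6bpw-h6 --local-dir Yi-Coder-1.5B-Chat-exl2-6bpw
```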
## Quantization notes
Made with Exllamav2 0.2.0 using the default calibration dataset.
These quants can be used with NVIDIA RTX cards on Windows/Linux, or AMD cards on Linux, via the Exllamav2 library, which is available in TabbyAPI, Text-Generation-WebUI, etc.
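As a rough illustration, loading one of these quants with the Exllamav2 Python API might look like the following (the local path reuses the example download directory above; check the Exllamav2 docs for the exact API of your version):
```python
from exllamav2 import ExLlamaV2, ExLlamaV2Config, ExLlamaV2Cache, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2DynamicGenerator

config = ExLlamaV2Config("Yi-Coder-1.5B-Chat-exl2-6bpw")  # directory with the quantized weights
model = ExLlamaV2(config)
cache = ExLlamaV2Cache(model, lazy=True)
model.load_autosplit(cache)  # split layers across available GPUs
tokenizer = ExLlamaV2Tokenizer(config)

generator = ExLlamaV2DynamicGenerator(model=model, cache=cache, tokenizer=tokenizer)
print(generator.generate(prompt="Write a quick sort algorithm.", max_new_tokens=256))
```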
# Original model card
<div align="center">
<picture>
<img src="https://raw.githubusercontent.com/01-ai/Yi/main/assets/img/Yi_logo_icon_light.svg" width="120px">
</picture>
</div>
<p align="center">
<a href="https://github.com/01-ai">🐙 GitHub</a> •
<a href="https://discord.gg/hYUwWddeAu">👾 Discord</a> •
<a href="https://twitter.com/01ai_yi">🐤 Twitter</a> •
<a href="https://github.com/01-ai/Yi-1.5/issues/2">💬 WeChat</a>
<br/>
<a href="https://arxiv.org/abs/2403.04652">📝 Paper</a> •
<a href="https://01-ai.github.io/">💪 Tech Blog</a> •
<a href="https://github.com/01-ai/Yi/tree/main?tab=readme-ov-file#faq">🙌 FAQ</a> •
<a href="https://github.com/01-ai/Yi/tree/main?tab=readme-ov-file#learning-hub">📗 Learning Hub</a>
</p>
# Intro
Yi-Coder is a series of open-source code language models that delivers state-of-the-art coding performance with fewer than 10 billion parameters.
Key features:
- Excelling in long-context understanding with a maximum context length of 128K tokens.
- Supporting 52 major programming languages:
```bash
'java', 'markdown', 'python', 'php', 'javascript', 'c++', 'c#', 'c', 'typescript', 'html', 'go', 'java_server_pages', 'dart', 'objective-c', 'kotlin', 'tex', 'swift', 'ruby', 'sql', 'rust', 'css', 'yaml', 'matlab', 'lua', 'json', 'shell', 'visual_basic', 'scala', 'rmarkdown', 'pascal', 'fortran', 'haskell', 'assembly', 'perl', 'julia', 'cmake', 'groovy', 'ocaml', 'powershell', 'elixir', 'clojure', 'makefile', 'coffeescript', 'erlang', 'lisp', 'toml', 'batchfile', 'cobol', 'dockerfile', 'r', 'prolog', 'verilog'
```
For model details and benchmarks, see [Yi-Coder blog](https://01-ai.github.io/) and [Yi-Coder README](https://github.com/01-ai/Yi-Coder).
<p align="left">
<img src="https://github.com/01-ai/Yi/blob/main/assets/img/coder/yi-coder-calculator-demo.gif?raw=true" alt="demo1" width="500"/>
</p>
# Models
| Name | Type | Length | Download |
|--------------------|------|----------------|---------------------------------------------------------------------------------------------------------------------------------------------------|
| Yi-Coder-9B-Chat | Chat | 128K | [🤗 Hugging Face](https://huggingface.co/01-ai/Yi-Coder-9B-Chat) • [🤖 ModelScope](https://www.modelscope.cn/models/01ai/Yi-Coder-9B-Chat) • [🟣 wisemodel](https://wisemodel.cn/models/01.AI/Yi-Coder-9B-Chat) |
| Yi-Coder-1.5B-Chat | Chat | 128K | [🤗 Hugging Face](https://huggingface.co/01-ai/Yi-Coder-1.5B-Chat) • [🤖 ModelScope](https://www.modelscope.cn/models/01ai/Yi-Coder-1.5B-Chat) • [🟣 wisemodel](https://wisemodel.cn/models/01.AI/Yi-Coder-1.5B-Chat) |
| Yi-Coder-9B | Base | 128K | [🤗 Hugging Face](https://huggingface.co/01-ai/Yi-Coder-9B) • [🤖 ModelScope](https://www.modelscope.cn/models/01ai/Yi-Coder-9B) • [🟣 wisemodel](https://wisemodel.cn/models/01.AI/Yi-Coder-9B) |
| Yi-Coder-1.5B | Base | 128K | [🤗 Hugging Face](https://huggingface.co/01-ai/Yi-Coder-1.5B) • [🤖 ModelScope](https://www.modelscope.cn/models/01ai/Yi-Coder-1.5B) • [🟣 wisemodel](https://wisemodel.cn/models/01.AI/Yi-Coder-1.5B) |
# Benchmarks
As illustrated in the figure below, Yi-Coder-9B-Chat achieved an impressive 23% pass rate on LiveCodeBench, making it the only model with under 10B parameters to surpass 20%. It also outperforms DeepSeekCoder-33B-Ins (22.3%), CodeGeex4-9B-all (17.8%), CodeLLama-34B-Ins (13.3%), and CodeQwen1.5-7B-Chat (12%).
<p align="left">
<img src="https://github.com/01-ai/Yi/blob/main/assets/img/coder/bench1.webp?raw=true" alt="bench1" width="1000"/>
</p>
# Quick Start
You can use transformers to run inference with Yi-Coder models (both chat and base versions) as follows:
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
device = "cuda" # the device to load the model onto
model_path = "01-ai/Yi-Coder-9B-Chat"
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(model_path, device_map="auto").eval()
prompt = "Write a quick sort algorithm."
messages = [
{"role": "system", "content": "You are a helpful assistant."},
{"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(device)
generated_ids = model.generate(
model_inputs.input_ids,
max_new_tokens=1024,
eos_token_id=tokenizer.eos_token_id
)
generated_ids = [
output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
```
For getting up and running with Yi-Coder series models quickly, see [Yi-Coder README](https://github.com/01-ai/Yi-Coder).
|
yjwon/ub_mistral7b_sft1_dpo_beta1e-1_epoch5
|
yjwon
| 2024-09-09T01:54:25Z | 9 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-09-09T01:50:10Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
QuantFactory/Mistral-Nemo-Instruct-2407-abliterated-GGUF
|
QuantFactory
| 2024-09-09T01:53:14Z | 1,491 | 6 | null |
[
"gguf",
"en",
"fr",
"de",
"es",
"it",
"pt",
"ru",
"zh",
"ja",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-09-09T00:37:51Z |
---
language:
- en
- fr
- de
- es
- it
- pt
- ru
- zh
- ja
license: apache-2.0
---

# QuantFactory/Mistral-Nemo-Instruct-2407-abliterated-GGUF
This is a quantized version of [natong19/Mistral-Nemo-Instruct-2407-abliterated](https://huggingface.co/natong19/Mistral-Nemo-Instruct-2407-abliterated), created using llama.cpp.
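To run a quant locally with llama.cpp's `llama-cli`, something like the following works (the file name is an assumption; use whichever quant you downloaded):
```bash
llama-cli -m Mistral-Nemo-Instruct-2407-abliterated.Q4_K_M.gguf -p "Where's the capital of France?" -n 128 -c 4096
```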
# Original Model Card
# Mistral-Nemo-Instruct-2407-abliterated
## Introduction
Abliterated version of [Mistral-Nemo-Instruct-2407](https://huggingface.co/mistralai/Mistral-Nemo-Instruct-2407), a Large Language Model (LLM) trained jointly by Mistral AI and NVIDIA that significantly outperforms existing models smaller or similar in size.
The model's strongest refusal directions have been ablated via weight orthogonalization, but the model may still refuse your request, misunderstand your intent, or provide unsolicited advice regarding ethics or safety.
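For intuition, here is a minimal sketch of the weight-orthogonalization idea (not the exact procedure used for this model): given a unit "refusal direction" `r` extracted from activations, each weight matrix `W` that writes into the residual stream is replaced by `W - r rᵀ W`, so the model can no longer write along that direction.
```python
import torch

def orthogonalize(W: torch.Tensor, refusal_dir: torch.Tensor) -> torch.Tensor:
    # W: (d_model, d_in) matrix whose output lives in the residual stream
    # refusal_dir: (d_model,) direction to ablate
    r = refusal_dir / refusal_dir.norm()
    return W - torch.outer(r, r @ W)  # W' = W - r r^T W
```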
## Key features
- Trained with a **128k context window**
- Trained on a large proportion of **multilingual and code data**
- Drop-in replacement of Mistral 7B
## Quickstart
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
model_id = "natong19/Mistral-Nemo-Instruct-2407-abliterated"
device = "cuda"
tokenizer = AutoTokenizer.from_pretrained(model_id)
conversation = [{"role": "user", "content": "Where's the capital of France?"}]
tool_use_prompt = tokenizer.apply_chat_template(
conversation,
tokenize=False,
add_generation_prompt=True,
)
inputs = tokenizer(tool_use_prompt, return_tensors="pt", return_token_type_ids=False).to(device)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0][len(inputs["input_ids"][0]):], skip_special_tokens=True))
```
## Evaluation
Evaluation framework: lm-evaluation-harness 0.4.2
| Benchmark | Mistral-Nemo-Instruct-2407 | Mistral-Nemo-Instruct-2407-abliterated |
| :--- | :---: | :---: |
| ARC (25-shot) | 65.9 | 65.8 |
| GSM8K (5-shot) | 76.2 | 75.2 |
| HellaSwag (10-shot) | 84.3 | 84.3 |
| MMLU (5-shot) | 68.4 | 68.8 |
| TruthfulQA (0-shot) | 54.9 | 55.0 |
| Winogrande (5-shot) | 82.2 | 82.6 |
|
John6666/duchaiten-pony-classic-anime-v30-sdxl
|
John6666
| 2024-09-09T01:52:21Z | 6,226 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"stable-diffusion-xl",
"anime",
"game",
"cartoon",
"furry",
"classic anime styles",
"80s-90s",
"pony",
"en",
"dataset:DucHaiten/Classic-Anime",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] |
text-to-image
| 2024-09-09T01:47:48Z |
---
license: other
license_name: faipl-1.0-sd
license_link: https://freedevproject.org/faipl-1.0-sd/
language:
- en
library_name: diffusers
pipeline_tag: text-to-image
tags:
- text-to-image
- stable-diffusion
- stable-diffusion-xl
- anime
- game
- cartoon
- furry
- classic anime styles
- 80s-90s
- pony
datasets: DucHaiten/Classic-Anime
---
The original model is [here](https://civitai.com/models/655978/duchaiten-ponyclassicanime?modelVersionId=808210).
It was created by [DucHaiten](https://civitai.com/user/DucHaiten) ([Hugging Face profile](https://huggingface.co/DucHaiten)).
|
mradermacher/Orca-2-13b-i1-GGUF
|
mradermacher
| 2024-09-09T01:45:25Z | 18 | 0 |
transformers
|
[
"transformers",
"gguf",
"orca",
"orca2",
"microsoft",
"en",
"base_model:microsoft/Orca-2-13b",
"base_model:quantized:microsoft/Orca-2-13b",
"license:other",
"endpoints_compatible",
"region:us",
"imatrix"
] | null | 2024-09-08T21:51:09Z |
---
base_model: microsoft/Orca-2-13b
language:
- en
library_name: transformers
license: other
license_link: LICENSE
license_name: microsoft-research-license
quantized_by: mradermacher
tags:
- orca
- orca2
- microsoft
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/microsoft/Orca-2-13b
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Orca-2-13b-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Orca-2-13b-i1-GGUF/resolve/main/Orca-2-13b.i1-IQ1_S.gguf) | i1-IQ1_S | 3.0 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Orca-2-13b-i1-GGUF/resolve/main/Orca-2-13b.i1-IQ1_M.gguf) | i1-IQ1_M | 3.2 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Orca-2-13b-i1-GGUF/resolve/main/Orca-2-13b.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/Orca-2-13b-i1-GGUF/resolve/main/Orca-2-13b.i1-IQ2_XS.gguf) | i1-IQ2_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/Orca-2-13b-i1-GGUF/resolve/main/Orca-2-13b.i1-IQ2_S.gguf) | i1-IQ2_S | 4.3 | |
| [GGUF](https://huggingface.co/mradermacher/Orca-2-13b-i1-GGUF/resolve/main/Orca-2-13b.i1-IQ2_M.gguf) | i1-IQ2_M | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/Orca-2-13b-i1-GGUF/resolve/main/Orca-2-13b.i1-Q2_K.gguf) | i1-Q2_K | 5.0 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Orca-2-13b-i1-GGUF/resolve/main/Orca-2-13b.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 5.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Orca-2-13b-i1-GGUF/resolve/main/Orca-2-13b.i1-IQ3_XS.gguf) | i1-IQ3_XS | 5.5 | |
| [GGUF](https://huggingface.co/mradermacher/Orca-2-13b-i1-GGUF/resolve/main/Orca-2-13b.i1-IQ3_S.gguf) | i1-IQ3_S | 5.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Orca-2-13b-i1-GGUF/resolve/main/Orca-2-13b.i1-Q3_K_S.gguf) | i1-Q3_K_S | 5.8 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Orca-2-13b-i1-GGUF/resolve/main/Orca-2-13b.i1-IQ3_M.gguf) | i1-IQ3_M | 6.1 | |
| [GGUF](https://huggingface.co/mradermacher/Orca-2-13b-i1-GGUF/resolve/main/Orca-2-13b.i1-Q3_K_M.gguf) | i1-Q3_K_M | 6.4 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Orca-2-13b-i1-GGUF/resolve/main/Orca-2-13b.i1-Q3_K_L.gguf) | i1-Q3_K_L | 7.0 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Orca-2-13b-i1-GGUF/resolve/main/Orca-2-13b.i1-IQ4_XS.gguf) | i1-IQ4_XS | 7.1 | |
| [GGUF](https://huggingface.co/mradermacher/Orca-2-13b-i1-GGUF/resolve/main/Orca-2-13b.i1-Q4_0.gguf) | i1-Q4_0 | 7.5 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Orca-2-13b-i1-GGUF/resolve/main/Orca-2-13b.i1-Q4_K_S.gguf) | i1-Q4_K_S | 7.5 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Orca-2-13b-i1-GGUF/resolve/main/Orca-2-13b.i1-Q4_K_M.gguf) | i1-Q4_K_M | 8.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Orca-2-13b-i1-GGUF/resolve/main/Orca-2-13b.i1-Q5_K_S.gguf) | i1-Q5_K_S | 9.1 | |
| [GGUF](https://huggingface.co/mradermacher/Orca-2-13b-i1-GGUF/resolve/main/Orca-2-13b.i1-Q5_K_M.gguf) | i1-Q5_K_M | 9.3 | |
| [GGUF](https://huggingface.co/mradermacher/Orca-2-13b-i1-GGUF/resolve/main/Orca-2-13b.i1-Q6_K.gguf) | i1-Q6_K | 10.8 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
mradermacher/Orca-2-13b-GGUF
|
mradermacher
| 2024-09-09T01:45:25Z | 95 | 0 |
transformers
|
[
"transformers",
"gguf",
"orca",
"orca2",
"microsoft",
"en",
"base_model:microsoft/Orca-2-13b",
"base_model:quantized:microsoft/Orca-2-13b",
"license:other",
"endpoints_compatible",
"region:us"
] | null | 2024-09-06T18:12:16Z |
---
base_model: microsoft/Orca-2-13b
language:
- en
library_name: transformers
license: other
license_link: LICENSE
license_name: microsoft-research-license
quantized_by: mradermacher
tags:
- orca
- orca2
- microsoft
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/microsoft/Orca-2-13b
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Orca-2-13b-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Orca-2-13b-GGUF/resolve/main/Orca-2-13b.Q2_K.gguf) | Q2_K | 5.0 | |
| [GGUF](https://huggingface.co/mradermacher/Orca-2-13b-GGUF/resolve/main/Orca-2-13b.IQ3_XS.gguf) | IQ3_XS | 5.5 | |
| [GGUF](https://huggingface.co/mradermacher/Orca-2-13b-GGUF/resolve/main/Orca-2-13b.IQ3_S.gguf) | IQ3_S | 5.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Orca-2-13b-GGUF/resolve/main/Orca-2-13b.Q3_K_S.gguf) | Q3_K_S | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/Orca-2-13b-GGUF/resolve/main/Orca-2-13b.IQ3_M.gguf) | IQ3_M | 6.1 | |
| [GGUF](https://huggingface.co/mradermacher/Orca-2-13b-GGUF/resolve/main/Orca-2-13b.Q3_K_M.gguf) | Q3_K_M | 6.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Orca-2-13b-GGUF/resolve/main/Orca-2-13b.Q3_K_L.gguf) | Q3_K_L | 7.0 | |
| [GGUF](https://huggingface.co/mradermacher/Orca-2-13b-GGUF/resolve/main/Orca-2-13b.IQ4_XS.gguf) | IQ4_XS | 7.1 | |
| [GGUF](https://huggingface.co/mradermacher/Orca-2-13b-GGUF/resolve/main/Orca-2-13b.Q4_K_S.gguf) | Q4_K_S | 7.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Orca-2-13b-GGUF/resolve/main/Orca-2-13b.Q4_K_M.gguf) | Q4_K_M | 8.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Orca-2-13b-GGUF/resolve/main/Orca-2-13b.Q5_K_S.gguf) | Q5_K_S | 9.1 | |
| [GGUF](https://huggingface.co/mradermacher/Orca-2-13b-GGUF/resolve/main/Orca-2-13b.Q5_K_M.gguf) | Q5_K_M | 9.3 | |
| [GGUF](https://huggingface.co/mradermacher/Orca-2-13b-GGUF/resolve/main/Orca-2-13b.Q6_K.gguf) | Q6_K | 10.8 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Orca-2-13b-GGUF/resolve/main/Orca-2-13b.Q8_0.gguf) | Q8_0 | 13.9 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
eyeluoRP/gemma-2-math-Finetune
|
eyeluoRP
| 2024-09-09T01:39:35Z | 88 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gemma2",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-09-09T01:33:16Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Solshine/reflection-llama-3.1-8B-Solshine-Full
|
Solshine
| 2024-09-09T01:33:20Z | 14 | 2 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"base_model:Solshine/reflection-llama-3.1-8B-Solshine-trainround4-16bit",
"base_model:finetune:Solshine/reflection-llama-3.1-8B-Solshine-trainround4-16bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-09-09T01:23:43Z |
---
base_model: Solshine/reflection-llama-3.1-8B-Solshine-trainround4-16bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
---
# Uploaded model
- **Developed by:** Solshine
- **License:** apache-2.0
- **Finetuned from model :** Solshine/reflection-llama-3.1-8B-Solshine-trainround4-16bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
hf-100/Llama-3.1-Spellbound-StoryWriter-70b-base-0.2-16bit
|
hf-100
| 2024-09-09T01:30:55Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"en",
"base_model:unsloth/Meta-Llama-3.1-70B-bnb-4bit",
"base_model:quantized:unsloth/Meta-Llama-3.1-70B-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] |
text-generation
| 2024-09-09T01:25:52Z |
---
base_model: unsloth/Meta-Llama-3.1-70B-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
---
# Uploaded model
- **Developed by:** hf-100
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Meta-Llama-3.1-70B-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
HabibiBear/BERTSA
|
HabibiBear
| 2024-09-09T01:26:20Z | 162 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-09-09T01:20:01Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
jvelja/vllm-gemma2b_1
|
jvelja
| 2024-09-09T01:20:53Z | 59 | 0 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"trl",
"ppo",
"reinforcement-learning",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
reinforcement-learning
| 2024-09-09T01:20:50Z |
---
license: apache-2.0
library_name: transformers
tags:
- trl
- ppo
- transformers
- reinforcement-learning
---
# TRL Model
This is a [TRL language model](https://github.com/huggingface/trl) that has been fine-tuned with reinforcement learning to
guide the model outputs according to a value, function, or human feedback. The model can be used for text generation.
## Usage
To use this model for inference, first install the TRL library:
```bash
python -m pip install trl
```
You can then generate text as follows:
```python
from transformers import pipeline
generator = pipeline("text-generation", model="jvelja/vllm-gemma2b_1")  # the auto-generated card leaked a local temp path; this is the Hub repo id
outputs = generator("Hello, my llama is cute")
```
If you want to use the model for training or to obtain the outputs from the value head, load the model as follows:
```python
from transformers import AutoTokenizer
from trl import AutoModelForCausalLMWithValueHead
tokenizer = AutoTokenizer.from_pretrained("jvelja//tmp/tmpo1wewhor/jvelja/vllm-gemma2b_1")
model = AutoModelForCausalLMWithValueHead.from_pretrained("jvelja//tmp/tmpo1wewhor/jvelja/vllm-gemma2b_1")
inputs = tokenizer("Hello, my llama is cute", return_tensors="pt")
outputs = model(**inputs, labels=inputs["input_ids"])
```
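Continuing from the snippet above, a rough sketch of a single PPO update with TRL's classic `PPOTrainer` (the reward is a dummy scalar, and the exact API varies between TRL versions):
```python
import torch
from trl import PPOConfig, PPOTrainer

ppo_trainer = PPOTrainer(config=PPOConfig(batch_size=1, mini_batch_size=1),
                         model=model, tokenizer=tokenizer)

query = tokenizer("Hello, my llama is", return_tensors="pt").input_ids[0]
response = ppo_trainer.generate(query, return_prompt=False, max_new_tokens=16)[0]
rewards = [torch.tensor(1.0)]  # dummy scalar reward for the sampled response
stats = ppo_trainer.step([query], [response], rewards)  # one PPO optimization step
```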
|
mradermacher/ministral-1.3B-instruct-GGUF
|
mradermacher
| 2024-09-09T01:13:14Z | 15 | 0 |
transformers
|
[
"transformers",
"gguf",
"text-generation-inference",
"unsloth",
"mistral",
"trl",
"sft",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-09-09T01:07:48Z |
---
base_model: dustinwloring1988/ministral-1.3B-instruct
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
- sft
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/dustinwloring1988/ministral-1.3B-instruct
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/ministral-1.3B-instruct-GGUF/resolve/main/ministral-1.3B-instruct.Q2_K.gguf) | Q2_K | 0.4 | |
| [GGUF](https://huggingface.co/mradermacher/ministral-1.3B-instruct-GGUF/resolve/main/ministral-1.3B-instruct.IQ3_XS.gguf) | IQ3_XS | 0.4 | |
| [GGUF](https://huggingface.co/mradermacher/ministral-1.3B-instruct-GGUF/resolve/main/ministral-1.3B-instruct.IQ3_S.gguf) | IQ3_S | 0.4 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/ministral-1.3B-instruct-GGUF/resolve/main/ministral-1.3B-instruct.Q3_K_S.gguf) | Q3_K_S | 0.4 | |
| [GGUF](https://huggingface.co/mradermacher/ministral-1.3B-instruct-GGUF/resolve/main/ministral-1.3B-instruct.IQ3_M.gguf) | IQ3_M | 0.4 | |
| [GGUF](https://huggingface.co/mradermacher/ministral-1.3B-instruct-GGUF/resolve/main/ministral-1.3B-instruct.Q3_K_M.gguf) | Q3_K_M | 0.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/ministral-1.3B-instruct-GGUF/resolve/main/ministral-1.3B-instruct.Q3_K_L.gguf) | Q3_K_L | 0.4 | |
| [GGUF](https://huggingface.co/mradermacher/ministral-1.3B-instruct-GGUF/resolve/main/ministral-1.3B-instruct.IQ4_XS.gguf) | IQ4_XS | 0.5 | |
| [GGUF](https://huggingface.co/mradermacher/ministral-1.3B-instruct-GGUF/resolve/main/ministral-1.3B-instruct.Q4_K_S.gguf) | Q4_K_S | 0.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/ministral-1.3B-instruct-GGUF/resolve/main/ministral-1.3B-instruct.Q4_K_M.gguf) | Q4_K_M | 0.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/ministral-1.3B-instruct-GGUF/resolve/main/ministral-1.3B-instruct.Q5_K_S.gguf) | Q5_K_S | 0.5 | |
| [GGUF](https://huggingface.co/mradermacher/ministral-1.3B-instruct-GGUF/resolve/main/ministral-1.3B-instruct.Q5_K_M.gguf) | Q5_K_M | 0.5 | |
| [GGUF](https://huggingface.co/mradermacher/ministral-1.3B-instruct-GGUF/resolve/main/ministral-1.3B-instruct.Q6_K.gguf) | Q6_K | 0.6 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/ministral-1.3B-instruct-GGUF/resolve/main/ministral-1.3B-instruct.Q8_0.gguf) | Q8_0 | 0.7 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/ministral-1.3B-instruct-GGUF/resolve/main/ministral-1.3B-instruct.f16.gguf) | f16 | 1.3 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
jvelja/gemma2b-NodropSus_2
|
jvelja
| 2024-09-09T01:11:27Z | 59 | 0 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"trl",
"ppo",
"reinforcement-learning",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
reinforcement-learning
| 2024-09-09T01:11:24Z |
---
license: apache-2.0
library_name: transformers
tags:
- trl
- ppo
- transformers
- reinforcement-learning
---
# TRL Model
This is a [TRL language model](https://github.com/huggingface/trl) that has been fine-tuned with reinforcement learning to
guide the model outputs according to a value, function, or human feedback. The model can be used for text generation.
## Usage
To use this model for inference, first install the TRL library:
```bash
python -m pip install trl
```
You can then generate text as follows:
```python
from transformers import pipeline
generator = pipeline("text-generation", model="jvelja/gemma2b-NodropSus_2")  # the auto-generated card leaked a local temp path; this is the Hub repo id
outputs = generator("Hello, my llama is cute")
```
If you want to use the model for training or to obtain the outputs from the value head, load the model as follows:
```python
from transformers import AutoTokenizer
from trl import AutoModelForCausalLMWithValueHead
tokenizer = AutoTokenizer.from_pretrained("jvelja//tmp/tmpwsj7y4yh/jvelja/gemma2b-NodropSus_2")
model = AutoModelForCausalLMWithValueHead.from_pretrained("jvelja//tmp/tmpwsj7y4yh/jvelja/gemma2b-NodropSus_2")
inputs = tokenizer("Hello, my llama is cute", return_tensors="pt")
outputs = model(**inputs, labels=inputs["input_ids"])
```
|
Bo1015/proteinglm-100b-int4
|
Bo1015
| 2024-09-09T01:10:55Z | 32 | 5 |
transformers
|
[
"transformers",
"safetensors",
"ProteinGLM",
"biology",
"medical",
"custom_code",
"arxiv:2401.06199",
"endpoints_compatible",
"8-bit",
"region:us"
] | null | 2024-07-06T10:56:45Z |
---
tags:
- biology
- medical
---
# ProteinGLM
## Introduction to ProteinGLM Family Models
ProteinGLM is the open-source version of the latest protein language models for protein understanding tasks (Masked Protein Language Models) and protein design (Causal Protein Language Models). The ProteinGLM family models are developed by Tsinghua University. Along with this, we have released the int4-quantized ProteinGLM-100B weights and other smaller models, which include: 1B, 3B, and 10B models trained with masked language modeling for protein understanding, and 1B, 3B, and 7B causal language models aimed at protein design.
### Out-of-Distribution Perplexity Evaluation
We evaluated the ProteinGLM (MLM or CLM) and ProteinGLM-INT4 (100B) models on two OOD test sets, one with sequence identity lower than 0.9 with the training set (<0.9 ID) and the other with sequence identity lower than 0.5 with the training set (<0.5 ID). Each OOD dataset comprises approximately 10,000 protein sequences. The MLM perplexity results, compared against ESM2-3B and ESM2-15B, and the CLM perplexity, compared against ProGen2-xlarge (6.4B), are as follows (lower is better):
| Model | ESM2(3B)| ESM2 (15B) | PGLM (1B) | PGLM (3B) | PGLM (10B) | PGLM-INT4 (100B) |
|:--------------------|:----------:|:----------:|:----------:|:----------:|:--------------------:|:--------------------:|
| < 0.9 ID | 7.7 | 7.3 | 9.3 | 7.8 | 7.6 | **6.8** |
| < 0.5 ID | 11.5 | 11.0 | 13.5 | 11.9 | 11.6 | **10.8** |
| Model | ProGen2-xlarge (6.4B) | PGLM (1B) | PGLM (3B) | PGLM (7B) | PGLM-INT4 (100B) |
|:--------------------|:----------:|:----------:|:----------:|:--------------------:|:--------------------:|
| < 0.9 ID | 9.7 | 9.8 | 9.3 | 8.9 | **8.9** |
| < 0.5 ID | 14.3 | 14.0 | 13.7 | 13.5 | **13.5** |
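Perplexity here is the exponential of the mean per-token negative log-likelihood. A minimal sketch for scoring one sequence under a causal LM (illustrative only, not the exact evaluation pipeline):
```python
import torch

@torch.inference_mode()
def clm_perplexity(model, tokenizer, seq: str) -> float:
    ids = tokenizer(seq, return_tensors="pt").input_ids.to(model.device)
    loss = model(input_ids=ids, labels=ids).loss  # mean cross-entropy over shifted tokens
    return torch.exp(loss).item()
```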
## Downstream Protein Understanding Tasks Evaluation
(TODO)
## Get Started
### Model List
You can manually download the necessary weights:
| Model |Download |
|------------------|-----------------------------------------------------------------------------------------------------------|
| ProteinGLM-1B-MLM | [🤗 Huggingface](https://huggingface.co/Bo1015/proteinglm-1b-mlm) |
| ProteinGLM-3B-MLM | [🤗 Huggingface](https://huggingface.co/Bo1015/proteinglm-3b-mlm) |
| ProteinGLM-10B-MLM | [🤗 Huggingface](https://huggingface.co/Bo1015/proteinglm-10b-mlm) |
| ProteinGLM-1B-CLM | [🤗 Huggingface](https://huggingface.co/Bo1015/proteinglm-1b-clm) |
| ProteinGLM-3B-CLM | [🤗 Huggingface](https://huggingface.co/Bo1015/proteinglm-3b-clm) |
| ProteinGLM-7B-CLM | [🤗 Huggingface](https://huggingface.co/Bo1015/proteinglm-7b-clm) |
| ProteinGLM-INT4 (100B) (MLM or CLM) | [🤗 Huggingface](https://huggingface.co/Bo1015/proteinglm-100b-int4) |
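For example, with `huggingface_hub` (using one of the smaller checkpoints):
```python
from huggingface_hub import snapshot_download

local_dir = snapshot_download(repo_id="Bo1015/proteinglm-1b-mlm")
print(local_dir)  # local path containing the downloaded weights
```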
## How to use
### ProteinGLM-MLM: Masked Language Models for Protein Understanding Tasks
(The ProteinGLM-100B INT4 quantization requires approximately 50 GB of GPU memory. Inference can run on a single A100/A800 GPU with 80 GB of memory, or across multiple GPUs totaling at least 60 GB.)
```python
# Obtain residue embeddings
from transformers import AutoModelForMaskedLM, AutoModelForSequenceClassification, AutoModelForTokenClassification, AutoTokenizer, AutoConfig
import torch
tokenizer = AutoTokenizer.from_pretrained("Bo1015/proteinglm-100b-int4", trust_remote_code=True, use_fast=True)
config = AutoConfig.from_pretrained("Bo1015/proteinglm-100b-int4", trust_remote_code=True, torch_dtype=torch.half)
config.is_causal=False
config.post_layer_norm=True # whether to use the final layer norm; for some tasks, setting this to False works better
model = AutoModelForMaskedLM.from_pretrained("Bo1015/proteinglm-100b-int4", config = config, torch_dtype=torch.half,trust_remote_code=True)
if torch.cuda.is_available():
model = model.cuda()
# # if you don't have a single GPU with 80 GB of memory, try the dispatch load below.
# from accelerate import load_checkpoint_and_dispatch, init_empty_weights
# with init_empty_weights():
# model = AutoModelForMaskedLM.from_config(config, trust_remote_code=True)
#
# model = load_checkpoint_and_dispatch(
# model, "<your model cached dir>", device_map="auto", no_split_module_classes=["xTrimoPGLMBlock"], strict=True, dtype=dtype
# )
model.eval()
seq = 'MILMCQHFSGQFSKYFLAVSSDFCHFVFPIILVSHVNFKQMKRKGFALWNDRAVPFTQGIFTTVMILLQYLHGTG'
output = tokenizer(seq, add_special_tokens=True, return_tensors='pt')
with torch.inference_mode():
inputs = {"input_ids": output["input_ids"].cuda(), "attention_mask": output["attention_mask"].cuda()}
output_embeddings = model(**inputs, output_hidden_states=True, return_last_hidden_state=True).hidden_states[:-1, 0] # get rid of the <eos> token
# model for sequence-level tasks (from_pretrained needs the repo id; passing only the config was a bug)
model = AutoModelForSequenceClassification.from_pretrained("Bo1015/proteinglm-100b-int4", config=config, torch_dtype=torch.half, trust_remote_code=True)
# model for token-level tasks
model = AutoModelForTokenClassification.from_pretrained("Bo1015/proteinglm-100b-int4", config=config, torch_dtype=torch.half, trust_remote_code=True)
```
### ProteinGLM-CLM: Causal Language Models for Protein Design
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, AutoConfig
import torch
tokenizer = AutoTokenizer.from_pretrained("Bo1015/proteinglm-100b-int4", trust_remote_code=True, use_fast=True)
config = AutoConfig.from_pretrained("Bo1015/proteinglm-100b-int4", trust_remote_code=True, torch_dtype=torch.half)
config.is_causal=True
model = AutoModelForCausalLM.from_pretrained("Bo1015/proteinglm-100b-int4", config = config, torch_dtype=torch.half,trust_remote_code=True)
if torch.cuda.is_available():
model = model.cuda()
# # if you don't have a single GPU with 80 GB of memory, try the dispatch load below.
# from accelerate import load_checkpoint_and_dispatch, init_empty_weights
# with init_empty_weights():
#     model = AutoModelForCausalLM.from_config(config, trust_remote_code=True)
#
# model = load_checkpoint_and_dispatch(
# model, "<your model cached dir>", device_map="auto", no_split_module_classes=["xTrimoPGLMBlock"], strict=True, dtype=dtype
# )
model.eval()
gen_kwargs = {'max_length': 256, 'top_p': 0.8, 'temperature':0.9, "num_beams": 1}
prompt=['', 'MLFVVL', 'LDL', 'VTQA']
for idx, each in enumerate(prompt):
print(f"Begin generating idx: {idx} with prompt {each}")
output = model.chat(tokenizer, each, **gen_kwargs)
print(f"\nEnd generation with length: {len(output.split())} - seqs: {output}\n")
```
## LICENSE
The model in this repository is open source under the [Creative Commons Attribution-NonCommercial 4.0 International License](./LICENSE).
## Citations
If you find our work useful, please consider citing the following paper:
```
@misc{chen2024xtrimopglm,
title={xTrimoPGLM: unified 100B-scale pre-trained transformer for deciphering the language of protein},
author={Chen, Bo and Cheng, Xingyi and Li, Pan and Geng, Yangli-ao and Gong, Jing and Li, Shen and Bei, Zhilei and Tan, Xu and Wang, Boyan and Zeng, Xin and others},
year={2024},
eprint={2401.06199},
archivePrefix={arXiv},
primaryClass={cs.CL},
note={arXiv preprint arXiv:2401.06199}
}
@misc{cheng2024training,
title={Training Compute-Optimal Protein Language Models},
author={Cheng, Xingyi and Chen, Bo and Li, Pan and Gong, Jing and Tang, Jie and Song, Le},
year={2024},
note={bioRxiv, Cold Spring Harbor Laboratory, pages 2024--06}
}
```
|
Yntec/TwoAndAHalfDimensions
|
Yntec
| 2024-09-09T01:03:10Z | 720 | 2 |
diffusers
|
[
"diffusers",
"safetensors",
"2.5D",
"3D",
"Cartoon",
"Anime",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"en",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2024-09-08T04:20:17Z |
---
license: creativeml-openrail-m
library_name: diffusers
pipeline_tag: text-to-image
tags:
- 2.5D
- 3D
- Cartoon
- Anime
- stable-diffusion
- stable-diffusion-diffusers
- diffusers
- text-to-image
language:
- en
inference: true
---
# Two And A Half Dimensions
Between the Second Dimension and the Third Dimension there's the so-called 2.5D; this model specializes in it, though it can do other dimensions as well.
Samples and prompts:

pop art style, wonder woman,1girl,solo,long hair,black hair,bare shoulders, beautiful detailed teal eyes,shield,superhero , detailed, 8k, trending, amazing art, colorful

1girl, dress, puffy sleeves, bonnet, big skirt, skirt whit polka dot print, short hair, looking at viewer, dynamic angle, cowboy shot, cute, cartoon, epic pose, smile, blush, wide shot, cute eyes, magic vibes, bangs, floating hair, outdoors, detailed background, cinematic, sky, clouds, Bobeep, Toy Story, Disney Studios, DisneyStyle

highres,masterpiece,perfect ligthing,bloom,cinematic lighting,adult,perfect skin,female,cowboy shot,smile,Princess Peach Movie, choker,detailled eyes,blue eyes, glossy skin,oily skin,beautiful light, day,bright,looking at viewer, 3D Pixar,pink dress,outside,nature,castle,tree,

masterpiece, best quality, Soccer Player, Pretty girl face, Soccer ball, Athlete, chibi, Sports, Stadium, Cute eyes, Competition, Team, Goal, Action, Kick, Running, Athlete, Athleticism, Training, Exercise, Fitness, Professional, Sportswoman, Ball, Uniform, Boots, Game, Victory, Skills, Speed, Agility, Passion, Dedication, Octane Render, 3D Cartoon, Teamwork, Goalkeeper, Striker, Midfielder, Defender, Referee, Spectators, Cheering, Excitement, Adrenaline, Celebration, Sportsmanship, Tackle, Dribbling, Passing, Goal-scoring, Championship, Training, Healthy Lifestyle.
|
ntviet/whisper-small-co4.2
|
ntviet
| 2024-09-09T00:54:31Z | 95 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"co",
"dataset:ntviet/Co-audio-dataset4.2",
"base_model:openai/whisper-small",
"base_model:finetune:openai/whisper-small",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2024-09-08T23:57:35Z |
---
library_name: transformers
language:
- co
license: apache-2.0
base_model: openai/whisper-small
tags:
- generated_from_trainer
datasets:
- ntviet/Co-audio-dataset4.2
model-index:
- name: Whisper Small Co 4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Small Co 4
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Co audio dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0259
- Cer Ortho: 0.0097
- Cer: 1.6949
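A minimal transcription sketch with the 🤗 Transformers pipeline (the model id comes from this card; the audio path is illustrative):
```python
from transformers import pipeline

# Load the fine-tuned checkpoint for automatic speech recognition.
asr = pipeline("automatic-speech-recognition", model="ntviet/whisper-small-co4.2")

# Transcribe a local audio file (path is illustrative).
result = asr("sample.wav")
print(result["text"])
```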
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant_with_warmup
- lr_scheduler_warmup_steps: 50
- training_steps: 600
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Cer Ortho | Cer |
|:-------------:|:------:|:----:|:---------------:|:---------:|:------:|
| 0.0379 | 7.3171 | 600 | 0.0259 | 0.0097 | 1.6949 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.0+cu121
- Datasets 2.21.0
- Tokenizers 0.19.1
|
mradermacher/mpt-7b-8k-chat-GGUF
|
mradermacher
| 2024-09-09T00:52:18Z | 84 | 0 |
transformers
|
[
"transformers",
"gguf",
"Composer",
"MosaicML",
"llm-foundry",
"en",
"dataset:camel-ai/code",
"dataset:ehartford/wizard_vicuna_70k_unfiltered",
"dataset:anon8231489123/ShareGPT_Vicuna_unfiltered",
"dataset:timdettmers/openassistant-guanaco",
"dataset:camel-ai/math",
"dataset:camel-ai/biology",
"dataset:camel-ai/chemistry",
"dataset:camel-ai/ai_society",
"dataset:jondurbin/airoboros-gpt4-1.2",
"dataset:LongConversations",
"dataset:camel-ai/physics",
"base_model:mosaicml/mpt-7b-8k-chat",
"base_model:quantized:mosaicml/mpt-7b-8k-chat",
"license:cc-by-nc-sa-4.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-09-08T04:48:02Z |
---
base_model: mosaicml/mpt-7b-8k-chat
datasets:
- camel-ai/code
- ehartford/wizard_vicuna_70k_unfiltered
- anon8231489123/ShareGPT_Vicuna_unfiltered
- timdettmers/openassistant-guanaco
- camel-ai/math
- camel-ai/biology
- camel-ai/chemistry
- camel-ai/ai_society
- jondurbin/airoboros-gpt4-1.2
- LongConversations
- camel-ai/physics
language:
- en
library_name: transformers
license: cc-by-nc-sa-4.0
quantized_by: mradermacher
tags:
- Composer
- MosaicML
- llm-foundry
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/mosaicml/mpt-7b-8k-chat
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/mpt-7b-8k-chat-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
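As a concrete starting point, a minimal local-inference sketch using the `llama-cpp-python` bindings, assuming you have already downloaded one of the quants from the table below (filename, context size, and prompt are illustrative):
```python
from llama_cpp import Llama

# Load a local quant file from this repo (Q4_K_M is one of the "recommended" sizes below).
llm = Llama(model_path="mpt-7b-8k-chat.Q4_K_M.gguf", n_ctx=8192)

out = llm("What is MosaicML?", max_tokens=128)
print(out["choices"][0]["text"])
```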
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/mpt-7b-8k-chat-GGUF/resolve/main/mpt-7b-8k-chat.Q2_K.gguf) | Q2_K | 2.7 | |
| [GGUF](https://huggingface.co/mradermacher/mpt-7b-8k-chat-GGUF/resolve/main/mpt-7b-8k-chat.IQ3_XS.gguf) | IQ3_XS | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/mpt-7b-8k-chat-GGUF/resolve/main/mpt-7b-8k-chat.IQ3_S.gguf) | IQ3_S | 3.0 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/mpt-7b-8k-chat-GGUF/resolve/main/mpt-7b-8k-chat.Q3_K_S.gguf) | Q3_K_S | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/mpt-7b-8k-chat-GGUF/resolve/main/mpt-7b-8k-chat.IQ3_M.gguf) | IQ3_M | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/mpt-7b-8k-chat-GGUF/resolve/main/mpt-7b-8k-chat.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/mpt-7b-8k-chat-GGUF/resolve/main/mpt-7b-8k-chat.IQ4_XS.gguf) | IQ4_XS | 3.7 | |
| [GGUF](https://huggingface.co/mradermacher/mpt-7b-8k-chat-GGUF/resolve/main/mpt-7b-8k-chat.Q4_K_S.gguf) | Q4_K_S | 3.9 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/mpt-7b-8k-chat-GGUF/resolve/main/mpt-7b-8k-chat.Q3_K_L.gguf) | Q3_K_L | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/mpt-7b-8k-chat-GGUF/resolve/main/mpt-7b-8k-chat.Q4_K_M.gguf) | Q4_K_M | 4.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/mpt-7b-8k-chat-GGUF/resolve/main/mpt-7b-8k-chat.Q5_K_S.gguf) | Q5_K_S | 4.7 | |
| [GGUF](https://huggingface.co/mradermacher/mpt-7b-8k-chat-GGUF/resolve/main/mpt-7b-8k-chat.Q5_K_M.gguf) | Q5_K_M | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/mpt-7b-8k-chat-GGUF/resolve/main/mpt-7b-8k-chat.Q6_K.gguf) | Q6_K | 5.6 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/mpt-7b-8k-chat-GGUF/resolve/main/mpt-7b-8k-chat.Q8_0.gguf) | Q8_0 | 7.2 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/mpt-7b-8k-chat-GGUF/resolve/main/mpt-7b-8k-chat.f16.gguf) | f16 | 13.4 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/mpt-7b-8k-chat-i1-GGUF
|
mradermacher
| 2024-09-09T00:52:18Z | 184 | 0 |
transformers
|
[
"transformers",
"gguf",
"Composer",
"MosaicML",
"llm-foundry",
"en",
"dataset:camel-ai/code",
"dataset:ehartford/wizard_vicuna_70k_unfiltered",
"dataset:anon8231489123/ShareGPT_Vicuna_unfiltered",
"dataset:timdettmers/openassistant-guanaco",
"dataset:camel-ai/math",
"dataset:camel-ai/biology",
"dataset:camel-ai/chemistry",
"dataset:camel-ai/ai_society",
"dataset:jondurbin/airoboros-gpt4-1.2",
"dataset:LongConversations",
"dataset:camel-ai/physics",
"base_model:mosaicml/mpt-7b-8k-chat",
"base_model:quantized:mosaicml/mpt-7b-8k-chat",
"license:cc-by-nc-sa-4.0",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2024-09-08T22:15:33Z |
---
base_model: mosaicml/mpt-7b-8k-chat
datasets:
- camel-ai/code
- ehartford/wizard_vicuna_70k_unfiltered
- anon8231489123/ShareGPT_Vicuna_unfiltered
- timdettmers/openassistant-guanaco
- camel-ai/math
- camel-ai/biology
- camel-ai/chemistry
- camel-ai/ai_society
- jondurbin/airoboros-gpt4-1.2
- LongConversations
- camel-ai/physics
language:
- en
library_name: transformers
license: cc-by-nc-sa-4.0
quantized_by: mradermacher
tags:
- Composer
- MosaicML
- llm-foundry
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/mosaicml/mpt-7b-8k-chat
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/mpt-7b-8k-chat-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
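To fetch a single quant from this repo without cloning everything, a sketch with `huggingface_hub` (the filename matches the i1-Q4_K_M row in the table below):
```python
from huggingface_hub import hf_hub_download

# Download one imatrix quant; the returned path points into the local HF cache.
path = hf_hub_download(
    repo_id="mradermacher/mpt-7b-8k-chat-i1-GGUF",
    filename="mpt-7b-8k-chat.i1-Q4_K_M.gguf",
)
print(path)
```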
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/mpt-7b-8k-chat-i1-GGUF/resolve/main/mpt-7b-8k-chat.i1-IQ1_S.gguf) | i1-IQ1_S | 1.6 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/mpt-7b-8k-chat-i1-GGUF/resolve/main/mpt-7b-8k-chat.i1-IQ1_M.gguf) | i1-IQ1_M | 1.7 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/mpt-7b-8k-chat-i1-GGUF/resolve/main/mpt-7b-8k-chat.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 1.9 | |
| [GGUF](https://huggingface.co/mradermacher/mpt-7b-8k-chat-i1-GGUF/resolve/main/mpt-7b-8k-chat.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.1 | |
| [GGUF](https://huggingface.co/mradermacher/mpt-7b-8k-chat-i1-GGUF/resolve/main/mpt-7b-8k-chat.i1-IQ2_S.gguf) | i1-IQ2_S | 2.2 | |
| [GGUF](https://huggingface.co/mradermacher/mpt-7b-8k-chat-i1-GGUF/resolve/main/mpt-7b-8k-chat.i1-IQ2_M.gguf) | i1-IQ2_M | 2.4 | |
| [GGUF](https://huggingface.co/mradermacher/mpt-7b-8k-chat-i1-GGUF/resolve/main/mpt-7b-8k-chat.i1-Q2_K.gguf) | i1-Q2_K | 2.7 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/mpt-7b-8k-chat-i1-GGUF/resolve/main/mpt-7b-8k-chat.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 2.7 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/mpt-7b-8k-chat-i1-GGUF/resolve/main/mpt-7b-8k-chat.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/mpt-7b-8k-chat-i1-GGUF/resolve/main/mpt-7b-8k-chat.i1-IQ3_S.gguf) | i1-IQ3_S | 3.0 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/mpt-7b-8k-chat-i1-GGUF/resolve/main/mpt-7b-8k-chat.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.0 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/mpt-7b-8k-chat-i1-GGUF/resolve/main/mpt-7b-8k-chat.i1-IQ3_M.gguf) | i1-IQ3_M | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/mpt-7b-8k-chat-i1-GGUF/resolve/main/mpt-7b-8k-chat.i1-Q3_K_M.gguf) | i1-Q3_K_M | 3.6 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/mpt-7b-8k-chat-i1-GGUF/resolve/main/mpt-7b-8k-chat.i1-IQ4_XS.gguf) | i1-IQ4_XS | 3.7 | |
| [GGUF](https://huggingface.co/mradermacher/mpt-7b-8k-chat-i1-GGUF/resolve/main/mpt-7b-8k-chat.i1-Q4_0_4_4.gguf) | i1-Q4_0_4_4 | 3.9 | fast on arm, low quality |
| [GGUF](https://huggingface.co/mradermacher/mpt-7b-8k-chat-i1-GGUF/resolve/main/mpt-7b-8k-chat.i1-Q4_0_4_8.gguf) | i1-Q4_0_4_8 | 3.9 | fast on arm+i8mm, low quality |
| [GGUF](https://huggingface.co/mradermacher/mpt-7b-8k-chat-i1-GGUF/resolve/main/mpt-7b-8k-chat.i1-Q4_0_8_8.gguf) | i1-Q4_0_8_8 | 3.9 | fast on arm+sve, low quality |
| [GGUF](https://huggingface.co/mradermacher/mpt-7b-8k-chat-i1-GGUF/resolve/main/mpt-7b-8k-chat.i1-Q4_0.gguf) | i1-Q4_0 | 3.9 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/mpt-7b-8k-chat-i1-GGUF/resolve/main/mpt-7b-8k-chat.i1-Q4_K_S.gguf) | i1-Q4_K_S | 3.9 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/mpt-7b-8k-chat-i1-GGUF/resolve/main/mpt-7b-8k-chat.i1-Q3_K_L.gguf) | i1-Q3_K_L | 3.9 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/mpt-7b-8k-chat-i1-GGUF/resolve/main/mpt-7b-8k-chat.i1-Q4_K_M.gguf) | i1-Q4_K_M | 4.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/mpt-7b-8k-chat-i1-GGUF/resolve/main/mpt-7b-8k-chat.i1-Q5_K_S.gguf) | i1-Q5_K_S | 4.7 | |
| [GGUF](https://huggingface.co/mradermacher/mpt-7b-8k-chat-i1-GGUF/resolve/main/mpt-7b-8k-chat.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/mpt-7b-8k-chat-i1-GGUF/resolve/main/mpt-7b-8k-chat.i1-Q6_K.gguf) | i1-Q6_K | 5.6 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
RichardErkhov/rinna_-_llama-3-youko-70b-instruct-gguf
|
RichardErkhov
| 2024-09-09T00:35:24Z | 44 | 0 | null |
[
"gguf",
"arxiv:2310.04799",
"arxiv:2404.01657",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-09-08T05:28:40Z |
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
llama-3-youko-70b-instruct - GGUF
- Model creator: https://huggingface.co/rinna/
- Original model: https://huggingface.co/rinna/llama-3-youko-70b-instruct/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [llama-3-youko-70b-instruct.Q2_K.gguf](https://huggingface.co/RichardErkhov/rinna_-_llama-3-youko-70b-instruct-gguf/blob/main/llama-3-youko-70b-instruct.Q2_K.gguf) | Q2_K | 24.56GB |
| [llama-3-youko-70b-instruct.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/rinna_-_llama-3-youko-70b-instruct-gguf/blob/main/llama-3-youko-70b-instruct.IQ3_XS.gguf) | IQ3_XS | 27.29GB |
| [llama-3-youko-70b-instruct.IQ3_S.gguf](https://huggingface.co/RichardErkhov/rinna_-_llama-3-youko-70b-instruct-gguf/blob/main/llama-3-youko-70b-instruct.IQ3_S.gguf) | IQ3_S | 28.79GB |
| [llama-3-youko-70b-instruct.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/rinna_-_llama-3-youko-70b-instruct-gguf/blob/main/llama-3-youko-70b-instruct.Q3_K_S.gguf) | Q3_K_S | 28.79GB |
| [llama-3-youko-70b-instruct.IQ3_M.gguf](https://huggingface.co/RichardErkhov/rinna_-_llama-3-youko-70b-instruct-gguf/blob/main/llama-3-youko-70b-instruct.IQ3_M.gguf) | IQ3_M | 29.74GB |
| [llama-3-youko-70b-instruct.Q3_K.gguf](https://huggingface.co/RichardErkhov/rinna_-_llama-3-youko-70b-instruct-gguf/blob/main/llama-3-youko-70b-instruct.Q3_K.gguf) | Q3_K | 31.91GB |
| [llama-3-youko-70b-instruct.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/rinna_-_llama-3-youko-70b-instruct-gguf/blob/main/llama-3-youko-70b-instruct.Q3_K_M.gguf) | Q3_K_M | 31.91GB |
| [llama-3-youko-70b-instruct.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/rinna_-_llama-3-youko-70b-instruct-gguf/blob/main/llama-3-youko-70b-instruct.Q3_K_L.gguf) | Q3_K_L | 34.59GB |
| [llama-3-youko-70b-instruct.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/rinna_-_llama-3-youko-70b-instruct-gguf/blob/main/llama-3-youko-70b-instruct.IQ4_XS.gguf) | IQ4_XS | 35.64GB |
| [llama-3-youko-70b-instruct.Q4_0.gguf](https://huggingface.co/RichardErkhov/rinna_-_llama-3-youko-70b-instruct-gguf/blob/main/llama-3-youko-70b-instruct.Q4_0.gguf) | Q4_0 | 37.22GB |
| [llama-3-youko-70b-instruct.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/rinna_-_llama-3-youko-70b-instruct-gguf/tree/main/) | IQ4_NL | 37.58GB |
| [llama-3-youko-70b-instruct.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/rinna_-_llama-3-youko-70b-instruct-gguf/tree/main/) | Q4_K_S | 37.58GB |
| [llama-3-youko-70b-instruct.Q4_K.gguf](https://huggingface.co/RichardErkhov/rinna_-_llama-3-youko-70b-instruct-gguf/tree/main/) | Q4_K | 39.6GB |
| [llama-3-youko-70b-instruct.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/rinna_-_llama-3-youko-70b-instruct-gguf/tree/main/) | Q4_K_M | 39.6GB |
| [llama-3-youko-70b-instruct.Q4_1.gguf](https://huggingface.co/RichardErkhov/rinna_-_llama-3-youko-70b-instruct-gguf/tree/main/) | Q4_1 | 41.27GB |
| [llama-3-youko-70b-instruct.Q5_0.gguf](https://huggingface.co/RichardErkhov/rinna_-_llama-3-youko-70b-instruct-gguf/tree/main/) | Q5_0 | 45.32GB |
| [llama-3-youko-70b-instruct.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/rinna_-_llama-3-youko-70b-instruct-gguf/tree/main/) | Q5_K_S | 45.32GB |
| [llama-3-youko-70b-instruct.Q5_K.gguf](https://huggingface.co/RichardErkhov/rinna_-_llama-3-youko-70b-instruct-gguf/tree/main/) | Q5_K | 46.52GB |
| [llama-3-youko-70b-instruct.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/rinna_-_llama-3-youko-70b-instruct-gguf/tree/main/) | Q5_K_M | 46.52GB |
| [llama-3-youko-70b-instruct.Q5_1.gguf](https://huggingface.co/RichardErkhov/rinna_-_llama-3-youko-70b-instruct-gguf/tree/main/) | Q5_1 | 49.36GB |
| [llama-3-youko-70b-instruct.Q6_K.gguf](https://huggingface.co/RichardErkhov/rinna_-_llama-3-youko-70b-instruct-gguf/tree/main/) | Q6_K | 53.91GB |
| [llama-3-youko-70b-instruct.Q8_0.gguf](https://huggingface.co/RichardErkhov/rinna_-_llama-3-youko-70b-instruct-gguf/tree/main/) | Q8_0 | 69.83GB |
Original model description:
---
thumbnail: https://github.com/rinnakk/japanese-pretrained-models/blob/master/rinna.png
license: llama3
language:
- ja
- en
tags:
- llama
- llama-3
inference: false
base_model:
- rinna/llama-3-youko-70b
- meta-llama/Meta-Llama-3-70B
- meta-llama/Meta-Llama-3-70B-Instruct
base_model_relation: merge
---
# `Llama 3 Youko 70B Instruct (rinna/llama-3-youko-70b-instruct)`

# Overview
The model is the instruction-tuned version of [rinna/llama-3-youko-70b](https://huggingface.co/rinna/llama-3-youko-70b), using supervised fine-tuning (SFT) and [Chat Vector](https://arxiv.org/abs/2310.04799). It adopts the Llama-3 chat format.
| Size | Continual Pre-Training | Instruction-Tuning |
| :- | :- | :- |
| 8B | Llama 3 Youko 8B [[HF]](https://huggingface.co/rinna/llama-3-youko-8b) [[GPTQ]](https://huggingface.co/rinna/llama-3-youko-8b-gptq) | Llama 3 Youko 8B Instruct [[HF]](https://huggingface.co/rinna/llama-3-youko-8b-instruct) [[GPTQ]](https://huggingface.co/rinna/llama-3-youko-8b-instruct-gptq) |
| 70B | Llama 3 Youko 70B [[HF]](https://huggingface.co/rinna/llama-3-youko-70b) [[GPTQ]](https://huggingface.co/rinna/llama-3-youko-70b-gptq) | Llama 3 Youko 70B Instruct [[HF]](https://huggingface.co/rinna/llama-3-youko-70b-instruct) [[GPTQ]](https://huggingface.co/rinna/llama-3-youko-70b-instruct-gptq) |
* **Model architecture**
An 80-layer, 8192-hidden-size transformer-based language model. Refer to the [Llama 3 Model Card](https://github.com/meta-llama/llama3/blob/main/MODEL_CARD.md) for architecture details.
* **Training: Built with Meta Llama 3**
**Supervised fine-tuning.** The supervised fine-tuning data is the following dataset.
- rinna Dataset
**Model merging.** The fine-tuned model (llama-3-youko-70b-sft) has been enhanced through the following chat vector addition. The chat vector was obtained by subtracting the parameter vectors of [meta-llama/Meta-Llama-3-70B](https://huggingface.co/meta-llama/Meta-Llama-3-70B) from those of [meta-llama/Meta-Llama-3-70B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-70B-Instruct).
~~~~text
llama-3-youko-70b-sft + 0.5 * (meta-llama/Meta-Llama-3-70B-Instruct - meta-llama/Meta-Llama-3-70B)
~~~~
Here, the embedding layer was skipped while subtracting and adding the parameter vectors.
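A minimal sketch of that chat-vector addition (illustrative only: the `llama-3-youko-70b-sft` id below is a placeholder taken from the formula rather than a published checkpoint, and real 70B weights are typically merged shard by shard rather than fully in memory):
~~~~python
import torch
from transformers import AutoModelForCausalLM

# Placeholder ids straight from the formula above; the "-sft" checkpoint is not published.
sft = AutoModelForCausalLM.from_pretrained("llama-3-youko-70b-sft", torch_dtype=torch.bfloat16)
base = AutoModelForCausalLM.from_pretrained("meta-llama/Meta-Llama-3-70B", torch_dtype=torch.bfloat16)
inst = AutoModelForCausalLM.from_pretrained("meta-llama/Meta-Llama-3-70B-Instruct", torch_dtype=torch.bfloat16)

base_sd, inst_sd = base.state_dict(), inst.state_dict()
with torch.no_grad():
    for name, param in sft.named_parameters():
        if "embed_tokens" in name or "lm_head" in name:
            continue  # the card says the embedding layer was skipped
        param += 0.5 * (inst_sd[name] - base_sd[name])

sft.save_pretrained("llama-3-youko-70b-instruct-merged")  # illustrative output dir
~~~~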
* **Contributors**
- [Koh Mitsuda](https://huggingface.co/mitsu-koh)
- [Xinqi Chen](https://huggingface.co/Keely0419)
- [Toshiaki Wakatsuki](https://huggingface.co/t-w)
- [Kei Sawada](https://huggingface.co/keisawada)
---
# Benchmarking
Please refer to [rinna's LM benchmark page](https://rinnakk.github.io/research/benchmarks/lm/index.html).
---
# How to use the model
We found that this instruction-tuned model tends to generate repeated text more often than its base counterpart, so we set repetition_penalty=1.1 for better generation performance. The same repetition penalty was applied to the instruction-tuned model in the aforementioned evaluation experiments.
~~~~python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
model_id = "rinna/llama-3-youko-70b-instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
model_id,
torch_dtype=torch.bfloat16,
device_map="auto",
)
messages = [
{"role": "system", "content": "あなたは誠実で優秀なアシスタントです。どうか、簡潔かつ正直に答えてください。"},
{"role": "user", "content": "西田幾多郎とはどんな人物ですか?"},
]
input_ids = tokenizer.apply_chat_template(
messages,
add_generation_prompt=True,
return_tensors="pt"
).to(model.device)
terminators = [
tokenizer.convert_tokens_to_ids("<|end_of_text|>"),
tokenizer.convert_tokens_to_ids("<|eot_id|>")
]
outputs = model.generate(
input_ids,
max_new_tokens=512,
eos_token_id=terminators,
do_sample=True,
temperature=0.6,
top_p=0.9,
repetition_penalty=1.1,
)
response = outputs[0][input_ids.shape[-1]:]
response = tokenizer.decode(response, skip_special_tokens=True)
print(response)
~~~~
---
# Tokenization
The model uses the original [meta-llama/Meta-Llama-3-70B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-70B-Instruct) tokenizer.
---
# How to cite
```bibtex
@misc{rinna-llama-3-youko-70b-instruct,
title = {rinna/llama-3-youko-70b-instruct},
author = {Mitsuda, Koh and Chen, Xinqi and Wakatsuki, Toshiaki and Sawada, Kei},
url = {https://huggingface.co/rinna/llama-3-youko-70b-instruct}
}
@inproceedings{sawada2024release,
title = {Release of Pre-Trained Models for the {J}apanese Language},
author = {Sawada, Kei and Zhao, Tianyu and Shing, Makoto and Mitsui, Kentaro and Kaga, Akio and Hono, Yukiya and Wakatsuki, Toshiaki and Mitsuda, Koh},
booktitle = {Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)},
month = {5},
year = {2024},
pages = {13898--13905},
url = {https://aclanthology.org/2024.lrec-main.1213},
note = {\url{https://arxiv.org/abs/2404.01657}}
}
```
---
# References
```bibtex
@article{llama3modelcard,
title = {Llama 3 Model Card},
author = {AI@Meta},
year = {2024},
url = {https://github.com/meta-llama/llama3/blob/main/MODEL_CARD.md}
}
@article{huang2023chat,
title = {Chat Vector: A Simple Approach to Equip LLMs with Instruction Following and Model Alignment in New Languages},
author = {Huang, Shih-Cheng and Li, Pin-Zu and Hsu, Yu-Chi and Chen, Kuang-Ming and Lin, Yu Tung and Hsiao, Shih-Kai and Tzong-Han Tsai, Richard and Lee, Hung-yi},
year = {2023},
url = {https://arxiv.org/abs/2310.04799}
}
```
---
# License
[Meta Llama 3 Community License](https://llama.meta.com/llama3/license/)
|
jvelja/BERT_vllm-gemma2b_0
|
jvelja
| 2024-09-09T00:29:12Z | 116 | 0 |
transformers
|
[
"transformers",
"safetensors",
"distilbert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-09-09T00:29:00Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
benayas/distilbert-base-uncased-sst2-v0
|
benayas
| 2024-09-09T00:28:45Z | 105 | 0 |
transformers
|
[
"transformers",
"safetensors",
"distilbert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-09-09T00:28:35Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
jvelja/vllm-gemma2b-deterministic_0
|
jvelja
| 2024-09-09T00:27:21Z | 60 | 0 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"trl",
"ppo",
"reinforcement-learning",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
reinforcement-learning
| 2024-09-09T00:27:19Z |
---
license: apache-2.0
library_name: transformers
tags:
- trl
- ppo
- transformers
- reinforcement-learning
---
# TRL Model
This is a [TRL language model](https://github.com/huggingface/trl) that has been fine-tuned with reinforcement learning to
guide the model outputs according to a value, function, or human feedback. The model can be used for text generation.
## Usage
To use this model for inference, first install the TRL library:
```bash
python -m pip install trl
```
You can then generate text as follows:
```python
from transformers import pipeline
generator = pipeline("text-generation", model="jvelja/vllm-gemma2b-deterministic_0")
outputs = generator("Hello, my llama is cute")
```
If you want to use the model for training or to obtain the outputs from the value head, load the model as follows:
```python
from transformers import AutoTokenizer
from trl import AutoModelForCausalLMWithValueHead
tokenizer = AutoTokenizer.from_pretrained("jvelja/vllm-gemma2b-deterministic_0")
model = AutoModelForCausalLMWithValueHead.from_pretrained("jvelja/vllm-gemma2b-deterministic_0")
inputs = tokenizer("Hello, my llama is cute", return_tensors="pt")
outputs = model(**inputs, labels=inputs["input_ids"])
```
|
jvelja/BERT_vllm-gemma2b-deterministic_0
|
jvelja
| 2024-09-09T00:27:18Z | 105 | 0 |
transformers
|
[
"transformers",
"safetensors",
"distilbert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-09-09T00:27:08Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
diiiii/my_awesome_model
|
diiiii
| 2024-09-09T00:20:47Z | 33 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"deberta-v2",
"text-classification",
"generated_from_trainer",
"base_model:microsoft/deberta-v3-base",
"base_model:finetune:microsoft/deberta-v3-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-05-25T13:13:25Z |
---
license: mit
base_model: microsoft/deberta-v3-base
tags:
- generated_from_trainer
metrics:
- accuracy
- precision
- recall
- f1
model-index:
- name: my_awesome_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/ivkovicdanica555-Student/huggingface/runs/4649rwz9)
# my_awesome_model
This model is a fine-tuned version of [microsoft/deberta-v3-base](https://huggingface.co/microsoft/deberta-v3-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4386
- Accuracy: 0.8569
- Precision: 0.8576
- Recall: 0.8571
- F1: 0.8559
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:---------:|:------:|:------:|
| 0.5211 | 1.0 | 5000 | 0.5037 | 0.8327 | 0.8408 | 0.8331 | 0.8294 |
| 0.4159 | 2.0 | 10000 | 0.4410 | 0.8517 | 0.8546 | 0.8520 | 0.8519 |
| 0.3468 | 3.0 | 15000 | 0.4386 | 0.8569 | 0.8576 | 0.8571 | 0.8559 |
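For completeness, a minimal inference sketch (the label names depend on the unknown fine-tuning dataset, so treat the returned labels as opaque ids):
```python
from transformers import pipeline

# Text-classification inference; label semantics are not documented in this card.
clf = pipeline("text-classification", model="diiiii/my_awesome_model")
print(clf("This movie was surprisingly good."))
```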
### Framework versions
- Transformers 4.42.3
- Pytorch 2.1.2
- Datasets 2.20.0
- Tokenizers 0.19.1
|
mradermacher/granite-34b-code-instruct-8k-i1-GGUF
|
mradermacher
| 2024-09-09T00:20:17Z | 102 | 0 |
transformers
|
[
"transformers",
"gguf",
"code",
"granite",
"en",
"dataset:bigcode/commitpackft",
"dataset:TIGER-Lab/MathInstruct",
"dataset:meta-math/MetaMathQA",
"dataset:glaiveai/glaive-code-assistant-v3",
"dataset:glaive-function-calling-v2",
"dataset:bugdaryan/sql-create-context-instruction",
"dataset:garage-bAInd/Open-Platypus",
"dataset:nvidia/HelpSteer",
"base_model:ibm-granite/granite-34b-code-instruct-8k",
"base_model:quantized:ibm-granite/granite-34b-code-instruct-8k",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2024-09-08T20:15:55Z |
---
base_model: ibm-granite/granite-34b-code-instruct-8k
datasets:
- bigcode/commitpackft
- TIGER-Lab/MathInstruct
- meta-math/MetaMathQA
- glaiveai/glaive-code-assistant-v3
- glaive-function-calling-v2
- bugdaryan/sql-create-context-instruction
- garage-bAInd/Open-Platypus
- nvidia/HelpSteer
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- code
- granite
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/ibm-granite/granite-34b-code-instruct-8k
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/granite-34b-code-instruct-8k-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
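To check which quant files are actually present before picking a row from the table below, a small `huggingface_hub` sketch:
```python
from huggingface_hub import list_repo_files

# List every file in the repo and keep only the GGUF quants.
files = [
    f
    for f in list_repo_files("mradermacher/granite-34b-code-instruct-8k-i1-GGUF")
    if f.endswith(".gguf")
]
for f in sorted(files):
    print(f)
```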
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/granite-34b-code-instruct-8k-i1-GGUF/resolve/main/granite-34b-code-instruct-8k.i1-IQ1_S.gguf) | i1-IQ1_S | 7.5 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/granite-34b-code-instruct-8k-i1-GGUF/resolve/main/granite-34b-code-instruct-8k.i1-IQ1_M.gguf) | i1-IQ1_M | 8.1 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/granite-34b-code-instruct-8k-i1-GGUF/resolve/main/granite-34b-code-instruct-8k.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 9.3 | |
| [GGUF](https://huggingface.co/mradermacher/granite-34b-code-instruct-8k-i1-GGUF/resolve/main/granite-34b-code-instruct-8k.i1-IQ2_XS.gguf) | i1-IQ2_XS | 10.2 | |
| [GGUF](https://huggingface.co/mradermacher/granite-34b-code-instruct-8k-i1-GGUF/resolve/main/granite-34b-code-instruct-8k.i1-IQ2_S.gguf) | i1-IQ2_S | 10.9 | |
| [GGUF](https://huggingface.co/mradermacher/granite-34b-code-instruct-8k-i1-GGUF/resolve/main/granite-34b-code-instruct-8k.i1-IQ2_M.gguf) | i1-IQ2_M | 11.8 | |
| [GGUF](https://huggingface.co/mradermacher/granite-34b-code-instruct-8k-i1-GGUF/resolve/main/granite-34b-code-instruct-8k.i1-Q2_K.gguf) | i1-Q2_K | 13.2 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/granite-34b-code-instruct-8k-i1-GGUF/resolve/main/granite-34b-code-instruct-8k.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 13.5 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/granite-34b-code-instruct-8k-i1-GGUF/resolve/main/granite-34b-code-instruct-8k.i1-IQ3_XS.gguf) | i1-IQ3_XS | 14.4 | |
| [GGUF](https://huggingface.co/mradermacher/granite-34b-code-instruct-8k-i1-GGUF/resolve/main/granite-34b-code-instruct-8k.i1-IQ3_S.gguf) | i1-IQ3_S | 14.9 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/granite-34b-code-instruct-8k-i1-GGUF/resolve/main/granite-34b-code-instruct-8k.i1-Q3_K_S.gguf) | i1-Q3_K_S | 14.9 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/granite-34b-code-instruct-8k-i1-GGUF/resolve/main/granite-34b-code-instruct-8k.i1-IQ3_M.gguf) | i1-IQ3_M | 16.0 | |
| [GGUF](https://huggingface.co/mradermacher/granite-34b-code-instruct-8k-i1-GGUF/resolve/main/granite-34b-code-instruct-8k.i1-Q3_K_M.gguf) | i1-Q3_K_M | 17.7 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/granite-34b-code-instruct-8k-i1-GGUF/resolve/main/granite-34b-code-instruct-8k.i1-IQ4_XS.gguf) | i1-IQ4_XS | 18.3 | |
| [GGUF](https://huggingface.co/mradermacher/granite-34b-code-instruct-8k-i1-GGUF/resolve/main/granite-34b-code-instruct-8k.i1-Q4_0.gguf) | i1-Q4_0 | 19.4 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/granite-34b-code-instruct-8k-i1-GGUF/resolve/main/granite-34b-code-instruct-8k.i1-Q4_K_S.gguf) | i1-Q4_K_S | 19.5 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/granite-34b-code-instruct-8k-i1-GGUF/resolve/main/granite-34b-code-instruct-8k.i1-Q3_K_L.gguf) | i1-Q3_K_L | 19.6 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/granite-34b-code-instruct-8k-i1-GGUF/resolve/main/granite-34b-code-instruct-8k.i1-Q4_K_M.gguf) | i1-Q4_K_M | 21.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/granite-34b-code-instruct-8k-i1-GGUF/resolve/main/granite-34b-code-instruct-8k.i1-Q5_K_S.gguf) | i1-Q5_K_S | 23.5 | |
| [GGUF](https://huggingface.co/mradermacher/granite-34b-code-instruct-8k-i1-GGUF/resolve/main/granite-34b-code-instruct-8k.i1-Q5_K_M.gguf) | i1-Q5_K_M | 24.8 | |
| [GGUF](https://huggingface.co/mradermacher/granite-34b-code-instruct-8k-i1-GGUF/resolve/main/granite-34b-code-instruct-8k.i1-Q6_K.gguf) | i1-Q6_K | 27.9 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
subhrokomol/hindi2
|
subhrokomol
| 2024-09-09T00:04:40Z | 63 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"code",
"en",
"hi",
"dataset:allenai/nllb",
"arxiv:1910.09700",
"base_model:mistralai/Mistral-7B-Instruct-v0.3",
"base_model:finetune:mistralai/Mistral-7B-Instruct-v0.3",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-09-08T23:52:47Z |
---
license: apache-2.0
datasets:
- allenai/nllb
language:
- en
- hi
metrics:
- perplexity
base_model: mistralai/Mistral-7B-Instruct-v0.3
pipeline_tag: text-generation
library_name: transformers
tags:
- code
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
anggari/finetuning-sentiment-model-3000-samples
|
anggari
| 2024-09-09T00:01:12Z | 106 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-09-09T00:00:39Z |
---
library_name: transformers
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
model-index:
- name: finetuning-sentiment-model-3000-samples
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-3000-samples
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
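Those hyperparameters map onto 🤗 `TrainingArguments` roughly as follows (a sketch; the output directory and any unlisted defaults are assumptions):
```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="finetuning-sentiment-model-3000-samples",  # illustrative
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=2,
)
```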
### Training results
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.0+cu121
- Datasets 2.21.0
- Tokenizers 0.19.1
|
gimmeursocks/ara-e5-small
|
gimmeursocks
| 2024-09-08T23:42:09Z | 6 | 1 |
sentence-transformers
|
[
"sentence-transformers",
"safetensors",
"bert",
"sentence-similarity",
"feature-extraction",
"arabic",
"triplet-loss",
"ar",
"dataset:Omartificial-Intelligence-Space/Arabic-NLi-Triplet",
"base_model:intfloat/multilingual-e5-small",
"base_model:finetune:intfloat/multilingual-e5-small",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2024-09-07T16:18:37Z |
---
datasets:
- Omartificial-Intelligence-Space/Arabic-NLi-Triplet
language:
- ar
base_model: "intfloat/multilingual-e5-small"
library_name: sentence-transformers
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- arabic
- triplet-loss
widget: []
---
# Arabic NLI Triplet - Sentence Transformer Model
This repository contains a fine-tuned Sentence Transformer model trained on the "Omartificial-Intelligence-Space/Arabic-NLi-Triplet" dataset. The model is trained to generate 384-dimensional embeddings for semantic similarity tasks like paraphrase mining, sentence similarity, and clustering in Arabic.
## Model Overview
- **Model Type:** Sentence Transformer
- **Base Model:** `intfloat/multilingual-e5-small`
- **Training Dataset:** [Omartificial-Intelligence-Space/Arabic-NLi-Triplet](https://huggingface.co/datasets/Omartificial-Intelligence-Space/Arabic-NLi-Triplet)
- **Similarity Function:** Cosine Similarity
- **Embedding Dimensionality:** 384 dimensions
- **Maximum Sequence Length:** 128 tokens
- **Performance Improvement:** The model achieved around 10% improvement when tested on the test set of the provided dataset, compared to the base model's performance.
## Dataset
### Arabic NLI Triplet Dataset
The dataset contains triplets of sentences in Arabic: an anchor sentence, a positive sentence (semantically similar to the anchor), and a negative sentence (semantically dissimilar to the anchor). The dataset is designed for learning sentence representations through triplet margin loss.
Dataset Link: [Omartificial-Intelligence-Space/Arabic-NLi-Triplet](https://huggingface.co/datasets/Omartificial-Intelligence-Space/Arabic-NLi-Triplet)
## Training Process
### Loss Function: Triplet Margin Loss
We used the Triplet Margin Loss with a margin of `1.0`. The model is trained to minimize the distance between anchor and positive embeddings, while maximizing the distance between anchor and negative embeddings.
### Training Loss Progress:
Below is the training loss recorded at various steps during the training process:
| Step | Training Loss |
|-------|---------------|
| 500 | 0.136500 |
| 1000 | 0.126500 |
| 1500 | 0.127300 |
| 2000 | 0.114500 |
| 2500 | 0.110600 |
| 3000 | 0.102300 |
| 3500 | 0.101300 |
| 4000 | 0.106900 |
| 4500 | 0.097200 |
| 5000 | 0.091700 |
| 5500 | 0.092400 |
| 6000 | 0.095500 |
## Model Training Code
The model was trained using the following code (without resuming from checkpoints):
```python
from datasets import load_dataset
from transformers import AutoTokenizer, AutoModel, TrainingArguments, Trainer
from torch.nn import TripletMarginLoss
# Load dataset
dataset = load_dataset("Omartificial-Intelligence-Space/Arabic-NLi-Triplet")
# Load tokenizer
tokenizer = AutoTokenizer.from_pretrained("intfloat/multilingual-e5-small")
# Tokenize function
def tokenize_function(examples):
anchor_encodings = tokenizer(examples['anchor'], truncation=True, padding='max_length', max_length=128)
positive_encodings = tokenizer(examples['positive'], truncation=True, padding='max_length', max_length=128)
negative_encodings = tokenizer(examples['negative'], truncation=True, padding='max_length', max_length=128)
return {
'anchor_input_ids': anchor_encodings['input_ids'],
'anchor_attention_mask': anchor_encodings['attention_mask'],
'positive_input_ids': positive_encodings['input_ids'],
'positive_attention_mask': positive_encodings['attention_mask'],
'negative_input_ids': negative_encodings['input_ids'],
'negative_attention_mask': negative_encodings['attention_mask'],
}
tokenized_datasets = dataset.map(tokenize_function, batched=True, remove_columns=dataset["train"].column_names)
# Define triplet loss
triplet_loss = TripletMarginLoss(margin=1.0)
def compute_loss(anchor_embedding, positive_embedding, negative_embedding):
return triplet_loss(anchor_embedding, positive_embedding, negative_embedding)
# Load model
model = AutoModel.from_pretrained("intfloat/multilingual-e5-small")
class TripletTrainer(Trainer):
def compute_loss(self, model, inputs, return_outputs=False):
anchor_input_ids = inputs['anchor_input_ids'].to(self.args.device)
anchor_attention_mask = inputs['anchor_attention_mask'].to(self.args.device)
positive_input_ids = inputs['positive_input_ids'].to(self.args.device)
positive_attention_mask = inputs['positive_attention_mask'].to(self.args.device)
negative_input_ids = inputs['negative_input_ids'].to(self.args.device)
negative_attention_mask = inputs['negative_attention_mask'].to(self.args.device)
anchor_embeds = model(input_ids=anchor_input_ids, attention_mask=anchor_attention_mask).last_hidden_state.mean(dim=1)
positive_embeds = model(input_ids=positive_input_ids, attention_mask=positive_attention_mask).last_hidden_state.mean(dim=1)
negative_embeds = model(input_ids=negative_input_ids, attention_mask=negative_attention_mask).last_hidden_state.mean(dim=1)
return compute_loss(anchor_embeds, positive_embeds, negative_embeds)
# Training arguments
training_args = TrainingArguments(
output_dir="/content/drive/MyDrive/results",
learning_rate=2e-5,
per_device_train_batch_size=16,
num_train_epochs=3,
weight_decay=0.01,
logging_dir='/content/drive/MyDrive/logs',
remove_unused_columns=False,
fp16=True,
save_total_limit=3,
)
# Initialize trainer
trainer = TripletTrainer(
model=model,
args=training_args,
train_dataset=tokenized_datasets['train'],
)
# Start training
trainer.train()
# Save model and evaluate
trainer.save_model("/content/drive/MyDrive/fine-tuned-multilingual-e5")
results = trainer.evaluate()
print(results)
```
## Framework Versions
- Python: 3.10.11
- Sentence Transformers: 3.0.1
- Transformers: 4.44.2
- PyTorch: 2.4.0
- Datasets: 2.21.0
## How to Use
To use the model, install the required libraries and load the model with the following code:
```bash
pip install -U sentence-transformers
```
```python
from sentence_transformers import SentenceTransformer
# Load the fine-tuned model
model = SentenceTransformer("gimmeursocks/ara-e5-small")
# Run inference
sentences = ['أنا سعيد', 'الجو جميل اليوم', 'هذا كلب كبير']
embeddings = model.encode(sentences)
print(embeddings.shape)
```
## Citation
If you use this model or dataset, please cite the corresponding paper or dataset source.
|
unsloth/Reflection-Llama-3.1-70B-GGUF
|
unsloth
| 2024-09-08T23:32:15Z | 27 | 4 | null |
[
"gguf",
"llama",
"unsloth",
"llama-3",
"text-generation",
"base_model:mattshumer/ref_70_e3",
"base_model:quantized:mattshumer/ref_70_e3",
"license:llama3.1",
"endpoints_compatible",
"region:us",
"conversational"
] |
text-generation
| 2024-09-08T22:22:14Z |
---
base_model: mattshumer/ref_70_e3
license: llama3.1
pipeline_tag: text-generation
tags:
- unsloth
- llama-3
---
GGUF uploaded version of: https://huggingface.co/mattshumer/ref_70_e3 with suggested chat template, temperature and top_p.
# You can fine-tune Reflection-3.1 70B with 48GB of VRAM with Unsloth! 🦥
Reflection 70B is (currently) the world's top open-source LLM, trained with a new technique called Reflection-Tuning that teaches a LLM to detect mistakes in its reasoning and correct course.
## Benchmarks

All benchmarks tested have been checked for contamination by running [LMSys's LLM Decontaminator](https://github.com/lm-sys/llm-decontaminator). When benchmarking, we isolate the `<output>` and benchmark on solely that section.
Trained from Llama 3.1 70B Instruct, you can sample from Reflection 70B using the same code, pipelines, etc. as any other Llama model. It even uses the stock Llama 3.1 chat template format (though, we've trained in a few new special tokens to aid in reasoning and reflection).
During sampling, the model will start by outputting reasoning inside `<thinking>` and `</thinking>` tags, and then once it is satisfied with its reasoning, it will output the final answer inside `<output>` and `</output>` tags. Each of these tags are special tokens, trained into the model.
This enables the model to separate its internal thoughts and reasoning from its final answer, improving the experience for the user.
Inside the `<thinking>` section, the model may output one or more `<reflection>` tags, which signals the model has caught an error in its reasoning and will attempt to correct it before providing a final answer.
## System Prompt
```
The system prompt used for training this model is:
You are a world-class AI system, capable of complex reasoning and reflection. Reason through the query inside <thinking> tags, and then provide your final response inside <output> tags. If you detect that you made a mistake in your reasoning at any point, correct yourself inside <reflection> tags.
We recommend using this exact system prompt to get the best results from Reflection 70B. You may also want to experiment combining this system prompt with your own custom instructions to customize the behavior of the model.
```
## Chat Format
As mentioned above, the model uses the standard Llama 3.1 chat format. Here’s an example:
```
<|begin_of_text|><|start_header_id|>system<|end_header_id|>
You are a world-class AI system, capable of complex reasoning and reflection. Reason through the query inside <thinking> tags, and then provide your final response inside <output> tags. If you detect that you made a mistake in your reasoning at any point, correct yourself inside <reflection> tags.<|eot_id|><|start_header_id|>user<|end_header_id|>
what is 2+2?<|eot_id|><|start_header_id|>assistant<|end_header_id|>
```
## Tips for Performance
- We are initially recommending a `temperature` of `.7` and a `top_p` of `.95`.
- For increased accuracy, append `Think carefully.` at the end of your messages.
## Dataset / Report
Both the dataset and a brief report detailing how we trained this model will be released next week, alongside our Reflection 405B model that we expect will be the top-performing LLM in the world, including closed-source models.
---
Thanks to Jason Kuperberg and Josh Bickett from the [HyperWrite](https://hyperwriteai.com) team for reviewing drafts of the report we'll be releasing next week.
Also, we know right now the model is split into a ton of files. We'll condense this soon to make the model easier to download and work with!
|
chbenchi/mistral-reward
|
chbenchi
| 2024-09-08T23:30:49Z | 104 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-classification",
"trl",
"reward-trainer",
"generated_from_trainer",
"base_model:chbenchi/mistral-supervised",
"base_model:finetune:chbenchi/mistral-supervised",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-09-01T21:50:40Z |
---
library_name: transformers
base_model: chbenchi/mistral-supervised
tags:
- trl
- reward-trainer
- generated_from_trainer
model-index:
- name: mistral-reward
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mistral-reward
This model is a fine-tuned version of [chbenchi/mistral-supervised](https://huggingface.co/chbenchi/mistral-supervised) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.1+cu118
- Datasets 2.21.0
- Tokenizers 0.19.1
|
mradermacher/Stella-mistral-nemo-12B-v2-i1-GGUF
|
mradermacher
| 2024-09-08T23:25:33Z | 93 | 2 |
transformers
|
[
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:nbeerbower/Stella-mistral-nemo-12B-v2",
"base_model:quantized:nbeerbower/Stella-mistral-nemo-12B-v2",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2024-09-08T16:54:02Z |
---
base_model: nbeerbower/Stella-mistral-nemo-12B-v2
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/nbeerbower/Stella-mistral-nemo-12B-v2
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Stella-mistral-nemo-12B-v2-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Stella-mistral-nemo-12B-v2-i1-GGUF/resolve/main/Stella-mistral-nemo-12B-v2.i1-IQ1_S.gguf) | i1-IQ1_S | 3.1 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Stella-mistral-nemo-12B-v2-i1-GGUF/resolve/main/Stella-mistral-nemo-12B-v2.i1-IQ1_M.gguf) | i1-IQ1_M | 3.3 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Stella-mistral-nemo-12B-v2-i1-GGUF/resolve/main/Stella-mistral-nemo-12B-v2.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 3.7 | |
| [GGUF](https://huggingface.co/mradermacher/Stella-mistral-nemo-12B-v2-i1-GGUF/resolve/main/Stella-mistral-nemo-12B-v2.i1-IQ2_XS.gguf) | i1-IQ2_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/Stella-mistral-nemo-12B-v2-i1-GGUF/resolve/main/Stella-mistral-nemo-12B-v2.i1-IQ2_S.gguf) | i1-IQ2_S | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/Stella-mistral-nemo-12B-v2-i1-GGUF/resolve/main/Stella-mistral-nemo-12B-v2.i1-IQ2_M.gguf) | i1-IQ2_M | 4.5 | |
| [GGUF](https://huggingface.co/mradermacher/Stella-mistral-nemo-12B-v2-i1-GGUF/resolve/main/Stella-mistral-nemo-12B-v2.i1-Q2_K.gguf) | i1-Q2_K | 4.9 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Stella-mistral-nemo-12B-v2-i1-GGUF/resolve/main/Stella-mistral-nemo-12B-v2.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 5.0 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Stella-mistral-nemo-12B-v2-i1-GGUF/resolve/main/Stella-mistral-nemo-12B-v2.i1-IQ3_XS.gguf) | i1-IQ3_XS | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/Stella-mistral-nemo-12B-v2-i1-GGUF/resolve/main/Stella-mistral-nemo-12B-v2.i1-Q3_K_S.gguf) | i1-Q3_K_S | 5.6 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Stella-mistral-nemo-12B-v2-i1-GGUF/resolve/main/Stella-mistral-nemo-12B-v2.i1-IQ3_S.gguf) | i1-IQ3_S | 5.7 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Stella-mistral-nemo-12B-v2-i1-GGUF/resolve/main/Stella-mistral-nemo-12B-v2.i1-IQ3_M.gguf) | i1-IQ3_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/Stella-mistral-nemo-12B-v2-i1-GGUF/resolve/main/Stella-mistral-nemo-12B-v2.i1-Q3_K_M.gguf) | i1-Q3_K_M | 6.2 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Stella-mistral-nemo-12B-v2-i1-GGUF/resolve/main/Stella-mistral-nemo-12B-v2.i1-Q3_K_L.gguf) | i1-Q3_K_L | 6.7 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Stella-mistral-nemo-12B-v2-i1-GGUF/resolve/main/Stella-mistral-nemo-12B-v2.i1-IQ4_XS.gguf) | i1-IQ4_XS | 6.8 | |
| [GGUF](https://huggingface.co/mradermacher/Stella-mistral-nemo-12B-v2-i1-GGUF/resolve/main/Stella-mistral-nemo-12B-v2.i1-Q4_0_4_4.gguf) | i1-Q4_0_4_4 | 7.2 | fast on arm, low quality |
| [GGUF](https://huggingface.co/mradermacher/Stella-mistral-nemo-12B-v2-i1-GGUF/resolve/main/Stella-mistral-nemo-12B-v2.i1-Q4_0_4_8.gguf) | i1-Q4_0_4_8 | 7.2 | fast on arm+i8mm, low quality |
| [GGUF](https://huggingface.co/mradermacher/Stella-mistral-nemo-12B-v2-i1-GGUF/resolve/main/Stella-mistral-nemo-12B-v2.i1-Q4_0_8_8.gguf) | i1-Q4_0_8_8 | 7.2 | fast on arm+sve, low quality |
| [GGUF](https://huggingface.co/mradermacher/Stella-mistral-nemo-12B-v2-i1-GGUF/resolve/main/Stella-mistral-nemo-12B-v2.i1-Q4_0.gguf) | i1-Q4_0 | 7.2 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Stella-mistral-nemo-12B-v2-i1-GGUF/resolve/main/Stella-mistral-nemo-12B-v2.i1-Q4_K_S.gguf) | i1-Q4_K_S | 7.2 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Stella-mistral-nemo-12B-v2-i1-GGUF/resolve/main/Stella-mistral-nemo-12B-v2.i1-Q4_K_M.gguf) | i1-Q4_K_M | 7.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Stella-mistral-nemo-12B-v2-i1-GGUF/resolve/main/Stella-mistral-nemo-12B-v2.i1-Q5_K_S.gguf) | i1-Q5_K_S | 8.6 | |
| [GGUF](https://huggingface.co/mradermacher/Stella-mistral-nemo-12B-v2-i1-GGUF/resolve/main/Stella-mistral-nemo-12B-v2.i1-Q5_K_M.gguf) | i1-Q5_K_M | 8.8 | |
| [GGUF](https://huggingface.co/mradermacher/Stella-mistral-nemo-12B-v2-i1-GGUF/resolve/main/Stella-mistral-nemo-12B-v2.i1-Q6_K.gguf) | i1-Q6_K | 10.2 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
markredito/speecht5_finetuned_voxpopuli_it
|
markredito
| 2024-09-08T23:22:31Z | 81 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"speecht5",
"text-to-audio",
"generated_from_trainer",
"it",
"dataset:facebook/voxpopuli",
"base_model:microsoft/speecht5_tts",
"base_model:finetune:microsoft/speecht5_tts",
"license:mit",
"endpoints_compatible",
"region:us"
] |
text-to-audio
| 2024-09-08T22:16:01Z |
---
library_name: transformers
language:
- it
license: mit
base_model: microsoft/speecht5_tts
tags:
- generated_from_trainer
datasets:
- facebook/voxpopuli
model-index:
- name: speecht5_finetuned_voxpopuli_it
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# speecht5_finetuned_voxpopuli_it
This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) on the facebook/voxpopuli dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4872
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 250
- training_steps: 2000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-------:|:----:|:---------------:|
| 0.5595 | 3.0628 | 500 | 0.5110 |
| 0.5358 | 6.1256 | 1000 | 0.4954 |
| 0.5213 | 9.1884 | 1500 | 0.4886 |
| 0.5165 | 12.2511 | 2000 | 0.4872 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.0+cu121
- Datasets 2.21.0
- Tokenizers 0.19.1
|
judithrosell/ST_CEMS
|
judithrosell
| 2024-09-08T22:47:09Z | 125 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"bert",
"token-classification",
"generated_from_trainer",
"base_model:m3rg-iitd/matscibert",
"base_model:finetune:m3rg-iitd/matscibert",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2024-09-08T22:03:29Z |
---
library_name: transformers
license: mit
base_model: m3rg-iitd/matscibert
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: ST_CEMS
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ST_CEMS
This model is a fine-tuned version of [m3rg-iitd/matscibert](https://huggingface.co/m3rg-iitd/matscibert) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0598
- Precision: 0.9368
- Recall: 0.9226
- F1: 0.9296
- Accuracy: 0.9898
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0492 | 1.0 | 569 | 0.0347 | 0.9181 | 0.9091 | 0.9136 | 0.9881 |
| 0.0177 | 2.0 | 1138 | 0.0331 | 0.9406 | 0.9177 | 0.9290 | 0.9905 |
| 0.0109 | 3.0 | 1707 | 0.0454 | 0.9116 | 0.9122 | 0.9119 | 0.9876 |
| 0.0066 | 4.0 | 2276 | 0.0454 | 0.9596 | 0.8970 | 0.9272 | 0.9896 |
| 0.0042 | 5.0 | 2845 | 0.0477 | 0.9352 | 0.9061 | 0.9204 | 0.9889 |
| 0.0027 | 6.0 | 3414 | 0.0525 | 0.9352 | 0.9146 | 0.9248 | 0.9896 |
| 0.0018 | 7.0 | 3983 | 0.0498 | 0.9405 | 0.9159 | 0.9280 | 0.9899 |
| 0.0008 | 8.0 | 4552 | 0.0555 | 0.9312 | 0.9238 | 0.9275 | 0.9896 |
| 0.0007 | 9.0 | 5121 | 0.0602 | 0.9406 | 0.9165 | 0.9284 | 0.9897 |
| 0.0006 | 10.0 | 5690 | 0.0598 | 0.9368 | 0.9226 | 0.9296 | 0.9898 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.0+cu121
- Datasets 2.21.0
- Tokenizers 0.19.1
|
jaehwang/ABAP_llama-3.1-8b
|
jaehwang
| 2024-09-08T22:41:47Z | 8 | 0 | null |
[
"safetensors",
"llama",
"license:apache-2.0",
"region:us"
] | null | 2024-09-08T22:17:13Z |
---
license: apache-2.0
---
|
judithrosell/ST_MAT
|
judithrosell
| 2024-09-08T22:40:26Z | 120 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"bert",
"token-classification",
"generated_from_trainer",
"base_model:m3rg-iitd/matscibert",
"base_model:finetune:m3rg-iitd/matscibert",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2024-09-08T21:59:14Z |
---
library_name: transformers
license: mit
base_model: m3rg-iitd/matscibert
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: ST_MAT
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ST_MAT
This model is a fine-tuned version of [m3rg-iitd/matscibert](https://huggingface.co/m3rg-iitd/matscibert) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1551
- Precision: 0.8250
- Recall: 0.8333
- F1: 0.8291
- Accuracy: 0.9766
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.1259 | 1.0 | 569 | 0.0862 | 0.8117 | 0.7998 | 0.8057 | 0.9742 |
| 0.0476 | 2.0 | 1138 | 0.0909 | 0.8065 | 0.8154 | 0.8109 | 0.9741 |
| 0.0296 | 3.0 | 1707 | 0.1032 | 0.8039 | 0.8232 | 0.8134 | 0.9739 |
| 0.0196 | 4.0 | 2276 | 0.1157 | 0.8054 | 0.8203 | 0.8128 | 0.9745 |
| 0.0118 | 5.0 | 2845 | 0.1182 | 0.8300 | 0.8311 | 0.8305 | 0.9768 |
| 0.0074 | 6.0 | 3414 | 0.1399 | 0.8204 | 0.8151 | 0.8178 | 0.9753 |
| 0.0053 | 7.0 | 3983 | 0.1445 | 0.8334 | 0.8223 | 0.8278 | 0.9765 |
| 0.0025 | 8.0 | 4552 | 0.1521 | 0.8218 | 0.8288 | 0.8253 | 0.9758 |
| 0.0023 | 9.0 | 5121 | 0.1555 | 0.8215 | 0.8255 | 0.8235 | 0.9759 |
| 0.0016 | 10.0 | 5690 | 0.1551 | 0.8250 | 0.8333 | 0.8291 | 0.9766 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.0+cu121
- Datasets 2.21.0
- Tokenizers 0.19.1
|
mounseflit/falcon-7b-marrakech-v2
|
mounseflit
| 2024-09-08T22:28:36Z | 16 | 2 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"falcon",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-09-08T21:22:28Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
jyoung105/test
|
jyoung105
| 2024-09-08T21:52:10Z | 35 | 0 |
diffusers
|
[
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2024-08-16T05:34:57Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
instance_prompt: girl
---
# Test
Trained on Replicate using:
https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `girl` to trigger the image generation.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('jyoung105/test', weight_name='lora.safetensors')
image = pipeline('your prompt').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
|
RichardErkhov/mhenrichsen_-_gemma-7b-gguf
|
RichardErkhov
| 2024-09-08T21:38:47Z | 63 | 0 | null |
[
"gguf",
"arxiv:2305.14314",
"arxiv:2312.11805",
"arxiv:2009.03300",
"arxiv:1905.07830",
"arxiv:1911.11641",
"arxiv:1904.09728",
"arxiv:1905.10044",
"arxiv:1907.10641",
"arxiv:1811.00937",
"arxiv:1809.02789",
"arxiv:1911.01547",
"arxiv:1705.03551",
"arxiv:2107.03374",
"arxiv:2108.07732",
"arxiv:2110.14168",
"arxiv:2304.06364",
"arxiv:2206.04615",
"arxiv:1804.06876",
"arxiv:2110.08193",
"arxiv:2009.11462",
"arxiv:2101.11718",
"arxiv:1804.09301",
"arxiv:2109.07958",
"arxiv:2203.09509",
"endpoints_compatible",
"region:us"
] | null | 2024-09-08T14:49:15Z |
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
gemma-7b - GGUF
- Model creator: https://huggingface.co/mhenrichsen/
- Original model: https://huggingface.co/mhenrichsen/gemma-7b/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [gemma-7b.Q2_K.gguf](https://huggingface.co/RichardErkhov/mhenrichsen_-_gemma-7b-gguf/blob/main/gemma-7b.Q2_K.gguf) | Q2_K | 3.24GB |
| [gemma-7b.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/mhenrichsen_-_gemma-7b-gguf/blob/main/gemma-7b.IQ3_XS.gguf) | IQ3_XS | 3.54GB |
| [gemma-7b.IQ3_S.gguf](https://huggingface.co/RichardErkhov/mhenrichsen_-_gemma-7b-gguf/blob/main/gemma-7b.IQ3_S.gguf) | IQ3_S | 3.71GB |
| [gemma-7b.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/mhenrichsen_-_gemma-7b-gguf/blob/main/gemma-7b.Q3_K_S.gguf) | Q3_K_S | 3.71GB |
| [gemma-7b.IQ3_M.gguf](https://huggingface.co/RichardErkhov/mhenrichsen_-_gemma-7b-gguf/blob/main/gemma-7b.IQ3_M.gguf) | IQ3_M | 3.82GB |
| [gemma-7b.Q3_K.gguf](https://huggingface.co/RichardErkhov/mhenrichsen_-_gemma-7b-gguf/blob/main/gemma-7b.Q3_K.gguf) | Q3_K | 4.07GB |
| [gemma-7b.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/mhenrichsen_-_gemma-7b-gguf/blob/main/gemma-7b.Q3_K_M.gguf) | Q3_K_M | 4.07GB |
| [gemma-7b.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/mhenrichsen_-_gemma-7b-gguf/blob/main/gemma-7b.Q3_K_L.gguf) | Q3_K_L | 4.39GB |
| [gemma-7b.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/mhenrichsen_-_gemma-7b-gguf/blob/main/gemma-7b.IQ4_XS.gguf) | IQ4_XS | 4.48GB |
| [gemma-7b.Q4_0.gguf](https://huggingface.co/RichardErkhov/mhenrichsen_-_gemma-7b-gguf/blob/main/gemma-7b.Q4_0.gguf) | Q4_0 | 4.67GB |
| [gemma-7b.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/mhenrichsen_-_gemma-7b-gguf/blob/main/gemma-7b.IQ4_NL.gguf) | IQ4_NL | 4.69GB |
| [gemma-7b.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/mhenrichsen_-_gemma-7b-gguf/blob/main/gemma-7b.Q4_K_S.gguf) | Q4_K_S | 4.7GB |
| [gemma-7b.Q4_K.gguf](https://huggingface.co/RichardErkhov/mhenrichsen_-_gemma-7b-gguf/blob/main/gemma-7b.Q4_K.gguf) | Q4_K | 4.96GB |
| [gemma-7b.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/mhenrichsen_-_gemma-7b-gguf/blob/main/gemma-7b.Q4_K_M.gguf) | Q4_K_M | 4.96GB |
| [gemma-7b.Q4_1.gguf](https://huggingface.co/RichardErkhov/mhenrichsen_-_gemma-7b-gguf/blob/main/gemma-7b.Q4_1.gguf) | Q4_1 | 5.12GB |
| [gemma-7b.Q5_0.gguf](https://huggingface.co/RichardErkhov/mhenrichsen_-_gemma-7b-gguf/blob/main/gemma-7b.Q5_0.gguf) | Q5_0 | 5.57GB |
| [gemma-7b.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/mhenrichsen_-_gemma-7b-gguf/blob/main/gemma-7b.Q5_K_S.gguf) | Q5_K_S | 5.57GB |
| [gemma-7b.Q5_K.gguf](https://huggingface.co/RichardErkhov/mhenrichsen_-_gemma-7b-gguf/blob/main/gemma-7b.Q5_K.gguf) | Q5_K | 5.72GB |
| [gemma-7b.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/mhenrichsen_-_gemma-7b-gguf/blob/main/gemma-7b.Q5_K_M.gguf) | Q5_K_M | 5.72GB |
| [gemma-7b.Q5_1.gguf](https://huggingface.co/RichardErkhov/mhenrichsen_-_gemma-7b-gguf/blob/main/gemma-7b.Q5_1.gguf) | Q5_1 | 6.02GB |
| [gemma-7b.Q6_K.gguf](https://huggingface.co/RichardErkhov/mhenrichsen_-_gemma-7b-gguf/blob/main/gemma-7b.Q6_K.gguf) | Q6_K | 6.53GB |
| [gemma-7b.Q8_0.gguf](https://huggingface.co/RichardErkhov/mhenrichsen_-_gemma-7b-gguf/blob/main/gemma-7b.Q8_0.gguf) | Q8_0 | 8.21GB |
Original model description:
---
library_name: transformers
---
# Reupload of Google Gemma - Find original readme below.
# Gemma Model Card
**Model Page**: [Gemma](https://ai.google.dev/gemma/docs)
This model card corresponds to the 7B base version of the Gemma model. You can also visit the model card of the [2B base model](https://huggingface.co/google/gemma-2b), [7B instruct model](https://huggingface.co/google/gemma-7b-it), and [2B instruct model](https://huggingface.co/google/gemma-2b-it).
**Resources and Technical Documentation**:
* [Responsible Generative AI Toolkit](https://ai.google.dev/responsible)
* [Gemma on Kaggle](https://www.kaggle.com/models/google/gemma)
* [Gemma on Vertex Model Garden](https://console.cloud.google.com/vertex-ai/publishers/google/model-garden/335?version=gemma-7b-gg-hf)
**Terms of Use**: [Terms](https://www.kaggle.com/models/google/gemma/license/consent)
**Authors**: Google
## Model Information
Summary description and brief definition of inputs and outputs.
### Description
Gemma is a family of lightweight, state-of-the-art open models from Google,
built from the same research and technology used to create the Gemini models.
They are text-to-text, decoder-only large language models, available in English,
with open weights, pre-trained variants, and instruction-tuned variants. Gemma
models are well-suited for a variety of text generation tasks, including
question answering, summarization, and reasoning. Their relatively small size
makes it possible to deploy them in environments with limited resources such as
a laptop, desktop or your own cloud infrastructure, democratizing access to
state of the art AI models and helping foster innovation for everyone.
### Usage
Below we share some code snippets on how to get quickly started with running the model. First make sure to `pip install -U transformers`, then copy the snippet from the section that is relevant for your usecase.
#### Fine-tuning examples
You can find fine-tuning notebooks under the [`examples/` directory](https://huggingface.co/google/gemma-7b/tree/main/examples). We provide:
* A script to perform Supervised Fine-Tuning (SFT) on UltraChat dataset using [QLoRA](https://huggingface.co/papers/2305.14314)
* A script to perform SFT using FSDP on TPU devices
* A notebook that you can run on a free-tier Google Colab instance to perform SFT on English quotes dataset
#### Running the model on a CPU
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("google/gemma-7b")
model = AutoModelForCausalLM.from_pretrained("google/gemma-7b")
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(**input_text, return_tensors="pt")
outputs = model.generate(input_ids)
print(tokenizer.decode(outputs[0]))
```
#### Running the model on a single / multi GPU
```python
# pip install accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("google/gemma-7b")
model = AutoModelForCausalLM.from_pretrained("google/gemma-7b", device_map="auto")
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```
#### Running the model on a GPU using different precisions
* _Using `torch.float16`_
```python
# pip install accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("google/gemma-7b")
model = AutoModelForCausalLM.from_pretrained("google/gemma-7b", device_map="auto", torch_dtype=torch.float16)
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```
* _Using `torch.bfloat16`_
```python
# pip install accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("google/gemma-7b")
model = AutoModelForCausalLM.from_pretrained("google/gemma-7b", device_map="auto", torch_dtype=torch.bfloat16)
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```
#### Quantized Versions through `bitsandbytes`
* _Using 8-bit precision (int8)_
```python
# pip install bitsandbytes accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig
quantization_config = BitsAndBytesConfig(load_in_8bit=True)
tokenizer = AutoTokenizer.from_pretrained("google/gemma-7b")
model = AutoModelForCausalLM.from_pretrained("google/gemma-7b", quantization_config=quantization_config)
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```
* _Using 4-bit precision_
```python
# pip install bitsandbytes accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig
quantization_config = BitsAndBytesConfig(load_in_4bit=True)
tokenizer = AutoTokenizer.from_pretrained("google/gemma-7b")
model = AutoModelForCausalLM.from_pretrained("google/gemma-7b", quantization_config=quantization_config)
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```
#### Other optimizations
* _Flash Attention 2_
First make sure to install `flash-attn` in your environment `pip install flash-attn`
```diff
model = AutoModelForCausalLM.from_pretrained(
model_id,
torch_dtype=torch.float16,
+ attn_implementation="flash_attention_2"
).to(0)
```
### Inputs and outputs
* **Input:** Text string, such as a question, a prompt, or a document to be
summarized.
* **Output:** Generated English-language text in response to the input, such
as an answer to a question, or a summary of a document.
## Model Data
Data used for model training and how the data was processed.
### Training Dataset
These models were trained on a dataset of text data that includes a wide variety
of sources, totaling 6 trillion tokens. Here are the key components:
* Web Documents: A diverse collection of web text ensures the model is exposed
to a broad range of linguistic styles, topics, and vocabulary. Primarily
English-language content.
* Code: Exposing the model to code helps it to learn the syntax and patterns of
programming languages, which improves its ability to generate code or
understand code-related questions.
* Mathematics: Training on mathematical text helps the model learn logical
reasoning, symbolic representation, and to address mathematical queries.
The combination of these diverse data sources is crucial for training a powerful
language model that can handle a wide variety of different tasks and text
formats.
### Data Preprocessing
Here are the key data cleaning and filtering methods applied to the training
data:
* CSAM Filtering: Rigorous CSAM (Child Sexual Abuse Material) filtering was
applied at multiple stages in the data preparation process to ensure the
exclusion of harmful and illegal content
* Sensitive Data Filtering: As part of making Gemma pre-trained models safe and
reliable, automated techniques were used to filter out certain personal
information and other sensitive data from training sets.
* Additional methods: Filtering based on content quality and safely in line with
[our policies](https://storage.googleapis.com/gweb-uniblog-publish-prod/documents/2023_Google_AI_Principles_Progress_Update.pdf#page=11).
## Implementation Information
Details about the model internals.
### Hardware
Gemma was trained using the latest generation of
[Tensor Processing Unit (TPU)](https://cloud.google.com/tpu/docs/intro-to-tpu) hardware (TPUv5e).
Training large language models requires significant computational power. TPUs,
designed specifically for matrix operations common in machine learning, offer
several advantages in this domain:
* Performance: TPUs are specifically designed to handle the massive computations
involved in training LLMs. They can speed up training considerably compared to
CPUs.
* Memory: TPUs often come with large amounts of high-bandwidth memory, allowing
for the handling of large models and batch sizes during training. This can
lead to better model quality.
* Scalability: TPU Pods (large clusters of TPUs) provide a scalable solution for
handling the growing complexity of large foundation models. You can distribute
training across multiple TPU devices for faster and more efficient processing.
* Cost-effectiveness: In many scenarios, TPUs can provide a more cost-effective
solution for training large models compared to CPU-based infrastructure,
especially when considering the time and resources saved due to faster
training.
* These advantages are aligned with
[Google's commitments to operate sustainably](https://sustainability.google/operating-sustainably/).
### Software
Training was done using [JAX](https://github.com/google/jax) and [ML Pathways](https://blog.google/technology/ai/introducing-pathways-next-generation-ai-architecture).
JAX allows researchers to take advantage of the latest generation of hardware,
including TPUs, for faster and more efficient training of large models.
ML Pathways is Google's latest effort to build artificially intelligent systems
capable of generalizing across multiple tasks. This is specially suitable for
[foundation models](https://ai.google/discover/foundation-models/), including large language models like
these ones.
Together, JAX and ML Pathways are used as described in the
[paper about the Gemini family of models](https://arxiv.org/abs/2312.11805); "the 'single
controller' programming model of Jax and Pathways allows a single Python
process to orchestrate the entire training run, dramatically simplifying the
development workflow."
## Evaluation
Model evaluation metrics and results.
### Benchmark Results
These models were evaluated against a large collection of different datasets and
metrics to cover different aspects of text generation:
| Benchmark | Metric | 2B Params | 7B Params |
| ------------------------------ | ------------- | ----------- | --------- |
| [MMLU](https://arxiv.org/abs/2009.03300) | 5-shot, top-1 | 42.3 | 64.3 |
| [HellaSwag](https://arxiv.org/abs/1905.07830) | 0-shot |71.4 | 81.2 |
| [PIQA](https://arxiv.org/abs/1911.11641) | 0-shot | 77.3 | 81.2 |
| [SocialIQA](https://arxiv.org/abs/1904.09728) | 0-shot | 59.7 | 51.8 |
| [BooIQ](https://arxiv.org/abs/1905.10044) | 0-shot | 69.4 | 83.2 |
| [WinoGrande](https://arxiv.org/abs/1907.10641) | partial score | 65.4 | 72.3 |
| [CommonsenseQA](https://arxiv.org/abs/1811.00937) | 7-shot | 65.3 | 71.3 |
| [OpenBookQA](https://arxiv.org/abs/1809.02789) | | 47.8 | 52.8 |
| [ARC-e](https://arxiv.org/abs/1911.01547) | | 73.2 | 81.5 |
| [ARC-c](https://arxiv.org/abs/1911.01547) | | 42.1 | 53.2 |
| [TriviaQA](https://arxiv.org/abs/1705.03551) | 5-shot | 53.2 | 63.4 |
| [Natural Questions](https://github.com/google-research-datasets/natural-questions) | 5-shot | - | 23 |
| [HumanEval](https://arxiv.org/abs/2107.03374) | pass@1 | 22.0 | 32.3 |
| [MBPP](https://arxiv.org/abs/2108.07732) | 3-shot | 29.2 | 44.4 |
| [GSM8K](https://arxiv.org/abs/2110.14168) | maj@1 | 17.7 | 46.4 |
| [MATH](https://arxiv.org/abs/2108.07732) | 4-shot | 11.8 | 24.3 |
| [AGIEval](https://arxiv.org/abs/2304.06364) | | 24.2 | 41.7 |
| [BIG-Bench](https://arxiv.org/abs/2206.04615) | | 35.2 | 55.1 |
| ------------------------------ | ------------- | ----------- | --------- |
| **Average** | | **54.0** | **56.4** |
## Ethics and Safety
Ethics and safety evaluation approach and results.
### Evaluation Approach
Our evaluation methods include structured evaluations and internal red-teaming
testing of relevant content policies. Red-teaming was conducted by a number of
different teams, each with different goals and human evaluation metrics. These
models were evaluated against a number of different categories relevant to
ethics and safety, including:
* Text-to-Text Content Safety: Human evaluation on prompts covering safety
policies including child sexual abuse and exploitation, harassment, violence
and gore, and hate speech.
* Text-to-Text Representational Harms: Benchmark against relevant academic
datasets such as [WinoBias](https://arxiv.org/abs/1804.06876) and [BBQ Dataset](https://arxiv.org/abs/2110.08193v2).
* Memorization: Automated evaluation of memorization of training data, including
the risk of personally identifiable information exposure.
* Large-scale harm: Tests for "dangerous capabilities," such as chemical,
biological, radiological, and nuclear (CBRN) risks.
### Evaluation Results
The results of ethics and safety evaluations are within acceptable thresholds
for meeting [internal policies](https://storage.googleapis.com/gweb-uniblog-publish-prod/documents/2023_Google_AI_Principles_Progress_Update.pdf#page=11) for categories such as child
safety, content safety, representational harms, memorization, large-scale harms.
On top of robust internal evaluations, the results of well known safety
benchmarks like BBQ, BOLD, Winogender, Winobias, RealToxicity, and TruthfulQA
are shown here.
| Benchmark | Metric | 2B Params | 7B Params |
| ------------------------------ | ------------- | ----------- | --------- |
| [RealToxicity](https://arxiv.org/abs/2009.11462) | average | 6.86 | 7.90 |
| [BOLD](https://arxiv.org/abs/2101.11718) | | 45.57 | 49.08 |
| [CrowS-Pairs](https://aclanthology.org/2020.emnlp-main.154/) | top-1 | 45.82 | 51.33 |
| [BBQ Ambig](https://arxiv.org/abs/2110.08193v2) | 1-shot, top-1 | 62.58 | 92.54 |
| [BBQ Disambig](https://arxiv.org/abs/2110.08193v2) | top-1 | 54.62 | 71.99 |
| [Winogender](https://arxiv.org/abs/1804.09301) | top-1 | 51.25 | 54.17 |
| [TruthfulQA](https://arxiv.org/abs/2109.07958) | | 44.84 | 31.81 |
| [Winobias 1_2](https://arxiv.org/abs/1804.06876) | | 56.12 | 59.09 |
| [Winobias 2_2](https://arxiv.org/abs/1804.06876) | | 91.10 | 92.23 |
| [Toxigen](https://arxiv.org/abs/2203.09509) | | 29.77 | 39.59 |
| ------------------------------ | ------------- | ----------- | --------- |
## Usage and Limitations
These models have certain limitations that users should be aware of.
### Intended Usage
Open Large Language Models (LLMs) have a wide range of applications across
various industries and domains. The following list of potential uses is not
comprehensive. The purpose of this list is to provide contextual information
about the possible use-cases that the model creators considered as part of model
training and development.
* Content Creation and Communication
* Text Generation: These models can be used to generate creative text formats
such as poems, scripts, code, marketing copy, and email drafts.
* Chatbots and Conversational AI: Power conversational interfaces for customer
service, virtual assistants, or interactive applications.
* Text Summarization: Generate concise summaries of a text corpus, research
papers, or reports.
* Research and Education
* Natural Language Processing (NLP) Research: These models can serve as a
foundation for researchers to experiment with NLP techniques, develop
algorithms, and contribute to the advancement of the field.
* Language Learning Tools: Support interactive language learning experiences,
aiding in grammar correction or providing writing practice.
* Knowledge Exploration: Assist researchers in exploring large bodies of text
by generating summaries or answering questions about specific topics.
### Limitations
* Training Data
* The quality and diversity of the training data significantly influence the
model's capabilities. Biases or gaps in the training data can lead to
limitations in the model's responses.
* The scope of the training dataset determines the subject areas the model can
handle effectively.
* Context and Task Complexity
* LLMs are better at tasks that can be framed with clear prompts and
instructions. Open-ended or highly complex tasks might be challenging.
* A model's performance can be influenced by the amount of context provided
(longer context generally leads to better outputs, up to a certain point).
* Language Ambiguity and Nuance
* Natural language is inherently complex. LLMs might struggle to grasp subtle
nuances, sarcasm, or figurative language.
* Factual Accuracy
* LLMs generate responses based on information they learned from their
training datasets, but they are not knowledge bases. They may generate
incorrect or outdated factual statements.
* Common Sense
* LLMs rely on statistical patterns in language. They might lack the ability
to apply common sense reasoning in certain situations.
### Ethical Considerations and Risks
The development of large language models (LLMs) raises several ethical concerns.
In creating an open model, we have carefully considered the following:
* Bias and Fairness
* LLMs trained on large-scale, real-world text data can reflect socio-cultural
biases embedded in the training material. These models underwent careful
scrutiny, input data pre-processing described and posterior evaluations
reported in this card.
* Misinformation and Misuse
* LLMs can be misused to generate text that is false, misleading, or harmful.
* Guidelines are provided for responsible use with the model, see the
[Responsible Generative AI Toolkit](http://ai.google.dev/gemma/responsible).
* Transparency and Accountability:
* This model card summarizes details on the models' architecture,
capabilities, limitations, and evaluation processes.
* A responsibly developed open model offers the opportunity to share
innovation by making LLM technology accessible to developers and researchers
across the AI ecosystem.
Risks identified and mitigations:
* Perpetuation of biases: It's encouraged to perform continuous monitoring
(using evaluation metrics, human review) and the exploration of de-biasing
techniques during model training, fine-tuning, and other use cases.
* Generation of harmful content: Mechanisms and guidelines for content safety
are essential. Developers are encouraged to exercise caution and implement
appropriate content safety safeguards based on their specific product policies
and application use cases.
* Misuse for malicious purposes: Technical limitations and developer and
end-user education can help mitigate against malicious applications of LLMs.
Educational resources and reporting mechanisms for users to flag misuse are
provided. Prohibited uses of Gemma models are outlined in the
[Gemma Prohibited Use Policy](https://ai.google.dev/gemma/prohibited_use_policy).
* Privacy violations: Models were trained on data filtered for removal of PII
(Personally Identifiable Information). Developers are encouraged to adhere to
privacy regulations with privacy-preserving techniques.
### Benefits
At the time of release, this family of models provides high-performance open
large language model implementations designed from the ground up for Responsible
AI development compared to similarly sized models.
Using the benchmark evaluation metrics described in this document, these models
have shown to provide superior performance to other, comparably-sized open model
alternatives.
|
soumickmj/StRegA_cceVAE2D_Brain_MOOD_IXIT1_IXIT2_IXIPD
|
soumickmj
| 2024-09-08T21:33:43Z | 5 | 0 |
pytorch
|
[
"pytorch",
"safetensors",
"cceVAE",
"medical",
"anomaly detection",
"VAE",
"ceVAE",
"Compact ceVAE",
"unsupervised learning",
"2D",
"StRegA",
"tumour segmentation",
"T1w",
"T2w",
"MRI",
"3T",
"BraTS",
"MOOD",
"IXI",
"brain MRI",
"anomaly",
"image-segmentation",
"custom_code",
"arxiv:2201.13271",
"license:apache-2.0",
"region:us"
] |
image-segmentation
| 2024-09-02T12:14:26Z |
---
license: apache-2.0
pipeline_tag: image-segmentation
tags:
- medical
- anomaly detection
- VAE
- ceVAE
- cceVAE
- Compact ceVAE
- unsupervised learning
- 2D
- StRegA
- tumour segmentation
- T1w
- T2w
- MRI
- 3T
- BraTS
- MOOD
- IXI
- brain MRI
- anomaly
library_name: pytorch
---
# PULASki_ProbUNet2D_Hausdorff_VSeg
Expert interpretation of anatomical images of the human brain is the central part of neuro-radiology. Several machine learning-based
techniques have been proposed to assist in the analysis process. However, the ML models typically need to be trained to perform
a specific task, e.g., brain tumour segmentation or classification. Not only do the corresponding training data require laborious
manual annotations, but a wide variety of abnormalities can be present in a human brain MRI, even more than one simultaneously,
which makes a representation of all possible anomalies very challenging. Hence, a possible solution is an unsupervised anomaly
detection (UAD) system that can learn a data distribution from an unlabelled dataset of healthy subjects and then be applied to
detect out-of-distribution samples. Such a technique can then be used to detect anomalies, such as lesions or abnormalities (for example,
brain tumours), without explicitly training the model for that specific pathology. Several Variational Autoencoder (VAE) based
techniques have been proposed in the past for this task. Even though they perform very well on controlled, artificially simulated
anomalies, many of them perform poorly when detecting anomalies in clinical data. This research proposes a compact version of the
“context-encoding” VAE (ceVAE) model, combined with pre- and post-processing steps, creating a UAD pipeline (StRegA) that
is more robust on clinical data and demonstrates its applicability in detecting anomalies such as tumours in brain MRIs.
## Model Details
It was introduced in [StRegA: Unsupervised Anomaly Detection in Brain MRIs using a Compact Context-encoding Variational Autoencoder](https://doi.org/10.1016/j.compbiomed.2022.106093) by Soumick Chatterjee, Alessandro Sciarra, Max Dünnwald, Pavan Tummala, Shubham Kumar Agrawal, Aishwarya Jauhari, Aman Kalra, Steffen Oeltze-Jafra, Oliver Speck, Andreas Nürnberger. [arXiv preprint](https://arxiv.org/abs/2201.13271)
### Model Description
- **Model type:** Compact Context-encoding VAE (cceVAE) 2D
- **Task:** Anomaly detection in brain MRIs (T1w, T2w or PDw)
- **Training dataset:** MOOD T1 dataset, and brain-extracted T1w, T2w, PDw from the IXI dataset
### Model Sources
<!-- Provide the basic links for the model. -->
- **Repository:** https://github.com/soumickmj/StRegA
- **Paper:** https://doi.org/10.1016/j.compbiomed.2022.106093
- **Preprint:** https://arxiv.org/abs/2201.13271
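To make the reconstruction-based UAD idea described above concrete, here is a minimal sketch of scoring anomalies by reconstruction error; it illustrates the principle only and is not the actual StRegA pipeline (the real model class and pre-/post-processing live in the linked repository). `model` and `threshold` are placeholders.
```python
import torch

@torch.no_grad()
def anomaly_map(model, slice_2d: torch.Tensor, threshold: float = 0.1):
    """Score each pixel by reconstruction error and threshold it.

    model:     a trained (c)ceVAE mapping a brain MRI slice to its
               reconstruction (placeholder -- not loaded here).
    slice_2d:  a (1, 1, H, W) tensor holding one preprocessed slice.
    Returns the per-pixel residual map and a binary anomaly mask.
    """
    recon = model(slice_2d)              # healthy-looking reconstruction
    residual = (slice_2d - recon).abs()  # large where input deviates from healthy
    mask = residual > threshold          # crude anomaly segmentation
    return residual, mask
```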
## Citation
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
If you use this approach in your research or use codes from this repository or these weights, please cite the following in your publications:
**BibTeX:**
```bibtex
@article{chatterjee2022strega,
title={StRegA: Unsupervised Anomaly Detection in Brain MRIs using a Compact Context-encoding Variational Autoencoder},
author={Chatterjee, Soumick and Sciarra, Alessandro and D{\"u}nnwald, Max and Tummala, Pavan and Agrawal, Shubham Kumar and Jauhari, Aishwarya and Kalra, Aman and Oeltze-Jafra, Steffen and Speck, Oliver and N{\"u}rnberger, Andreas},
journal={Computers in Biology and Medicine},
pages={106093},
year={2022},
publisher={Elsevier},
doi={10.1016/j.compbiomed.2022.106093}
}
```
**APA:**
Chatterjee, S., Sciarra, A., Dünnwald, M., Tummala, P., Agrawal, S. K., Jauhari, A., ... & Nürnberger, A. (2022). StRegA: Unsupervised anomaly detection in brain MRIs using a compact context-encoding variational autoencoder. Computers in biology and medicine, 149, 106093.
|
soumickmj/PULASki_ProbUNet2D_Base_MSSeg
|
soumickmj
| 2024-09-08T21:18:29Z | 32 | 0 |
pytorch
|
[
"pytorch",
"safetensors",
"ProbUNet",
"medical",
"probabilistic unet",
"2D",
"PULASki",
"multiple sclerosis segmentation",
"multiple sclerosis",
"3T FLAIR",
"FLAIR",
"MRI",
"3T",
"multiple rater",
"Conditional VAE",
"distribution distance",
"image-segmentation",
"custom_code",
"arxiv:2312.15686",
"license:apache-2.0",
"region:us"
] |
image-segmentation
| 2024-09-03T15:09:43Z |
---
license: apache-2.0
pipeline_tag: image-segmentation
tags:
- medical
- probabilistic unet
- 2D
- PULASki
- multiple sclerosis segmentation
- multiple sclerosis
- 3T FLAIR
- FLAIR
- MRI
- 3T
- multiple rater
- Conditional VAE
- distribution distance
library_name: pytorch
---
# PULASki_ProbUNet2D_Base_MSSeg
In the domain of medical imaging, many supervised learning-based methods for segmentation face several challenges, such as high variability in annotations from multiple experts, paucity of labelled data, and class-imbalanced datasets. These issues may result in segmentations that lack the requisite precision for clinical analysis and can be misleadingly overconfident without associated uncertainty quantification.
We proposed PULASki, a computationally efficient generative tool for biomedical image segmentation that accurately captures variability in expert annotations, even in small datasets. Our approach makes use of an improved loss function based on statistical distances in a conditional variational autoencoder structure (Probabilistic UNet), which improves learning of the conditional decoder compared to the standard cross-entropy loss, particularly in class-imbalanced problems.
## Model Details
It was introduced in [PULASki: Learning inter-rater variability using statistical distances to improve
probabilistic segmentation](https://arxiv.org/abs/2312.15686) by Soumick Chatterjee, Franziska Gaidzik, Alessandro Sciarra, Hendrik Mattern, Gábor Janiga, Oliver Speck, Andreas Nürnberger, Sahani Pathiraja.
### Model Description
- **Developed by:** Dr Soumick Chatterjee
- **Model type:** 2D Probabilistic UNet (PULASki's baseline), trained (without a distribution distance loss) with Focal Tversky loss (FTL)
- **Task:** Probabilistic multiple sclerosis (MS) segmentation in 3T MRI FLAIR volumes
- **Training dataset:** 3T FLAIR MRIs from the MS segmentation dataset of a MICCAI 2016 challenge, details mentioned in Sec. 4.1 of https://arxiv.org/pdf/2312.15686
### Model Sources
<!-- Provide the basic links for the model. -->
- **Repository:** https://github.com/soumickmj/PULASki
- **Paper:** https://arxiv.org/abs/2312.15686
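For orientation, the Focal Tversky loss (FTL) named above has a standard closed form: with true positives TP, false negatives FN and false positives FP, the Tversky index is TI = TP / (TP + α·FN + β·FP) and the loss is (1 − TI)^γ. Below is a minimal PyTorch sketch; the α, β, γ values are common defaults from the FTL literature, not necessarily those used to train this checkpoint.
```python
import torch

def focal_tversky_loss(pred, target, alpha=0.7, beta=0.3, gamma=0.75, eps=1e-7):
    """Focal Tversky loss for binary segmentation.

    pred:   predicted foreground probabilities (any shape).
    target: binary ground-truth mask of the same shape.
    alpha/beta weight false negatives/positives; gamma focuses
    training on hard examples. Defaults are common choices only.
    """
    pred, target = pred.reshape(-1), target.reshape(-1)
    tp = (pred * target).sum()
    fn = ((1 - pred) * target).sum()
    fp = (pred * (1 - target)).sum()
    tversky = (tp + eps) / (tp + alpha * fn + beta * fp + eps)
    return (1 - tversky) ** gamma
```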
## Citation
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
If you use this approach in your research or use codes from this repository or these weights, please cite the following in your publications:
**BibTeX:**
```bibtex
@article{chatterjee2023pulaski,
title={PULASki: Learning inter-rater variability using statistical distances to improve probabilistic segmentation},
author={Chatterjee, Soumick and Gaidzik, Franziska and Sciarra, Alessandro and Mattern, Hendrik and Janiga, G{\'a}bor and Speck, Oliver and N{\"u}rnberger, Andreas and Pathiraja, Sahani},
journal={arXiv preprint arXiv:2312.15686},
year={2023}
}
```
**APA:**
Chatterjee, S., Gaidzik, F., Sciarra, A., Mattern, H., Janiga, G., Speck, O., Nuernberger, A., & Pathiraja, S. (2023). PULASki: Learning inter-rater variability using statistical distances to improve probabilistic segmentation. arXiv preprint arXiv:2312.15686.
|
soumickmj/PULASki_ProbUNet2D_FID_MSSeg
|
soumickmj
| 2024-09-08T21:18:05Z | 32 | 1 |
pytorch
|
[
"pytorch",
"safetensors",
"ProbUNet",
"medical",
"probabilistic unet",
"2D",
"PULASki",
"multiple sclerosis segmentation",
"multiple sclerosis",
"3T FLAIR",
"FLAIR",
"MRI",
"3T",
"multiple rater",
"Conditional VAE",
"distribution distance",
"image-segmentation",
"custom_code",
"arxiv:2312.15686",
"license:apache-2.0",
"region:us"
] |
image-segmentation
| 2024-09-03T15:07:07Z |
---
license: apache-2.0
pipeline_tag: image-segmentation
tags:
- medical
- probabilistic unet
- 2D
- PULASki
- multiple sclerosis segmentation
- multiple sclerosis
- 3T FLAIR
- FLAIR
- MRI
- 3T
- multiple rater
- Conditional VAE
- distribution distance
library_name: pytorch
---
# PULASki_ProbUNet2D_FID_MSSeg
In the domain of medical imaging, many supervised learning-based methods for segmentation face several challenges, such as high variability in annotations from multiple experts, paucity of labelled data, and class-imbalanced datasets. These issues may result in segmentations that lack the requisite precision for clinical analysis and can be misleadingly overconfident without associated uncertainty quantification.
We proposed PULASki, a computationally efficient generative tool for biomedical image segmentation that accurately captures variability in expert annotations, even in small datasets. Our approach makes use of an improved loss function based on statistical distances in a conditional variational autoencoder structure (Probabilistic UNet), which improves learning of the conditional decoder compared to the standard cross-entropy loss, particularly in class-imbalanced problems.
## Model Details
It was introduced in [PULASki: Learning inter-rater variability using statistical distances to improve
probabilistic segmentation](https://arxiv.org/abs/2312.15686) by Soumick Chatterjee, Franziska Gaidzik, Alessandro Sciarra, Hendrik Mattern, Gábor Janiga, Oliver Speck, Andreas Nürnberger, Sahani Pathiraja.
### Model Description
- **Developed by:** Dr Soumick Chatterjee
- **Model type:** PULASki 2D Probabilistic UNet, trained with Fréchet inception distance (FID) loss
- **Task:** Probabilistic multiple sclerosis (MS) segmentation in 3T MRI FLAIR volumes
- **Training dataset:** 3T FLAIR MRIs from the MS segmentation dataset of a MICCAI 2016 challenge, details mentioned in Sec. 4.1 of https://arxiv.org/pdf/2312.15686
### Model Sources
<!-- Provide the basic links for the model. -->
- **Repository:** https://github.com/soumickmj/PULASki
- **Paper:** https://arxiv.org/abs/2312.15686
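For reference, the Fréchet distance underlying the FID loss compares two Gaussians fitted to feature sets: d²((μ₁, Σ₁), (μ₂, Σ₂)) = ‖μ₁ − μ₂‖² + Tr(Σ₁ + Σ₂ − 2(Σ₁Σ₂)^½). A minimal NumPy/SciPy sketch follows; how features are extracted from sampled and reference segmentations is model-specific and not shown here.
```python
import numpy as np
from scipy import linalg

def frechet_distance(feats_a: np.ndarray, feats_b: np.ndarray) -> float:
    """Fréchet distance between Gaussians fitted to two feature sets.

    feats_a, feats_b: (n_samples, n_features) feature embeddings,
    e.g. of sampled vs. reference segmentations (extraction not shown).
    """
    mu_a, mu_b = feats_a.mean(axis=0), feats_b.mean(axis=0)
    cov_a = np.cov(feats_a, rowvar=False)
    cov_b = np.cov(feats_b, rowvar=False)
    covmean = linalg.sqrtm(cov_a @ cov_b)   # matrix square root
    if np.iscomplexobj(covmean):            # drop tiny imaginary residue
        covmean = covmean.real
    diff = mu_a - mu_b
    return float(diff @ diff + np.trace(cov_a + cov_b - 2.0 * covmean))
```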
## Citation
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
If you use this approach in your research or use codes from this repository or these weights, please cite the following in your publications:
**BibTeX:**
```bibtex
@article{chatterjee2023pulaski,
title={PULASki: Learning inter-rater variability using statistical distances to improve probabilistic segmentation},
author={Chatterjee, Soumick and Gaidzik, Franziska and Sciarra, Alessandro and Mattern, Hendrik and Janiga, G{\'a}bor and Speck, Oliver and N{\"u}rnberger, Andreas and Pathiraja, Sahani},
journal={arXiv preprint arXiv:2312.15686},
year={2023}
}
```
**APA:**
Chatterjee, S., Gaidzik, F., Sciarra, A., Mattern, H., Janiga, G., Speck, O., Nuernberger, A., & Pathiraja, S. (2023). PULASki: Learning inter-rater variability using statistical distances to improve probabilistic segmentation. arXiv preprint arXiv:2312.15686.
|
soumickmj/PULASki_ProbUNet3D_Base_VSeg
|
soumickmj
| 2024-09-08T21:16:11Z | 9 | 0 |
pytorch
|
[
"pytorch",
"safetensors",
"ProbUNet",
"medical",
"probabilistic unet",
"3D",
"PULASki",
"vessel segmentation",
"7T MRA-ToF",
"MRA",
"TOF",
"MRI",
"7T",
"Conditional VAE",
"distribution distance",
"image-segmentation",
"custom_code",
"arxiv:2312.15686",
"license:apache-2.0",
"region:us"
] |
image-segmentation
| 2024-09-03T15:00:54Z |
---
license: apache-2.0
pipeline_tag: image-segmentation
tags:
- medical
- probabilistic unet
- 3D
- PULASki
- vessel segmentation
- 7T MRA-ToF
- MRA
- TOF
- MRI
- 7T
- Conditional VAE
- distribution distance
library_name: pytorch
---
# PULASki_ProbUNet3D_Base_VSeg
In the domain of medical imaging, many supervised learning-based methods for segmentation face several challenges, such as high variability in annotations from multiple experts, paucity of labelled data, and class-imbalanced datasets. These issues may result in segmentations that lack the requisite precision for clinical analysis and can be misleadingly overconfident without associated uncertainty quantification.
We proposed PULASki, a computationally efficient generative tool for biomedical image segmentation that accurately captures variability in expert annotations, even in small datasets. Our approach makes use of an improved loss function based on statistical distances in a conditional variational autoencoder structure (Probabilistic UNet), which improves learning of the conditional decoder compared to the standard cross-entropy loss, particularly in class-imbalanced problems.
## Model Details
It was introduced in [PULASki: Learning inter-rater variability using statistical distances to improve
probabilistic segmentation](https://arxiv.org/abs/2312.15686) by Soumick Chatterjee, Franziska Gaidzik, Alessandro Sciarra, Hendrik Mattern, Gábor Janiga, Oliver Speck, Andreas Nürnberger, Sahani Pathiraja.
### Model Description
- **Developed by:** Dr Soumick Chatterjee
- **Model type:** 3D Probabilistic UNet (PULASki's baseline), trained (without a distribution distance loss) with Focal Tversky loss (FTL)
- **Task:** Probabilistic vessel segmentation in 7T MRA-ToF volumes
- **Training dataset:** 7T MRA-ToF volumes, details mentioned in Sec. 4.1 of https://arxiv.org/pdf/2312.15686
### Model Sources
<!-- Provide the basic links for the model. -->
- **Repository:** https://github.com/soumickmj/PULASki
- **Paper:** https://arxiv.org/abs/2312.15686
## Citation
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
If you use this approach in your research or use codes from this repository or these weights, please cite the following in your publications:
**BibTeX:**
```bibtex
@article{chatterjee2023pulaski,
title={PULASki: Learning inter-rater variability using statistical distances to improve probabilistic segmentation},
author={Chatterjee, Soumick and Gaidzik, Franziska and Sciarra, Alessandro and Mattern, Hendrik and Janiga, G{\'a}bor and Speck, Oliver and N{\"u}rnberger, Andreas and Pathiraja, Sahani},
journal={arXiv preprint arXiv:2312.15686},
year={2023}
}
```
**APA:**
Chatterjee, S., Gaidzik, F., Sciarra, A., Mattern, H., Janiga, G., Speck, O., Nuernberger, A., & Pathiraja, S. (2023). PULASki: Learning inter-rater variability using statistical distances to improve probabilistic segmentation. arXiv preprint arXiv:2312.15686.
|
soumickmj/PULASki_ProbUNet3D_Sinkhorn_VSeg
|
soumickmj
| 2024-09-08T21:15:40Z | 6 | 0 |
pytorch
|
[
"pytorch",
"safetensors",
"ProbUNet",
"medical",
"probabilistic unet",
"3D",
"PULASki",
"vessel segmentation",
"7T MRA-ToF",
"MRA",
"TOF",
"MRI",
"7T",
"Conditional VAE",
"distribution distance",
"image-segmentation",
"custom_code",
"arxiv:2312.15686",
"license:apache-2.0",
"region:us"
] |
image-segmentation
| 2024-09-05T15:50:07Z |
---
license: apache-2.0
pipeline_tag: image-segmentation
tags:
- medical
- probabilistic unet
- 3D
- PULASki
- vessel segmentation
- 7T MRA-ToF
- MRA
- TOF
- MRI
- 7T
- Conditional VAE
- distribution distance
library_name: pytorch
---
# PULASki_ProbUNet3D_Sinkhorn_VSeg
In the domain of medical imaging, many supervised learning-based methods for segmentation face several challenges, such as high variability in annotations from multiple experts, paucity of labelled data, and class-imbalanced datasets. These issues may result in segmentations that lack the requisite precision for clinical analysis and can be misleadingly overconfident without associated uncertainty quantification.
We proposed PULASki, a computationally efficient generative tool for biomedical image segmentation that accurately captures variability in expert annotations, even in small datasets. Our approach makes use of an improved loss function based on statistical distances in a conditional variational autoencoder structure (Probabilistic UNet), which improves learning of the conditional decoder compared to the standard cross-entropy loss, particularly in class-imbalanced problems.
## Model Details
It was introduced in [PULASki: Learning inter-rater variability using statistical distances to improve
probabilistic segmentation](https://arxiv.org/abs/2312.15686) by Soumick Chatterjee, Franziska Gaidzik, Alessandro Sciarra, Hendrik Mattern, Gábor Janiga, Oliver Speck, Andreas Nürnberger, Sahani Pathiraja.
### Model Description
- **Developed by:** Dr Soumick Chatterjee
- **Model type:** PULASki 3D Probabilistic UNet, trained with Sinkhorn loss
- **Task:** Probabilistic vessel segmentation in 7T MRA-ToF volumes
- **Training dataset:** 7T MRA-ToF volumes, details mentioned in Sec. 4.1 of https://arxiv.org/pdf/2312.15686
### Model Sources
<!-- Provide the basic links for the model. -->
- **Repository:** https://github.com/soumickmj/PULASki
- **Paper:** https://arxiv.org/abs/2312.15686
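As a rough illustration of the Sinkhorn loss named above (an entropic-regularised optimal-transport distance), the classic algorithm alternately rescales the rows and columns of a Gibbs kernel built from a cost matrix. The snippet below is a textbook sketch for two discrete distributions, not the exact training loss of this checkpoint; log-domain stabilisation is omitted for brevity.
```python
import torch

def sinkhorn_distance(a, b, cost, eps=0.1, n_iters=100):
    """Entropic-regularised OT distance between histograms a and b.

    a, b:  1-D probability vectors (non-negative, summing to 1).
    cost:  (len(a), len(b)) matrix of pairwise transport costs.
    eps:   entropic regularisation strength.
    """
    K = torch.exp(-cost / eps)        # Gibbs kernel
    u = torch.ones_like(a)
    for _ in range(n_iters):          # alternating scaling updates
        v = b / (K.t() @ u)
        u = a / (K @ v)
    plan = torch.diag(u) @ K @ torch.diag(v)   # transport plan
    return (plan * cost).sum()
```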
## Citation
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
If you use this approach in your research or use codes from this repository or these weights, please cite the following in your publications:
**BibTeX:**
```bibtex
@article{chatterjee2023pulaski,
title={PULASki: Learning inter-rater variability using statistical distances to improve probabilistic segmentation},
author={Chatterjee, Soumick and Gaidzik, Franziska and Sciarra, Alessandro and Mattern, Hendrik and Janiga, G{\'a}bor and Speck, Oliver and N{\"u}rnberger, Andreas and Pathiraja, Sahani},
journal={arXiv preprint arXiv:2312.15686},
year={2023}
}
```
**APA:**
Chatterjee, S., Gaidzik, F., Sciarra, A., Mattern, H., Janiga, G., Speck, O., Nuernberger, A., & Pathiraja, S. (2023). PULASki: Learning inter-rater variability using statistical distances to improve probabilistic segmentation. arXiv preprint arXiv:2312.15686.
|
soumickmj/PULASki_ProbUNet2D_Sinkhorn_VSeg
|
soumickmj
| 2024-09-08T21:14:03Z | 7 | 0 |
pytorch
|
[
"pytorch",
"safetensors",
"ProbUNet",
"medical",
"probabilistic unet",
"2D",
"PULASki",
"vessel segmentation",
"7T MRA-ToF",
"MRA",
"TOF",
"MRI",
"7T",
"Conditional VAE",
"distribution distance",
"image-segmentation",
"custom_code",
"arxiv:2312.15686",
"license:apache-2.0",
"region:us"
] |
image-segmentation
| 2024-09-05T17:03:09Z |
---
license: apache-2.0
pipeline_tag: image-segmentation
tags:
- medical
- probabilistic unet
- 2D
- PULASki
- vessel segmentation
- 7T MRA-ToF
- MRA
- TOF
- MRI
- 7T
- Conditional VAE
- distribution distance
library_name: pytorch
---
# PULASki_ProbUNet2D_Sinkhorn_VSeg
In the domain of medical imaging, many supervised learning-based methods for segmentation face several challenges, such as high variability in annotations from multiple experts, paucity of labelled data, and class-imbalanced datasets. These issues may result in segmentations that lack the requisite precision for clinical analysis and can be misleadingly overconfident without associated uncertainty quantification.
We proposed PULASki, a computationally efficient generative tool for biomedical image segmentation that accurately captures variability in expert annotations, even in small datasets. Our approach makes use of an improved loss function based on statistical distances in a conditional variational autoencoder structure (Probabilistic UNet), which improves learning of the conditional decoder compared to the standard cross-entropy loss, particularly in class-imbalanced problems.
## Model Details
It was introduced in [PULASki: Learning inter-rater variability using statistical distances to improve
probabilistic segmentation](https://arxiv.org/abs/2312.15686) by Soumick Chatterjee, Franziska Gaidzik, Alessandro Sciarra, Hendrik Mattern, Gábor Janiga, Oliver Speck, Andreas Nürnberger, Sahani Pathiraja.
### Model Description
- **Developed by:** Dr Soumick Chatterjee
- **Model type:** PULASki 2D Probabilistic UNet, trained with Sinkhorn loss
- **Task:** Probabilistic vessel segmentation in 7T MRA-ToF volumes
- **Training dataset:** 7T MRA-ToF volumes, details mentioned in Sec. 4.1 of https://arxiv.org/pdf/2312.15686
### Model Sources
<!-- Provide the basic links for the model. -->
- **Repository:** https://github.com/soumickmj/PULASki
- **Paper:** https://arxiv.org/abs/2312.15686
## Citation
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
If you use this approach in your research or use codes from this repository or these weights, please cite the following in your publications:
**BibTeX:**
```bibtex
@article{chatterjee2023pulaski,
title={PULASki: Learning inter-rater variability using statistical distances to improve probabilistic segmentation},
author={Chatterjee, Soumick and Gaidzik, Franziska and Sciarra, Alessandro and Mattern, Hendrik and Janiga, G{\'a}bor and Speck, Oliver and N{\"u}rnberger, Andreas and Pathiraja, Sahani},
journal={arXiv preprint arXiv:2312.15686},
year={2023}
}
```
**APA:**
Chatterjee, S., Gaidzik, F., Sciarra, A., Mattern, H., Janiga, G., Speck, O., Nuernberger, A., & Pathiraja, S. (2023). PULASki: Learning inter-rater variability using statistical distances to improve probabilistic segmentation. arXiv preprint arXiv:2312.15686.
|
sr1ncvs/whisper-small-hi
|
sr1ncvs
| 2024-09-08T20:58:51Z | 76 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"hi",
"dataset:mozilla-foundation/common_voice_11_0",
"base_model:openai/whisper-tiny",
"base_model:finetune:openai/whisper-tiny",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2024-09-08T18:40:18Z |
---
library_name: transformers
language:
- hi
license: apache-2.0
base_model: openai/whisper-tiny
tags:
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_11_0
metrics:
- wer
model-index:
- name: Whisper Tiny Hindi
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 11.0
type: mozilla-foundation/common_voice_11_0
config: hi
split: test
args: 'config: hi, split: test'
metrics:
- name: Wer
type: wer
value: 57.06848387369847
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Tiny Hindi
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on the Common Voice 11.0 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5835
- Wer: 57.0685
## Model description
More information needed
## Intended uses & limitations
More information needed
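A minimal transcription sketch using the standard `transformers` pipeline API (the audio path is a placeholder):
```python
from transformers import pipeline

# Standard transformers ASR usage; "sample.wav" is a placeholder path.
asr = pipeline("automatic-speech-recognition", model="sr1ncvs/whisper-small-hi")
print(asr("sample.wav")["text"])
```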
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a hedged code sketch mapping them to `transformers` arguments follows the list):
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
- mixed_precision_training: Native AMP
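As referenced above, a hedged sketch of how the listed settings map onto `transformers` training arguments (the actual training script is not part of this card):
```python
from transformers import Seq2SeqTrainingArguments

# Hypothetical reconstruction of the hyperparameters listed above;
# the real training script is not included in this card.
training_args = Seq2SeqTrainingArguments(
    output_dir="./whisper-small-hi",   # placeholder output path
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    learning_rate=1e-5,
    warmup_steps=500,
    max_steps=4000,
    lr_scheduler_type="linear",
    seed=42,
    fp16=True,                         # "Native AMP" mixed precision
)
```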
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:------:|:----:|:---------------:|:-------:|
| 0.3139 | 2.4450 | 1000 | 0.5455 | 61.3180 |
| 0.1935 | 4.8900 | 2000 | 0.5213 | 56.7764 |
| 0.1051 | 7.3350 | 3000 | 0.5624 | 56.8230 |
| 0.0858 | 9.7800 | 4000 | 0.5835 | 57.0685 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.1+cu121
- Datasets 2.21.0
- Tokenizers 0.19.1
|
mradermacher/calme-2.2-qwen2-72b-i1-GGUF
|
mradermacher
| 2024-09-08T20:54:12Z | 27 | 0 |
transformers
|
[
"transformers",
"gguf",
"chat",
"qwen",
"qwen2",
"finetune",
"chatml",
"en",
"dataset:MaziyarPanahi/truthy-dpo-v0.1-axolotl",
"base_model:MaziyarPanahi/calme-2.2-qwen2-72b",
"base_model:quantized:MaziyarPanahi/calme-2.2-qwen2-72b",
"license:other",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2024-08-01T05:43:19Z |
---
base_model: MaziyarPanahi/calme-2.2-qwen2-72b
datasets:
- MaziyarPanahi/truthy-dpo-v0.1-axolotl
language:
- en
library_name: transformers
license: other
license_link: https://huggingface.co/Qwen/Qwen2-72B-Instruct/blob/main/LICENSE
license_name: tongyi-qianwen
model_creator: MaziyarPanahi
model_name: calme-2.2-qwen2-72b
quantized_by: mradermacher
tags:
- chat
- qwen
- qwen2
- finetune
- chatml
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/MaziyarPanahi/calme-2.2-qwen2-72b
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/calme-2.2-qwen2-72b-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
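For the multi-part quants in the table below, the `.partXofY` files are plain byte-level splits (per the convention in the linked READMEs); a minimal Python sketch of the join step, using the Q5_K_S filenames from the table as an example:
```python
import shutil

# Concatenating the parts in order reproduces the original GGUF file.
parts = [
    "calme-2.2-qwen2-72b.i1-Q5_K_S.gguf.part1of2",
    "calme-2.2-qwen2-72b.i1-Q5_K_S.gguf.part2of2",
]
with open("calme-2.2-qwen2-72b.i1-Q5_K_S.gguf", "wb") as out:
    for part in parts:
        with open(part, "rb") as src:
            shutil.copyfileobj(src, out)
```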
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable over similarly sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/calme-2.2-qwen2-72b-i1-GGUF/resolve/main/calme-2.2-qwen2-72b.i1-IQ1_S.gguf) | i1-IQ1_S | 22.8 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/calme-2.2-qwen2-72b-i1-GGUF/resolve/main/calme-2.2-qwen2-72b.i1-IQ1_M.gguf) | i1-IQ1_M | 23.8 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/calme-2.2-qwen2-72b-i1-GGUF/resolve/main/calme-2.2-qwen2-72b.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 25.6 | |
| [GGUF](https://huggingface.co/mradermacher/calme-2.2-qwen2-72b-i1-GGUF/resolve/main/calme-2.2-qwen2-72b.i1-IQ2_XS.gguf) | i1-IQ2_XS | 27.2 | |
| [GGUF](https://huggingface.co/mradermacher/calme-2.2-qwen2-72b-i1-GGUF/resolve/main/calme-2.2-qwen2-72b.i1-IQ2_S.gguf) | i1-IQ2_S | 28.0 | |
| [GGUF](https://huggingface.co/mradermacher/calme-2.2-qwen2-72b-i1-GGUF/resolve/main/calme-2.2-qwen2-72b.i1-IQ2_M.gguf) | i1-IQ2_M | 29.4 | |
| [GGUF](https://huggingface.co/mradermacher/calme-2.2-qwen2-72b-i1-GGUF/resolve/main/calme-2.2-qwen2-72b.i1-Q2_K.gguf) | i1-Q2_K | 29.9 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/calme-2.2-qwen2-72b-i1-GGUF/resolve/main/calme-2.2-qwen2-72b.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 31.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/calme-2.2-qwen2-72b-i1-GGUF/resolve/main/calme-2.2-qwen2-72b.i1-IQ3_XS.gguf) | i1-IQ3_XS | 32.9 | |
| [GGUF](https://huggingface.co/mradermacher/calme-2.2-qwen2-72b-i1-GGUF/resolve/main/calme-2.2-qwen2-72b.i1-IQ3_S.gguf) | i1-IQ3_S | 34.6 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/calme-2.2-qwen2-72b-i1-GGUF/resolve/main/calme-2.2-qwen2-72b.i1-Q3_K_S.gguf) | i1-Q3_K_S | 34.6 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/calme-2.2-qwen2-72b-i1-GGUF/resolve/main/calme-2.2-qwen2-72b.i1-IQ3_M.gguf) | i1-IQ3_M | 35.6 | |
| [GGUF](https://huggingface.co/mradermacher/calme-2.2-qwen2-72b-i1-GGUF/resolve/main/calme-2.2-qwen2-72b.i1-Q3_K_M.gguf) | i1-Q3_K_M | 37.8 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/calme-2.2-qwen2-72b-i1-GGUF/resolve/main/calme-2.2-qwen2-72b.i1-Q3_K_L.gguf) | i1-Q3_K_L | 39.6 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/calme-2.2-qwen2-72b-i1-GGUF/resolve/main/calme-2.2-qwen2-72b.i1-IQ4_XS.gguf) | i1-IQ4_XS | 39.8 | |
| [GGUF](https://huggingface.co/mradermacher/calme-2.2-qwen2-72b-i1-GGUF/resolve/main/calme-2.2-qwen2-72b.i1-Q4_0.gguf) | i1-Q4_0 | 41.5 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/calme-2.2-qwen2-72b-i1-GGUF/resolve/main/calme-2.2-qwen2-72b.i1-Q4_K_S.gguf) | i1-Q4_K_S | 44.0 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/calme-2.2-qwen2-72b-i1-GGUF/resolve/main/calme-2.2-qwen2-72b.i1-Q4_K_M.gguf) | i1-Q4_K_M | 47.5 | fast, recommended |
| [PART 1](https://huggingface.co/mradermacher/calme-2.2-qwen2-72b-i1-GGUF/resolve/main/calme-2.2-qwen2-72b.i1-Q5_K_S.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/calme-2.2-qwen2-72b-i1-GGUF/resolve/main/calme-2.2-qwen2-72b.i1-Q5_K_S.gguf.part2of2) | i1-Q5_K_S | 51.5 | |
| [PART 1](https://huggingface.co/mradermacher/calme-2.2-qwen2-72b-i1-GGUF/resolve/main/calme-2.2-qwen2-72b.i1-Q5_K_M.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/calme-2.2-qwen2-72b-i1-GGUF/resolve/main/calme-2.2-qwen2-72b.i1-Q5_K_M.gguf.part2of2) | i1-Q5_K_M | 54.5 | |
| [PART 1](https://huggingface.co/mradermacher/calme-2.2-qwen2-72b-i1-GGUF/resolve/main/calme-2.2-qwen2-72b.i1-Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/calme-2.2-qwen2-72b-i1-GGUF/resolve/main/calme-2.2-qwen2-72b.i1-Q6_K.gguf.part2of2) | i1-Q6_K | 64.4 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
mradermacher/calme-2.3-legalkit-8b-GGUF
|
mradermacher
| 2024-09-08T20:50:08Z | 76 | 1 |
transformers
|
[
"transformers",
"gguf",
"chat",
"llama",
"llama3",
"llama3.1",
"finetune",
"legal",
"legalkit",
"fr",
"en",
"dataset:MaziyarPanahi/calme-legalkit-v0.1",
"base_model:MaziyarPanahi/calme-2.3-legalkit-8b",
"base_model:quantized:MaziyarPanahi/calme-2.3-legalkit-8b",
"license:llama3.1",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-08-07T22:10:33Z |
---
base_model: MaziyarPanahi/calme-2.3-legalkit-8b
datasets:
- MaziyarPanahi/calme-legalkit-v0.1
language:
- fr
- en
library_name: transformers
license: llama3.1
model_creator: MaziyarPanahi
model_name: calme-2.3-legalkit-8b
quantized_by: mradermacher
tags:
- chat
- llama
- llama3
- llama3.1
- finetune
- legal
- legalkit
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/MaziyarPanahi/calme-2.3-legalkit-8b
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/calme-2.3-legalkit-8b-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable over similarly sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/calme-2.3-legalkit-8b-GGUF/resolve/main/calme-2.3-legalkit-8b.Q2_K.gguf) | Q2_K | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/calme-2.3-legalkit-8b-GGUF/resolve/main/calme-2.3-legalkit-8b.IQ3_XS.gguf) | IQ3_XS | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/calme-2.3-legalkit-8b-GGUF/resolve/main/calme-2.3-legalkit-8b.Q3_K_S.gguf) | Q3_K_S | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/calme-2.3-legalkit-8b-GGUF/resolve/main/calme-2.3-legalkit-8b.IQ3_S.gguf) | IQ3_S | 3.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/calme-2.3-legalkit-8b-GGUF/resolve/main/calme-2.3-legalkit-8b.IQ3_M.gguf) | IQ3_M | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/calme-2.3-legalkit-8b-GGUF/resolve/main/calme-2.3-legalkit-8b.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/calme-2.3-legalkit-8b-GGUF/resolve/main/calme-2.3-legalkit-8b.Q3_K_L.gguf) | Q3_K_L | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/calme-2.3-legalkit-8b-GGUF/resolve/main/calme-2.3-legalkit-8b.IQ4_XS.gguf) | IQ4_XS | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/calme-2.3-legalkit-8b-GGUF/resolve/main/calme-2.3-legalkit-8b.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/calme-2.3-legalkit-8b-GGUF/resolve/main/calme-2.3-legalkit-8b.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/calme-2.3-legalkit-8b-GGUF/resolve/main/calme-2.3-legalkit-8b.Q5_K_S.gguf) | Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/calme-2.3-legalkit-8b-GGUF/resolve/main/calme-2.3-legalkit-8b.Q5_K_M.gguf) | Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/calme-2.3-legalkit-8b-GGUF/resolve/main/calme-2.3-legalkit-8b.Q6_K.gguf) | Q6_K | 6.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/calme-2.3-legalkit-8b-GGUF/resolve/main/calme-2.3-legalkit-8b.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/calme-2.3-legalkit-8b-GGUF/resolve/main/calme-2.3-legalkit-8b.f16.gguf) | f16 | 16.2 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/ChatWaifu_v1.2.1-i1-GGUF
|
mradermacher
| 2024-09-08T20:48:53Z | 121 | 2 |
transformers
|
[
"transformers",
"gguf",
"nsfw",
"Visual novel",
"roleplay",
"mergekit",
"merge",
"en",
"fr",
"de",
"es",
"it",
"pt",
"ru",
"zh",
"ja",
"base_model:spow12/ChatWaifu_v1.2.1",
"base_model:quantized:spow12/ChatWaifu_v1.2.1",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2024-08-09T11:35:45Z |
---
base_model: spow12/ChatWaifu_v1.2.1
language:
- en
- fr
- de
- es
- it
- pt
- ru
- zh
- ja
library_name: transformers
license: cc-by-nc-4.0
quantized_by: mradermacher
tags:
- nsfw
- Visual novel
- roleplay
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/spow12/ChatWaifu_v1.2.1
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/ChatWaifu_v1.2.1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable over similarly sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/ChatWaifu_v1.2.1-i1-GGUF/resolve/main/ChatWaifu_v1.2.1.i1-IQ1_S.gguf) | i1-IQ1_S | 3.1 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/ChatWaifu_v1.2.1-i1-GGUF/resolve/main/ChatWaifu_v1.2.1.i1-IQ1_M.gguf) | i1-IQ1_M | 3.3 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/ChatWaifu_v1.2.1-i1-GGUF/resolve/main/ChatWaifu_v1.2.1.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 3.7 | |
| [GGUF](https://huggingface.co/mradermacher/ChatWaifu_v1.2.1-i1-GGUF/resolve/main/ChatWaifu_v1.2.1.i1-IQ2_XS.gguf) | i1-IQ2_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/ChatWaifu_v1.2.1-i1-GGUF/resolve/main/ChatWaifu_v1.2.1.i1-IQ2_S.gguf) | i1-IQ2_S | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/ChatWaifu_v1.2.1-i1-GGUF/resolve/main/ChatWaifu_v1.2.1.i1-IQ2_M.gguf) | i1-IQ2_M | 4.5 | |
| [GGUF](https://huggingface.co/mradermacher/ChatWaifu_v1.2.1-i1-GGUF/resolve/main/ChatWaifu_v1.2.1.i1-Q2_K.gguf) | i1-Q2_K | 4.9 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/ChatWaifu_v1.2.1-i1-GGUF/resolve/main/ChatWaifu_v1.2.1.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 5.0 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/ChatWaifu_v1.2.1-i1-GGUF/resolve/main/ChatWaifu_v1.2.1.i1-IQ3_XS.gguf) | i1-IQ3_XS | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/ChatWaifu_v1.2.1-i1-GGUF/resolve/main/ChatWaifu_v1.2.1.i1-Q3_K_S.gguf) | i1-Q3_K_S | 5.6 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/ChatWaifu_v1.2.1-i1-GGUF/resolve/main/ChatWaifu_v1.2.1.i1-IQ3_S.gguf) | i1-IQ3_S | 5.7 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/ChatWaifu_v1.2.1-i1-GGUF/resolve/main/ChatWaifu_v1.2.1.i1-IQ3_M.gguf) | i1-IQ3_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/ChatWaifu_v1.2.1-i1-GGUF/resolve/main/ChatWaifu_v1.2.1.i1-Q3_K_M.gguf) | i1-Q3_K_M | 6.2 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/ChatWaifu_v1.2.1-i1-GGUF/resolve/main/ChatWaifu_v1.2.1.i1-Q3_K_L.gguf) | i1-Q3_K_L | 6.7 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/ChatWaifu_v1.2.1-i1-GGUF/resolve/main/ChatWaifu_v1.2.1.i1-IQ4_XS.gguf) | i1-IQ4_XS | 6.8 | |
| [GGUF](https://huggingface.co/mradermacher/ChatWaifu_v1.2.1-i1-GGUF/resolve/main/ChatWaifu_v1.2.1.i1-Q4_0.gguf) | i1-Q4_0 | 7.2 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/ChatWaifu_v1.2.1-i1-GGUF/resolve/main/ChatWaifu_v1.2.1.i1-Q4_K_S.gguf) | i1-Q4_K_S | 7.2 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/ChatWaifu_v1.2.1-i1-GGUF/resolve/main/ChatWaifu_v1.2.1.i1-Q4_K_M.gguf) | i1-Q4_K_M | 7.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/ChatWaifu_v1.2.1-i1-GGUF/resolve/main/ChatWaifu_v1.2.1.i1-Q5_K_S.gguf) | i1-Q5_K_S | 8.6 | |
| [GGUF](https://huggingface.co/mradermacher/ChatWaifu_v1.2.1-i1-GGUF/resolve/main/ChatWaifu_v1.2.1.i1-Q5_K_M.gguf) | i1-Q5_K_M | 8.8 | |
| [GGUF](https://huggingface.co/mradermacher/ChatWaifu_v1.2.1-i1-GGUF/resolve/main/ChatWaifu_v1.2.1.i1-Q6_K.gguf) | i1-Q6_K | 10.2 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
mradermacher/ataraxy-actually-9b-i1-GGUF
|
mradermacher
| 2024-09-08T20:47:18Z | 156 | 1 |
transformers
|
[
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:lodrick-the-lafted/ataraxy-actually-9b",
"base_model:quantized:lodrick-the-lafted/ataraxy-actually-9b",
"license:gemma",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2024-09-08T16:49:31Z |
---
base_model: lodrick-the-lafted/ataraxy-actually-9b
language:
- en
library_name: transformers
license: gemma
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/lodrick-the-lafted/ataraxy-actually-9b
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/ataraxy-actually-9b-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable over similarly sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/ataraxy-actually-9b-i1-GGUF/resolve/main/ataraxy-actually-9b.i1-IQ1_S.gguf) | i1-IQ1_S | 2.5 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/ataraxy-actually-9b-i1-GGUF/resolve/main/ataraxy-actually-9b.i1-IQ1_M.gguf) | i1-IQ1_M | 2.6 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/ataraxy-actually-9b-i1-GGUF/resolve/main/ataraxy-actually-9b.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/ataraxy-actually-9b-i1-GGUF/resolve/main/ataraxy-actually-9b.i1-IQ2_XS.gguf) | i1-IQ2_XS | 3.2 | |
| [GGUF](https://huggingface.co/mradermacher/ataraxy-actually-9b-i1-GGUF/resolve/main/ataraxy-actually-9b.i1-IQ2_S.gguf) | i1-IQ2_S | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/ataraxy-actually-9b-i1-GGUF/resolve/main/ataraxy-actually-9b.i1-IQ2_M.gguf) | i1-IQ2_M | 3.5 | |
| [GGUF](https://huggingface.co/mradermacher/ataraxy-actually-9b-i1-GGUF/resolve/main/ataraxy-actually-9b.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 3.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/ataraxy-actually-9b-i1-GGUF/resolve/main/ataraxy-actually-9b.i1-Q2_K.gguf) | i1-Q2_K | 3.9 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/ataraxy-actually-9b-i1-GGUF/resolve/main/ataraxy-actually-9b.i1-IQ3_XS.gguf) | i1-IQ3_XS | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/ataraxy-actually-9b-i1-GGUF/resolve/main/ataraxy-actually-9b.i1-IQ3_S.gguf) | i1-IQ3_S | 4.4 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/ataraxy-actually-9b-i1-GGUF/resolve/main/ataraxy-actually-9b.i1-Q3_K_S.gguf) | i1-Q3_K_S | 4.4 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/ataraxy-actually-9b-i1-GGUF/resolve/main/ataraxy-actually-9b.i1-IQ3_M.gguf) | i1-IQ3_M | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/ataraxy-actually-9b-i1-GGUF/resolve/main/ataraxy-actually-9b.i1-Q3_K_M.gguf) | i1-Q3_K_M | 4.9 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/ataraxy-actually-9b-i1-GGUF/resolve/main/ataraxy-actually-9b.i1-Q3_K_L.gguf) | i1-Q3_K_L | 5.2 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/ataraxy-actually-9b-i1-GGUF/resolve/main/ataraxy-actually-9b.i1-IQ4_XS.gguf) | i1-IQ4_XS | 5.3 | |
| [GGUF](https://huggingface.co/mradermacher/ataraxy-actually-9b-i1-GGUF/resolve/main/ataraxy-actually-9b.i1-Q4_0.gguf) | i1-Q4_0 | 5.6 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/ataraxy-actually-9b-i1-GGUF/resolve/main/ataraxy-actually-9b.i1-Q4_K_S.gguf) | i1-Q4_K_S | 5.6 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/ataraxy-actually-9b-i1-GGUF/resolve/main/ataraxy-actually-9b.i1-Q4_K_M.gguf) | i1-Q4_K_M | 5.9 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/ataraxy-actually-9b-i1-GGUF/resolve/main/ataraxy-actually-9b.i1-Q5_K_S.gguf) | i1-Q5_K_S | 6.6 | |
| [GGUF](https://huggingface.co/mradermacher/ataraxy-actually-9b-i1-GGUF/resolve/main/ataraxy-actually-9b.i1-Q5_K_M.gguf) | i1-Q5_K_M | 6.7 | |
| [GGUF](https://huggingface.co/mradermacher/ataraxy-actually-9b-i1-GGUF/resolve/main/ataraxy-actually-9b.i1-Q6_K.gguf) | i1-Q6_K | 7.7 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
mradermacher/gemma-2-9b-it-WPO-HB-GGUF
|
mradermacher
| 2024-09-08T20:46:34Z | 42 | 3 |
transformers
|
[
"transformers",
"gguf",
"alignment-handbook",
"gemma",
"en",
"dataset:wzhouad/gemma-2-ultrafeedback-hybrid",
"base_model:wzhouad/gemma-2-9b-it-WPO-HB",
"base_model:quantized:wzhouad/gemma-2-9b-it-WPO-HB",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-08-14T00:03:54Z |
---
base_model: wzhouad/gemma-2-9b-it-WPO-HB
datasets:
- wzhouad/gemma-2-ultrafeedback-hybrid
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- alignment-handbook
- gemma
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/wzhouad/gemma-2-9b-it-WPO-HB
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/gemma-2-9b-it-WPO-HB-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable over similarly sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/gemma-2-9b-it-WPO-HB-GGUF/resolve/main/gemma-2-9b-it-WPO-HB.Q2_K.gguf) | Q2_K | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/gemma-2-9b-it-WPO-HB-GGUF/resolve/main/gemma-2-9b-it-WPO-HB.IQ3_XS.gguf) | IQ3_XS | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/gemma-2-9b-it-WPO-HB-GGUF/resolve/main/gemma-2-9b-it-WPO-HB.IQ3_S.gguf) | IQ3_S | 4.4 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/gemma-2-9b-it-WPO-HB-GGUF/resolve/main/gemma-2-9b-it-WPO-HB.Q3_K_S.gguf) | Q3_K_S | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/gemma-2-9b-it-WPO-HB-GGUF/resolve/main/gemma-2-9b-it-WPO-HB.IQ3_M.gguf) | IQ3_M | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/gemma-2-9b-it-WPO-HB-GGUF/resolve/main/gemma-2-9b-it-WPO-HB.Q3_K_M.gguf) | Q3_K_M | 4.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/gemma-2-9b-it-WPO-HB-GGUF/resolve/main/gemma-2-9b-it-WPO-HB.Q3_K_L.gguf) | Q3_K_L | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/gemma-2-9b-it-WPO-HB-GGUF/resolve/main/gemma-2-9b-it-WPO-HB.IQ4_XS.gguf) | IQ4_XS | 5.3 | |
| [GGUF](https://huggingface.co/mradermacher/gemma-2-9b-it-WPO-HB-GGUF/resolve/main/gemma-2-9b-it-WPO-HB.Q4_K_S.gguf) | Q4_K_S | 5.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/gemma-2-9b-it-WPO-HB-GGUF/resolve/main/gemma-2-9b-it-WPO-HB.Q4_K_M.gguf) | Q4_K_M | 5.9 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/gemma-2-9b-it-WPO-HB-GGUF/resolve/main/gemma-2-9b-it-WPO-HB.Q5_K_S.gguf) | Q5_K_S | 6.6 | |
| [GGUF](https://huggingface.co/mradermacher/gemma-2-9b-it-WPO-HB-GGUF/resolve/main/gemma-2-9b-it-WPO-HB.Q5_K_M.gguf) | Q5_K_M | 6.7 | |
| [GGUF](https://huggingface.co/mradermacher/gemma-2-9b-it-WPO-HB-GGUF/resolve/main/gemma-2-9b-it-WPO-HB.Q6_K.gguf) | Q6_K | 7.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/gemma-2-9b-it-WPO-HB-GGUF/resolve/main/gemma-2-9b-it-WPO-HB.Q8_0.gguf) | Q8_0 | 9.9 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/gemma-2-9b-it-WPO-HB-GGUF/resolve/main/gemma-2-9b-it-WPO-HB.f16.gguf) | f16 | 18.6 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/gemma-2-9b-it-WPO-HB-i1-GGUF
|
mradermacher
| 2024-09-08T20:46:18Z | 210 | 2 |
transformers
|
[
"transformers",
"gguf",
"alignment-handbook",
"gemma",
"en",
"dataset:wzhouad/gemma-2-ultrafeedback-hybrid",
"base_model:wzhouad/gemma-2-9b-it-WPO-HB",
"base_model:quantized:wzhouad/gemma-2-9b-it-WPO-HB",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2024-08-14T02:05:35Z |
---
base_model: wzhouad/gemma-2-9b-it-WPO-HB
datasets:
- wzhouad/gemma-2-ultrafeedback-hybrid
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- alignment-handbook
- gemma
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/wzhouad/gemma-2-9b-it-WPO-HB
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/gemma-2-9b-it-WPO-HB-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable over similarly sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/gemma-2-9b-it-WPO-HB-i1-GGUF/resolve/main/gemma-2-9b-it-WPO-HB.i1-IQ1_S.gguf) | i1-IQ1_S | 2.5 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/gemma-2-9b-it-WPO-HB-i1-GGUF/resolve/main/gemma-2-9b-it-WPO-HB.i1-IQ1_M.gguf) | i1-IQ1_M | 2.6 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/gemma-2-9b-it-WPO-HB-i1-GGUF/resolve/main/gemma-2-9b-it-WPO-HB.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/gemma-2-9b-it-WPO-HB-i1-GGUF/resolve/main/gemma-2-9b-it-WPO-HB.i1-IQ2_XS.gguf) | i1-IQ2_XS | 3.2 | |
| [GGUF](https://huggingface.co/mradermacher/gemma-2-9b-it-WPO-HB-i1-GGUF/resolve/main/gemma-2-9b-it-WPO-HB.i1-IQ2_S.gguf) | i1-IQ2_S | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/gemma-2-9b-it-WPO-HB-i1-GGUF/resolve/main/gemma-2-9b-it-WPO-HB.i1-IQ2_M.gguf) | i1-IQ2_M | 3.5 | |
| [GGUF](https://huggingface.co/mradermacher/gemma-2-9b-it-WPO-HB-i1-GGUF/resolve/main/gemma-2-9b-it-WPO-HB.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 3.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/gemma-2-9b-it-WPO-HB-i1-GGUF/resolve/main/gemma-2-9b-it-WPO-HB.i1-Q2_K.gguf) | i1-Q2_K | 3.9 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/gemma-2-9b-it-WPO-HB-i1-GGUF/resolve/main/gemma-2-9b-it-WPO-HB.i1-IQ3_XS.gguf) | i1-IQ3_XS | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/gemma-2-9b-it-WPO-HB-i1-GGUF/resolve/main/gemma-2-9b-it-WPO-HB.i1-IQ3_S.gguf) | i1-IQ3_S | 4.4 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/gemma-2-9b-it-WPO-HB-i1-GGUF/resolve/main/gemma-2-9b-it-WPO-HB.i1-Q3_K_S.gguf) | i1-Q3_K_S | 4.4 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/gemma-2-9b-it-WPO-HB-i1-GGUF/resolve/main/gemma-2-9b-it-WPO-HB.i1-IQ3_M.gguf) | i1-IQ3_M | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/gemma-2-9b-it-WPO-HB-i1-GGUF/resolve/main/gemma-2-9b-it-WPO-HB.i1-Q3_K_M.gguf) | i1-Q3_K_M | 4.9 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/gemma-2-9b-it-WPO-HB-i1-GGUF/resolve/main/gemma-2-9b-it-WPO-HB.i1-Q3_K_L.gguf) | i1-Q3_K_L | 5.2 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/gemma-2-9b-it-WPO-HB-i1-GGUF/resolve/main/gemma-2-9b-it-WPO-HB.i1-IQ4_XS.gguf) | i1-IQ4_XS | 5.3 | |
| [GGUF](https://huggingface.co/mradermacher/gemma-2-9b-it-WPO-HB-i1-GGUF/resolve/main/gemma-2-9b-it-WPO-HB.i1-Q4_0.gguf) | i1-Q4_0 | 5.6 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/gemma-2-9b-it-WPO-HB-i1-GGUF/resolve/main/gemma-2-9b-it-WPO-HB.i1-Q4_K_S.gguf) | i1-Q4_K_S | 5.6 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/gemma-2-9b-it-WPO-HB-i1-GGUF/resolve/main/gemma-2-9b-it-WPO-HB.i1-Q4_K_M.gguf) | i1-Q4_K_M | 5.9 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/gemma-2-9b-it-WPO-HB-i1-GGUF/resolve/main/gemma-2-9b-it-WPO-HB.i1-Q5_K_S.gguf) | i1-Q5_K_S | 6.6 | |
| [GGUF](https://huggingface.co/mradermacher/gemma-2-9b-it-WPO-HB-i1-GGUF/resolve/main/gemma-2-9b-it-WPO-HB.i1-Q5_K_M.gguf) | i1-Q5_K_M | 6.7 | |
| [GGUF](https://huggingface.co/mradermacher/gemma-2-9b-it-WPO-HB-i1-GGUF/resolve/main/gemma-2-9b-it-WPO-HB.i1-Q6_K.gguf) | i1-Q6_K | 7.7 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
mradermacher/Llama-3.1-8B-Magpie-Align-SFT-v0.1-i1-GGUF
|
mradermacher
| 2024-09-08T20:41:57Z | 284 | 0 |
transformers
|
[
"transformers",
"gguf",
"axolotl",
"generated_from_trainer",
"en",
"dataset:Magpie-Align/Magpie-Reasoning-150K",
"dataset:Magpie-Align/Magpie-Pro-MT-300K-v0.1",
"base_model:Magpie-Align/Llama-3.1-8B-Magpie-Align-SFT-v0.1",
"base_model:quantized:Magpie-Align/Llama-3.1-8B-Magpie-Align-SFT-v0.1",
"license:llama3.1",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2024-08-20T08:08:17Z |
---
base_model: Magpie-Align/Llama-3.1-8B-Magpie-Align-SFT-v0.1
datasets:
- Magpie-Align/Magpie-Reasoning-150K
- Magpie-Align/Magpie-Pro-MT-300K-v0.1
language:
- en
library_name: transformers
license: llama3.1
quantized_by: mradermacher
tags:
- axolotl
- generated_from_trainer
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/Magpie-Align/Llama-3.1-8B-Magpie-Align-SFT-v0.1
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Llama-3.1-8B-Magpie-Align-SFT-v0.1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable over similarly sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-8B-Magpie-Align-SFT-v0.1-i1-GGUF/resolve/main/Llama-3.1-8B-Magpie-Align-SFT-v0.1.i1-IQ1_S.gguf) | i1-IQ1_S | 2.1 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-8B-Magpie-Align-SFT-v0.1-i1-GGUF/resolve/main/Llama-3.1-8B-Magpie-Align-SFT-v0.1.i1-IQ1_M.gguf) | i1-IQ1_M | 2.3 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-8B-Magpie-Align-SFT-v0.1-i1-GGUF/resolve/main/Llama-3.1-8B-Magpie-Align-SFT-v0.1.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.5 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-8B-Magpie-Align-SFT-v0.1-i1-GGUF/resolve/main/Llama-3.1-8B-Magpie-Align-SFT-v0.1.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.7 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-8B-Magpie-Align-SFT-v0.1-i1-GGUF/resolve/main/Llama-3.1-8B-Magpie-Align-SFT-v0.1.i1-IQ2_S.gguf) | i1-IQ2_S | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-8B-Magpie-Align-SFT-v0.1-i1-GGUF/resolve/main/Llama-3.1-8B-Magpie-Align-SFT-v0.1.i1-IQ2_M.gguf) | i1-IQ2_M | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-8B-Magpie-Align-SFT-v0.1-i1-GGUF/resolve/main/Llama-3.1-8B-Magpie-Align-SFT-v0.1.i1-Q2_K.gguf) | i1-Q2_K | 3.3 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-8B-Magpie-Align-SFT-v0.1-i1-GGUF/resolve/main/Llama-3.1-8B-Magpie-Align-SFT-v0.1.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 3.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-8B-Magpie-Align-SFT-v0.1-i1-GGUF/resolve/main/Llama-3.1-8B-Magpie-Align-SFT-v0.1.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-8B-Magpie-Align-SFT-v0.1-i1-GGUF/resolve/main/Llama-3.1-8B-Magpie-Align-SFT-v0.1.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.8 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-8B-Magpie-Align-SFT-v0.1-i1-GGUF/resolve/main/Llama-3.1-8B-Magpie-Align-SFT-v0.1.i1-IQ3_S.gguf) | i1-IQ3_S | 3.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-8B-Magpie-Align-SFT-v0.1-i1-GGUF/resolve/main/Llama-3.1-8B-Magpie-Align-SFT-v0.1.i1-IQ3_M.gguf) | i1-IQ3_M | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-8B-Magpie-Align-SFT-v0.1-i1-GGUF/resolve/main/Llama-3.1-8B-Magpie-Align-SFT-v0.1.i1-Q3_K_M.gguf) | i1-Q3_K_M | 4.1 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-8B-Magpie-Align-SFT-v0.1-i1-GGUF/resolve/main/Llama-3.1-8B-Magpie-Align-SFT-v0.1.i1-Q3_K_L.gguf) | i1-Q3_K_L | 4.4 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-8B-Magpie-Align-SFT-v0.1-i1-GGUF/resolve/main/Llama-3.1-8B-Magpie-Align-SFT-v0.1.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.5 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-8B-Magpie-Align-SFT-v0.1-i1-GGUF/resolve/main/Llama-3.1-8B-Magpie-Align-SFT-v0.1.i1-Q4_0.gguf) | i1-Q4_0 | 4.8 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-8B-Magpie-Align-SFT-v0.1-i1-GGUF/resolve/main/Llama-3.1-8B-Magpie-Align-SFT-v0.1.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.8 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-8B-Magpie-Align-SFT-v0.1-i1-GGUF/resolve/main/Llama-3.1-8B-Magpie-Align-SFT-v0.1.i1-Q4_K_M.gguf) | i1-Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-8B-Magpie-Align-SFT-v0.1-i1-GGUF/resolve/main/Llama-3.1-8B-Magpie-Align-SFT-v0.1.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-8B-Magpie-Align-SFT-v0.1-i1-GGUF/resolve/main/Llama-3.1-8B-Magpie-Align-SFT-v0.1.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-8B-Magpie-Align-SFT-v0.1-i1-GGUF/resolve/main/Llama-3.1-8B-Magpie-Align-SFT-v0.1.i1-Q6_K.gguf) | i1-Q6_K | 6.7 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
mradermacher/magnum-v2-123b-i1-GGUF
|
mradermacher
| 2024-09-08T20:41:33Z | 14 | 6 |
transformers
|
[
"transformers",
"gguf",
"chat",
"en",
"fr",
"de",
"es",
"it",
"pt",
"ru",
"zh",
"ja",
"dataset:Doctor-Shotgun/C2-Stheno",
"dataset:anthracite-org/kalo-opus-instruct-22k-no-refusal",
"dataset:anthracite-org/nopm_claude_writing_fixed",
"base_model:anthracite-org/magnum-v2-123b",
"base_model:quantized:anthracite-org/magnum-v2-123b",
"license:other",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2024-08-20T12:33:18Z |
---
base_model: anthracite-org/magnum-v2-123b
datasets:
- Doctor-Shotgun/C2-Stheno
- anthracite-org/kalo-opus-instruct-22k-no-refusal
- anthracite-org/nopm_claude_writing_fixed
language:
- en
- fr
- de
- es
- it
- pt
- ru
- zh
- ja
library_name: transformers
license: other
license_link: https://mistral.ai/licenses/MRL-0.1.md
license_name: mrl
quantized_by: mradermacher
tags:
- chat
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/anthracite-org/magnum-v2-123b
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/magnum-v2-123b-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar-sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/magnum-v2-123b-i1-GGUF/resolve/main/magnum-v2-123b.i1-IQ1_S.gguf) | i1-IQ1_S | 26.1 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/magnum-v2-123b-i1-GGUF/resolve/main/magnum-v2-123b.i1-IQ1_M.gguf) | i1-IQ1_M | 28.5 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/magnum-v2-123b-i1-GGUF/resolve/main/magnum-v2-123b.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 32.5 | |
| [GGUF](https://huggingface.co/mradermacher/magnum-v2-123b-i1-GGUF/resolve/main/magnum-v2-123b.i1-IQ2_XS.gguf) | i1-IQ2_XS | 36.2 | |
| [GGUF](https://huggingface.co/mradermacher/magnum-v2-123b-i1-GGUF/resolve/main/magnum-v2-123b.i1-IQ2_S.gguf) | i1-IQ2_S | 38.5 | |
| [GGUF](https://huggingface.co/mradermacher/magnum-v2-123b-i1-GGUF/resolve/main/magnum-v2-123b.i1-IQ2_M.gguf) | i1-IQ2_M | 41.7 | |
| [GGUF](https://huggingface.co/mradermacher/magnum-v2-123b-i1-GGUF/resolve/main/magnum-v2-123b.i1-Q2_K.gguf) | i1-Q2_K | 45.3 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/magnum-v2-123b-i1-GGUF/resolve/main/magnum-v2-123b.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 47.1 | lower quality |
| [PART 1](https://huggingface.co/mradermacher/magnum-v2-123b-i1-GGUF/resolve/main/magnum-v2-123b.i1-IQ3_XS.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/magnum-v2-123b-i1-GGUF/resolve/main/magnum-v2-123b.i1-IQ3_XS.gguf.part2of2) | i1-IQ3_XS | 50.2 | |
| [PART 1](https://huggingface.co/mradermacher/magnum-v2-123b-i1-GGUF/resolve/main/magnum-v2-123b.i1-Q3_K_S.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/magnum-v2-123b-i1-GGUF/resolve/main/magnum-v2-123b.i1-Q3_K_S.gguf.part2of2) | i1-Q3_K_S | 52.9 | IQ3_XS probably better |
| [PART 1](https://huggingface.co/mradermacher/magnum-v2-123b-i1-GGUF/resolve/main/magnum-v2-123b.i1-IQ3_S.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/magnum-v2-123b-i1-GGUF/resolve/main/magnum-v2-123b.i1-IQ3_S.gguf.part2of2) | i1-IQ3_S | 53.1 | beats Q3_K* |
| [PART 1](https://huggingface.co/mradermacher/magnum-v2-123b-i1-GGUF/resolve/main/magnum-v2-123b.i1-IQ3_M.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/magnum-v2-123b-i1-GGUF/resolve/main/magnum-v2-123b.i1-IQ3_M.gguf.part2of2) | i1-IQ3_M | 55.4 | |
| [PART 1](https://huggingface.co/mradermacher/magnum-v2-123b-i1-GGUF/resolve/main/magnum-v2-123b.i1-Q3_K_M.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/magnum-v2-123b-i1-GGUF/resolve/main/magnum-v2-123b.i1-Q3_K_M.gguf.part2of2) | i1-Q3_K_M | 59.2 | IQ3_S probably better |
| [PART 1](https://huggingface.co/mradermacher/magnum-v2-123b-i1-GGUF/resolve/main/magnum-v2-123b.i1-Q3_K_L.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/magnum-v2-123b-i1-GGUF/resolve/main/magnum-v2-123b.i1-Q3_K_L.gguf.part2of2) | i1-Q3_K_L | 64.7 | IQ3_M probably better |
| [PART 1](https://huggingface.co/mradermacher/magnum-v2-123b-i1-GGUF/resolve/main/magnum-v2-123b.i1-IQ4_XS.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/magnum-v2-123b-i1-GGUF/resolve/main/magnum-v2-123b.i1-IQ4_XS.gguf.part2of2) | i1-IQ4_XS | 65.5 | |
| [PART 1](https://huggingface.co/mradermacher/magnum-v2-123b-i1-GGUF/resolve/main/magnum-v2-123b.i1-Q4_0.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/magnum-v2-123b-i1-GGUF/resolve/main/magnum-v2-123b.i1-Q4_0.gguf.part2of2) | i1-Q4_0 | 69.4 | fast, low quality |
| [PART 1](https://huggingface.co/mradermacher/magnum-v2-123b-i1-GGUF/resolve/main/magnum-v2-123b.i1-Q4_K_S.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/magnum-v2-123b-i1-GGUF/resolve/main/magnum-v2-123b.i1-Q4_K_S.gguf.part2of2) | i1-Q4_K_S | 69.7 | optimal size/speed/quality |
| [PART 1](https://huggingface.co/mradermacher/magnum-v2-123b-i1-GGUF/resolve/main/magnum-v2-123b.i1-Q4_K_M.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/magnum-v2-123b-i1-GGUF/resolve/main/magnum-v2-123b.i1-Q4_K_M.gguf.part2of2) | i1-Q4_K_M | 73.3 | fast, recommended |
| [PART 1](https://huggingface.co/mradermacher/magnum-v2-123b-i1-GGUF/resolve/main/magnum-v2-123b.i1-Q5_K_S.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/magnum-v2-123b-i1-GGUF/resolve/main/magnum-v2-123b.i1-Q5_K_S.gguf.part2of2) | i1-Q5_K_S | 84.5 | |
| [PART 1](https://huggingface.co/mradermacher/magnum-v2-123b-i1-GGUF/resolve/main/magnum-v2-123b.i1-Q5_K_M.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/magnum-v2-123b-i1-GGUF/resolve/main/magnum-v2-123b.i1-Q5_K_M.gguf.part2of2) | i1-Q5_K_M | 86.6 | |
| [PART 1](https://huggingface.co/mradermacher/magnum-v2-123b-i1-GGUF/resolve/main/magnum-v2-123b.i1-Q6_K.gguf.part1of3) [PART 2](https://huggingface.co/mradermacher/magnum-v2-123b-i1-GGUF/resolve/main/magnum-v2-123b.i1-Q6_K.gguf.part2of3) [PART 3](https://huggingface.co/mradermacher/magnum-v2-123b-i1-GGUF/resolve/main/magnum-v2-123b.i1-Q6_K.gguf.part3of3) | i1-Q6_K | 100.7 | practically like static Q6_K |
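The multi-part files above are plain byte splits: download every part, then concatenate them in order into a single `.gguf` before loading. A sketch for the three-part Q6_K (file names taken from the table; any POSIX shell will do):
```bash
# Reassemble the split quant into one usable GGUF file.
cat magnum-v2-123b.i1-Q6_K.gguf.part1of3 \
    magnum-v2-123b.i1-Q6_K.gguf.part2of3 \
    magnum-v2-123b.i1-Q6_K.gguf.part3of3 > magnum-v2-123b.i1-Q6_K.gguf
```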
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
mradermacher/magnum-v2-72b-GGUF
|
mradermacher
| 2024-09-08T20:40:37Z | 163 | 1 |
transformers
|
[
"transformers",
"gguf",
"chat",
"en",
"fr",
"de",
"es",
"it",
"pt",
"ru",
"zh",
"ja",
"dataset:Doctor-Shotgun/C2-Stheno",
"dataset:anthracite-org/kalo-opus-instruct-22k-no-refusal",
"dataset:anthracite-org/nopm_claude_writing_fixed",
"base_model:anthracite-org/magnum-v2-72b",
"base_model:quantized:anthracite-org/magnum-v2-72b",
"license:other",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-08-21T19:25:37Z |
---
base_model: anthracite-org/magnum-v2-72b
datasets:
- Doctor-Shotgun/C2-Stheno
- anthracite-org/kalo-opus-instruct-22k-no-refusal
- anthracite-org/nopm_claude_writing_fixed
language:
- en
- fr
- de
- es
- it
- pt
- ru
- zh
- ja
library_name: transformers
license: other
license_link: https://huggingface.co/anthracite-org/magnum-v2-72b/blob/main/LICENSE
license_name: tongyi-qianwen
quantized_by: mradermacher
tags:
- chat
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/anthracite-org/magnum-v2-72b
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/magnum-v2-72b-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar-sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/magnum-v2-72b-GGUF/resolve/main/magnum-v2-72b.Q2_K.gguf) | Q2_K | 29.9 | |
| [GGUF](https://huggingface.co/mradermacher/magnum-v2-72b-GGUF/resolve/main/magnum-v2-72b.IQ3_XS.gguf) | IQ3_XS | 32.9 | |
| [GGUF](https://huggingface.co/mradermacher/magnum-v2-72b-GGUF/resolve/main/magnum-v2-72b.IQ3_S.gguf) | IQ3_S | 34.6 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/magnum-v2-72b-GGUF/resolve/main/magnum-v2-72b.Q3_K_S.gguf) | Q3_K_S | 34.6 | |
| [GGUF](https://huggingface.co/mradermacher/magnum-v2-72b-GGUF/resolve/main/magnum-v2-72b.IQ3_M.gguf) | IQ3_M | 35.6 | |
| [GGUF](https://huggingface.co/mradermacher/magnum-v2-72b-GGUF/resolve/main/magnum-v2-72b.Q3_K_M.gguf) | Q3_K_M | 37.8 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/magnum-v2-72b-GGUF/resolve/main/magnum-v2-72b.Q3_K_L.gguf) | Q3_K_L | 39.6 | |
| [GGUF](https://huggingface.co/mradermacher/magnum-v2-72b-GGUF/resolve/main/magnum-v2-72b.IQ4_XS.gguf) | IQ4_XS | 40.3 | |
| [GGUF](https://huggingface.co/mradermacher/magnum-v2-72b-GGUF/resolve/main/magnum-v2-72b.Q4_K_S.gguf) | Q4_K_S | 44.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/magnum-v2-72b-GGUF/resolve/main/magnum-v2-72b.Q4_K_M.gguf) | Q4_K_M | 47.5 | fast, recommended |
| [PART 1](https://huggingface.co/mradermacher/magnum-v2-72b-GGUF/resolve/main/magnum-v2-72b.Q5_K_S.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/magnum-v2-72b-GGUF/resolve/main/magnum-v2-72b.Q5_K_S.gguf.part2of2) | Q5_K_S | 51.5 | |
| [PART 1](https://huggingface.co/mradermacher/magnum-v2-72b-GGUF/resolve/main/magnum-v2-72b.Q5_K_M.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/magnum-v2-72b-GGUF/resolve/main/magnum-v2-72b.Q5_K_M.gguf.part2of2) | Q5_K_M | 54.5 | |
| [PART 1](https://huggingface.co/mradermacher/magnum-v2-72b-GGUF/resolve/main/magnum-v2-72b.Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/magnum-v2-72b-GGUF/resolve/main/magnum-v2-72b.Q6_K.gguf.part2of2) | Q6_K | 64.4 | very good quality |
| [PART 1](https://huggingface.co/mradermacher/magnum-v2-72b-GGUF/resolve/main/magnum-v2-72b.Q8_0.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/magnum-v2-72b-GGUF/resolve/main/magnum-v2-72b.Q8_0.gguf.part2of2) | Q8_0 | 77.4 | fast, best quality |
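For the two-part quants above the same reassembly applies; a sketch that downloads and joins the Q8_0 (assuming `huggingface-cli` is installed):
```bash
# Download both parts, then concatenate them in order.
huggingface-cli download mradermacher/magnum-v2-72b-GGUF \
  magnum-v2-72b.Q8_0.gguf.part1of2 magnum-v2-72b.Q8_0.gguf.part2of2 --local-dir .
cat magnum-v2-72b.Q8_0.gguf.part1of2 magnum-v2-72b.Q8_0.gguf.part2of2 \
  > magnum-v2-72b.Q8_0.gguf
```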
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/magnum-v2-72b-i1-GGUF
|
mradermacher
| 2024-09-08T20:40:26Z | 453 | 5 |
transformers
|
[
"transformers",
"gguf",
"chat",
"en",
"fr",
"de",
"es",
"it",
"pt",
"ru",
"zh",
"ja",
"dataset:Doctor-Shotgun/C2-Stheno",
"dataset:anthracite-org/kalo-opus-instruct-22k-no-refusal",
"dataset:anthracite-org/nopm_claude_writing_fixed",
"base_model:anthracite-org/magnum-v2-72b",
"base_model:quantized:anthracite-org/magnum-v2-72b",
"license:other",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2024-08-21T23:50:57Z |
---
base_model: anthracite-org/magnum-v2-72b
datasets:
- Doctor-Shotgun/C2-Stheno
- anthracite-org/kalo-opus-instruct-22k-no-refusal
- anthracite-org/nopm_claude_writing_fixed
language:
- en
- fr
- de
- es
- it
- pt
- ru
- zh
- ja
library_name: transformers
license: other
license_link: https://huggingface.co/anthracite-org/magnum-v2-72b/blob/main/LICENSE
license_name: tongyi-qianwen
quantized_by: mradermacher
tags:
- chat
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/anthracite-org/magnum-v2-72b
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/magnum-v2-72b-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar-sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/magnum-v2-72b-i1-GGUF/resolve/main/magnum-v2-72b.i1-IQ1_S.gguf) | i1-IQ1_S | 22.8 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/magnum-v2-72b-i1-GGUF/resolve/main/magnum-v2-72b.i1-IQ1_M.gguf) | i1-IQ1_M | 23.8 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/magnum-v2-72b-i1-GGUF/resolve/main/magnum-v2-72b.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 25.6 | |
| [GGUF](https://huggingface.co/mradermacher/magnum-v2-72b-i1-GGUF/resolve/main/magnum-v2-72b.i1-IQ2_XS.gguf) | i1-IQ2_XS | 27.2 | |
| [GGUF](https://huggingface.co/mradermacher/magnum-v2-72b-i1-GGUF/resolve/main/magnum-v2-72b.i1-IQ2_S.gguf) | i1-IQ2_S | 28.0 | |
| [GGUF](https://huggingface.co/mradermacher/magnum-v2-72b-i1-GGUF/resolve/main/magnum-v2-72b.i1-IQ2_M.gguf) | i1-IQ2_M | 29.4 | |
| [GGUF](https://huggingface.co/mradermacher/magnum-v2-72b-i1-GGUF/resolve/main/magnum-v2-72b.i1-Q2_K.gguf) | i1-Q2_K | 29.9 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/magnum-v2-72b-i1-GGUF/resolve/main/magnum-v2-72b.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 31.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/magnum-v2-72b-i1-GGUF/resolve/main/magnum-v2-72b.i1-IQ3_XS.gguf) | i1-IQ3_XS | 32.9 | |
| [GGUF](https://huggingface.co/mradermacher/magnum-v2-72b-i1-GGUF/resolve/main/magnum-v2-72b.i1-IQ3_S.gguf) | i1-IQ3_S | 34.6 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/magnum-v2-72b-i1-GGUF/resolve/main/magnum-v2-72b.i1-Q3_K_S.gguf) | i1-Q3_K_S | 34.6 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/magnum-v2-72b-i1-GGUF/resolve/main/magnum-v2-72b.i1-IQ3_M.gguf) | i1-IQ3_M | 35.6 | |
| [GGUF](https://huggingface.co/mradermacher/magnum-v2-72b-i1-GGUF/resolve/main/magnum-v2-72b.i1-Q3_K_M.gguf) | i1-Q3_K_M | 37.8 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/magnum-v2-72b-i1-GGUF/resolve/main/magnum-v2-72b.i1-Q3_K_L.gguf) | i1-Q3_K_L | 39.6 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/magnum-v2-72b-i1-GGUF/resolve/main/magnum-v2-72b.i1-IQ4_XS.gguf) | i1-IQ4_XS | 39.8 | |
| [GGUF](https://huggingface.co/mradermacher/magnum-v2-72b-i1-GGUF/resolve/main/magnum-v2-72b.i1-Q4_0.gguf) | i1-Q4_0 | 41.5 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/magnum-v2-72b-i1-GGUF/resolve/main/magnum-v2-72b.i1-Q4_K_S.gguf) | i1-Q4_K_S | 44.0 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/magnum-v2-72b-i1-GGUF/resolve/main/magnum-v2-72b.i1-Q4_K_M.gguf) | i1-Q4_K_M | 47.5 | fast, recommended |
| [PART 1](https://huggingface.co/mradermacher/magnum-v2-72b-i1-GGUF/resolve/main/magnum-v2-72b.i1-Q5_K_S.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/magnum-v2-72b-i1-GGUF/resolve/main/magnum-v2-72b.i1-Q5_K_S.gguf.part2of2) | i1-Q5_K_S | 51.5 | |
| [PART 1](https://huggingface.co/mradermacher/magnum-v2-72b-i1-GGUF/resolve/main/magnum-v2-72b.i1-Q5_K_M.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/magnum-v2-72b-i1-GGUF/resolve/main/magnum-v2-72b.i1-Q5_K_M.gguf.part2of2) | i1-Q5_K_M | 54.5 | |
| [PART 1](https://huggingface.co/mradermacher/magnum-v2-72b-i1-GGUF/resolve/main/magnum-v2-72b.i1-Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/magnum-v2-72b-i1-GGUF/resolve/main/magnum-v2-72b.i1-Q6_K.gguf.part2of2) | i1-Q6_K | 64.4 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
mradermacher/Phi-3.5-mini-instruct_Uncensored-GGUF
|
mradermacher
| 2024-09-08T20:40:15Z | 998 | 0 |
transformers
|
[
"transformers",
"gguf",
"en",
"base_model:SicariusSicariiStuff/Phi-3.5-mini-instruct_Uncensored",
"base_model:quantized:SicariusSicariiStuff/Phi-3.5-mini-instruct_Uncensored",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-08-22T05:27:07Z |
---
base_model: SicariusSicariiStuff/Phi-3.5-mini-instruct_Uncensored
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/SicariusSicariiStuff/Phi-3.5-mini-instruct_Uncensored
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Phi-3.5-mini-instruct_Uncensored-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar-sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Phi-3.5-mini-instruct_Uncensored-GGUF/resolve/main/Phi-3.5-mini-instruct_Uncensored.Q2_K.gguf) | Q2_K | 1.5 | |
| [GGUF](https://huggingface.co/mradermacher/Phi-3.5-mini-instruct_Uncensored-GGUF/resolve/main/Phi-3.5-mini-instruct_Uncensored.IQ3_XS.gguf) | IQ3_XS | 1.7 | |
| [GGUF](https://huggingface.co/mradermacher/Phi-3.5-mini-instruct_Uncensored-GGUF/resolve/main/Phi-3.5-mini-instruct_Uncensored.IQ3_S.gguf) | IQ3_S | 1.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Phi-3.5-mini-instruct_Uncensored-GGUF/resolve/main/Phi-3.5-mini-instruct_Uncensored.Q3_K_S.gguf) | Q3_K_S | 1.8 | |
| [GGUF](https://huggingface.co/mradermacher/Phi-3.5-mini-instruct_Uncensored-GGUF/resolve/main/Phi-3.5-mini-instruct_Uncensored.IQ3_M.gguf) | IQ3_M | 2.0 | |
| [GGUF](https://huggingface.co/mradermacher/Phi-3.5-mini-instruct_Uncensored-GGUF/resolve/main/Phi-3.5-mini-instruct_Uncensored.Q3_K_M.gguf) | Q3_K_M | 2.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Phi-3.5-mini-instruct_Uncensored-GGUF/resolve/main/Phi-3.5-mini-instruct_Uncensored.IQ4_XS.gguf) | IQ4_XS | 2.2 | |
| [GGUF](https://huggingface.co/mradermacher/Phi-3.5-mini-instruct_Uncensored-GGUF/resolve/main/Phi-3.5-mini-instruct_Uncensored.Q3_K_L.gguf) | Q3_K_L | 2.2 | |
| [GGUF](https://huggingface.co/mradermacher/Phi-3.5-mini-instruct_Uncensored-GGUF/resolve/main/Phi-3.5-mini-instruct_Uncensored.Q4_K_S.gguf) | Q4_K_S | 2.3 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Phi-3.5-mini-instruct_Uncensored-GGUF/resolve/main/Phi-3.5-mini-instruct_Uncensored.Q4_K_M.gguf) | Q4_K_M | 2.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Phi-3.5-mini-instruct_Uncensored-GGUF/resolve/main/Phi-3.5-mini-instruct_Uncensored.Q5_K_S.gguf) | Q5_K_S | 2.7 | |
| [GGUF](https://huggingface.co/mradermacher/Phi-3.5-mini-instruct_Uncensored-GGUF/resolve/main/Phi-3.5-mini-instruct_Uncensored.Q5_K_M.gguf) | Q5_K_M | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/Phi-3.5-mini-instruct_Uncensored-GGUF/resolve/main/Phi-3.5-mini-instruct_Uncensored.Q6_K.gguf) | Q6_K | 3.2 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Phi-3.5-mini-instruct_Uncensored-GGUF/resolve/main/Phi-3.5-mini-instruct_Uncensored.Q8_0.gguf) | Q8_0 | 4.2 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Phi-3.5-mini-instruct_Uncensored-GGUF/resolve/main/Phi-3.5-mini-instruct_Uncensored.f16.gguf) | f16 | 7.7 | 16 bpw, overkill |
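As this model is small, a direct-run sketch may be the simplest route (assuming a recent llama.cpp build whose `llama-cli` supports fetching from the Hub via `--hf-repo`/`--hf-file`):
```bash
# Stream the Q4_K_M quant straight from this repo and start a prompt.
llama-cli --hf-repo mradermacher/Phi-3.5-mini-instruct_Uncensored-GGUF \
  --hf-file Phi-3.5-mini-instruct_Uncensored.Q4_K_M.gguf -p "Hello"
```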
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/Sellen-8B-model_stock-i1-GGUF
|
mradermacher
| 2024-09-08T20:38:31Z | 58 | 1 |
transformers
|
[
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2024-08-22T18:49:45Z |
---
base_model: DreadPoor/Sellen-8B-model_stock
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/DreadPoor/Sellen-8B-model_stock
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Sellen-8B-model_stock-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar-sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Sellen-8B-model_stock-i1-GGUF/resolve/main/Sellen-8B-model_stock.i1-IQ1_S.gguf) | i1-IQ1_S | 2.1 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Sellen-8B-model_stock-i1-GGUF/resolve/main/Sellen-8B-model_stock.i1-IQ1_M.gguf) | i1-IQ1_M | 2.3 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Sellen-8B-model_stock-i1-GGUF/resolve/main/Sellen-8B-model_stock.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.5 | |
| [GGUF](https://huggingface.co/mradermacher/Sellen-8B-model_stock-i1-GGUF/resolve/main/Sellen-8B-model_stock.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.7 | |
| [GGUF](https://huggingface.co/mradermacher/Sellen-8B-model_stock-i1-GGUF/resolve/main/Sellen-8B-model_stock.i1-IQ2_S.gguf) | i1-IQ2_S | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/Sellen-8B-model_stock-i1-GGUF/resolve/main/Sellen-8B-model_stock.i1-IQ2_M.gguf) | i1-IQ2_M | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/Sellen-8B-model_stock-i1-GGUF/resolve/main/Sellen-8B-model_stock.i1-Q2_K.gguf) | i1-Q2_K | 3.3 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Sellen-8B-model_stock-i1-GGUF/resolve/main/Sellen-8B-model_stock.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 3.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Sellen-8B-model_stock-i1-GGUF/resolve/main/Sellen-8B-model_stock.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/Sellen-8B-model_stock-i1-GGUF/resolve/main/Sellen-8B-model_stock.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.8 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Sellen-8B-model_stock-i1-GGUF/resolve/main/Sellen-8B-model_stock.i1-IQ3_S.gguf) | i1-IQ3_S | 3.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Sellen-8B-model_stock-i1-GGUF/resolve/main/Sellen-8B-model_stock.i1-IQ3_M.gguf) | i1-IQ3_M | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/Sellen-8B-model_stock-i1-GGUF/resolve/main/Sellen-8B-model_stock.i1-Q3_K_M.gguf) | i1-Q3_K_M | 4.1 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Sellen-8B-model_stock-i1-GGUF/resolve/main/Sellen-8B-model_stock.i1-Q3_K_L.gguf) | i1-Q3_K_L | 4.4 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Sellen-8B-model_stock-i1-GGUF/resolve/main/Sellen-8B-model_stock.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.5 | |
| [GGUF](https://huggingface.co/mradermacher/Sellen-8B-model_stock-i1-GGUF/resolve/main/Sellen-8B-model_stock.i1-Q4_0.gguf) | i1-Q4_0 | 4.8 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Sellen-8B-model_stock-i1-GGUF/resolve/main/Sellen-8B-model_stock.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.8 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Sellen-8B-model_stock-i1-GGUF/resolve/main/Sellen-8B-model_stock.i1-Q4_K_M.gguf) | i1-Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Sellen-8B-model_stock-i1-GGUF/resolve/main/Sellen-8B-model_stock.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/Sellen-8B-model_stock-i1-GGUF/resolve/main/Sellen-8B-model_stock.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/Sellen-8B-model_stock-i1-GGUF/resolve/main/Sellen-8B-model_stock.i1-Q6_K.gguf) | i1-Q6_K | 6.7 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
mradermacher/LightGPT-0.5B-Qwen2-GGUF
|
mradermacher
| 2024-09-08T20:38:21Z | 31 | 0 |
transformers
|
[
"transformers",
"gguf",
"pytorch",
"llama-2",
"traffic signal control",
"lightgpt",
"llmlight",
"en",
"base_model:lightgpt/LightGPT-0.5B-Qwen2",
"base_model:quantized:lightgpt/LightGPT-0.5B-Qwen2",
"license:mit",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-08-22T19:38:27Z |
---
base_model: lightgpt/LightGPT-0.5B-Qwen2
extra_gated_heading: Access LLMLight-LightGPT on Hugging Face
language:
- en
library_name: transformers
license: mit
quantized_by: mradermacher
tags:
- pytorch
- llama-2
- traffic signal control
- lightgpt
- llmlight
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/lightgpt/LightGPT-0.5B-Qwen2
<!-- provided-files -->
weighted/imatrix quants seem not to be available (from me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar-sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/LightGPT-0.5B-Qwen2-GGUF/resolve/main/LightGPT-0.5B-Qwen2.Q3_K_S.gguf) | Q3_K_S | 0.4 | |
| [GGUF](https://huggingface.co/mradermacher/LightGPT-0.5B-Qwen2-GGUF/resolve/main/LightGPT-0.5B-Qwen2.IQ3_S.gguf) | IQ3_S | 0.4 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/LightGPT-0.5B-Qwen2-GGUF/resolve/main/LightGPT-0.5B-Qwen2.IQ3_XS.gguf) | IQ3_XS | 0.4 | |
| [GGUF](https://huggingface.co/mradermacher/LightGPT-0.5B-Qwen2-GGUF/resolve/main/LightGPT-0.5B-Qwen2.Q2_K.gguf) | Q2_K | 0.4 | |
| [GGUF](https://huggingface.co/mradermacher/LightGPT-0.5B-Qwen2-GGUF/resolve/main/LightGPT-0.5B-Qwen2.IQ3_M.gguf) | IQ3_M | 0.4 | |
| [GGUF](https://huggingface.co/mradermacher/LightGPT-0.5B-Qwen2-GGUF/resolve/main/LightGPT-0.5B-Qwen2.IQ4_XS.gguf) | IQ4_XS | 0.5 | |
| [GGUF](https://huggingface.co/mradermacher/LightGPT-0.5B-Qwen2-GGUF/resolve/main/LightGPT-0.5B-Qwen2.Q3_K_M.gguf) | Q3_K_M | 0.5 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/LightGPT-0.5B-Qwen2-GGUF/resolve/main/LightGPT-0.5B-Qwen2.Q3_K_L.gguf) | Q3_K_L | 0.5 | |
| [GGUF](https://huggingface.co/mradermacher/LightGPT-0.5B-Qwen2-GGUF/resolve/main/LightGPT-0.5B-Qwen2.Q4_K_S.gguf) | Q4_K_S | 0.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/LightGPT-0.5B-Qwen2-GGUF/resolve/main/LightGPT-0.5B-Qwen2.Q4_K_M.gguf) | Q4_K_M | 0.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/LightGPT-0.5B-Qwen2-GGUF/resolve/main/LightGPT-0.5B-Qwen2.Q5_K_S.gguf) | Q5_K_S | 0.5 | |
| [GGUF](https://huggingface.co/mradermacher/LightGPT-0.5B-Qwen2-GGUF/resolve/main/LightGPT-0.5B-Qwen2.Q5_K_M.gguf) | Q5_K_M | 0.5 | |
| [GGUF](https://huggingface.co/mradermacher/LightGPT-0.5B-Qwen2-GGUF/resolve/main/LightGPT-0.5B-Qwen2.Q6_K.gguf) | Q6_K | 0.6 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/LightGPT-0.5B-Qwen2-GGUF/resolve/main/LightGPT-0.5B-Qwen2.Q8_0.gguf) | Q8_0 | 0.6 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/LightGPT-0.5B-Qwen2-GGUF/resolve/main/LightGPT-0.5B-Qwen2.f16.gguf) | f16 | 1.1 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/LightGPT-7B-Llama2-i1-GGUF
|
mradermacher
| 2024-09-08T20:38:10Z | 36 | 0 |
transformers
|
[
"transformers",
"gguf",
"pytorch",
"llama-2",
"traffic signal control",
"lightgpt",
"llmlight",
"en",
"base_model:lightgpt/LightGPT-7B-Llama2",
"base_model:quantized:lightgpt/LightGPT-7B-Llama2",
"license:mit",
"endpoints_compatible",
"region:us",
"imatrix"
] | null | 2024-08-22T20:34:57Z |
---
base_model: lightgpt/LightGPT-7B-Llama2
extra_gated_button_content: Submit
extra_gated_fields:
Affiliation: text
? By clicking Submit below I accept the terms of the license and acknowledge that
the information I provide will be collected stored processed and shared in accordance
with the Meta Privacy Policy
: checkbox
Country: country
Date of birth: date_picker
First Name: text
Last Name: text
geo: ip_location
extra_gated_heading: Access LLMLight-LightGPT on Hugging Face
extra_gated_prompt: "### LLAMA 2 COMMUNITY LICENSE AGREEMENT\n\"Agreement\" means
the terms and conditions for use, reproduction, distribution and modification of
the Llama Materials set forth herein. \n\"Documentation\" means the specifications,
manuals and documentation accompanying Llama 2 distributed by Meta at https://ai.meta.com/resources/models-and-libraries/llama-downloads/.
\ \n\"Licensee\" or \"you\" means you, or your employer or any other person or entity
(if you are entering into this Agreement on such person or entity's behalf), of
the age required under applicable laws, rules or regulations to provide legal consent
and that has legal authority to bind your employer or such other person or entity
if you are entering in this Agreement on their behalf. \n\"Llama 2\" means the
foundational large language models and software and algorithms, including machine-learning
model code, trained model weights, inference-enabling code, training-enabling code,
fine-tuning enabling code and other elements of the foregoing distributed by Meta
at ai.meta.com/resources/models-and-libraries/llama-downloads/.\n\"Llama Materials\"
means, collectively, Meta's proprietary Llama 2 and documentation (and any portion
thereof) made available under this Agreement.\n\"Meta\" or \"we\" means Meta Platforms
Ireland Limited (if you are located in or, if you are an entity, your principal
place of business is in the EEA or Switzerland) and Meta Platforms, Inc. (if you
are located outside of the EEA or Switzerland). \n\nBy clicking \"I Accept\" below
or by using or distributing any portion or element of the Llama Materials, you agree
to be bound by this Agreement.\n1. License Rights and Redistribution. \na. Grant
of Rights. You are granted a non-exclusive, worldwide, non- transferable and royalty-free
limited license under Meta's intellectual property or other rights owned by Meta
embodied in the Llama Materials to use, reproduce, distribute, copy, create derivative
works of, and make modifications to the Llama Materials. \nb. Redistribution and
Use.\ni. If you distribute or make the Llama Materials, or any derivative works
\ thereof, available to a third party, you shall provide a copy of this Agreement
to such third party. \nii. If you receive Llama Materials, or any derivative works
thereof, from a Licensee as part of an integrated end user product, then Section
2 of this Agreement will not apply to you. \niii. You must retain in all copies
of the Llama Materials that you distribute the following attribution notice within
a \"Notice\" text file distributed as a part of such copies: \"Llama 2 is licensed
under the LLAMA 2 Community License, Copyright (c) Meta Platforms, Inc. All Rights
Reserved.\"\niv. Your use of the Llama Materials must comply with applicable laws
\ and regulations (including trade compliance laws and regulations) and adhere to
the Acceptable Use Policy for the Llama Materials (available at https://ai.meta.com/llama/use-policy),
which is hereby incorporated by reference into this Agreement.\nv. You will not
use the Llama Materials or any output or results of the Llama Materials to improve
any other large language model (excluding Llama 2 or derivative works thereof).
\ \n\n2. Additional Commercial Terms. If, on the Llama 2 version release date, the
\ monthly active users of the products or services made available by or for Licensee,
\ or Licensee's affiliates, is greater than 700 million monthly active users in
the preceding calendar month, you must request a license from Meta, which Meta
may grant to you in its sole discretion, and you are not authorized to exercise
any of the rights under this Agreement unless or until Meta otherwise expressly
grants you such rights.\n3. Disclaimer of Warranty. UNLESS REQUIRED BY APPLICABLE
LAW, THE LLAMA MATERIALS AND ANY OUTPUT AND RESULTS THEREFROM ARE PROVIDED ON
AN \"AS IS\" BASIS, WITHOUT WARRANTIES OF ANY KIND, EITHER EXPRESS OR IMPLIED,
INCLUDING, WITHOUT LIMITATION, ANY WARRANTIES OF TITLE, NON-INFRINGEMENT, MERCHANTABILITY,
OR FITNESS FOR A PARTICULAR PURPOSE. YOU ARE SOLELY RESPONSIBLE FOR DETERMINING
THE APPROPRIATENESS OF USING OR REDISTRIBUTING THE LLAMA MATERIALS AND ASSUME ANY
RISKS ASSOCIATED WITH YOUR USE OF THE LLAMA MATERIALS AND ANY OUTPUT AND RESULTS.\n4.
Limitation of Liability. IN NO EVENT WILL META OR ITS AFFILIATES BE LIABLE UNDER
ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, TORT, NEGLIGENCE, PRODUCTS LIABILITY,
OR OTHERWISE, ARISING OUT OF THIS AGREEMENT, FOR ANY LOST PROFITS OR ANY INDIRECT,
SPECIAL, CONSEQUENTIAL, INCIDENTAL, EXEMPLARY OR PUNITIVE DAMAGES, EVEN IF META
OR ITS AFFILIATES HAVE BEEN ADVISED OF THE POSSIBILITY OF ANY OF THE FOREGOING.\n\n5.
Intellectual Property.\na. No trademark licenses are granted under this Agreement,
and in connection with the Llama Materials, neither Meta nor Licensee may use any
name or mark owned by or associated with the other or any of its affiliates, except
as required for reasonable and customary use in describing and redistributing the
\ Llama Materials.\nb. Subject to Meta's ownership of Llama Materials and derivatives
made by or for Meta, with respect to any derivative works and modifications of
the Llama Materials that are made by you, as between you and Meta, you are and
will be the owner of such derivative works and modifications.\nc. If you institute
litigation or other proceedings against Meta or any entity (including a cross-claim
or counterclaim in a lawsuit) alleging that the Llama Materials or Llama 2 outputs
or results, or any portion of any of the foregoing, constitutes infringement of
intellectual property or other rights owned or licensable by you, then any licenses
granted to you under this Agreement shall terminate as of the date such litigation
or claim is filed or instituted. You will indemnify and hold harmless Meta from
and against any claim by any third party arising out of or related to your use
or distribution of the Llama Materials.\n6. Term and Termination. The term of this
Agreement will commence upon your acceptance of this Agreement or access to the
Llama Materials and will continue in full force and effect until terminated in
accordance with the terms and conditions herein. Meta may terminate this Agreement
if you are in breach of any term or condition of this Agreement. Upon termination
of this Agreement, you shall delete and cease use of the Llama Materials. Sections
3, 4 and 7 shall survive the termination of this Agreement. \n7. Governing Law
and Jurisdiction. This Agreement will be governed and construed under the laws
of the State of California without regard to choice of law principles, and the
UN Convention on Contracts for the International Sale of Goods does not apply to
this Agreement. The courts of California shall have exclusive jurisdiction of any
dispute arising out of this Agreement. \n### Llama 2 Acceptable Use Policy\nMeta
is committed to promoting safe and fair use of its tools and features, including
Llama 2. If you access or use Llama 2, you agree to this Acceptable Use Policy (“Policy”).
The most recent copy of this policy can be found at [ai.meta.com/llama/use-policy](http://ai.meta.com/llama/use-policy).\n####
Prohibited Uses\nWe want everyone to use Llama 2 safely and responsibly. You agree
you will not use, or allow others to use, Llama 2 to:\n1. Violate the law or others’
rights, including to:\n 1. Engage in, promote, generate, contribute to, encourage,
plan, incite, or further illegal or unlawful activity or content, such as: \n 1.
Violence or terrorism \n 2. Exploitation or harm to children, including
the solicitation, creation, acquisition, or dissemination of child exploitative
content or failure to report Child Sexual Abuse Material\n 3. Human trafficking,
exploitation, and sexual violence\n 4. The illegal distribution of information
or materials to minors, including obscene materials, or failure to employ legally
required age-gating in connection with such information or materials.\n 5.
Sexual solicitation\n 6. Any other criminal activity\n 2. Engage in,
promote, incite, or facilitate the harassment, abuse, threatening, or bullying of
individuals or groups of individuals\n 3. Engage in, promote, incite, or facilitate
discrimination or other unlawful or harmful conduct in the provision of employment,
employment benefits, credit, housing, other economic benefits, or other essential
goods and services\n 4. Engage in the unauthorized or unlicensed practice of
any profession including, but not limited to, financial, legal, medical/health,
or related professional practices \n 5. Collect, process, disclose, generate,
or infer health, demographic, or other sensitive personal or private information
about individuals without rights and consents required by applicable laws\n 6.
Engage in or facilitate any action or generate any content that infringes, misappropriates,
or otherwise violates any third-party rights, including the outputs or results of
any products or services using the Llama 2 Materials\n 7. Create, generate,
or facilitate the creation of malicious code, malware, computer viruses or do anything
else that could disable, overburden, interfere with or impair the proper working,
integrity, operation or appearance of a website or computer system \n2. Engage in,
promote, incite, facilitate, or assist in the planning or development of activities
that present a risk of death or bodily harm to individuals, including use of Llama
2 related to the following:\n 1. Military, warfare, nuclear industries or applications,
espionage, use for materials or activities that are subject to the International
Traffic Arms Regulations (ITAR) maintained by the United States Department of State\n
\ 2. Guns and illegal weapons (including weapon development)\n 3. Illegal drugs
and regulated/controlled substances\n 4. Operation of critical infrastructure,
transportation technologies, or heavy machinery\n 5. Self-harm or harm to others,
including suicide, cutting, and eating disorders\n 6. Any content intended to
incite or promote violence, abuse, or any infliction of bodily harm to an individual\n3.
Intentionally deceive or mislead others, including use of Llama 2 related to the
following:\n 1. Generating, promoting, or furthering fraud or the creation or
promotion of disinformation\n 2. Generating, promoting, or furthering defamatory
content, including the creation of defamatory statements, images, or other content\n
\ 3. Generating, promoting, or further distributing spam\n 4. Impersonating
another individual without consent, authorization, or legal right\n 5. Representing
that the use of Llama 2 or outputs are human-generated\n 6. Generating or facilitating
false online engagement, including fake reviews and other means of fake online engagement
\n 4. Fail to appropriately disclose to end users any known dangers of your AI
system \nPlease report any violation of this Policy, software “bug,” or other problems
that could lead to a violation of this Policy through one of the following means:
\n * Reporting issues with the model: [github.com/facebookresearch/llama](http://github.com/facebookresearch/llama)\n
\ * Reporting risky content generated by the model: [developers.facebook.com/llama_output_feedback](http://developers.facebook.com/llama_output_feedback)\n
\ * Reporting bugs and security concerns: [facebook.com/whitehat/info](http://facebook.com/whitehat/info)
\n * Reporting violations of the Acceptable Use Policy or unlicensed uses of
Llama: [[email protected]](mailto:[email protected])"
language:
- en
library_name: transformers
license: mit
quantized_by: mradermacher
tags:
- pytorch
- llama-2
- traffic signal control
- lightgpt
- llmlight
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/lightgpt/LightGPT-7B-Llama2
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/LightGPT-7B-Llama2-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar-sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/LightGPT-7B-Llama2-i1-GGUF/resolve/main/LightGPT-7B-Llama2.i1-IQ1_S.gguf) | i1-IQ1_S | 1.6 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/LightGPT-7B-Llama2-i1-GGUF/resolve/main/LightGPT-7B-Llama2.i1-IQ1_M.gguf) | i1-IQ1_M | 1.8 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/LightGPT-7B-Llama2-i1-GGUF/resolve/main/LightGPT-7B-Llama2.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.0 | |
| [GGUF](https://huggingface.co/mradermacher/LightGPT-7B-Llama2-i1-GGUF/resolve/main/LightGPT-7B-Llama2.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.1 | |
| [GGUF](https://huggingface.co/mradermacher/LightGPT-7B-Llama2-i1-GGUF/resolve/main/LightGPT-7B-Llama2.i1-IQ2_S.gguf) | i1-IQ2_S | 2.3 | |
| [GGUF](https://huggingface.co/mradermacher/LightGPT-7B-Llama2-i1-GGUF/resolve/main/LightGPT-7B-Llama2.i1-IQ2_M.gguf) | i1-IQ2_M | 2.5 | |
| [GGUF](https://huggingface.co/mradermacher/LightGPT-7B-Llama2-i1-GGUF/resolve/main/LightGPT-7B-Llama2.i1-Q2_K.gguf) | i1-Q2_K | 2.6 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/LightGPT-7B-Llama2-i1-GGUF/resolve/main/LightGPT-7B-Llama2.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 2.7 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/LightGPT-7B-Llama2-i1-GGUF/resolve/main/LightGPT-7B-Llama2.i1-IQ3_XS.gguf) | i1-IQ3_XS | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/LightGPT-7B-Llama2-i1-GGUF/resolve/main/LightGPT-7B-Llama2.i1-IQ3_S.gguf) | i1-IQ3_S | 3.0 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/LightGPT-7B-Llama2-i1-GGUF/resolve/main/LightGPT-7B-Llama2.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.0 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/LightGPT-7B-Llama2-i1-GGUF/resolve/main/LightGPT-7B-Llama2.i1-IQ3_M.gguf) | i1-IQ3_M | 3.2 | |
| [GGUF](https://huggingface.co/mradermacher/LightGPT-7B-Llama2-i1-GGUF/resolve/main/LightGPT-7B-Llama2.i1-Q3_K_M.gguf) | i1-Q3_K_M | 3.4 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/LightGPT-7B-Llama2-i1-GGUF/resolve/main/LightGPT-7B-Llama2.i1-Q3_K_L.gguf) | i1-Q3_K_L | 3.7 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/LightGPT-7B-Llama2-i1-GGUF/resolve/main/LightGPT-7B-Llama2.i1-IQ4_XS.gguf) | i1-IQ4_XS | 3.7 | |
| [GGUF](https://huggingface.co/mradermacher/LightGPT-7B-Llama2-i1-GGUF/resolve/main/LightGPT-7B-Llama2.i1-Q4_0.gguf) | i1-Q4_0 | 3.9 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/LightGPT-7B-Llama2-i1-GGUF/resolve/main/LightGPT-7B-Llama2.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.0 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/LightGPT-7B-Llama2-i1-GGUF/resolve/main/LightGPT-7B-Llama2.i1-Q4_K_M.gguf) | i1-Q4_K_M | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/LightGPT-7B-Llama2-i1-GGUF/resolve/main/LightGPT-7B-Llama2.i1-Q5_K_S.gguf) | i1-Q5_K_S | 4.8 | |
| [GGUF](https://huggingface.co/mradermacher/LightGPT-7B-Llama2-i1-GGUF/resolve/main/LightGPT-7B-Llama2.i1-Q5_K_M.gguf) | i1-Q5_K_M | 4.9 | |
| [GGUF](https://huggingface.co/mradermacher/LightGPT-7B-Llama2-i1-GGUF/resolve/main/LightGPT-7B-Llama2.i1-Q6_K.gguf) | i1-Q6_K | 5.6 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
mradermacher/Llama3.1-vodka-GGUF
|
mradermacher
| 2024-09-08T20:36:47Z | 27 | 1 |
transformers
|
[
"transformers",
"gguf",
"llama",
"llama-3",
"uncensored",
"mergekit",
"merge",
"en",
"base_model:agentlans/Llama3.1-vodka",
"base_model:quantized:agentlans/Llama3.1-vodka",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-08-24T09:10:54Z |
---
base_model: agentlans/Llama3.1-vodka
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- llama
- llama-3
- uncensored
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/agentlans/Llama3.1-vodka
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Llama3.1-vodka-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar-sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Llama3.1-vodka-GGUF/resolve/main/Llama3.1-vodka.Q2_K.gguf) | Q2_K | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/Llama3.1-vodka-GGUF/resolve/main/Llama3.1-vodka.IQ3_XS.gguf) | IQ3_XS | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/Llama3.1-vodka-GGUF/resolve/main/Llama3.1-vodka.Q3_K_S.gguf) | Q3_K_S | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/Llama3.1-vodka-GGUF/resolve/main/Llama3.1-vodka.IQ3_S.gguf) | IQ3_S | 3.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Llama3.1-vodka-GGUF/resolve/main/Llama3.1-vodka.IQ3_M.gguf) | IQ3_M | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/Llama3.1-vodka-GGUF/resolve/main/Llama3.1-vodka.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Llama3.1-vodka-GGUF/resolve/main/Llama3.1-vodka.Q3_K_L.gguf) | Q3_K_L | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/Llama3.1-vodka-GGUF/resolve/main/Llama3.1-vodka.IQ4_XS.gguf) | IQ4_XS | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/Llama3.1-vodka-GGUF/resolve/main/Llama3.1-vodka.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Llama3.1-vodka-GGUF/resolve/main/Llama3.1-vodka.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Llama3.1-vodka-GGUF/resolve/main/Llama3.1-vodka.Q5_K_S.gguf) | Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/Llama3.1-vodka-GGUF/resolve/main/Llama3.1-vodka.Q5_K_M.gguf) | Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/Llama3.1-vodka-GGUF/resolve/main/Llama3.1-vodka.Q6_K.gguf) | Q6_K | 6.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Llama3.1-vodka-GGUF/resolve/main/Llama3.1-vodka.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Llama3.1-vodka-GGUF/resolve/main/Llama3.1-vodka.f16.gguf) | f16 | 16.2 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/Straw-Hat-Coding-Assistant-Llama-3.1-8B-Instruct-GGUF
|
mradermacher
| 2024-09-08T20:36:42Z | 288 | 0 |
transformers
|
[
"transformers",
"gguf",
"unsloth",
"trl",
"sft",
"dpo",
"en",
"base_model:VivekChauhan06/Straw-Hat-Coding-Assistant-Llama-3.1-8B-Instruct",
"base_model:quantized:VivekChauhan06/Straw-Hat-Coding-Assistant-Llama-3.1-8B-Instruct",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-08-24T10:07:29Z |
---
base_model: VivekChauhan06/Straw-Hat-Coding-Assistant-Llama-3.1-8B-Instruct
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- unsloth
- trl
- sft
- dpo
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/VivekChauhan06/Straw-Hat-Coding-Assistant-Llama-3.1-8B-Instruct
<!-- provided-files -->
weighted/imatrix quants seem not to be available (from me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar-sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Straw-Hat-Coding-Assistant-Llama-3.1-8B-Instruct-GGUF/resolve/main/Straw-Hat-Coding-Assistant-Llama-3.1-8B-Instruct.Q2_K.gguf) | Q2_K | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/Straw-Hat-Coding-Assistant-Llama-3.1-8B-Instruct-GGUF/resolve/main/Straw-Hat-Coding-Assistant-Llama-3.1-8B-Instruct.IQ3_XS.gguf) | IQ3_XS | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/Straw-Hat-Coding-Assistant-Llama-3.1-8B-Instruct-GGUF/resolve/main/Straw-Hat-Coding-Assistant-Llama-3.1-8B-Instruct.Q3_K_S.gguf) | Q3_K_S | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/Straw-Hat-Coding-Assistant-Llama-3.1-8B-Instruct-GGUF/resolve/main/Straw-Hat-Coding-Assistant-Llama-3.1-8B-Instruct.IQ3_S.gguf) | IQ3_S | 3.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Straw-Hat-Coding-Assistant-Llama-3.1-8B-Instruct-GGUF/resolve/main/Straw-Hat-Coding-Assistant-Llama-3.1-8B-Instruct.IQ3_M.gguf) | IQ3_M | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/Straw-Hat-Coding-Assistant-Llama-3.1-8B-Instruct-GGUF/resolve/main/Straw-Hat-Coding-Assistant-Llama-3.1-8B-Instruct.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Straw-Hat-Coding-Assistant-Llama-3.1-8B-Instruct-GGUF/resolve/main/Straw-Hat-Coding-Assistant-Llama-3.1-8B-Instruct.Q3_K_L.gguf) | Q3_K_L | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/Straw-Hat-Coding-Assistant-Llama-3.1-8B-Instruct-GGUF/resolve/main/Straw-Hat-Coding-Assistant-Llama-3.1-8B-Instruct.IQ4_XS.gguf) | IQ4_XS | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/Straw-Hat-Coding-Assistant-Llama-3.1-8B-Instruct-GGUF/resolve/main/Straw-Hat-Coding-Assistant-Llama-3.1-8B-Instruct.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Straw-Hat-Coding-Assistant-Llama-3.1-8B-Instruct-GGUF/resolve/main/Straw-Hat-Coding-Assistant-Llama-3.1-8B-Instruct.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Straw-Hat-Coding-Assistant-Llama-3.1-8B-Instruct-GGUF/resolve/main/Straw-Hat-Coding-Assistant-Llama-3.1-8B-Instruct.Q5_K_S.gguf) | Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/Straw-Hat-Coding-Assistant-Llama-3.1-8B-Instruct-GGUF/resolve/main/Straw-Hat-Coding-Assistant-Llama-3.1-8B-Instruct.Q5_K_M.gguf) | Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/Straw-Hat-Coding-Assistant-Llama-3.1-8B-Instruct-GGUF/resolve/main/Straw-Hat-Coding-Assistant-Llama-3.1-8B-Instruct.Q6_K.gguf) | Q6_K | 6.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Straw-Hat-Coding-Assistant-Llama-3.1-8B-Instruct-GGUF/resolve/main/Straw-Hat-Coding-Assistant-Llama-3.1-8B-Instruct.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Straw-Hat-Coding-Assistant-Llama-3.1-8B-Instruct-GGUF/resolve/main/Straw-Hat-Coding-Assistant-Llama-3.1-8B-Instruct.f16.gguf) | f16 | 16.2 | 16 bpw, overkill |
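Once downloaded, a quant can be run directly with llama.cpp. A minimal interactive sketch — the flags below are a starting point, not a tuned configuration:
```bash
# Run the Q4_K_M quant interactively with a 4k context window
llama-cli -m Straw-Hat-Coding-Assistant-Llama-3.1-8B-Instruct.Q4_K_M.gguf \
  -c 4096 -p "Write a Python function that reverses a linked list."
```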
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for answers to
common questions and to request quants of other models.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/Llama3.1-vodka-i1-GGUF
|
mradermacher
| 2024-09-08T20:36:34Z | 5 | 1 |
transformers
|
[
"transformers",
"gguf",
"llama",
"llama-3",
"uncensored",
"mergekit",
"merge",
"en",
"base_model:agentlans/Llama3.1-vodka",
"base_model:quantized:agentlans/Llama3.1-vodka",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2024-08-24T12:15:00Z |
---
base_model: agentlans/Llama3.1-vodka
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- llama
- llama-3
- uncensored
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/agentlans/Llama3.1-vodka
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Llama3.1-vodka-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
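If you would rather serve a quant over HTTP than run it interactively, llama.cpp's server exposes an OpenAI-compatible API. A minimal sketch (model filename and flags are an example, not a recommendation):
```bash
# Serve the i1-Q4_K_M quant locally on port 8080
llama-server -m Llama3.1-vodka.i1-Q4_K_M.gguf -c 4096 --port 8080
```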
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable to similarly sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Llama3.1-vodka-i1-GGUF/resolve/main/Llama3.1-vodka.i1-IQ1_S.gguf) | i1-IQ1_S | 2.1 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Llama3.1-vodka-i1-GGUF/resolve/main/Llama3.1-vodka.i1-IQ1_M.gguf) | i1-IQ1_M | 2.3 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Llama3.1-vodka-i1-GGUF/resolve/main/Llama3.1-vodka.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.5 | |
| [GGUF](https://huggingface.co/mradermacher/Llama3.1-vodka-i1-GGUF/resolve/main/Llama3.1-vodka.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.7 | |
| [GGUF](https://huggingface.co/mradermacher/Llama3.1-vodka-i1-GGUF/resolve/main/Llama3.1-vodka.i1-IQ2_S.gguf) | i1-IQ2_S | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/Llama3.1-vodka-i1-GGUF/resolve/main/Llama3.1-vodka.i1-IQ2_M.gguf) | i1-IQ2_M | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/Llama3.1-vodka-i1-GGUF/resolve/main/Llama3.1-vodka.i1-Q2_K.gguf) | i1-Q2_K | 3.3 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Llama3.1-vodka-i1-GGUF/resolve/main/Llama3.1-vodka.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 3.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Llama3.1-vodka-i1-GGUF/resolve/main/Llama3.1-vodka.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/Llama3.1-vodka-i1-GGUF/resolve/main/Llama3.1-vodka.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.8 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Llama3.1-vodka-i1-GGUF/resolve/main/Llama3.1-vodka.i1-IQ3_S.gguf) | i1-IQ3_S | 3.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Llama3.1-vodka-i1-GGUF/resolve/main/Llama3.1-vodka.i1-IQ3_M.gguf) | i1-IQ3_M | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/Llama3.1-vodka-i1-GGUF/resolve/main/Llama3.1-vodka.i1-Q3_K_M.gguf) | i1-Q3_K_M | 4.1 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Llama3.1-vodka-i1-GGUF/resolve/main/Llama3.1-vodka.i1-Q3_K_L.gguf) | i1-Q3_K_L | 4.4 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Llama3.1-vodka-i1-GGUF/resolve/main/Llama3.1-vodka.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.5 | |
| [GGUF](https://huggingface.co/mradermacher/Llama3.1-vodka-i1-GGUF/resolve/main/Llama3.1-vodka.i1-Q4_0.gguf) | i1-Q4_0 | 4.8 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Llama3.1-vodka-i1-GGUF/resolve/main/Llama3.1-vodka.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.8 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Llama3.1-vodka-i1-GGUF/resolve/main/Llama3.1-vodka.i1-Q4_K_M.gguf) | i1-Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Llama3.1-vodka-i1-GGUF/resolve/main/Llama3.1-vodka.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/Llama3.1-vodka-i1-GGUF/resolve/main/Llama3.1-vodka.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/Llama3.1-vodka-i1-GGUF/resolve/main/Llama3.1-vodka.i1-Q6_K.gguf) | i1-Q6_K | 6.7 | practically like static Q6_K |
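Recent llama.cpp builds can also fetch a quant straight from the Hub and run it in one step. A minimal sketch using the recommended i1-Q4_K_M file:
```bash
# Download (and cache) the quant from the Hub, then start an interactive run
llama-cli --hf-repo mradermacher/Llama3.1-vodka-i1-GGUF \
  --hf-file Llama3.1-vodka.i1-Q4_K_M.gguf -p "Hello"
```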
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for answers to
common questions and to request quants of other models.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|