RichardErkhov/PracticeLLM_-_Twice-KoSOLAR-16.1B-instruct-test-gguf | RichardErkhov | 2024-07-02T18:00:12Z | 0 | 0 | null | [
"gguf",
"arxiv:2312.15166",
"region:us"
] | null | 2024-07-02T13:07:42Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Twice-KoSOLAR-16.1B-instruct-test - GGUF
- Model creator: https://huggingface.co/PracticeLLM/
- Original model: https://huggingface.co/PracticeLLM/Twice-KoSOLAR-16.1B-instruct-test/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [Twice-KoSOLAR-16.1B-instruct-test.Q2_K.gguf](https://huggingface.co/RichardErkhov/PracticeLLM_-_Twice-KoSOLAR-16.1B-instruct-test-gguf/blob/main/Twice-KoSOLAR-16.1B-instruct-test.Q2_K.gguf) | Q2_K | 5.59GB |
| [Twice-KoSOLAR-16.1B-instruct-test.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/PracticeLLM_-_Twice-KoSOLAR-16.1B-instruct-test-gguf/blob/main/Twice-KoSOLAR-16.1B-instruct-test.IQ3_XS.gguf) | IQ3_XS | 6.21GB |
| [Twice-KoSOLAR-16.1B-instruct-test.IQ3_S.gguf](https://huggingface.co/RichardErkhov/PracticeLLM_-_Twice-KoSOLAR-16.1B-instruct-test-gguf/blob/main/Twice-KoSOLAR-16.1B-instruct-test.IQ3_S.gguf) | IQ3_S | 6.55GB |
| [Twice-KoSOLAR-16.1B-instruct-test.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/PracticeLLM_-_Twice-KoSOLAR-16.1B-instruct-test-gguf/blob/main/Twice-KoSOLAR-16.1B-instruct-test.Q3_K_S.gguf) | Q3_K_S | 6.52GB |
| [Twice-KoSOLAR-16.1B-instruct-test.IQ3_M.gguf](https://huggingface.co/RichardErkhov/PracticeLLM_-_Twice-KoSOLAR-16.1B-instruct-test-gguf/blob/main/Twice-KoSOLAR-16.1B-instruct-test.IQ3_M.gguf) | IQ3_M | 6.77GB |
| [Twice-KoSOLAR-16.1B-instruct-test.Q3_K.gguf](https://huggingface.co/RichardErkhov/PracticeLLM_-_Twice-KoSOLAR-16.1B-instruct-test-gguf/blob/main/Twice-KoSOLAR-16.1B-instruct-test.Q3_K.gguf) | Q3_K | 7.25GB |
| [Twice-KoSOLAR-16.1B-instruct-test.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/PracticeLLM_-_Twice-KoSOLAR-16.1B-instruct-test-gguf/blob/main/Twice-KoSOLAR-16.1B-instruct-test.Q3_K_M.gguf) | Q3_K_M | 7.25GB |
| [Twice-KoSOLAR-16.1B-instruct-test.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/PracticeLLM_-_Twice-KoSOLAR-16.1B-instruct-test-gguf/blob/main/Twice-KoSOLAR-16.1B-instruct-test.Q3_K_L.gguf) | Q3_K_L | 7.89GB |
| [Twice-KoSOLAR-16.1B-instruct-test.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/PracticeLLM_-_Twice-KoSOLAR-16.1B-instruct-test-gguf/blob/main/Twice-KoSOLAR-16.1B-instruct-test.IQ4_XS.gguf) | IQ4_XS | 8.14GB |
| [Twice-KoSOLAR-16.1B-instruct-test.Q4_0.gguf](https://huggingface.co/RichardErkhov/PracticeLLM_-_Twice-KoSOLAR-16.1B-instruct-test-gguf/blob/main/Twice-KoSOLAR-16.1B-instruct-test.Q4_0.gguf) | Q4_0 | 8.48GB |
| [Twice-KoSOLAR-16.1B-instruct-test.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/PracticeLLM_-_Twice-KoSOLAR-16.1B-instruct-test-gguf/blob/main/Twice-KoSOLAR-16.1B-instruct-test.IQ4_NL.gguf) | IQ4_NL | 8.58GB |
| [Twice-KoSOLAR-16.1B-instruct-test.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/PracticeLLM_-_Twice-KoSOLAR-16.1B-instruct-test-gguf/blob/main/Twice-KoSOLAR-16.1B-instruct-test.Q4_K_S.gguf) | Q4_K_S | 8.55GB |
| [Twice-KoSOLAR-16.1B-instruct-test.Q4_K.gguf](https://huggingface.co/RichardErkhov/PracticeLLM_-_Twice-KoSOLAR-16.1B-instruct-test-gguf/blob/main/Twice-KoSOLAR-16.1B-instruct-test.Q4_K.gguf) | Q4_K | 9.03GB |
| [Twice-KoSOLAR-16.1B-instruct-test.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/PracticeLLM_-_Twice-KoSOLAR-16.1B-instruct-test-gguf/blob/main/Twice-KoSOLAR-16.1B-instruct-test.Q4_K_M.gguf) | Q4_K_M | 9.03GB |
| [Twice-KoSOLAR-16.1B-instruct-test.Q4_1.gguf](https://huggingface.co/RichardErkhov/PracticeLLM_-_Twice-KoSOLAR-16.1B-instruct-test-gguf/blob/main/Twice-KoSOLAR-16.1B-instruct-test.Q4_1.gguf) | Q4_1 | 9.41GB |
| [Twice-KoSOLAR-16.1B-instruct-test.Q5_0.gguf](https://huggingface.co/RichardErkhov/PracticeLLM_-_Twice-KoSOLAR-16.1B-instruct-test-gguf/blob/main/Twice-KoSOLAR-16.1B-instruct-test.Q5_0.gguf) | Q5_0 | 10.33GB |
| [Twice-KoSOLAR-16.1B-instruct-test.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/PracticeLLM_-_Twice-KoSOLAR-16.1B-instruct-test-gguf/blob/main/Twice-KoSOLAR-16.1B-instruct-test.Q5_K_S.gguf) | Q5_K_S | 10.33GB |
| [Twice-KoSOLAR-16.1B-instruct-test.Q5_K.gguf](https://huggingface.co/RichardErkhov/PracticeLLM_-_Twice-KoSOLAR-16.1B-instruct-test-gguf/blob/main/Twice-KoSOLAR-16.1B-instruct-test.Q5_K.gguf) | Q5_K | 10.61GB |
| [Twice-KoSOLAR-16.1B-instruct-test.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/PracticeLLM_-_Twice-KoSOLAR-16.1B-instruct-test-gguf/blob/main/Twice-KoSOLAR-16.1B-instruct-test.Q5_K_M.gguf) | Q5_K_M | 10.61GB |
| [Twice-KoSOLAR-16.1B-instruct-test.Q5_1.gguf](https://huggingface.co/RichardErkhov/PracticeLLM_-_Twice-KoSOLAR-16.1B-instruct-test-gguf/blob/main/Twice-KoSOLAR-16.1B-instruct-test.Q5_1.gguf) | Q5_1 | 11.26GB |
| [Twice-KoSOLAR-16.1B-instruct-test.Q6_K.gguf](https://huggingface.co/RichardErkhov/PracticeLLM_-_Twice-KoSOLAR-16.1B-instruct-test-gguf/blob/main/Twice-KoSOLAR-16.1B-instruct-test.Q6_K.gguf) | Q6_K | 12.3GB |
| [Twice-KoSOLAR-16.1B-instruct-test.Q8_0.gguf](https://huggingface.co/RichardErkhov/PracticeLLM_-_Twice-KoSOLAR-16.1B-instruct-test-gguf/blob/main/Twice-KoSOLAR-16.1B-instruct-test.Q8_0.gguf) | Q8_0 | 15.93GB |
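A rough way to choose among the files above is to compare file size against available memory. The sketch below uses sizes copied from the table; it is only a heuristic, since context length and KV-cache usage add memory on top of the weights:

```python
# Sizes (GB) copied from the quant table above (subset shown).
QUANT_SIZES_GB = {
    "Q2_K": 5.59, "Q3_K_M": 7.25, "Q4_K_M": 9.03,
    "Q5_K_M": 10.61, "Q6_K": 12.3, "Q8_0": 15.93,
}

def largest_fitting_quant(budget_gb, sizes=QUANT_SIZES_GB):
    """Return the largest quant whose file fits the memory budget,
    or None if none fit. Leaves headroom decisions to the caller."""
    fitting = {name: gb for name, gb in sizes.items() if gb <= budget_gb}
    return max(fitting, key=fitting.get) if fitting else None

print(largest_fitting_quant(10.0))  # Q4_K_M
```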
Original model description:
---
language:
- en
- ko
pipeline_tag: text-generation
license: cc-by-nc-sa-4.0
tags:
- merge
---
# **Twice-KoSOLAR-16.1B-instruct-test**
## Model Details
**Model Developers** Kyujin Han (kyujinpy)
**Model Purpose**
<img src='./solar.png'>
Recently, the SOLAR-10.7B model has been performing strongly on LLM leaderboards thanks to the [Depth-Up-Scaling](https://arxiv.org/pdf/2312.15166.pdf) methodology (pictured above). In addition, the `seungduk/KoSOLAR-10.7B-v0.1` model built by Yanolja has had a major impact on the Ko-LLM leaderboard, and the direction of the leaderboard is expected to change going forward.
That raised a simple question for me. **The Depth-Up-Scaling (DUS) methodology published by Upstage merges (passthrough) two mistral-7B models.**
Remarkably, the `upstage/SOLAR-10.7B-v1.0` model built with DUS recorded higher leaderboard performance than the original mistral-7B model (see the table below).
So I was curious whether applying the DUS methodology to other models, without restriction, would produce the same result.
Through this experiment, I aim to reach a conclusion about that question.
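The Depth-Up-Scaling idea referenced above can be sketched as pure layer-index arithmetic (an illustration using the configuration published in the DUS paper, not the actual weight-merging code, which operates on checkpoints, e.g. via mergekit's passthrough method):

```python
# Illustration of Depth-Up-Scaling (DUS) as layer-index arithmetic.
def dus_layer_plan(n_layers: int, n_drop: int) -> list:
    first = list(range(0, n_layers - n_drop))   # copy 1: drop the top n_drop layers
    second = list(range(n_drop, n_layers))      # copy 2: drop the bottom n_drop layers
    return first + second                       # concatenated depth-upscaled stack

# Mistral-7B has 32 decoder layers; dropping 8 per copy gives the
# 48-layer SOLAR-10.7B architecture described in the DUS paper.
plan = dus_layer_plan(32, 8)
print(len(plan))  # 48
```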
| Model | Average | ARC | HellaSwag | MMLU | TruthfulQA | Winogrande | GSM8K |
| --- | --- | --- | --- | --- | --- | --- | --- |
| [seungduk/KoSOLAR-10.7B-v0.1](https://huggingface.co/seungduk/KoSOLAR-10.7B-v0.1) | **66.04** | 62.03 | 84.54 | 65.56 | 45.03 | 83.58 | 55.50 |
| [upstage/SOLAR-10.7B-v1.0](https://huggingface.co/upstage/SOLAR-10.7B-v1.0) | **66.04** | 61.95 | 84.60 | 65.48 | 45.04 | 83.66 | 55.50 |
| [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) | 60.97 | 59.98 | 83.31 | 64.16 | 42.15 | 78.37 | 37.83 |
> See the English leaderboard: [En-link](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
**Method**
Instruction-tuning.
**Hyperparameters**
```bash
python finetune.py \
--base_model PracticeLLM/Twice-KoSOLAR-16.1B-test \
--data-path kyujinpy/KOR-OpenOrca-Platypus-v3 \
--output_dir ./Twice-KoSOLAR-16.1B-instruct-test \
--batch_size 64 \
--micro_batch_size 1 \
--num_epochs 1 \
--learning_rate 3e-5 \
--cutoff_len 4096 \
--val_set_size 0 \
--lora_r 16 \
--lora_alpha 16 \
--lora_dropout 0.05 \
--lora_target_modules '[q_proj, k_proj, v_proj, o_proj, gate_proj, down_proj, up_proj, lm_head]' \
--train_on_inputs False \
--add_eos_token False \
--group_by_length False \
--prompt_template_name user_prompt \
--lr_scheduler 'cosine' \
#--warmup_steps 100 \
```
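For reference, with `--batch_size 64` and `--micro_batch_size 1`, Alpaca-LoRA-style training scripts usually derive gradient accumulation as the ratio of the two (an assumption about this particular `finetune.py`, whose source is not shown here):

```python
# Effective optimizer batch vs. per-step forward/backward batch.
batch_size = 64
micro_batch_size = 1

# Common derivation in Alpaca-LoRA-style scripts (assumed here):
gradient_accumulation_steps = batch_size // micro_batch_size
print(gradient_accumulation_steps)  # 64 micro-steps per optimizer update
```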
> I believe in sharing everything.
# **Model Benchmark**
## Open Ko-LLM leaderboard & lm-evaluation-harness (zero-shot)
- See the Korean leaderboard: [Ko-link](https://huggingface.co/spaces/upstage/open-ko-llm-leaderboard).
| Model | Average | ARC | HellaSwag | MMLU | TruthfulQA | Ko-CommonGenV2 |
| --- | --- | --- | --- | --- | --- | --- |
| PracticeLLM/Twice-KoSOLAR-16.1B-instruct-test | 53.64 | 52.30 | 59.98 | 53.42 | 44.07 | 58.44 |
| PracticeLLM/Twice-KoSOLAR-16.1B-test | 50.20 | 45.65 | 57.14 | 51.39 | 42.99 | 53.84 |
| [Megastudy/M-SOLAR-10.7B-v1.1-beta](https://huggingface.co/Megastudy/M-SOLAR-10.7B-v1.1-beta) | 55.25 | 51.71 | 60.86 | 54.24 | 47.12 | 62.34 |
| [jjourney1125/M-SOLAR-10.7B-v1.0](https://huggingface.co/jjourney1125/M-SOLAR-10.7B-v1.0) | 55.15 | 49.57 | 60.12 | 54.60 | 49.23 | 62.22 |
| [seungduk/KoSOLAR-10.7B-v0.1](https://huggingface.co/seungduk/KoSOLAR-10.7B-v0.1) | 52.40 | 47.18 | 59.54 | 52.04 | 41.84 | 61.39 |
- Results from [beomi/LM-Harness](https://github.com/Beomi/ko-lm-evaluation-harness):
```
gpt2 (pretrained=PracticeLLM/Twice-KoSOLAR-16.1B-instruct-test), limit: None, provide_description: False, num_fewshot: 0, batch_size: None
| Task |Version| Metric |Value | |Stderr|
|----------------|------:|--------|-----:|---|-----:|
|kobest_boolq | 0|acc |0.5100|± |0.0133|
| | |macro_f1|0.3527|± |0.0079|
|kobest_copa | 0|acc |0.6740|± |0.0148|
| | |macro_f1|0.6732|± |0.0148|
|kobest_hellaswag| 0|acc |0.4640|± |0.0223|
| | |acc_norm|0.5480|± |0.0223|
| | |macro_f1|0.4585|± |0.0223|
|kobest_sentineg | 0|acc |0.6574|± |0.0238|
| | |macro_f1|0.6184|± |0.0253|
gpt2 (pretrained=PracticeLLM/Twice-KoSOLAR-16.1B-test), limit: None, provide_description: False, num_fewshot: 0, batch_size: None
| Task |Version| Metric |Value | |Stderr|
|----------------|------:|--------|-----:|---|-----:|
|kobest_boolq | 0|acc |0.7201|± |0.0120|
| | |macro_f1|0.7073|± |0.0124|
|kobest_copa | 0|acc |0.6510|± |0.0151|
| | |macro_f1|0.6506|± |0.0151|
|kobest_hellaswag| 0|acc |0.4520|± |0.0223|
| | |acc_norm|0.5820|± |0.0221|
| | |macro_f1|0.4475|± |0.0222|
|kobest_sentineg | 0|acc |0.7078|± |0.0229|
| | |macro_f1|0.7071|± |0.0229|
gpt2 (pretrained=Megastudy/M-SOLAR-10.7B-v1.1-beta), limit: None, provide_description: False, num_fewshot: 0, batch_size: None
| Task |Version| Metric |Value | |Stderr|
|----------------|------:|--------|-----:|---|-----:|
|kobest_boolq | 0|acc |0.7137|± |0.0121|
| | |macro_f1|0.6878|± |0.0128|
|kobest_copa | 0|acc |0.7060|± |0.0144|
| | |macro_f1|0.7054|± |0.0145|
|kobest_hellaswag| 0|acc |0.4620|± |0.0223|
| | |acc_norm|0.5360|± |0.0223|
| | |macro_f1|0.4595|± |0.0223|
|kobest_sentineg | 0|acc |0.7431|± |0.0220|
| | |macro_f1|0.7295|± |0.0230|
gpt2 (pretrained=jjourney1125/M-SOLAR-10.7B-v1.0), limit: None, provide_description: False, num_fewshot: 0, batch_size: None
| Task |Version| Metric |Value | |Stderr|
|----------------|------:|--------|-----:|---|-----:|
|kobest_boolq | 0|acc |0.5228|± |0.0133|
| | |macro_f1|0.3788|± |0.0097|
|kobest_copa | 0|acc |0.6860|± |0.0147|
| | |macro_f1|0.6858|± |0.0147|
|kobest_hellaswag| 0|acc |0.4580|± |0.0223|
| | |acc_norm|0.5380|± |0.0223|
| | |macro_f1|0.4552|± |0.0222|
|kobest_sentineg | 0|acc |0.6474|± |0.0240|
| | |macro_f1|0.6012|± |0.0257|
gpt2 (pretrained=yanolja/KoSOLAR-10.7B-v0.1), limit: None, provide_description: False, num_fewshot: 0, batch_size: None
| Task |Version| Metric |Value | |Stderr|
|----------------|------:|--------|-----:|---|-----:|
|kobest_boolq | 0|acc |0.8725|± |0.0089|
| | |macro_f1|0.8722|± |0.0089|
|kobest_copa | 0|acc |0.6850|± |0.0147|
| | |macro_f1|0.6844|± |0.0147|
|kobest_hellaswag| 0|acc |0.4340|± |0.0222|
| | |acc_norm|0.5840|± |0.0221|
| | |macro_f1|0.4296|± |0.0221|
|kobest_sentineg | 0|acc |0.7506|± |0.0217|
| | |macro_f1|0.7505|± |0.0217|
```
# Implementation Code
```python
### KO-Platypus
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
repo = "PracticeLLM/Twice-KoSOLAR-16.1B-instruct-test"
OpenOrca = AutoModelForCausalLM.from_pretrained(
repo,
return_dict=True,
torch_dtype=torch.float16,
device_map='auto'
)
OpenOrca_tokenizer = AutoTokenizer.from_pretrained(repo)
```
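Because training used `--prompt_template_name user_prompt`, inference prompts should be wrapped in the same template. The card does not reproduce that template, so the format below is a hypothetical placeholder (`build_prompt` is an illustrative helper, not part of the repo), shown only to sketch the wiring with the snippet above:

```python
def build_prompt(instruction: str) -> str:
    # Hypothetical format: the real "user_prompt" template used by
    # finetune.py is not shown in this card.
    return f"### User:\n{instruction}\n\n### Assistant:\n"

prompt = build_prompt("Introduce yourself in Korean.")
# With the model/tokenizer loaded above (requires GPU + downloaded weights):
# inputs = OpenOrca_tokenizer(prompt, return_tensors="pt").to(OpenOrca.device)
# output = OpenOrca.generate(**inputs, max_new_tokens=256)
# print(OpenOrca_tokenizer.decode(output[0], skip_special_tokens=True))
```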
--- References (Model Card)
# yanolja/KoSOLAR-10.7B-v0.1
This model is a Korean vocabulary-extended version of [upstage/SOLAR-10.7B-v1.0](https://huggingface.co/upstage/SOLAR-10.7B-v1.0), trained on various Korean web-crawled datasets that are publicly available on HuggingFace.
The hypothesis was that while maintaining the original performance of the base model, we could add more tokens to the base model's vocabulary by training the embeddings for the new tokens only. The evaluation results seem to indicate that both English and Korean performances were preserved.
## Model Description
Most parameters of [upstage/SOLAR-10.7B-v1.0](https://huggingface.co/upstage/SOLAR-10.7B-v1.0) were frozen except for the embed_tokens layer and the lm_head layer. Embeddings for the existing tokens in those layers were frozen during training. The embeddings for the new tokens have been tuned.
---
# **Meet 10.7B Solar: Elevating Performance with Upstage Depth UP Scaling!**
# **Introduction**
We introduce SOLAR-10.7B, an advanced large language model (LLM) with 10.7 billion parameters, demonstrating superior performance in various natural language processing (NLP) tasks. It's compact, yet remarkably powerful, and demonstrates unparalleled state-of-the-art performance in models with parameters under 30B.
We present a methodology for scaling LLMs called depth up-scaling (DUS), which encompasses architectural modifications and continued pretraining. In other words, we integrated Mistral 7B weights into the upscaled layers, and finally, continued pre-training for the entire model.
SOLAR-10.7B has remarkable performance. It outperforms models with up to 30B parameters, even surpassing the recent Mixtral 8X7B model. For detailed information, please refer to the experimental table.
Solar 10.7B is an ideal choice for fine-tuning. SOLAR-10.7B offers robustness and adaptability for your fine-tuning needs. Our simple instruction fine-tuning using the SOLAR-10.7B pre-trained model yields significant performance improvements ([SOLAR-10.7B-Instruct-v1.0](https://huggingface.co/upstage/SOLAR-10.7B-Instruct-v1.0)).
For full details of this model please read our [paper](https://arxiv.org/abs/2312.15166).
|
marip/NASAC2-KLUE-BERT-FINETUNNING_v1_20240702 | marip | 2024-07-02T13:08:30Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-07-02T13:08:00Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
gokulsrinivasagan/gpt_train_2_768_new | gokulsrinivasagan | 2024-07-02T13:08:50Z | 0 | 0 | null | [
"region:us"
] | null | 2024-07-02T13:08:50Z | Entry not found |
gokulsrinivasagan/gpt_train_12_384_new | gokulsrinivasagan | 2024-07-02T13:09:02Z | 0 | 0 | null | [
"region:us"
] | null | 2024-07-02T13:09:02Z | Entry not found |
COPA/WL-url-text-class | COPA | 2024-07-02T13:11:36Z | 0 | 0 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"bert",
"setfit",
"text-classification",
"arxiv:2209.11055",
"license:apache-2.0",
"region:us"
] | text-classification | 2024-07-02T13:09:07Z | ---
license: apache-2.0
tags:
- setfit
- sentence-transformers
- text-classification
pipeline_tag: text-classification
---
# COPA/WL-url-text-class
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
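The contrastive step in 1. works by turning the few labeled examples into sentence pairs, positive when labels match and negative otherwise. A minimal sketch of that pair generation (illustrative only, not SetFit's actual sampler):

```python
from itertools import combinations

def contrastive_pairs(examples):
    """examples: list of (text, label) tuples. Emits
    (text_a, text_b, similarity) triples: 1.0 for same-label pairs,
    0.0 for cross-label pairs, used to fine-tune the embedder."""
    return [
        (ta, tb, 1.0 if la == lb else 0.0)
        for (ta, la), (tb, lb) in combinations(examples, 2)
    ]

data = [("great movie", "pos"), ("loved it", "pos"), ("awful film", "neg")]
pairs = contrastive_pairs(data)
print(len(pairs))  # 3 pairs from 3 examples
```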
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("COPA/WL-url-text-class")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
|
olonok/phi-3-mini-128k-instruct-f16-gguf | olonok | 2024-07-02T13:15:27Z | 0 | 0 | null | [
"region:us"
] | null | 2024-07-02T13:09:20Z | Entry not found |
YongjieNiu/prior-rsLoRA-adl-cat-1-500 | YongjieNiu | 2024-07-02T15:18:36Z | 0 | 0 | diffusers | [
"diffusers",
"text-to-image",
"diffusers-training",
"lora",
"template:sd-lora",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"base_model:SDXL_model",
"license:openrail++",
"region:us"
] | text-to-image | 2024-07-02T13:10:05Z | ---
license: openrail++
library_name: diffusers
tags:
- text-to-image
- diffusers-training
- diffusers
- lora
- template:sd-lora
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
base_model: SDXL_model
instance_prompt: a photo of adl cat
widget:
- text: a photo of adl cat by the sea
output:
url: image_0.png
- text: a photo of adl cat by the sea
output:
url: image_1.png
- text: a photo of adl cat by the sea
output:
url: image_2.png
- text: a photo of adl cat by the sea
output:
url: image_3.png
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# SDXL LoRA DreamBooth - YongjieNiu/prior-rsLoRA-adl-cat-1-500
<Gallery />
## Model description
These are YongjieNiu/prior-rsLoRA-adl-cat-1-500 LoRA adaptation weights for SDXL_model.
The weights were trained using [DreamBooth](https://dreambooth.github.io/).
LoRA for the text encoder was enabled: False.
Special VAE used for training: VAE.
## Trigger words
You should use `a photo of adl cat` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](YongjieNiu/prior-rsLoRA-adl-cat-1-500/tree/main) them in the Files & versions tab.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model] |
geonheechoi22/llama-2-ko-7b-Q4_K_M-GGUF | geonheechoi22 | 2024-07-02T13:10:36Z | 0 | 0 | null | [
"gguf",
"facebook",
"meta",
"pytorch",
"llama",
"llama-2",
"kollama",
"llama-2-ko",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"en",
"ko",
"base_model:beomi/llama-2-ko-7b",
"region:us"
] | text-generation | 2024-07-02T13:10:18Z | ---
base_model: beomi/llama-2-ko-7b
language:
- en
- ko
pipeline_tag: text-generation
tags:
- facebook
- meta
- pytorch
- llama
- llama-2
- kollama
- llama-2-ko
- llama-cpp
- gguf-my-repo
inference: false
---
# geonheechoi22/llama-2-ko-7b-Q4_K_M-GGUF
This model was converted to GGUF format from [`beomi/llama-2-ko-7b`](https://huggingface.co/beomi/llama-2-ko-7b) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/beomi/llama-2-ko-7b) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo geonheechoi22/llama-2-ko-7b-Q4_K_M-GGUF --hf-file llama-2-ko-7b-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo geonheechoi22/llama-2-ko-7b-Q4_K_M-GGUF --hf-file llama-2-ko-7b-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with any hardware-specific flags (e.g. `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo geonheechoi22/llama-2-ko-7b-Q4_K_M-GGUF --hf-file llama-2-ko-7b-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo geonheechoi22/llama-2-ko-7b-Q4_K_M-GGUF --hf-file llama-2-ko-7b-q4_k_m.gguf -c 2048
```
|
maxseats/SungBeom-whisper-small-ko-set17 | maxseats | 2024-07-02T13:12:33Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"whisper",
"automatic-speech-recognition",
"speech-recognition",
"ko",
"dataset:maxseats/aihub-464-preprocessed-680GB-set-17",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2024-07-02T13:12:11Z |
---
language: ko
tags:
- whisper
- speech-recognition
datasets:
- maxseats/aihub-464-preprocessed-680GB-set-17
metrics:
- cer
---
# Model Name : maxseats/SungBeom-whisper-small-ko-set17
# Description
- Fine-tuning dataset: maxseats/aihub-464-preprocessed-680GB-set-17
# Notes
- Fine-tuning is in progress on AI Hub's domain-specific meeting speech datasets.
- This model was initialized from the checkpoint fine-tuned on set_0 through set_16 (170GB of the 680GB corpus) and then trained on the set_17 data (10GB).
- Link: https://huggingface.co/datasets/maxseats/aihub-464-preprocessed-680GB-set-17
|
Trelis/multi-qa-MiniLM-L6-dot-v1-ft-pairs-4-cst-epoch-s1-overlap | Trelis | 2024-07-02T13:12:52Z | 0 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"bert",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:211",
"loss:MultipleNegativesRankingLoss",
"arxiv:1908.10084",
"arxiv:1705.00652",
"base_model:sentence-transformers/multi-qa-MiniLM-L6-dot-v1",
"autotrain_compatible",
"endpoints_compatible",
"text-embeddings-inference",
"region:us"
] | sentence-similarity | 2024-07-02T13:12:46Z | ---
base_model: sentence-transformers/multi-qa-MiniLM-L6-dot-v1
datasets: []
language: []
library_name: sentence-transformers
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:211
- loss:MultipleNegativesRankingLoss
widget:
- source_sentence: What happens if a player in possession enters the defending team's
seven-metre zone?
sentences:
- 10. 8 if a touch is made in the in - goal area before the ball is grounded, the
player in possession is to perform a rollball seven ( 7 ) metres from the team's
attacking try line, provided it is not the sixth touch and the player is not
half. 10. 9 if a player in possession is touched while on or behind their defending
try line, the touch counts and once the referee sets the mark seven ( 7 ) metres
directly forward of the contact point from the defending team's try line, a
rollball is performed. 10. 10 if a player in possession intentionally makes a
touch on an offside defender who is making every effort to retire and remain out
of play, the touch counts. fit playing rules - 5th edition copyright © touch football
australia 2020 9 10. 11 if a touch is made on a player in possession while the
player is juggling the ball in an attempt to maintain control of it, the touch
counts if the attacking player following the touch retains possession.
- 9. 2 on the change of possession due to an intercept, the first touch will be
zero ( 0 ) touch. 9. 3 following the sixth touch or a loss of possession due to
any other means, the ball must be returned to the mark without delay. ruling =
a deliberate delay in the changeover procedure will result in a penalty awarded
to the non - offending team ten ( 10 ) metres forward of the mark for the change
of possession. 9. 4 if the ball is dropped or passed and goes to ground during
play, a change of possession results. ruling = the mark for the change of possession
is where the ball makes initial contact with the ground. 9. 5 if the ball, while
still under the control of the half, contacts the ground in the in - goal area,
possession is lost. ruling = play will restart with a rollball at the nearest
point on the seven ( 7 ) metre line. fit playing rules - 5th edition 8 copyright
© touch football australia 2020 9. 6 if a player mishandles the ball and even
if in an effort to gain control, the ball is accidentally knocked forward into
any other player, a change of possession results.
- fit playing rules - 5th edition copyright © touch football australia 2020 9 10.
11 if a touch is made on a player in possession while the player is juggling the
ball in an attempt to maintain control of it, the touch counts if the attacking
player following the touch retains possession. 10. 12 if a player in possession
is touched and subsequently makes contact with either the sideline, a field marker
or the ground outside the field of play, the touch counts and play continues with
a rollball at the mark where the touch occurred. 10. 13 when a player from the
defending team enters its defensive seven metre zone, the defending team must
move forward at a reasonable pace until a touch is imminent or made. ruling =
a penalty to the attacking team at the point of the infringement. 10. 14 when
a player in possession enters the defending teams' seven metre zone the defending
team is not obliged to move forward but cannot retire back towards their try line
until a touch is imminent or made. ruling = a penalty to the attacking team at
the seven ( 7 ) metre line in line with the point of the infringement.
- source_sentence: What is the maximum number of touches a team can have before a
change of possession?
sentences:
- touch count the progressive number of touches that each team has before a change
of possession, from zero ( 0 ) to six ( 6 ). try the result of any attacking player,
except the half, placing the ball on or over the team โ s attacking try line before
being touched. try lines the lines separating the in - goal areas from the field
of play. see appendix 1. voluntary rollball the player in possession performs
a rollball before a touch is made with a defending player. wing the player outside
the link player. winner the team that scores the most tries during the match.
fit playing rules - 5th edition 4 copyright © touch football australia 2020 rules
of play mode of play the object of the game of touch is for each team to score
tries and to prevent the opposition from scoring. the ball may be passed, knocked
or handed between players of the attacking team who may in turn run or otherwise
move with the ball in an attempt to gain territorial advantage and to score tries.
defending players prevent the attacking team from gaining a territorial advantage
by touching the ball carrier.
- 4. 3. 1 identifying numbers must feature no more than two ( 2 ) digits. 4. 4 hats
or caps are permitted to be worn during a match provided they are safe and meet
any nta regulations. 4. 5 safe footwear must be worn with exceptions allowed for
game variants such as beach touch. 4. 6 light leather or synthetic boots with
soft moulded soles are permitted. 4. 6. 1 shoes with screw - in studs are not
to be worn by any player or referee. 4. 7 players are not to participate in any
match wearing any item of jewellery, chain, identification band / bracelet or
similar item that may prove dangerous. any jewellery or other items that cannot
be removed are to be taped to the satisfaction of the referee. 4. 8 long ( extend
beyond the finger flesh when viewed from the palm ) or sharp fingernails are not
allowed. 4. 9 referees and players may wear spectacles or sunglasses provided
they are safe and securely attached. 4. 10 referees and players may wear sport
monitoring equipment and medical supports such as knee or ankle braces provided,
at the sole discretion of competition ' s controlling body, the items are not
dangerous.
- a player with both feet on or behind their defending try line. pass the act of
changing possession between individual attacking players by propelling the ball
laterally and / or backwards and may include a flick, knock or throw. perimeter
a border not less than five ( 5 ) metres from the boundary of the field of play.
see appendix 1. penalty the ruling by a referee to award a tap when a player or
team infringes the rules of the game. possession refers to the player or team
that has control of the ball. providing other rules do not apply, the team with
the ball is entitled to six ( 6 ) touches. referee the match official ( s ) appointed
to make rulings during the conduct of a match. rollball the act of bringing the
ball into play following a touch or a change of possession. ruck / rollball area
the area, not exceeding one ( 1 ) metre in distance, between the player performing
a rollball and the half. ruling the decision made by a referee as a result of
particular circumstance and may result in a play on, a tap penalty, a discipline
option, change of possession or a try. seven metre zone the area between the seven
( 7 ) metre line and the try line.
- source_sentence: What is the definition of 'forward' in Touch Rugby?
sentences:
- end of play when the referee indicates completion of the match. exclusion when
a player is sent to the nearest sin bin area following three ( 3 ) penalties by
the defending team upon entering their seven metre zone. the player is counted
as a player on the field of play and cannot be replaced or interchanged. fit playing
rules - 5th edition copyright © touch football australia 2020 1 fit federation
of international touch field of play the playing area bounded by the sidelines
and dead ball lines, both of which are out of bounds. see appendix 1. forced interchange
when a player is required to undertake a compulsory interchange for an infringement
ruled more serious than a penalty but less serious than a permanent interchange,
sin bin or dismissal. forward a position or direction towards the dead ball line
beyond the team ' s attacking try line. full time the expiration of the second
period of time allowed for play. half the player who takes possession following
a rollball. half time the break in play between the two halves of a match. imminent
about to occur, it is almost certain to occur. infringement the action of a player
contrary to the rules of the game.
- fit playing rules - 5th edition copyright ยฉ touch football australia 2020 9 10.
11 if a touch is made on a player in possession while the player is juggling the
ball in an attempt to maintain control of it, the touch counts if the attacking
player following the touch retains possession. 10. 12 if a player in possession
is touched and subsequently makes contact with either the sideline, a field marker
or the ground outside the field of play, the touch counts and play continues with
a rollball at the mark where the touch occurred. 10. 13 when a player from the
defending team enters its defensive seven metre zone, the defending team must
move forward at a reasonable pace until a touch is imminent or made. ruling =
a penalty to the attacking team at the point of the infringement. 10. 14 when
a player in possession enters the defending teams ' seven metre zone the defending
team is not obliged to move forward but cannot retire back towards their try line
until a touch is imminent or made. ruling = a penalty to the attacking team at
the seven ( 7 ) metre line in line with the point of the infringement.
- 10. 8 if a touch is made in the in - goal area before the ball is grounded, the
player in possession is to perform a rollball seven ( 7 ) metres from the team
' s attacking try line, provided it is not the sixth touch and the player is not
half. 10. 9 if a player in possession is touched while on or behind their defending
try line, the touch counts and once the referee sets the mark seven ( 7 ) metres
directly forward of the contact point from the defending team ' s try line, a
rollball is performed. 10. 10 if a player in possession intentionally makes a
touch on an offside defender who is making every effort to retire and remain out
of play, the touch counts. fit playing rules - 5th edition copyright © touch football
australia 2020 9 10. 11 if a touch is made on a player in possession while the
player is juggling the ball in an attempt to maintain control of it, the touch
counts if the attacking player following the touch retains possession.
- source_sentence: What happens if neither team is leading at the end of the two-minute
period of extra time?
sentences:
- infringement the action of a player contrary to the rules of the game. in - goal
area the area in the field of play bounded by the sidelines, the try lines and
the dead ball lines. there are two ( 2 ), one ( 1 ) at each end of the field of
play. see appendix 1. interchange the act of an on - field player leaving the
field of play to be replaced by an off - field player entering the field of play.
interchange area a marked rectangle for each team on opposite sides of the field
of play usually measuring 20 metres long by no more than five ( 5 ) metres wide,
extending ten ( 10 ) metres either side of the halfway line and not less than
one ( 1 ) metre from the sideline. it is the area in which all off - field players
must remain until an interchange is initiated. see appendix 1. kick strike or
propel forcibly with the foot, a blow or forceful thrust with the foot to the
ball. a tap to commence or recommence play or a penalty tap is not defined as
a kick. line markings markings of the field of play. see appendix 1. link the
player beside the wing player.
- touch count the progressive number of touches that each team has before a change
of possession, from zero ( 0 ) to six ( 6 ). try the result of any attacking player,
except the half, placing the ball on or over the team ' s attacking try line before
being touched. try lines the lines separating the in - goal areas from the field
of play. see appendix 1. voluntary rollball the player in possession performs
a rollball before a touch is made with a defending player. wing the player outside
the link player. winner the team that scores the most tries during the match.
fit playing rules - 5th edition 4 copyright © touch football australia 2020 rules
of play mode of play the object of the game of touch is for each team to score
tries and to prevent the opposition from scoring. the ball may be passed, knocked
or handed between players of the attacking team who may in turn run or otherwise
move with the ball in an attempt to gain territorial advantage and to score tries.
defending players prevent the attacking team from gaining a territorial advantage
by touching the ball carrier.
- 24. 1. 2 the drop - off commences with a tap from the centre of the halfway line
by the team that did not commence the match with possession. 24. 1. 3 the drop
- off will commence with a two ( 2 ) minute period of extra time. 24. 1. 4 should
a team be leading at the expiration of the two ( 2 ) minute period of extra time
then that team will be declared the winner and match complete. 24. 1. 5 should
neither team be leading at the expiration of two ( 2 ) minutes, a signal is given
and the match will pause at the next touch or dead ball. each team will then remove
another player from the field of play. 24. 1. 6 the match will recommence immediately
after the players have left the field at the same place where it paused ( i. e.
the team retains possession at the designated number of touches, or at change
of possession due to some infringement or the sixth touch ) and the match will
continue until a try is scored. 24. 1. 7 there is no time off during the drop
- off and the clock does not stop at the two ( 2 ) minute interval.
- source_sentence: What happens if a team is leading at the end of the two-minute
period of extra time?
sentences:
- 24. 1. 2 the drop - off commences with a tap from the centre of the halfway line
by the team that did not commence the match with possession. 24. 1. 3 the drop
- off will commence with a two ( 2 ) minute period of extra time. 24. 1. 4 should
a team be leading at the expiration of the two ( 2 ) minute period of extra time
then that team will be declared the winner and match complete. 24. 1. 5 should
neither team be leading at the expiration of two ( 2 ) minutes, a signal is given
and the match will pause at the next touch or dead ball. each team will then remove
another player from the field of play. 24. 1. 6 the match will recommence immediately
after the players have left the field at the same place where it paused ( i. e.
the team retains possession at the designated number of touches, or at change
of possession due to some infringement or the sixth touch ) and the match will
continue until a try is scored. 24. 1. 7 there is no time off during the drop
- off and the clock does not stop at the two ( 2 ) minute interval.
- 7. 7 the tap to commence or recommence play must be performed without delay. ruling
= a penalty to the non - offending team at the centre of the halfway line. 8 match
duration 8. 1 a match is 40 minutes in duration, consisting of two ( 2 ) x 20
minute halves with a half time break. 8. 1. 1 there is no time off for injury
during a match. 8. 2 local competition and tournament conditions may vary the
duration of a match. 8. 3 when time expires, play is to continue until the next
touch or dead ball and end of play is signaled by the referee. 8. 3. 1 should
a penalty be awarded during this period, the penalty is to be taken. 8. 4 if a
match is abandoned in any circumstances other than those referred to in clause
24. 1. 6 the nta or nta competition provider in its sole discretion shall determine
the result of the match. 9 possession 9. 1 the team with the ball is entitled
to six ( 6 ) touches prior to a change of possession. 9. 2 on the change of possession
due to an intercept, the first touch will be zero ( 0 ) touch.
- '12. 6 if a player from the defending team unintentionally makes contact with
the ball in flight and the ball is retrieved by an attacking player, play and
the touch count continues. 12. 7 a player from the attacking team cannot pass
the ball into a defending player intentionally seeking a rebound or a restart
of the touch count. ruling = a penalty to the defending team at the point of the
pass. 13 the rollball 13. 1 the attacking player is to position on the mark, face
the opponent ' s try line, make a genuine attempt to stand parallel to the sidelines,
place the ball on the ground between the feet in a controlled manner and : 13.
1. 1 step forward over the ball ; or 13. 1. 2 roll the ball back between the feet
no more than one ( 1 ) metre ; or 13. 1. 3 pass a foot over the ball. ruling =
a change of possession to the defending team at the point of the infringement.
13. 2 a player must perform the rollball on the mark. ruling = a penalty to the
defending team at the point of the infringement. 13. 3 a player must not perform
a voluntary rollball.'
---
# SentenceTransformer based on sentence-transformers/multi-qa-MiniLM-L6-dot-v1
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [sentence-transformers/multi-qa-MiniLM-L6-dot-v1](https://huggingface.co/sentence-transformers/multi-qa-MiniLM-L6-dot-v1). It maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [sentence-transformers/multi-qa-MiniLM-L6-dot-v1](https://huggingface.co/sentence-transformers/multi-qa-MiniLM-L6-dot-v1) <!-- at revision c3bdeb02464bc83f9b85156a3386a50bfbf3e6a8 -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 384 dimensions
- **Similarity Function:** Dot Product
<!-- - **Training Dataset:** Unknown -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("Trelis/multi-qa-MiniLM-L6-dot-v1-ft-pairs-4-cst-epoch-s1-overlap")
# Run inference
sentences = [
'What happens if a team is leading at the end of the two-minute period of extra time?',
'24. 1. 2 the drop - off commences with a tap from the centre of the halfway line by the team that did not commence the match with possession. 24. 1. 3 the drop - off will commence with a two ( 2 ) minute period of extra time. 24. 1. 4 should a team be leading at the expiration of the two ( 2 ) minute period of extra time then that team will be declared the winner and match complete. 24. 1. 5 should neither team be leading at the expiration of two ( 2 ) minutes, a signal is given and the match will pause at the next touch or dead ball. each team will then remove another player from the field of play. 24. 1. 6 the match will recommence immediately after the players have left the field at the same place where it paused ( i. e. the team retains possession at the designated number of touches, or at change of possession due to some infringement or the sixth touch ) and the match will continue until a try is scored. 24. 1. 7 there is no time off during the drop - off and the clock does not stop at the two ( 2 ) minute interval.',
'7. 7 the tap to commence or recommence play must be performed without delay. ruling = a penalty to the non - offending team at the centre of the halfway line. 8 match duration 8. 1 a match is 40 minutes in duration, consisting of two ( 2 ) x 20 minute halves with a half time break. 8. 1. 1 there is no time off for injury during a match. 8. 2 local competition and tournament conditions may vary the duration of a match. 8. 3 when time expires, play is to continue until the next touch or dead ball and end of play is signaled by the referee. 8. 3. 1 should a penalty be awarded during this period, the penalty is to be taken. 8. 4 if a match is abandoned in any circumstances other than those referred to in clause 24. 1. 6 the nta or nta competition provider in its sole discretion shall determine the result of the match. 9 possession 9. 1 the team with the ball is entitled to six ( 6 ) touches prior to a change of possession. 9. 2 on the change of possession due to an intercept, the first touch will be zero ( 0 ) touch.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 384]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
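Because this model's similarity function is the dot product, the score matrix returned by `model.similarity` can also be computed directly on the raw embeddings. A minimal sketch using stand-in random vectors (not real model output) to show the equivalence:

```python
import numpy as np

# Stand-in for model.encode(sentences): 3 embeddings of dimension 384
embeddings = np.random.rand(3, 384).astype(np.float32)

# Dot-product similarity: for this model, equivalent to model.similarity
similarities = embeddings @ embeddings.T
print(similarities.shape)  # (3, 3)
```

Note that dot-product models like this one produce unnormalized scores, so each sentence's self-similarity (the diagonal) equals its squared vector norm rather than 1.0.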
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: steps
- `per_device_train_batch_size`: 32
- `per_device_eval_batch_size`: 32
- `learning_rate`: 2e-05
- `num_train_epochs`: 4
- `lr_scheduler_type`: constant
- `warmup_ratio`: 0.3
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 32
- `per_device_eval_batch_size`: 32
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `learning_rate`: 2e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 4
- `max_steps`: -1
- `lr_scheduler_type`: constant
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.3
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: False
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: proportional
</details>
### Training Logs
| Epoch | Step | Training Loss | loss |
|:------:|:----:|:-------------:|:------:|
| 0.2857 | 2 | 1.7892 | - |
| 0.5714 | 4 | 1.5998 | 1.3002 |
| 0.8571 | 6 | 1.5637 | - |
| 1.1429 | 8 | 1.3347 | 1.1748 |
| 1.4286 | 10 | 1.4256 | - |
| 1.7143 | 12 | 1.2205 | 1.1085 |
| 2.0 | 14 | 1.1307 | - |
| 2.2857 | 16 | 1.119 | 1.0558 |
| 2.5714 | 18 | 1.2639 | - |
| 2.8571 | 20 | 1.2834 | 1.0108 |
| 3.1429 | 22 | 0.9248 | - |
| 3.4286 | 24 | 1.1527 | 1.0074 |
| 3.7143 | 26 | 0.8702 | - |
| 4.0 | 28 | 0.725 | 1.0124 |
### Framework Versions
- Python: 3.10.12
- Sentence Transformers: 3.0.1
- Transformers: 4.42.3
- PyTorch: 2.1.1+cu121
- Accelerate: 0.31.0
- Datasets: 2.17.1
- Tokenizers: 0.19.1
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
--> |
vaibhavtalekar87/bert-finetuned-ner | vaibhavtalekar87 | 2024-07-02T13:25:27Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"token-classification",
"generated_from_trainer",
"dataset:conll2003",
"base_model:bert-base-cased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2024-07-02T13:13:06Z | ---
license: apache-2.0
base_model: bert-base-cased
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
config: conll2003
split: validation
args: conll2003
metrics:
- name: Precision
type: precision
value: 0.9415288332588287
- name: Recall
type: recall
value: 0.9448232549793648
- name: F1
type: f1
value: 0.943173167345842
- name: Accuracy
type: accuracy
value: 0.9856213575086831
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0687
- Precision: 0.9415
- Recall: 0.9448
- F1: 0.9432
- Accuracy: 0.9856
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.081 | 1.0 | 1756 | 0.0693 | 0.9154 | 0.9276 | 0.9214 | 0.9808 |
| 0.0363 | 2.0 | 3512 | 0.0714 | 0.9371 | 0.9364 | 0.9368 | 0.9843 |
| 0.0214 | 3.0 | 5268 | 0.0687 | 0.9415 | 0.9448 | 0.9432 | 0.9856 |
### Framework versions
- Transformers 4.41.2
- Pytorch 2.3.0+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1
|
bartowski/Phi-3.1-mini-4k-instruct-GGUF | bartowski | 2024-07-03T01:16:42Z | 0 | 10 | null | [
"gguf",
"nlp",
"code",
"text-generation",
"en",
"license:mit",
"region:us"
] | text-generation | 2024-07-02T13:13:22Z | ---
license: mit
license_link: https://huggingface.co/microsoft/Phi-3-mini-4k-instruct/resolve/main/LICENSE
language:
- en
pipeline_tag: text-generation
tags:
- nlp
- code
inference:
parameters:
temperature: 0.0
widget:
- messages:
- role: user
content: Can you provide ways to eat combinations of bananas and dragonfruits?
quantized_by: bartowski
---
## Llamacpp imatrix Quantizations of Phi-3.1-mini-4k-instruct
<b>I'm calling this Phi-3.1 because Microsoft made the decision to release a huge update in place.. So yes, it's the new model from July 2nd 2024, but I've renamed it for clarity.</b>
Using <a href="https://github.com/ggerganov/llama.cpp/">llama.cpp</a> release <a href="https://github.com/ggerganov/llama.cpp/releases/tag/b3278">b3278</a> for quantization.
Original model: https://huggingface.co/microsoft/Phi-3-mini-4k-instruct
All quants made using imatrix option with dataset from [here](https://gist.github.com/bartowski1182/eb213dccb3571f863da82e99418f81e8)
Experimental quants are made with `--output-tensor-type f16 --token-embedding-type f16` per [ZeroWw](https://huggingface.co/ZeroWw)'s suggestion, please provide any feedback on quality differences you spot.
## Prompt format
```
<|system|> {system_prompt}<|end|><|user|> {prompt}<|end|><|assistant|>
```
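For programmatic use, the template above can be assembled with a small helper. This is just a sketch of the string format shown here; `build_phi3_prompt` is an illustrative name, not part of any library:

```python
def build_phi3_prompt(user_prompt: str, system_prompt: str = "") -> str:
    """Assemble a Phi-3 chat prompt matching the template above."""
    parts = []
    if system_prompt:
        parts.append(f"<|system|> {system_prompt}<|end|>")
    parts.append(f"<|user|> {user_prompt}<|end|>")
    parts.append("<|assistant|>")
    return "".join(parts)

print(build_phi3_prompt("Hello", system_prompt="You are helpful."))
# <|system|> You are helpful.<|end|><|user|> Hello<|end|><|assistant|>
```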
## Download a file (not the whole branch) from below:
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [Phi-3.1-mini-4k-instruct-Q8_0_L.gguf](https://huggingface.co/bartowski/Phi-3.1-mini-4k-instruct-GGUF/blob/main/Phi-3.1-mini-4k-instruct-Q8_1.gguf) | Q8_0_L | 4.24GB | *Experimental*, uses f16 for embed and output weights. Please provide any feedback of differences. Extremely high quality, generally unneeded but max available quant. |
| [Phi-3.1-mini-4k-instruct-Q8_0.gguf](https://huggingface.co/bartowski/Phi-3.1-mini-4k-instruct-GGUF/blob/main/Phi-3.1-mini-4k-instruct-Q8_0.gguf) | Q8_0 | 4.06GB | Extremely high quality, generally unneeded but max available quant. |
| [Phi-3.1-mini-4k-instruct-Q6_K_L.gguf](https://huggingface.co/bartowski/Phi-3.1-mini-4k-instruct-GGUF/blob/main/Phi-3.1-mini-4k-instruct-Q6_K_L.gguf) | Q6_K_L | 3.36GB | *Experimental*, uses f16 for embed and output weights. Please provide any feedback of differences. Very high quality, near perfect, *recommended*. |
| [Phi-3.1-mini-4k-instruct-Q6_K.gguf](https://huggingface.co/bartowski/Phi-3.1-mini-4k-instruct-GGUF/blob/main/Phi-3.1-mini-4k-instruct-Q6_K.gguf) | Q6_K | 3.13GB | Very high quality, near perfect, *recommended*. |
| [Phi-3.1-mini-4k-instruct-Q5_K_L.gguf](https://huggingface.co/bartowski/Phi-3.1-mini-4k-instruct-GGUF/blob/main/Phi-3.1-mini-4k-instruct-Q5_K_L.gguf) | Q5_K_L | 3.06GB | *Experimental*, uses f16 for embed and output weights. Please provide any feedback of differences. High quality, *recommended*. |
| [Phi-3.1-mini-4k-instruct-Q5_K_M.gguf](https://huggingface.co/bartowski/Phi-3.1-mini-4k-instruct-GGUF/blob/main/Phi-3.1-mini-4k-instruct-Q5_K_M.gguf) | Q5_K_M | 2.81GB | High quality, *recommended*. |
| [Phi-3.1-mini-4k-instruct-Q5_K_S.gguf](https://huggingface.co/bartowski/Phi-3.1-mini-4k-instruct-GGUF/blob/main/Phi-3.1-mini-4k-instruct-Q5_K_S.gguf) | Q5_K_S | 2.64GB | High quality, *recommended*. |
| [Phi-3.1-mini-4k-instruct-Q4_K_L.gguf](https://huggingface.co/bartowski/Phi-3.1-mini-4k-instruct-GGUF/blob/main/Phi-3.1-mini-4k-instruct-Q4_K_L.gguf) | Q4_K_L | 2.65GB | *Experimental*, uses f16 for embed and output weights. Please provide any feedback of differences. Good quality, uses about 4.83 bits per weight, *recommended*. |
| [Phi-3.1-mini-4k-instruct-Q4_K_M.gguf](https://huggingface.co/bartowski/Phi-3.1-mini-4k-instruct-GGUF/blob/main/Phi-3.1-mini-4k-instruct-Q4_K_M.gguf) | Q4_K_M | 2.39GB | Good quality, uses about 4.83 bits per weight, *recommended*. |
| [Phi-3.1-mini-4k-instruct-Q4_K_S.gguf](https://huggingface.co/bartowski/Phi-3.1-mini-4k-instruct-GGUF/blob/main/Phi-3.1-mini-4k-instruct-Q4_K_S.gguf) | Q4_K_S | 2.18GB | Slightly lower quality with more space savings, *recommended*. |
| [Phi-3.1-mini-4k-instruct-IQ4_XS.gguf](https://huggingface.co/bartowski/Phi-3.1-mini-4k-instruct-GGUF/blob/main/Phi-3.1-mini-4k-instruct-IQ4_XS.gguf) | IQ4_XS | 2.05GB | Decent quality, smaller than Q4_K_S with similar performance, *recommended*. |
| [Phi-3.1-mini-4k-instruct-Q3_K_XL.gguf](https://huggingface.co/bartowski/Phi-3.1-mini-4k-instruct-GGUF/blob/main/Phi-3.1-mini-4k-instruct-Q3_K_XL.gguf) | Q3_K_XL | 2.35GB | *Experimental*, uses f16 for embed and output weights. Please provide any feedback of differences. Lower quality but usable, good for low RAM availability. |
| [Phi-3.1-mini-4k-instruct-Q3_K_L.gguf](https://huggingface.co/bartowski/Phi-3.1-mini-4k-instruct-GGUF/blob/main/Phi-3.1-mini-4k-instruct-Q3_K_L.gguf) | Q3_K_L | 2.08GB | Lower quality but usable, good for low RAM availability. |
| [Phi-3.1-mini-4k-instruct-Q3_K_M.gguf](https://huggingface.co/bartowski/Phi-3.1-mini-4k-instruct-GGUF/blob/main/Phi-3.1-mini-4k-instruct-Q3_K_M.gguf) | Q3_K_M | 1.95GB | Even lower quality. |
| [Phi-3.1-mini-4k-instruct-IQ3_M.gguf](https://huggingface.co/bartowski/Phi-3.1-mini-4k-instruct-GGUF/blob/main/Phi-3.1-mini-4k-instruct-IQ3_M.gguf) | IQ3_M | 1.85GB | Medium-low quality, new method with decent performance comparable to Q3_K_M. |
| [Phi-3.1-mini-4k-instruct-Q3_K_S.gguf](https://huggingface.co/bartowski/Phi-3.1-mini-4k-instruct-GGUF/blob/main/Phi-3.1-mini-4k-instruct-Q3_K_S.gguf) | Q3_K_S | 1.68GB | Low quality, not recommended. |
| [Phi-3.1-mini-4k-instruct-IQ3_XS.gguf](https://huggingface.co/bartowski/Phi-3.1-mini-4k-instruct-GGUF/blob/main/Phi-3.1-mini-4k-instruct-IQ3_XS.gguf) | IQ3_XS | 1.62GB | Lower quality, new method with decent performance, slightly better than Q3_K_S. |
| [Phi-3.1-mini-4k-instruct-IQ3_XXS.gguf](https://huggingface.co/bartowski/Phi-3.1-mini-4k-instruct-GGUF/blob/main/Phi-3.1-mini-4k-instruct-IQ3_XXS.gguf) | IQ3_XXS | 1.51GB | Lower quality, new method with decent performance, comparable to Q3 quants. |
| [Phi-3.1-mini-4k-instruct-Q2_K.gguf](https://huggingface.co/bartowski/Phi-3.1-mini-4k-instruct-GGUF/blob/main/Phi-3.1-mini-4k-instruct-Q2_K.gguf) | Q2_K | 1.41GB | Very low quality but surprisingly usable. |
| [Phi-3.1-mini-4k-instruct-IQ2_M.gguf](https://huggingface.co/bartowski/Phi-3.1-mini-4k-instruct-GGUF/blob/main/Phi-3.1-mini-4k-instruct-IQ2_M.gguf) | IQ2_M | 1.31GB | Very low quality, uses SOTA techniques to also be surprisingly usable. |
| [Phi-3.1-mini-4k-instruct-IQ2_S.gguf](https://huggingface.co/bartowski/Phi-3.1-mini-4k-instruct-GGUF/blob/main/Phi-3.1-mini-4k-instruct-IQ2_S.gguf) | IQ2_S | 1.21GB | Very low quality, uses SOTA techniques to be usable. |
| [Phi-3.1-mini-4k-instruct-IQ2_XS.gguf](https://huggingface.co/bartowski/Phi-3.1-mini-4k-instruct-GGUF/blob/main/Phi-3.1-mini-4k-instruct-IQ2_XS.gguf) | IQ2_XS | 1.15GB | Very low quality, uses SOTA techniques to be usable. |
## Downloading using huggingface-cli
First, make sure you have huggingface-cli installed:
```
pip install -U "huggingface_hub[cli]"
```
Then, you can target the specific file you want:
```
huggingface-cli download bartowski/Phi-3.1-mini-4k-instruct-GGUF --include "Phi-3.1-mini-4k-instruct-Q4_K_M.gguf" --local-dir ./
```
If the model is bigger than 50GB, it will have been split into multiple files. In order to download them all to a local folder, run:
```
huggingface-cli download bartowski/Phi-3.1-mini-4k-instruct-GGUF --include "Phi-3.1-mini-4k-instruct-Q8_0.gguf/*" --local-dir Phi-3.1-mini-4k-instruct-Q8_0
```
You can either specify a new local-dir (Phi-3.1-mini-4k-instruct-Q8_0) or download them all in place (./).
## Which file should I choose?
A great write up with charts showing various performances is provided by Artefact2 [here](https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9)
The first thing to figure out is how big a model you can run. To do this, you'll need to figure out how much RAM and/or VRAM you have.
If you want your model running as FAST as possible, you'll want to fit the whole thing on your GPU's VRAM. Aim for a quant with a file size 1-2GB smaller than your GPU's total VRAM.
If you want the absolute maximum quality, add both your system RAM and your GPU's VRAM together, then similarly grab a quant with a file size 1-2GB smaller than that total.
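As a rough illustration of that rule of thumb, here is a hypothetical helper (file sizes taken from the table above; the function name and the 1.5GB default headroom are assumptions, not part of any official tooling) that picks the largest quant fitting a given VRAM budget:

```python
# Quant file sizes (GB) from the Phi-3.1-mini table above.
QUANTS = {
    "Q4_K_M": 2.39, "Q4_K_S": 2.18, "IQ4_XS": 2.05,
    "Q3_K_L": 2.08, "Q3_K_M": 1.95, "IQ3_M": 1.85,
    "Q3_K_S": 1.68, "IQ3_XS": 1.62, "IQ3_XXS": 1.51,
    "Q2_K": 1.41, "IQ2_M": 1.31, "IQ2_S": 1.21, "IQ2_XS": 1.15,
}

def pick_quant(vram_gb: float, headroom_gb: float = 1.5) -> str:
    """Return the largest quant whose file is 1-2GB smaller than available VRAM."""
    budget = vram_gb - headroom_gb
    fitting = {name: size for name, size in QUANTS.items() if size <= budget}
    if not fitting:
        raise ValueError("No quant fits; consider offloading layers to system RAM.")
    return max(fitting, key=fitting.get)

print(pick_quant(4.0))  # a 4GB GPU with 1.5GB headroom -> "Q4_K_M"
```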
Next, you'll need to decide if you want to use an 'I-quant' or a 'K-quant'.
If you don't want to think too much, grab one of the K-quants. These are in format 'QX_K_X', like Q5_K_M.
If you want to get more into the weeds, you can check out this extremely useful feature chart:
[llama.cpp feature matrix](https://github.com/ggerganov/llama.cpp/wiki/Feature-matrix)
But basically, if you're aiming for below Q4, and you're running cuBLAS (Nvidia) or rocBLAS (AMD), you should look towards the I-quants. These are in format IQX_X, like IQ3_M. These are newer and offer better performance for their size.
These I-quants can also be used on CPU and Apple Metal, but will be slower than their K-quant equivalent, so speed vs performance is a tradeoff you'll have to decide.
The I-quants are *not* compatible with Vulkan, which also supports AMD, so if you have an AMD card double check whether you're using the rocBLAS build or the Vulkan build. At the time of writing this, LM Studio has a preview with ROCm support, and other inference engines have specific builds for ROCm.
Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski
|
Meziane/question_answering_T5_seq_to_seq_med_dataset | Meziane | 2024-07-02T13:15:06Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"t5",
"question-answering",
"generated_from_trainer",
"base_model:google-t5/t5-small",
"license:apache-2.0",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | question-answering | 2024-07-02T13:13:22Z | ---
license: apache-2.0
base_model: google-t5/t5-small
tags:
- generated_from_trainer
model-index:
- name: question_answering_T5_seq_to_seq_med_dataset
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# question_answering_T5_seq_to_seq_med_dataset
This model is a fine-tuned version of [google-t5/t5-small](https://huggingface.co/google-t5/t5-small) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
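For reference, a single Adam update with the betas and epsilon listed above can be sketched in plain Python. This is purely illustrative of the update rule, not the actual Trainer internals:

```python
# One Adam step with beta1=0.9, beta2=0.999, eps=1e-8 (as in the config above).
def adam_step(param, grad, m, v, t, lr=4e-5, beta1=0.9, beta2=0.999, eps=1e-8):
    m = beta1 * m + (1 - beta1) * grad        # first-moment (mean) estimate
    v = beta2 * v + (1 - beta2) * grad ** 2   # second-moment (uncentered variance)
    m_hat = m / (1 - beta1 ** t)              # bias correction for step t
    v_hat = v / (1 - beta2 ** t)
    param = param - lr * m_hat / (v_hat ** 0.5 + eps)
    return param, m, v

# Early in training, the bias-corrected step size is close to the raw lr:
p, m, v = adam_step(1.0, 0.5, 0.0, 0.0, t=1)
print(p)
```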
### Training results
### Framework versions
- Transformers 4.41.2
- Pytorch 2.3.0+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1
|
Dhanushlevi/sd_martin_valen-model-v1-2_400_demo | Dhanushlevi | 2024-07-02T13:13:34Z | 0 | 0 | null | [
"region:us"
] | null | 2024-07-02T13:13:33Z | Entry not found |
davidbowie42/sft_openassistant-guanaco | davidbowie42 | 2024-07-02T13:13:46Z | 0 | 0 | null | [
"region:us"
] | null | 2024-07-02T13:13:46Z | Entry not found |
John6666/raemu-xl-v4-sdxl-spo | John6666 | 2024-07-02T13:40:51Z | 0 | 0 | diffusers | [
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"stable-diffusion-xl",
"anime",
"game",
"2.5D",
"SPO",
"base_model:Raelina/Raemu-XL-V4",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] | text-to-image | 2024-07-02T13:14:21Z | ---
license: other
license_name: faipl-1.0-sd
license_link: https://freedevproject.org/faipl-1.0-sd/
tags:
- text-to-image
- stable-diffusion
- stable-diffusion-xl
- anime
- game
- 2.5D
- SPO
base_model: Raelina/Raemu-XL-V4
---
Original model is [here](https://huggingface.co/Raelina/Raemu-XL-V4) and on [Civitai](https://civitai.com/models/270336?modelVersionId=613928).<br>
In addition to Animagine 3.1, characters from the following anime / game series are supported.<br><br>
V1
- Fairy Tail
- Gabriel Dropout
- Blend S
- Nisekoi
- Youkoso Jitsuryoku Shijou no Kyoushitsu e
- Kanojo Okarishimasu
- Saenai Heroine no Sodatekata
- Mirai Nikki
- Guilty Crown
V2
- Zombie land saga
- Yuru yuri
- To love-ru
- Amagi brilliant park
- Avatar legends
- Highschool of the dead
- Infinite stratos
- Inuyasha
- Kara no kyoukai
- Monster musume no iru nichijou
- Seishun buta yarou
- Snk
- Tekken
- Wuthering waves
[Wildcard character from Rae Diffusion XL V2 is here](https://huggingface.co/Raelina/Rae-Diffusion-XL-V2/tree/main/wildcard). |
mayarmostafa/videomae-base-finetuned-bleeding-exp_6 | mayarmostafa | 2024-07-02T14:45:13Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"videomae",
"video-classification",
"generated_from_trainer",
"base_model:MCG-NJU/videomae-base",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] | video-classification | 2024-07-02T13:14:43Z | ---
license: cc-by-nc-4.0
base_model: MCG-NJU/videomae-base
tags:
- generated_from_trainer
model-index:
- name: videomae-base-finetuned-bleeding-exp_6
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# videomae-base-finetuned-bleeding-exp_6
This model is a fine-tuned version of [MCG-NJU/videomae-base](https://huggingface.co/MCG-NJU/videomae-base) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 6000
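With `lr_scheduler_warmup_ratio: 0.1` and `training_steps: 6000`, the warmup length is derived as a fraction of the total steps. A sketch of that rule (mirroring how `transformers` computes warmup steps from a ratio; the function name here is an assumption):

```python
import math

def warmup_steps(total_steps: int, warmup_ratio: float) -> int:
    # warmup steps = ceil(total training steps * warmup ratio)
    return math.ceil(total_steps * warmup_ratio)

print(warmup_steps(6000, 0.1))  # 600
```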
### Framework versions
- Transformers 4.40.2
- Pytorch 1.12.0
- Datasets 2.19.1
- Tokenizers 0.19.1
|
mradermacher/blossom-v5-32b-i1-GGUF | mradermacher | 2024-07-02T18:41:35Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"zh",
"en",
"dataset:Azure99/blossom-chat-v3",
"dataset:Azure99/blossom-math-v4",
"dataset:Azure99/blossom-wizard-v3",
"dataset:Azure99/blossom-orca-v3",
"base_model:Azure99/blossom-v5-32b",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-07-02T13:15:00Z | ---
base_model: Azure99/blossom-v5-32b
datasets:
- Azure99/blossom-chat-v3
- Azure99/blossom-math-v4
- Azure99/blossom-wizard-v3
- Azure99/blossom-orca-v3
language:
- zh
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/Azure99/blossom-v5-32b
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/blossom-v5-32b-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/blossom-v5-32b-i1-GGUF/resolve/main/blossom-v5-32b.i1-IQ1_S.gguf) | i1-IQ1_S | 7.3 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/blossom-v5-32b-i1-GGUF/resolve/main/blossom-v5-32b.i1-IQ1_M.gguf) | i1-IQ1_M | 8.0 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/blossom-v5-32b-i1-GGUF/resolve/main/blossom-v5-32b.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 9.1 | |
| [GGUF](https://huggingface.co/mradermacher/blossom-v5-32b-i1-GGUF/resolve/main/blossom-v5-32b.i1-IQ2_XS.gguf) | i1-IQ2_XS | 10.0 | |
| [GGUF](https://huggingface.co/mradermacher/blossom-v5-32b-i1-GGUF/resolve/main/blossom-v5-32b.i1-IQ2_S.gguf) | i1-IQ2_S | 10.4 | |
| [GGUF](https://huggingface.co/mradermacher/blossom-v5-32b-i1-GGUF/resolve/main/blossom-v5-32b.i1-IQ2_M.gguf) | i1-IQ2_M | 11.3 | |
| [GGUF](https://huggingface.co/mradermacher/blossom-v5-32b-i1-GGUF/resolve/main/blossom-v5-32b.i1-Q2_K.gguf) | i1-Q2_K | 12.3 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/blossom-v5-32b-i1-GGUF/resolve/main/blossom-v5-32b.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 12.8 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/blossom-v5-32b-i1-GGUF/resolve/main/blossom-v5-32b.i1-IQ3_XS.gguf) | i1-IQ3_XS | 13.7 | |
| [GGUF](https://huggingface.co/mradermacher/blossom-v5-32b-i1-GGUF/resolve/main/blossom-v5-32b.i1-Q3_K_S.gguf) | i1-Q3_K_S | 14.4 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/blossom-v5-32b-i1-GGUF/resolve/main/blossom-v5-32b.i1-IQ3_S.gguf) | i1-IQ3_S | 14.4 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/blossom-v5-32b-i1-GGUF/resolve/main/blossom-v5-32b.i1-IQ3_M.gguf) | i1-IQ3_M | 14.8 | |
| [GGUF](https://huggingface.co/mradermacher/blossom-v5-32b-i1-GGUF/resolve/main/blossom-v5-32b.i1-Q3_K_M.gguf) | i1-Q3_K_M | 15.9 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/blossom-v5-32b-i1-GGUF/resolve/main/blossom-v5-32b.i1-Q3_K_L.gguf) | i1-Q3_K_L | 17.2 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/blossom-v5-32b-i1-GGUF/resolve/main/blossom-v5-32b.i1-IQ4_XS.gguf) | i1-IQ4_XS | 17.7 | |
| [GGUF](https://huggingface.co/mradermacher/blossom-v5-32b-i1-GGUF/resolve/main/blossom-v5-32b.i1-Q4_0.gguf) | i1-Q4_0 | 18.7 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/blossom-v5-32b-i1-GGUF/resolve/main/blossom-v5-32b.i1-Q4_K_S.gguf) | i1-Q4_K_S | 18.7 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/blossom-v5-32b-i1-GGUF/resolve/main/blossom-v5-32b.i1-Q4_K_M.gguf) | i1-Q4_K_M | 19.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/blossom-v5-32b-i1-GGUF/resolve/main/blossom-v5-32b.i1-Q5_K_S.gguf) | i1-Q5_K_S | 22.6 | |
| [GGUF](https://huggingface.co/mradermacher/blossom-v5-32b-i1-GGUF/resolve/main/blossom-v5-32b.i1-Q5_K_M.gguf) | i1-Q5_K_M | 23.2 | |
| [GGUF](https://huggingface.co/mradermacher/blossom-v5-32b-i1-GGUF/resolve/main/blossom-v5-32b.i1-Q6_K.gguf) | i1-Q6_K | 26.8 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his hardware for calculating the imatrix for these quants.
<!-- end -->
|
stablediffusionapi/landscaperealisticpro | stablediffusionapi | 2024-07-02T13:23:45Z | 0 | 0 | diffusers | [
"diffusers",
"modelslab.com",
"stable-diffusion-api",
"text-to-image",
"ultra-realistic",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2024-07-02T13:15:08Z | ---
license: creativeml-openrail-m
tags:
- modelslab.com
- stable-diffusion-api
- text-to-image
- ultra-realistic
pinned: true
---
# API Inference

## Get API Key
Get API key from [ModelsLab API](http://modelslab.com), No Payment needed.
Replace the key in the code below and change **model_id** to "landscaperealisticpro"
Coding in PHP/Node/Java etc? Have a look at docs for more code examples: [View docs](https://docs.modelslab.com)
Try model for free: [Generate Images](https://modelslab.com/models/landscaperealisticpro)
Model link: [View model](https://modelslab.com/models/landscaperealisticpro)
View all models: [View Models](https://modelslab.com/models)
```python
import requests
import json

url = "https://modelslab.com/api/v6/images/text2img"

payload = json.dumps({
    "key": "your_api_key",
    "model_id": "landscaperealisticpro",
    "prompt": "ultra realistic close up portrait ((beautiful pale cyberpunk female with heavy black eyeliner)), blue eyes, shaved side haircut, hyper detail, cinematic lighting, magic neon, dark red city, Canon EOS R3, nikon, f/1.4, ISO 200, 1/160s, 8K, RAW, unedited, symmetrical balance, in-frame, 8K",
    "negative_prompt": "painting, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, deformed, ugly, blurry, bad anatomy, bad proportions, extra limbs, cloned face, skinny, glitchy, double torso, extra arms, extra hands, mangled fingers, missing lips, ugly face, distorted face, extra legs, anime",
    "width": "512",
    "height": "512",
    "samples": "1",
    "num_inference_steps": "30",
    "safety_checker": "no",
    "enhance_prompt": "yes",
    "seed": None,
    "guidance_scale": 7.5,
    "multi_lingual": "no",
    "panorama": "no",
    "self_attention": "no",
    "upscale": "no",
    "embeddings": "embeddings_model_id",
    "lora": "lora_model_id",
    "webhook": None,
    "track_id": None
})

headers = {
    'Content-Type': 'application/json'
}

response = requests.request("POST", url, headers=headers, data=payload)

print(response.text)
```
> Use this coupon code to get 25% off **DMGG0RBN** |
Saxo/Linkbricks-Horizon-AI-Ko-llama3-Instruct-dpo-8B-knowledge-expand | Saxo | 2024-07-02T13:25:29Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"ko",
"en",
"dataset:Saxo/ko_wiki_qa_linkbricks_dataset_for_llama3",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-07-02T13:15:33Z | Invalid username or password. |
NikolayKozloff/Viking-13B-Q8_0-GGUF | NikolayKozloff | 2024-07-02T13:22:01Z | 0 | 1 | null | [
"gguf",
"llama-cpp",
"gguf-my-repo",
"text-generation-inference",
"fi",
"en",
"da",
"sv",
"no",
"nn",
"is",
"dataset:cerebras/SlimPajama-627B",
"dataset:bigcode/starcoderdata",
"dataset:mc4",
"base_model:LumiOpen/Viking-13B",
"license:apache-2.0",
"region:us"
] | null | 2024-07-02T13:15:34Z | ---
base_model: LumiOpen/Viking-13B
datasets:
- cerebras/SlimPajama-627B
- bigcode/starcoderdata
- mc4
language:
- fi
- en
- da
- sv
- 'no'
- nn
- is
license: apache-2.0
tags:
- llama-cpp
- gguf-my-repo
- text-generation-inference
---
# NikolayKozloff/Viking-13B-Q8_0-GGUF
This model was converted to GGUF format from [`LumiOpen/Viking-13B`](https://huggingface.co/LumiOpen/Viking-13B) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/LumiOpen/Viking-13B) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo NikolayKozloff/Viking-13B-Q8_0-GGUF --hf-file viking-13b-q8_0.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo NikolayKozloff/Viking-13B-Q8_0-GGUF --hf-file viking-13b-q8_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo NikolayKozloff/Viking-13B-Q8_0-GGUF --hf-file viking-13b-q8_0.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo NikolayKozloff/Viking-13B-Q8_0-GGUF --hf-file viking-13b-q8_0.gguf -c 2048
``` |
oleshy/ontochem_biobert_300_1 | oleshy | 2024-07-02T13:43:25Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"token-classification",
"generated_from_trainer",
"base_model:dmis-lab/biobert-base-cased-v1.1",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2024-07-02T13:15:50Z | ---
base_model: dmis-lab/biobert-base-cased-v1.1
tags:
- generated_from_trainer
model-index:
- name: ontochem_biobert_300_1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ontochem_biobert_300_1
This model is a fine-tuned version of [dmis-lab/biobert-base-cased-v1.1](https://huggingface.co/dmis-lab/biobert-base-cased-v1.1) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9781
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 10
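The `linear` scheduler with `lr_scheduler_warmup_steps: 500` ramps the learning rate up from 0 to the peak over the warmup, then decays it linearly back to 0 by the end of training. A plain-Python sketch of that shape (the `total_steps` value and function name are assumptions; this mirrors the behaviour of `get_linear_schedule_with_warmup`):

```python
def linear_lr(step, base_lr=1e-5, warmup_steps=500, total_steps=5000):
    if step < warmup_steps:
        return base_lr * step / warmup_steps  # linear ramp up from 0
    # linear decay from base_lr at warmup end down to 0 at total_steps
    remaining = max(0, total_steps - step)
    return base_lr * remaining / (total_steps - warmup_steps)

print(linear_lr(250))   # halfway through warmup
print(linear_lr(500))   # peak learning rate
print(linear_lr(5000))  # end of training
```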
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 5 | 1.1439 |
| No log | 2.0 | 10 | 1.1388 |
| No log | 3.0 | 15 | 1.1303 |
| No log | 4.0 | 20 | 1.1182 |
| No log | 5.0 | 25 | 1.1027 |
| No log | 6.0 | 30 | 1.0839 |
| No log | 7.0 | 35 | 1.0618 |
| No log | 8.0 | 40 | 1.0368 |
| No log | 9.0 | 45 | 1.0089 |
| No log | 10.0 | 50 | 0.9781 |
### Framework versions
- Transformers 4.39.3
- Pytorch 2.2.2+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
lombardata/DinoVdeau-large-2024_07_02-batch-size32_epochs150_freeze | lombardata | 2024-07-03T01:24:29Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"dinov2",
"endpoints_compatible",
"region:us"
] | null | 2024-07-02T13:15:52Z | Entry not found |
Galaxy67/3d-icon-SDXL-LoRA | Galaxy67 | 2024-07-02T13:17:36Z | 0 | 0 | null | [
"region:us"
] | null | 2024-07-02T13:17:36Z | Entry not found |
joswin03/llama-2-7b-Medical | joswin03 | 2024-07-02T13:25:10Z | 0 | 0 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-07-02T13:17:46Z | Entry not found |
BoraErsoy2/food_classifier | BoraErsoy2 | 2024-07-02T13:50:03Z | 0 | 0 | transformers | [
"transformers",
"tf",
"vit",
"image-classification",
"generated_from_keras_callback",
"base_model:google/vit-base-patch16-224-in21k",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2024-07-02T13:18:18Z | ---
license: apache-2.0
base_model: google/vit-base-patch16-224-in21k
tags:
- generated_from_keras_callback
model-index:
- name: BoraErsoy2/food_classifier
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# BoraErsoy2/food_classifier
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.3797
- Validation Loss: 0.3267
- Train Accuracy: 0.921
- Epoch: 4
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 3e-05, 'decay_steps': 20000, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
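The `PolynomialDecay` schedule in the optimizer config above, with `power=1.0`, reduces to plain linear decay from `initial_learning_rate` to `end_learning_rate` over `decay_steps`. A sketch of the Keras formula (with `cycle=False`, the step is clamped at `decay_steps`):

```python
def polynomial_decay(step, initial_lr=3e-5, decay_steps=20000,
                     end_lr=0.0, power=1.0):
    step = min(step, decay_steps)  # cycle=False: hold end_lr after decay_steps
    frac = 1.0 - step / decay_steps
    return (initial_lr - end_lr) * (frac ** power) + end_lr

print(polynomial_decay(0))       # initial learning rate
print(polynomial_decay(10000))   # halfway: half the initial rate (power=1.0)
print(polynomial_decay(20000))   # fully decayed
```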
### Training results
| Train Loss | Validation Loss | Train Accuracy | Epoch |
|:----------:|:---------------:|:--------------:|:-----:|
| 2.8218 | 1.6202 | 0.847 | 0 |
| 1.2200 | 0.7952 | 0.906 | 1 |
| 0.6871 | 0.4814 | 0.923 | 2 |
| 0.4762 | 0.4180 | 0.911 | 3 |
| 0.3797 | 0.3267 | 0.921 | 4 |
### Framework versions
- Transformers 4.41.2
- TensorFlow 2.15.0
- Datasets 2.20.0
- Tokenizers 0.19.1
|
YesaOuO/GPrompT2 | YesaOuO | 2024-07-02T13:19:19Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gpt2",
"feature-extraction",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | feature-extraction | 2024-07-02T13:18:55Z | Entry not found |
rajparmar/Llama-2-7b-chat-finetune | rajparmar | 2024-07-02T14:07:26Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-07-02T13:19:06Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Spierocho/qwen2_0.5B-creative-4bit_v0.1_gguf | Spierocho | 2024-07-02T13:21:19Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"qwen2",
"text-generation-inference",
"unsloth",
"en",
"base_model:cognitivecomputations/dolphin-2.9.3-qwen2-0.5b",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-07-02T13:20:33Z | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- gguf
base_model: cognitivecomputations/dolphin-2.9.3-qwen2-0.5b
---
# Uploaded model
- **Developed by:** Spierocho
- **License:** apache-2.0
- **Finetuned from model :** cognitivecomputations/dolphin-2.9.3-qwen2-0.5b
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
iach/phi3_ft | iach | 2024-07-02T14:37:50Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"mistral",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-07-02T13:20:48Z | Entry not found |
axolotl-ai-co/gemma-2-9b | axolotl-ai-co | 2024-07-02T13:31:47Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gemma2",
"text-generation",
"arxiv:2009.03300",
"arxiv:1905.07830",
"arxiv:1911.11641",
"arxiv:1904.09728",
"arxiv:1905.10044",
"arxiv:1907.10641",
"arxiv:1811.00937",
"arxiv:1809.02789",
"arxiv:1911.01547",
"arxiv:1705.03551",
"arxiv:2107.03374",
"arxiv:2108.07732",
"arxiv:2110.14168",
"arxiv:2009.11462",
"arxiv:2101.11718",
"arxiv:2110.08193",
"arxiv:1804.09301",
"arxiv:2109.07958",
"arxiv:1804.06876",
"arxiv:2103.03874",
"arxiv:2304.06364",
"arxiv:2206.04615",
"arxiv:2203.09509",
"license:gemma",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-07-02T13:21:09Z | ---
license: gemma
library_name: transformers
pipeline_tag: text-generation
extra_gated_heading: Access Gemma on Hugging Face
extra_gated_prompt: >-
  To access Gemma on Hugging Face, you're required to review and agree to
  Google's usage license. To do this, please ensure you're logged in to Hugging
  Face and click below. Requests are processed immediately.
extra_gated_button_content: Acknowledge license
---
# Gemma 2 model card
**Model Page**: [Gemma](https://ai.google.dev/gemma/docs)
**Resources and Technical Documentation**:
* [Responsible Generative AI Toolkit][rai-toolkit]
* [Gemma on Kaggle][kaggle-gemma]
* [Gemma on Vertex Model Garden][vertex-mg-gemma]
**Terms of Use**: [Terms](https://www.kaggle.com/models/google/gemma/license/consent/verify/huggingface?returnModelRepoId=google/gemma-2-9b)
**Authors**: Google
## Model Information
Summary description and brief definition of inputs and outputs.
### Description
Gemma is a family of lightweight, state-of-the-art open models from Google,
built from the same research and technology used to create the Gemini models.
They are text-to-text, decoder-only large language models, available in English,
with open weights for both pre-trained variants and instruction-tuned variants.
Gemma models are well-suited for a variety of text generation tasks, including
question answering, summarization, and reasoning. Their relatively small size
makes it possible to deploy them in environments with limited resources such as
a laptop, desktop or your own cloud infrastructure, democratizing access to
state of the art AI models and helping foster innovation for everyone.
### Usage
Below are some code snippets to help you get started quickly with running the model. First, make sure to `pip install -U transformers`, then copy the snippet from the section most relevant to your use case.
#### Running the model on a single / multi GPU
```python
# pip install accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
tokenizer = AutoTokenizer.from_pretrained("google/gemma-2-9b")
model = AutoModelForCausalLM.from_pretrained(
"google/gemma-2-9b",
device_map="auto",
torch_dtype=torch.bfloat16
)
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```
<a name="precisions"></a>
#### Running the model on a GPU using different precisions
The native weights of this model were exported in `bfloat16` precision. You can also use `float16`, which may be faster on certain hardware, by specifying the `torch_dtype` when loading the model. For convenience, the `float16` revision of the repo contains a copy of the weights already converted to that precision.
You can also skip the dtype argument to load in `float32`, but this brings no precision gain (the `bfloat16` weights are simply upcast to `float32`). See the examples below.
* _Using `torch.float16`_
```python
# pip install accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
tokenizer = AutoTokenizer.from_pretrained("google/gemma-2-9b")
model = AutoModelForCausalLM.from_pretrained(
"google/gemma-2-9b",
device_map="auto",
torch_dtype=torch.float16,
revision="float16",
)
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```
* _Using `torch.bfloat16`_
```python
# pip install accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
tokenizer = AutoTokenizer.from_pretrained("google/gemma-2-9b")
model = AutoModelForCausalLM.from_pretrained(
"google/gemma-2-9b",
device_map="auto",
torch_dtype=torch.bfloat16)
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```
* _Upcasting to `torch.float32`_
```python
# pip install accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("google/gemma-2-9b")
model = AutoModelForCausalLM.from_pretrained(
"google/gemma-2-9b",
device_map="auto")
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```
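The trade-off between these dtypes can be seen without a GPU: `bfloat16` keeps `float32`'s 8-bit exponent (same range, coarser mantissa), while `float16` keeps more mantissa bits but a much narrower range. A minimal pure-Python sketch of the two roundings (the function names are illustrative, and `bfloat16` is approximated by bit truncation; real conversions round to nearest):

```python
import struct

def round_to_bfloat16(x: float) -> float:
    # bfloat16 is the top 16 bits of a float32 (1 sign, 8 exponent, 7 mantissa bits).
    bits = struct.unpack("<I", struct.pack("<f", x))[0]
    return struct.unpack("<f", struct.pack("<I", bits & 0xFFFF0000))[0]

def round_to_float16(x: float) -> float:
    # float16 has 5 exponent and 10 mantissa bits; struct raises OverflowError above ~65504.
    return struct.unpack("<e", struct.pack("<e", x))[0]

x = 1 / 3
print(round_to_bfloat16(x))        # 0.33203125  (coarser: only 7 mantissa bits)
print(round_to_float16(x))         # 0.333251953125  (finer mantissa, closer to 1/3)
print(round_to_bfloat16(70000.0))  # 69632.0 -- still in range, like float32
```

This is why `bfloat16` is usually the safer default for these weights: it preserves `float32`'s dynamic range, while `float16` can overflow on large activations.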
#### Quantized Versions through `bitsandbytes`
* _Using 8-bit precision (int8)_
```python
# pip install bitsandbytes accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig
quantization_config = BitsAndBytesConfig(load_in_8bit=True)
tokenizer = AutoTokenizer.from_pretrained("google/gemma-2-9b")
model = AutoModelForCausalLM.from_pretrained(
"google/gemma-2-9b",
quantization_config=quantization_config)
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```
* _Using 4-bit precision_
```python
# pip install bitsandbytes accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig
quantization_config = BitsAndBytesConfig(load_in_4bit=True)
tokenizer = AutoTokenizer.from_pretrained("google/gemma-2-9b")
model = AutoModelForCausalLM.from_pretrained(
"google/gemma-2-9b",
quantization_config=quantization_config)
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```
#### Other optimizations
* _Flash Attention 2_
First, make sure `flash-attn` is installed in your environment: `pip install flash-attn`
```diff
model = AutoModelForCausalLM.from_pretrained(
model_id,
torch_dtype=torch.float16,
+ attn_implementation="flash_attention_2"
).to(0)
```
### Inputs and outputs
* **Input:** Text string, such as a question, a prompt, or a document to be
summarized.
* **Output:** Generated English-language text in response to the input, such
as an answer to a question, or a summary of a document.
### Citation
```none
@article{gemma_2024,
title={Gemma},
url={https://www.kaggle.com/m/3301},
DOI={10.34740/KAGGLE/M/3301},
publisher={Kaggle},
author={Gemma Team},
year={2024}
}
```
## Model Data
Data used for model training and how the data was processed.
### Training Dataset
These models were trained on a text dataset drawn from a wide variety of sources. The 27B model was trained on 13 trillion tokens and the 9B model on 8 trillion tokens.
Here are the key components:
* Web Documents: A diverse collection of web text ensures the model is exposed
to a broad range of linguistic styles, topics, and vocabulary. Primarily
English-language content.
* Code: Exposing the model to code helps it to learn the syntax and patterns of
programming languages, which improves its ability to generate code or
understand code-related questions.
* Mathematics: Training on mathematical text helps the model learn logical
reasoning, symbolic representation, and to address mathematical queries.
The combination of these diverse data sources is crucial for training a powerful
language model that can handle a wide variety of different tasks and text
formats.
### Data Preprocessing
Here are the key data cleaning and filtering methods applied to the training
data:
* CSAM Filtering: Rigorous CSAM (Child Sexual Abuse Material) filtering was
applied at multiple stages in the data preparation process to ensure the
exclusion of harmful and illegal content.
* Sensitive Data Filtering: As part of making Gemma pre-trained models safe and
reliable, automated techniques were used to filter out certain personal
information and other sensitive data from training sets.
* Additional methods: Filtering based on content quality and safety in line with
[our policies][safety-policies].
## Implementation Information
Details about the model internals.
### Hardware
Gemma was trained using the latest generation of
[Tensor Processing Unit (TPU)][tpu] hardware (TPUv5p).
Training large language models requires significant computational power. TPUs,
designed specifically for matrix operations common in machine learning, offer
several advantages in this domain:
* Performance: TPUs are specifically designed to handle the massive computations
involved in training LLMs. They can speed up training considerably compared to
CPUs.
* Memory: TPUs often come with large amounts of high-bandwidth memory, allowing
for the handling of large models and batch sizes during training. This can
lead to better model quality.
* Scalability: TPU Pods (large clusters of TPUs) provide a scalable solution for
handling the growing complexity of large foundation models. You can distribute
training across multiple TPU devices for faster and more efficient processing.
* Cost-effectiveness: In many scenarios, TPUs can provide a more cost-effective
solution for training large models compared to CPU-based infrastructure,
especially when considering the time and resources saved due to faster
training.
* These advantages are aligned with
[Google's commitments to operate sustainably][sustainability].
### Software
Training was done using [JAX][jax] and [ML Pathways][ml-pathways].
JAX allows researchers to take advantage of the latest generation of hardware,
including TPUs, for faster and more efficient training of large models.
ML Pathways is Google's latest effort to build artificially intelligent systems
capable of generalizing across multiple tasks. This makes it especially suitable
for [foundation models][foundation-models], including large language models like
these.
Together, JAX and ML Pathways are used as described in the
[paper about the Gemini family of models][gemini-2-paper]; "the 'single
controller' programming model of Jax and Pathways allows a single Python
process to orchestrate the entire training run, dramatically simplifying the
development workflow."
## Evaluation
Model evaluation metrics and results.
### Benchmark Results
These models were evaluated against a large collection of different datasets and
metrics to cover different aspects of text generation:
| Benchmark | Metric | Gemma PT 9B | Gemma PT 27B |
| ------------------------------ | ------------- | ----------- | ------------ |
| [MMLU][mmlu] | 5-shot, top-1 | 71.3 | 75.2 |
| [HellaSwag][hellaswag] | 10-shot | 81.9 | 86.4 |
| [PIQA][piqa] | 0-shot | 81.7 | 83.2 |
| [SocialIQA][socialiqa] | 0-shot | 53.4 | 53.7 |
| [BoolQ][boolq] | 0-shot | 84.2 | 84.8 |
| [WinoGrande][winogrande] | partial score | 80.6 | 83.7 |
| [ARC-e][arc] | 0-shot | 88.0 | 88.6 |
| [ARC-c][arc] | 25-shot | 68.4 | 71.4 |
| [TriviaQA][triviaqa] | 5-shot | 76.6 | 83.7 |
| [Natural Questions][naturalq] | 5-shot | 29.2 | 34.5 |
| [HumanEval][humaneval] | pass@1 | 40.2 | 51.8 |
| [MBPP][mbpp] | 3-shot | 52.4 | 62.6 |
| [GSM8K][gsm8k] | 5-shot, maj@1 | 68.6 | 74.0 |
| [MATH][math] | 4-shot | 36.6 | 42.3 |
| [AGIEval][agieval] | 3-5-shot | 52.8 | 55.1 |
| [BIG-Bench][big-bench] | 3-shot, CoT | 68.2 | 74.9 |
## Ethics and Safety
Ethics and safety evaluation approach and results.
### Evaluation Approach
Our evaluation methods include structured evaluations and internal red-teaming
testing of relevant content policies. Red-teaming was conducted by a number of
different teams, each with different goals and human evaluation metrics. These
models were evaluated against a number of different categories relevant to
ethics and safety, including:
* Text-to-Text Content Safety: Human evaluation on prompts covering safety
policies including child sexual abuse and exploitation, harassment, violence
and gore, and hate speech.
* Text-to-Text Representational Harms: Benchmark against relevant academic
datasets such as [WinoBias][winobias] and [BBQ Dataset][bbq].
* Memorization: Automated evaluation of memorization of training data, including
the risk of personally identifiable information exposure.
* Large-scale harm: Tests for "dangerous capabilities," such as chemical,
biological, radiological, and nuclear (CBRN) risks.
### Evaluation Results
The results of ethics and safety evaluations are within acceptable thresholds
for meeting [internal policies][safety-policies] for categories such as child
safety, content safety, representational harms, memorization, and large-scale harms.
On top of robust internal evaluations, the results of well-known safety
benchmarks like BBQ, BOLD, Winogender, Winobias, RealToxicity, and TruthfulQA
are shown here.
#### Gemma 2.0
| Benchmark | Metric | Gemma 2 IT 9B | Gemma 2 IT 27B |
| ------------------------ | ------------- | --------------- | ---------------- |
| [RealToxicity][realtox] | average | 8.25 | 8.84 |
| [CrowS-Pairs][crows] | top-1 | 37.47 | 36.67 |
| [BBQ Ambig][bbq] | 1-shot, top-1 | 88.58 | 85.99 |
| [BBQ Disambig][bbq] | top-1 | 82.67 | 86.94 |
| [Winogender][winogender] | top-1 | 79.17 | 77.22 |
| [TruthfulQA][truthfulqa] | | 50.27 | 51.60 |
| [Winobias 1_2][winobias] | | 78.09 | 81.94 |
| [Winobias 2_2][winobias] | | 95.32 | 97.22 |
| [Toxigen][toxigen] | | 39.30 | 38.42 |
## Usage and Limitations
These models have certain limitations that users should be aware of.
### Intended Usage
Open Large Language Models (LLMs) have a wide range of applications across
various industries and domains. The following list of potential uses is not
comprehensive. The purpose of this list is to provide contextual information
about the possible use-cases that the model creators considered as part of model
training and development.
* Content Creation and Communication
* Text Generation: These models can be used to generate creative text formats
such as poems, scripts, code, marketing copy, and email drafts.
* Chatbots and Conversational AI: Power conversational interfaces for customer
service, virtual assistants, or interactive applications.
* Text Summarization: Generate concise summaries of a text corpus, research
papers, or reports.
* Research and Education
* Natural Language Processing (NLP) Research: These models can serve as a
foundation for researchers to experiment with NLP techniques, develop
algorithms, and contribute to the advancement of the field.
* Language Learning Tools: Support interactive language learning experiences,
aiding in grammar correction or providing writing practice.
* Knowledge Exploration: Assist researchers in exploring large bodies of text
by generating summaries or answering questions about specific topics.
### Limitations
* Training Data
* The quality and diversity of the training data significantly influence the
model's capabilities. Biases or gaps in the training data can lead to
limitations in the model's responses.
* The scope of the training dataset determines the subject areas the model can
handle effectively.
* Context and Task Complexity
* LLMs are better at tasks that can be framed with clear prompts and
instructions. Open-ended or highly complex tasks might be challenging.
* A model's performance can be influenced by the amount of context provided
(longer context generally leads to better outputs, up to a certain point).
* Language Ambiguity and Nuance
* Natural language is inherently complex. LLMs might struggle to grasp subtle
nuances, sarcasm, or figurative language.
* Factual Accuracy
* LLMs generate responses based on information they learned from their
training datasets, but they are not knowledge bases. They may generate
incorrect or outdated factual statements.
* Common Sense
* LLMs rely on statistical patterns in language. They might lack the ability
to apply common sense reasoning in certain situations.
### Ethical Considerations and Risks
The development of large language models (LLMs) raises several ethical concerns.
In creating an open model, we have carefully considered the following:
* Bias and Fairness
* LLMs trained on large-scale, real-world text data can reflect socio-cultural
    biases embedded in the training material. These models underwent careful
    scrutiny; the input data pre-processing and downstream evaluations are
    reported in this card.
* Misinformation and Misuse
* LLMs can be misused to generate text that is false, misleading, or harmful.
* Guidelines are provided for responsible use with the model, see the
[Responsible Generative AI Toolkit][rai-toolkit].
* Transparency and Accountability:
* This model card summarizes details on the models' architecture,
capabilities, limitations, and evaluation processes.
* A responsibly developed open model offers the opportunity to share
innovation by making LLM technology accessible to developers and researchers
across the AI ecosystem.
Risks identified and mitigations:
* Perpetuation of biases: Developers are encouraged to perform continuous
  monitoring (using evaluation metrics and human review) and to explore
  de-biasing techniques during model training, fine-tuning, and other use cases.
* Generation of harmful content: Mechanisms and guidelines for content safety
are essential. Developers are encouraged to exercise caution and implement
appropriate content safety safeguards based on their specific product policies
and application use cases.
* Misuse for malicious purposes: Technical limitations and developer and
  end-user education can help mitigate malicious applications of LLMs.
Educational resources and reporting mechanisms for users to flag misuse are
provided. Prohibited uses of Gemma models are outlined in the
[Gemma Prohibited Use Policy][prohibited-use].
* Privacy violations: Models were trained on data filtered for removal of PII
(Personally Identifiable Information). Developers are encouraged to adhere to
privacy regulations with privacy-preserving techniques.
### Benefits
At the time of release, this family of models provides high-performance open
large language model implementations designed from the ground up for responsible
AI development, compared to similarly sized models.
Using the benchmark evaluation metrics described in this document, these models
have been shown to provide superior performance to other comparably sized open
model alternatives.
[rai-toolkit]: https://ai.google.dev/responsible
[kaggle-gemma]: https://www.kaggle.com/models/google/gemma-2
[terms]: https://ai.google.dev/gemma/terms
[vertex-mg-gemma]: https://console.cloud.google.com/vertex-ai/publishers/google/model-garden/335
[sensitive-info]: https://cloud.google.com/dlp/docs/high-sensitivity-infotypes-reference
[safety-policies]: https://storage.googleapis.com/gweb-uniblog-publish-prod/documents/2023_Google_AI_Principles_Progress_Update.pdf#page=11
[prohibited-use]: https://ai.google.dev/gemma/prohibited_use_policy
[tpu]: https://cloud.google.com/tpu/docs/intro-to-tpu
[sustainability]: https://sustainability.google/operating-sustainably/
[jax]: https://github.com/google/jax
[ml-pathways]: https://blog.google/technology/ai/introducing-pathways-next-generation-ai-architecture/
[foundation-models]: https://ai.google/discover/foundation-models/
[gemini-2-paper]: https://goo.gle/gemma2report
[mmlu]: https://arxiv.org/abs/2009.03300
[hellaswag]: https://arxiv.org/abs/1905.07830
[piqa]: https://arxiv.org/abs/1911.11641
[socialiqa]: https://arxiv.org/abs/1904.09728
[boolq]: https://arxiv.org/abs/1905.10044
[winogrande]: https://arxiv.org/abs/1907.10641
[commonsenseqa]: https://arxiv.org/abs/1811.00937
[openbookqa]: https://arxiv.org/abs/1809.02789
[arc]: https://arxiv.org/abs/1911.01547
[triviaqa]: https://arxiv.org/abs/1705.03551
[naturalq]: https://github.com/google-research-datasets/natural-questions
[humaneval]: https://arxiv.org/abs/2107.03374
[mbpp]: https://arxiv.org/abs/2108.07732
[gsm8k]: https://arxiv.org/abs/2110.14168
[realtox]: https://arxiv.org/abs/2009.11462
[bold]: https://arxiv.org/abs/2101.11718
[crows]: https://aclanthology.org/2020.emnlp-main.154/
[bbq]: https://arxiv.org/abs/2110.08193v2
[winogender]: https://arxiv.org/abs/1804.09301
[truthfulqa]: https://arxiv.org/abs/2109.07958
[winobias]: https://arxiv.org/abs/1804.06876
[math]: https://arxiv.org/abs/2103.03874
[agieval]: https://arxiv.org/abs/2304.06364
[big-bench]: https://arxiv.org/abs/2206.04615
[toxigen]: https://arxiv.org/abs/2203.09509
|
habulaj/7079952121 | habulaj | 2024-07-02T13:21:17Z | 0 | 0 | null | [
"region:us"
] | null | 2024-07-02T13:21:14Z | Entry not found |
Spierocho/qwen2_0.5B-creative-4bit_v0.1_merge | Spierocho | 2024-07-02T13:21:38Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"base_model:cognitivecomputations/dolphin-2.9.3-qwen2-0.5b",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | 2024-07-02T13:21:22Z | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
- sft
base_model: cognitivecomputations/dolphin-2.9.3-qwen2-0.5b
---
# Uploaded model
- **Developed by:** Spierocho
- **License:** apache-2.0
- **Finetuned from model:** cognitivecomputations/dolphin-2.9.3-qwen2-0.5b
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Stopwolf/Mustra-7B-v3 | Stopwolf | 2024-07-02T13:23:41Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-07-02T13:21:25Z | ---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
habulaj/162023140029 | habulaj | 2024-07-02T13:21:30Z | 0 | 0 | null | [
"region:us"
] | null | 2024-07-02T13:21:27Z | Entry not found |
silmi224/finetune-led-35000 | silmi224 | 2024-07-02T13:37:12Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"led",
"text2text-generation",
"summarization",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | summarization | 2024-07-02T13:21:47Z | ---
tags:
- summarization
- generated_from_trainer
model-index:
- name: finetune-led-35000
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetune-led-35000
This model was trained from scratch on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8474
- Rouge1 Precision: 0.2576
- Rouge1 Recall: 0.3438
- Rouge1 Fmeasure: 0.2911
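For reference, ROUGE-1 compares unigram overlap between a generated summary and a reference: precision divides the clipped overlap by the candidate's length, recall by the reference's length, and the F-measure is their harmonic mean. A minimal illustrative scorer (not the exact implementation used to produce the numbers above):

```python
from collections import Counter

def rouge1(candidate: str, reference: str):
    """Unigram-overlap ROUGE-1: returns (precision, recall, f-measure)."""
    cand = Counter(candidate.lower().split())
    ref = Counter(reference.lower().split())
    overlap = sum((cand & ref).values())  # clipped unigram matches
    precision = overlap / max(sum(cand.values()), 1)
    recall = overlap / max(sum(ref.values()), 1)
    f = 2 * precision * recall / (precision + recall) if overlap else 0.0
    return precision, recall, f

p, r, f = rouge1("the cat sat", "the cat sat on the mat")
print(round(p, 4), round(r, 4), round(f, 4))  # 1.0 0.5 0.6667
```

As in the results above, recall exceeding precision indicates the model's summaries recover many reference unigrams but also include extra tokens.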
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
- mixed_precision_training: Native AMP
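As a sanity check on the values above, the total train batch size reported by the Trainer is the per-device batch size multiplied by the gradient-accumulation steps (and the number of devices, one here). A small illustrative helper:

```python
def total_train_batch_size(per_device_batch: int, grad_accum_steps: int,
                           num_devices: int = 1) -> int:
    # The optimizer steps once per `grad_accum_steps` forward/backward passes,
    # so each weight update effectively sees this many examples.
    return per_device_batch * grad_accum_steps * num_devices

print(total_train_batch_size(8, 4))  # 32, matching the reported value
```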
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 Fmeasure | Rouge1 Precision | Rouge1 Recall |
|:-------------:|:-----:|:----:|:---------------:|:---------------:|:----------------:|:-------------:|
| 2.0001 | 0.01 | 10 | 2.8444 | 0.2732 | 0.2484 | 0.3213 |
| 1.7409 | 0.01 | 20 | 2.6895 | 0.2682 | 0.2375 | 0.3203 |
| 1.6096 | 0.02 | 30 | 2.6335 | 0.2797 | 0.246 | 0.3362 |
| 1.7452 | 0.03 | 40 | 2.5817 | 0.285 | 0.251 | 0.3417 |
| 1.6212 | 0.03 | 50 | 2.5765 | 0.2899 | 0.253 | 0.3515 |
| 1.6743 | 0.04 | 60 | 2.4425 | 0.2705 | 0.241 | 0.3188 |
| 1.5229 | 0.04 | 70 | 2.5146 | 0.2802 | 0.2501 | 0.3294 |
| 1.5681 | 0.05 | 80 | 2.4785 | 0.2665 | 0.2302 | 0.3273 |
| 1.5811 | 0.06 | 90 | 2.3622 | 0.287 | 0.2577 | 0.3349 |
| 1.5642 | 0.06 | 100 | 2.4053 | 0.2779 | 0.2449 | 0.3322 |
| 1.5175 | 0.07 | 110 | 2.3818 | 0.2862 | 0.2577 | 0.3324 |
| 1.5073 | 0.08 | 120 | 2.3930 | 0.2905 | 0.25 | 0.3587 |
| 1.5336 | 0.12 | 130 | 2.3619 | 0.2755 | 0.2456 | 0.3242 |
| 1.5155 | 0.13 | 140 | 2.3696 | 0.2752 | 0.243 | 0.3284 |
| 1.5124 | 0.14 | 150 | 2.3240 | 0.2788 | 0.2478 | 0.3296 |
| 1.5187 | 0.15 | 160 | 2.2883 | 0.2739 | 0.2438 | 0.3225 |
| 1.4642 | 0.16 | 170 | 2.3006 | 0.2769 | 0.2442 | 0.3307 |
| 1.6535 | 0.16 | 180 | 2.2766 | 0.2716 | 0.2435 | 0.3165 |
| 1.4924 | 0.17 | 190 | 2.3077 | 0.2782 | 0.2463 | 0.3303 |
| 1.5221 | 0.18 | 200 | 2.2725 | 0.2829 | 0.2502 | 0.3357 |
| 1.388 | 0.19 | 210 | 2.2555 | 0.2811 | 0.2502 | 0.3305 |
| 1.5172 | 0.2 | 220 | 2.2519 | 0.2758 | 0.2458 | 0.3243 |
| 1.498 | 0.21 | 230 | 2.2585 | 0.2809 | 0.2428 | 0.3437 |
| 1.447 | 0.22 | 240 | 2.2110 | 0.2745 | 0.2457 | 0.3206 |
| 1.4637 | 0.23 | 250 | 2.2511 | 0.2854 | 0.252 | 0.3395 |
| 1.5084 | 0.24 | 260 | 2.2070 | 0.2876 | 0.2547 | 0.341 |
| 1.4347 | 0.25 | 270 | 2.2558 | 0.2852 | 0.2493 | 0.344 |
| 1.3108 | 0.26 | 280 | 2.2498 | 0.2815 | 0.2488 | 0.3349 |
| 1.3894 | 0.27 | 290 | 2.2302 | 0.2794 | 0.2454 | 0.3347 |
| 1.4392 | 0.27 | 300 | 2.2021 | 0.276 | 0.2412 | 0.3327 |
| 1.3774 | 0.28 | 310 | 2.1958 | 0.2763 | 0.244 | 0.3286 |
| 1.3696 | 0.29 | 320 | 2.1657 | 0.2811 | 0.2512 | 0.3292 |
| 1.4186 | 0.3 | 330 | 2.1533 | 0.2807 | 0.251 | 0.3288 |
| 1.3073 | 0.31 | 340 | 2.1890 | 0.2805 | 0.2486 | 0.3324 |
| 1.3533 | 0.32 | 350 | 2.1639 | 0.2836 | 0.2546 | 0.3302 |
| 1.3691 | 0.33 | 360 | 2.1255 | 0.2765 | 0.25 | 0.3188 |
| 1.3716 | 0.34 | 370 | 2.1436 | 0.2818 | 0.2512 | 0.3315 |
| 1.3985 | 0.35 | 380 | 2.1160 | 0.2859 | 0.2555 | 0.3351 |
| 1.2993 | 0.36 | 390 | 2.1417 | 0.2756 | 0.2442 | 0.3262 |
| 1.3453 | 0.37 | 400 | 2.1459 | 0.2875 | 0.256 | 0.3386 |
| 1.4036 | 0.37 | 410 | 2.1200 | 0.2811 | 0.2506 | 0.33 |
| 1.3784 | 0.38 | 420 | 2.0878 | 0.2885 | 0.2592 | 0.335 |
| 1.2923 | 0.39 | 430 | 2.1342 | 0.2873 | 0.2535 | 0.3423 |
| 1.3629 | 0.4 | 440 | 2.0887 | 0.2848 | 0.2546 | 0.3328 |
| 1.4151 | 0.41 | 450 | 2.0905 | 0.2857 | 0.254 | 0.3365 |
| 1.3679 | 0.42 | 460 | 2.0829 | 0.285 | 0.2525 | 0.3375 |
| 1.339 | 0.43 | 470 | 2.0786 | 0.2791 | 0.2452 | 0.334 |
| 1.4258 | 0.44 | 480 | 2.0726 | 0.2877 | 0.2603 | 0.3317 |
| 1.4056 | 0.45 | 490 | 2.0995 | 0.2891 | 0.2556 | 0.3428 |
| 1.3548 | 0.46 | 500 | 2.0637 | 0.2757 | 0.2481 | 0.3199 |
| 1.3253 | 0.47 | 510 | 2.0638 | 0.2794 | 0.25 | 0.3266 |
| 1.264 | 0.48 | 520 | 2.0609 | 0.2861 | 0.2587 | 0.3296 |
| 1.3307 | 0.48 | 530 | 2.0396 | 0.2823 | 0.2559 | 0.3243 |
| 1.3536 | 0.49 | 540 | 2.0464 | 0.2824 | 0.2532 | 0.3288 |
| 1.2592 | 0.5 | 550 | 2.0480 | 0.2876 | 0.2592 | 0.3333 |
| 1.358 | 0.51 | 560 | 2.0432 | 0.2818 | 0.2528 | 0.3284 |
| 1.3227 | 0.52 | 570 | 2.0560 | 0.2831 | 0.2502 | 0.3365 |
| 1.3189 | 0.53 | 580 | 2.0311 | 0.2823 | 0.251 | 0.3321 |
| 1.3367 | 0.54 | 590 | 2.0498 | 0.285 | 0.2538 | 0.335 |
| 1.3473 | 0.55 | 600 | 2.0690 | 0.2773 | 0.2452 | 0.3292 |
| 1.2846 | 0.56 | 610 | 2.0555 | 0.2796 | 0.2473 | 0.3321 |
| 1.3066 | 0.57 | 620 | 2.0684 | 0.2799 | 0.245 | 0.3366 |
| 1.3193 | 0.58 | 630 | 2.0467 | 0.2852 | 0.2536 | 0.336 |
| 1.269 | 0.59 | 640 | 2.0381 | 0.2859 | 0.2561 | 0.3337 |
| 1.2906 | 0.59 | 650 | 2.0191 | 0.2831 | 0.2514 | 0.3338 |
| 1.2981 | 0.6 | 660 | 2.0184 | 0.2783 | 0.249 | 0.3251 |
| 1.2888 | 0.61 | 670 | 2.0295 | 0.2827 | 0.2515 | 0.3331 |
| 1.3179 | 0.62 | 680 | 2.0121 | 0.2885 | 0.2611 | 0.333 |
| 1.3313 | 0.63 | 690 | 2.0296 | 0.2739 | 0.2427 | 0.3245 |
| 1.1749 | 0.64 | 700 | 2.0419 | 0.2809 | 0.2507 | 0.3298 |
| 1.3023 | 0.65 | 710 | 2.0275 | 0.2838 | 0.2504 | 0.3379 |
| 1.262 | 0.66 | 720 | 1.9974 | 0.286 | 0.2539 | 0.3378 |
| 1.2906 | 0.67 | 730 | 1.9839 | 0.2839 | 0.252 | 0.3357 |
| 1.24 | 0.68 | 740 | 2.0041 | 0.286 | 0.2528 | 0.3401 |
| 1.239 | 0.69 | 750 | 2.0116 | 0.2789 | 0.2455 | 0.3326 |
| 1.1972 | 0.69 | 760 | 2.0293 | 0.2861 | 0.2536 | 0.3385 |
| 1.2114 | 0.7 | 770 | 2.0271 | 0.2738 | 0.2436 | 0.322 |
| 1.2711 | 0.71 | 780 | 2.0084 | 0.2881 | 0.2548 | 0.3417 |
| 1.262 | 0.72 | 790 | 1.9984 | 0.2806 | 0.2488 | 0.3322 |
| 1.2616 | 0.73 | 800 | 1.9715 | 0.2856 | 0.2541 | 0.3364 |
| 1.2765 | 0.74 | 810 | 1.9718 | 0.2825 | 0.2494 | 0.3356 |
| 1.2151 | 0.75 | 820 | 1.9947 | 0.2857 | 0.2513 | 0.341 |
| 1.3165 | 0.76 | 830 | 1.9854 | 0.2863 | 0.2524 | 0.3411 |
| 1.2704 | 0.77 | 840 | 1.9858 | 0.2903 | 0.2569 | 0.3443 |
| 1.3032 | 0.78 | 850 | 1.9774 | 0.2926 | 0.2583 | 0.3481 |
| 1.2461 | 0.79 | 860 | 1.9596 | 0.2847 | 0.2556 | 0.3314 |
| 1.2288 | 0.8 | 870 | 1.9873 | 0.2868 | 0.2547 | 0.339 |
| 1.2278 | 0.8 | 880 | 1.9712 | 0.289 | 0.2546 | 0.3455 |
| 1.2119 | 0.81 | 890 | 1.9862 | 0.2822 | 0.2478 | 0.338 |
| 1.3363 | 0.82 | 900 | 1.9555 | 0.2871 | 0.2576 | 0.3349 |
| 1.2324 | 0.83 | 910 | 1.9394 | 0.2878 | 0.2588 | 0.3339 |
| 1.2528 | 0.84 | 920 | 1.9593 | 0.2801 | 0.2498 | 0.3289 |
| 1.2572 | 0.85 | 930 | 1.9500 | 0.2825 | 0.2507 | 0.3337 |
| 1.2045 | 0.86 | 940 | 1.9586 | 0.2901 | 0.2589 | 0.3401 |
| 1.2173 | 0.87 | 950 | 1.9551 | 0.281 | 0.2487 | 0.3328 |
| 1.2315 | 0.88 | 960 | 1.9307 | 0.2842 | 0.2533 | 0.3337 |
| 1.2445 | 0.89 | 970 | 1.9362 | 0.2853 | 0.2537 | 0.336 |
| 1.2491 | 0.9 | 980 | 1.9614 | 0.2829 | 0.2482 | 0.3397 |
| 1.3081 | 0.91 | 990 | 1.9500 | 0.2857 | 0.2513 | 0.3411 |
| 1.1928 | 0.91 | 1000 | 1.9439 | 0.2826 | 0.2514 | 0.333 |
| 1.2243 | 0.92 | 1010 | 1.9074 | 0.2883 | 0.259 | 0.3346 |
| 1.2662 | 0.93 | 1020 | 1.9143 | 0.2912 | 0.2593 | 0.3422 |
| 1.2223 | 0.94 | 1030 | 1.9342 | 0.2899 | 0.2581 | 0.3408 |
| 1.2499 | 0.95 | 1040 | 1.9352 | 0.2835 | 0.2507 | 0.3366 |
| 1.3395 | 0.96 | 1050 | 1.9284 | 0.2864 | 0.2548 | 0.3375 |
| 1.1908 | 0.97 | 1060 | 1.9471 | 0.2853 | 0.2528 | 0.3376 |
| 1.2473 | 0.98 | 1070 | 1.9462 | 0.2941 | 0.2613 | 0.3472 |
| 1.2139 | 0.99 | 1080 | 1.9317 | 0.2859 | 0.2534 | 0.338 |
| 1.2534 | 1.0 | 1090 | 1.9278 | 0.2938 | 0.2594 | 0.3488 |
| 1.2204 | 1.01 | 1100 | 1.9177 | 0.2912 | 0.2596 | 0.341 |
| 1.2399 | 1.01 | 1110 | 1.9236 | 0.2903 | 0.2568 | 0.3443 |
| 1.1541 | 1.02 | 1120 | 1.9441 | 0.2889 | 0.2548 | 0.3431 |
| 1.1038 | 1.03 | 1130 | 1.9223 | 0.2925 | 0.2626 | 0.3399 |
| 1.1177 | 1.04 | 1140 | 1.9244 | 0.2881 | 0.2565 | 0.338 |
| 1.1224 | 1.05 | 1150 | 1.9324 | 0.2884 | 0.2547 | 0.3428 |
| 1.104 | 1.06 | 1160 | 1.9188 | 0.2798 | 0.2482 | 0.3304 |
| 1.175 | 1.07 | 1170 | 1.9042 | 0.2915 | 0.2618 | 0.3388 |
| 1.102 | 1.08 | 1180 | 1.9325 | 0.2853 | 0.253 | 0.3372 |
| 1.0829 | 1.09 | 1190 | 1.9503 | 0.2819 | 0.2478 | 0.3371 |
| 1.1842 | 1.1 | 1200 | 1.9360 | 0.2784 | 0.2438 | 0.3346 |
| 1.1552 | 1.11 | 1210 | 1.9055 | 0.286 | 0.254 | 0.3369 |
| 1.1266 | 1.12 | 1220 | 1.9106 | 0.286 | 0.2555 | 0.3345 |
| 1.1288 | 1.13 | 1230 | 1.9072 | 0.2865 | 0.2566 | 0.3336 |
| 1.1722 | 1.13 | 1240 | 1.9114 | 0.2856 | 0.2539 | 0.3364 |
| 1.1514 | 1.14 | 1250 | 1.9180 | 0.2906 | 0.2561 | 0.3461 |
| 1.1642 | 1.15 | 1260 | 1.9226 | 0.2918 | 0.2571 | 0.3475 |
| 1.1464 | 1.16 | 1270 | 1.9004 | 0.2819 | 0.2525 | 0.3283 |
| 1.1829 | 1.17 | 1280 | 1.9181 | 0.2935 | 0.2568 | 0.3524 |
| 1.17 | 1.18 | 1290 | 1.9031 | 0.2848 | 0.2523 | 0.3369 |
| 1.0751 | 1.19 | 1300 | 1.9334 | 0.2875 | 0.2531 | 0.3428 |
| 1.1327 | 1.2 | 1310 | 1.8966 | 0.2891 | 0.2568 | 0.3407 |
| 1.1319 | 1.21 | 1320 | 1.9076 | 0.2902 | 0.2575 | 0.3422 |
| 1.106 | 1.22 | 1330 | 1.8941 | 0.2908 | 0.259 | 0.3413 |
| 1.1721 | 1.23 | 1340 | 1.8956 | 0.2945 | 0.2609 | 0.3479 |
| 1.1964 | 1.23 | 1350 | 1.9140 | 0.2851 | 0.2513 | 0.3389 |
| 1.1195 | 1.24 | 1360 | 1.9168 | 0.2917 | 0.2561 | 0.3483 |
| 1.1352 | 1.25 | 1370 | 1.8962 | 0.286 | 0.253 | 0.3389 |
| 1.1164 | 1.26 | 1380 | 1.9050 | 0.2916 | 0.258 | 0.3453 |
| 1.1219 | 1.27 | 1390 | 1.9054 | 0.2872 | 0.2551 | 0.3386 |
| 1.1571 | 1.28 | 1400 | 1.8845 | 0.2896 | 0.2574 | 0.3402 |
| 1.2033 | 1.29 | 1410 | 1.8985 | 0.2852 | 0.2532 | 0.3362 |
| 1.1114 | 1.3 | 1420 | 1.8956 | 0.2882 | 0.2559 | 0.3395 |
| 1.1268 | 1.31 | 1430 | 1.8955 | 0.2895 | 0.2563 | 0.3424 |
| 1.1347 | 1.32 | 1440 | 1.8883 | 0.2865 | 0.2524 | 0.3412 |
| 1.0345 | 1.33 | 1450 | 1.8960 | 0.2895 | 0.2571 | 0.3412 |
| 1.1231 | 1.34 | 1460 | 1.8873 | 0.29 | 0.2575 | 0.3415 |
| 1.236 | 1.34 | 1470 | 1.8744 | 0.2898 | 0.2578 | 0.34 |
| 1.1054 | 1.35 | 1480 | 1.8867 | 0.2884 | 0.2546 | 0.3425 |
| 1.1393 | 1.36 | 1490 | 1.8907 | 0.2927 | 0.2605 | 0.344 |
| 1.1004 | 1.37 | 1500 | 1.8953 | 0.288 | 0.2543 | 0.3416 |
| 1.1482 | 1.38 | 1510 | 1.8731 | 0.288 | 0.2568 | 0.3377 |
| 1.1701 | 1.39 | 1520 | 1.8868 | 0.2866 | 0.2525 | 0.3411 |
| 1.1233 | 1.4 | 1530 | 1.8803 | 0.2882 | 0.2562 | 0.3385 |
| 1.0685 | 1.41 | 1540 | 1.8843 | 0.2935 | 0.262 | 0.3433 |
| 1.0657 | 1.42 | 1550 | 1.8748 | 0.2892 | 0.2553 | 0.3437 |
| 1.1275 | 1.43 | 1560 | 1.8804 | 0.2881 | 0.2553 | 0.3405 |
| 1.0883 | 1.44 | 1570 | 1.8803 | 0.2868 | 0.2527 | 0.3412 |
| 1.1096 | 1.45 | 1580 | 1.8862 | 0.2927 | 0.2586 | 0.3472 |
| 1.1521 | 1.45 | 1590 | 1.8724 | 0.288 | 0.2564 | 0.3379 |
| 1.142 | 1.46 | 1600 | 1.8788 | 0.2926 | 0.2593 | 0.3454 |
| 1.0451 | 1.47 | 1610 | 1.8684 | 0.2863 | 0.2571 | 0.3324 |
| 1.1294 | 1.48 | 1620 | 1.8704 | 0.2902 | 0.2569 | 0.3427 |
| 1.1671 | 1.49 | 1630 | 1.8756 | 0.2909 | 0.259 | 0.3413 |
| 1.2252 | 1.5 | 1640 | 1.8618 | 0.2937 | 0.2599 | 0.347 |
| 1.0834 | 1.51 | 1650 | 1.8776 | 0.2909 | 0.2589 | 0.3416 |
| 1.0417 | 1.52 | 1660 | 1.8658 | 0.2911 | 0.2592 | 0.342 |
| 1.1036 | 1.53 | 1670 | 1.8789 | 0.289 | 0.2553 | 0.343 |
| 1.1575 | 1.54 | 1680 | 1.8608 | 0.2927 | 0.2597 | 0.3452 |
| 1.058 | 1.55 | 1690 | 1.8804 | 0.2921 | 0.2585 | 0.3455 |
| 1.1251 | 1.55 | 1700 | 1.8682 | 0.2973 | 0.2637 | 0.3503 |
| 1.0818 | 1.56 | 1710 | 1.8800 | 0.2887 | 0.2544 | 0.3432 |
| 1.1346 | 1.57 | 1720 | 1.8577 | 0.289 | 0.2564 | 0.3404 |
| 1.1024 | 1.58 | 1730 | 1.8681 | 0.2946 | 0.2608 | 0.3482 |
| 1.0823 | 1.59 | 1740 | 1.8603 | 0.2908 | 0.2584 | 0.342 |
| 1.0562 | 1.6 | 1750 | 1.8670 | 0.2931 | 0.2584 | 0.3484 |
| 1.1128 | 1.61 | 1760 | 1.8576 | 0.2926 | 0.2603 | 0.3439 |
| 1.0769 | 1.62 | 1770 | 1.8709 | 0.2902 | 0.2568 | 0.3434 |
| 1.0422 | 1.63 | 1780 | 1.8597 | 0.2911 | 0.2587 | 0.3425 |
| 1.1912 | 1.64 | 1790 | 1.8648 | 0.2911 | 0.2574 | 0.3448 |
| 1.1349 | 1.65 | 1800 | 1.8667 | 0.2933 | 0.2606 | 0.3453 |
| 1.1195 | 1.66 | 1810 | 1.8684 | 0.2899 | 0.2568 | 0.3422 |
| 1.1186 | 1.66 | 1820 | 1.8581 | 0.2908 | 0.2579 | 0.3434 |
| 1.0795 | 1.67 | 1830 | 1.8639 | 0.2907 | 0.2561 | 0.3462 |
| 1.1394 | 1.68 | 1840 | 1.8467 | 0.2929 | 0.2602 | 0.3446 |
| 1.0743 | 1.69 | 1850 | 1.8682 | 0.291 | 0.2585 | 0.3428 |
| 1.0954 | 1.7 | 1860 | 1.8504 | 0.2928 | 0.2603 | 0.3445 |
| 1.0938 | 1.71 | 1870 | 1.8604 | 0.2916 | 0.2589 | 0.3436 |
| 1.1093 | 1.72 | 1880 | 1.8427 | 0.2897 | 0.2581 | 0.3398 |
| 1.1399 | 1.73 | 1890 | 1.8715 | 0.2891 | 0.2561 | 0.3422 |
| 1.1574 | 1.74 | 1900 | 1.8448 | 0.2893 | 0.2568 | 0.3409 |
| 1.1244 | 1.75 | 1910 | 1.8594 | 0.2927 | 0.2597 | 0.3453 |
| 1.1205 | 1.76 | 1920 | 1.8492 | 0.2922 | 0.2606 | 0.3425 |
| 1.1218 | 1.77 | 1930 | 1.8547 | 0.2906 | 0.2591 | 0.3401 |
| 1.1208 | 1.77 | 1940 | 1.8605 | 0.2924 | 0.2588 | 0.3459 |
| 1.0983 | 1.78 | 1950 | 1.8425 | 0.2933 | 0.2611 | 0.3442 |
| 1.1992 | 1.79 | 1960 | 1.8587 | 0.2907 | 0.2565 | 0.3455 |
| 1.1724 | 1.8 | 1970 | 1.8413 | 0.2909 | 0.2576 | 0.3435 |
| 1.1344 | 1.81 | 1980 | 1.8494 | 0.2904 | 0.2583 | 0.3413 |
| 1.1469 | 1.82 | 1990 | 1.8463 | 0.2911 | 0.2581 | 0.3437 |
| 1.1491 | 1.83 | 2000 | 1.8530 | 0.2905 | 0.2568 | 0.3441 |
| 1.0913 | 1.84 | 2010 | 1.8493 | 0.2913 | 0.258 | 0.3443 |
| 1.1298 | 1.85 | 2020 | 1.8465 | 0.2905 | 0.2573 | 0.3433 |
| 1.1202 | 1.86 | 2030 | 1.8488 | 0.2892 | 0.256 | 0.3419 |
| 1.1439 | 1.87 | 2040 | 1.8494 | 0.2911 | 0.2584 | 0.3428 |
| 1.0328 | 1.87 | 2050 | 1.8469 | 0.2907 | 0.2582 | 0.3423 |
| 1.1347 | 1.88 | 2060 | 1.8426 | 0.29 | 0.2576 | 0.341 |
| 1.094 | 1.89 | 2070 | 1.8480 | 0.2905 | 0.2577 | 0.3425 |
| 1.1201 | 1.9 | 2080 | 1.8542 | 0.2896 | 0.2568 | 0.3415 |
| 1.1475 | 1.91 | 2090 | 1.8520 | 0.29 | 0.2574 | 0.3416 |
| 1.0793 | 1.92 | 2100 | 1.8506 | 0.2897 | 0.2569 | 0.3414 |
| 1.0669 | 1.93 | 2110 | 1.8484 | 0.2907 | 0.2577 | 0.3426 |
| 1.1276 | 1.94 | 2120 | 1.8487 | 0.2904 | 0.2573 | 0.3427 |
| 1.0902 | 1.95 | 2130 | 1.8487 | 0.2904 | 0.2575 | 0.3423 |
| 1.1449 | 1.96 | 2140 | 1.8490 | 0.2898 | 0.2569 | 0.3419 |
| 1.1142 | 1.97 | 2150 | 1.8505 | 0.29 | 0.2569 | 0.3424 |
| 1.1475 | 1.98 | 2160 | 1.8501 | 0.2895 | 0.2561 | 0.3424 |
| 1.0663 | 1.98 | 2170 | 1.8485 | 0.2906 | 0.2571 | 0.3434 |
| 1.1454 | 1.99 | 2180 | 1.8475 | 0.2907 | 0.2573 | 0.3435 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.1+cu121
- Datasets 2.14.5
- Tokenizers 0.15.1
|
ringorsolya/Emotion_RoBERTa_hu | ringorsolya | 2024-07-02T13:21:48Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2024-07-02T13:21:48Z | ---
license: apache-2.0
---
|
mradermacher/Swallow-13b-instruct-v0.1-i1-GGUF | mradermacher | 2024-07-02T18:15:51Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"en",
"ja",
"base_model:tokyotech-llm/Swallow-13b-instruct-v0.1",
"license:llama2",
"endpoints_compatible",
"region:us"
] | null | 2024-07-02T13:22:56Z | ---
base_model: tokyotech-llm/Swallow-13b-instruct-v0.1
language:
- en
- ja
library_name: transformers
license: llama2
model_type: llama
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/tokyotech-llm/Swallow-13b-instruct-v0.1
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Swallow-13b-instruct-v0.1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Swallow-13b-instruct-v0.1-i1-GGUF/resolve/main/Swallow-13b-instruct-v0.1.i1-IQ1_S.gguf) | i1-IQ1_S | 3.1 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Swallow-13b-instruct-v0.1-i1-GGUF/resolve/main/Swallow-13b-instruct-v0.1.i1-IQ1_M.gguf) | i1-IQ1_M | 3.3 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Swallow-13b-instruct-v0.1-i1-GGUF/resolve/main/Swallow-13b-instruct-v0.1.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 3.7 | |
| [GGUF](https://huggingface.co/mradermacher/Swallow-13b-instruct-v0.1-i1-GGUF/resolve/main/Swallow-13b-instruct-v0.1.i1-IQ2_XS.gguf) | i1-IQ2_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/Swallow-13b-instruct-v0.1-i1-GGUF/resolve/main/Swallow-13b-instruct-v0.1.i1-IQ2_S.gguf) | i1-IQ2_S | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/Swallow-13b-instruct-v0.1-i1-GGUF/resolve/main/Swallow-13b-instruct-v0.1.i1-IQ2_M.gguf) | i1-IQ2_M | 4.7 | |
| [GGUF](https://huggingface.co/mradermacher/Swallow-13b-instruct-v0.1-i1-GGUF/resolve/main/Swallow-13b-instruct-v0.1.i1-Q2_K.gguf) | i1-Q2_K | 5.0 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Swallow-13b-instruct-v0.1-i1-GGUF/resolve/main/Swallow-13b-instruct-v0.1.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 5.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Swallow-13b-instruct-v0.1-i1-GGUF/resolve/main/Swallow-13b-instruct-v0.1.i1-IQ3_XS.gguf) | i1-IQ3_XS | 5.5 | |
| [GGUF](https://huggingface.co/mradermacher/Swallow-13b-instruct-v0.1-i1-GGUF/resolve/main/Swallow-13b-instruct-v0.1.i1-IQ3_S.gguf) | i1-IQ3_S | 5.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Swallow-13b-instruct-v0.1-i1-GGUF/resolve/main/Swallow-13b-instruct-v0.1.i1-Q3_K_S.gguf) | i1-Q3_K_S | 5.8 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Swallow-13b-instruct-v0.1-i1-GGUF/resolve/main/Swallow-13b-instruct-v0.1.i1-IQ3_M.gguf) | i1-IQ3_M | 6.2 | |
| [GGUF](https://huggingface.co/mradermacher/Swallow-13b-instruct-v0.1-i1-GGUF/resolve/main/Swallow-13b-instruct-v0.1.i1-Q3_K_M.gguf) | i1-Q3_K_M | 6.5 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Swallow-13b-instruct-v0.1-i1-GGUF/resolve/main/Swallow-13b-instruct-v0.1.i1-Q3_K_L.gguf) | i1-Q3_K_L | 7.1 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Swallow-13b-instruct-v0.1-i1-GGUF/resolve/main/Swallow-13b-instruct-v0.1.i1-IQ4_XS.gguf) | i1-IQ4_XS | 7.1 | |
| [GGUF](https://huggingface.co/mradermacher/Swallow-13b-instruct-v0.1-i1-GGUF/resolve/main/Swallow-13b-instruct-v0.1.i1-Q4_0.gguf) | i1-Q4_0 | 7.6 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Swallow-13b-instruct-v0.1-i1-GGUF/resolve/main/Swallow-13b-instruct-v0.1.i1-Q4_K_S.gguf) | i1-Q4_K_S | 7.6 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Swallow-13b-instruct-v0.1-i1-GGUF/resolve/main/Swallow-13b-instruct-v0.1.i1-Q4_K_M.gguf) | i1-Q4_K_M | 8.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Swallow-13b-instruct-v0.1-i1-GGUF/resolve/main/Swallow-13b-instruct-v0.1.i1-Q5_K_S.gguf) | i1-Q5_K_S | 9.2 | |
| [GGUF](https://huggingface.co/mradermacher/Swallow-13b-instruct-v0.1-i1-GGUF/resolve/main/Swallow-13b-instruct-v0.1.i1-Q5_K_M.gguf) | i1-Q5_K_M | 9.4 | |
| [GGUF](https://huggingface.co/mradermacher/Swallow-13b-instruct-v0.1-i1-GGUF/resolve/main/Swallow-13b-instruct-v0.1.i1-Q6_K.gguf) | i1-Q6_K | 10.9 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his hardware for calculating the imatrix for these quants.
<!-- end -->
|
mikedata/RL-PPO-LunarLanvderV2_second | mikedata | 2024-07-02T13:25:14Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2024-07-02T13:24:55Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 274.68 +/- 22.72
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
```python
# Minimal load sketch (the checkpoint filename below is an assumption --
# check the repo's "Files and versions" tab for the actual name):
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

checkpoint = load_from_hub(
    repo_id="mikedata/RL-PPO-LunarLanvderV2_second",
    filename="ppo-LunarLander-v2.zip",  # hypothetical filename
)
model = PPO.load(checkpoint)
```
|
Cris-s/Unit1-PPO-LunarLander-v2 | Cris-s | 2024-07-02T13:28:07Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2024-07-02T13:26:04Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 250.61 +/- 15.24
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
|
NikolayKozloff/Viking-13B-Q6_K-GGUF | NikolayKozloff | 2024-07-02T13:27:42Z | 0 | 1 | null | [
"gguf",
"llama-cpp",
"gguf-my-repo",
"text-generation-inference",
"fi",
"en",
"da",
"sv",
"no",
"nn",
"is",
"dataset:cerebras/SlimPajama-627B",
"dataset:bigcode/starcoderdata",
"dataset:mc4",
"base_model:LumiOpen/Viking-13B",
"license:apache-2.0",
"region:us"
] | null | 2024-07-02T13:26:13Z | ---
base_model: LumiOpen/Viking-13B
datasets:
- cerebras/SlimPajama-627B
- bigcode/starcoderdata
- mc4
language:
- fi
- en
- da
- sv
- 'no'
- nn
- is
license: apache-2.0
tags:
- llama-cpp
- gguf-my-repo
- text-generation-inference
---
# NikolayKozloff/Viking-13B-Q6_K-GGUF
This model was converted to GGUF format from [`LumiOpen/Viking-13B`](https://huggingface.co/LumiOpen/Viking-13B) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/LumiOpen/Viking-13B) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo NikolayKozloff/Viking-13B-Q6_K-GGUF --hf-file viking-13b-q6_k.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo NikolayKozloff/Viking-13B-Q6_K-GGUF --hf-file viking-13b-q6_k.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo NikolayKozloff/Viking-13B-Q6_K-GGUF --hf-file viking-13b-q6_k.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo NikolayKozloff/Viking-13B-Q6_K-GGUF --hf-file viking-13b-q6_k.gguf -c 2048
``` |
balajin78/sxdl_bv360_lora | balajin78 | 2024-07-02T13:27:02Z | 0 | 0 | null | [
"region:us"
] | null | 2024-07-02T13:27:02Z | Entry not found |
mradermacher/Llama3-CASAudit-8B-SOL-V01-GGUF | mradermacher | 2024-07-02T13:56:29Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:jamesohe/Llama3-CASAudit-8B-SOL-V01",
"endpoints_compatible",
"region:us"
] | null | 2024-07-02T13:27:05Z | ---
base_model: jamesohe/Llama3-CASAudit-8B-SOL-V01
language:
- en
library_name: transformers
quantized_by: mradermacher
tags: []
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/jamesohe/Llama3-CASAudit-8B-SOL-V01
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Llama3-CASAudit-8B-SOL-V01-GGUF/resolve/main/Llama3-CASAudit-8B-SOL-V01.Q2_K.gguf) | Q2_K | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/Llama3-CASAudit-8B-SOL-V01-GGUF/resolve/main/Llama3-CASAudit-8B-SOL-V01.IQ3_XS.gguf) | IQ3_XS | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/Llama3-CASAudit-8B-SOL-V01-GGUF/resolve/main/Llama3-CASAudit-8B-SOL-V01.Q3_K_S.gguf) | Q3_K_S | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/Llama3-CASAudit-8B-SOL-V01-GGUF/resolve/main/Llama3-CASAudit-8B-SOL-V01.IQ3_S.gguf) | IQ3_S | 3.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Llama3-CASAudit-8B-SOL-V01-GGUF/resolve/main/Llama3-CASAudit-8B-SOL-V01.IQ3_M.gguf) | IQ3_M | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/Llama3-CASAudit-8B-SOL-V01-GGUF/resolve/main/Llama3-CASAudit-8B-SOL-V01.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Llama3-CASAudit-8B-SOL-V01-GGUF/resolve/main/Llama3-CASAudit-8B-SOL-V01.Q3_K_L.gguf) | Q3_K_L | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/Llama3-CASAudit-8B-SOL-V01-GGUF/resolve/main/Llama3-CASAudit-8B-SOL-V01.IQ4_XS.gguf) | IQ4_XS | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/Llama3-CASAudit-8B-SOL-V01-GGUF/resolve/main/Llama3-CASAudit-8B-SOL-V01.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Llama3-CASAudit-8B-SOL-V01-GGUF/resolve/main/Llama3-CASAudit-8B-SOL-V01.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Llama3-CASAudit-8B-SOL-V01-GGUF/resolve/main/Llama3-CASAudit-8B-SOL-V01.Q5_K_S.gguf) | Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/Llama3-CASAudit-8B-SOL-V01-GGUF/resolve/main/Llama3-CASAudit-8B-SOL-V01.Q5_K_M.gguf) | Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/Llama3-CASAudit-8B-SOL-V01-GGUF/resolve/main/Llama3-CASAudit-8B-SOL-V01.Q6_K.gguf) | Q6_K | 6.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Llama3-CASAudit-8B-SOL-V01-GGUF/resolve/main/Llama3-CASAudit-8B-SOL-V01.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Llama3-CASAudit-8B-SOL-V01-GGUF/resolve/main/Llama3-CASAudit-8B-SOL-V01.f16.gguf) | f16 | 16.2 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
harry85/LLM-text-generation | harry85 | 2024-07-02T13:49:11Z | 0 | 0 | null | [
"license:mit",
"region:us"
] | null | 2024-07-02T13:27:56Z | ---
license: mit
---
Welcome to the LLM text model |
qnixsynapse/Phi-3-mini-4k-instruct-Q4_K_M-GGUF | qnixsynapse | 2024-07-02T13:28:26Z | 0 | 0 | null | [
"gguf",
"nlp",
"code",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"en",
"base_model:microsoft/Phi-3-mini-4k-instruct",
"license:mit",
"region:us"
] | text-generation | 2024-07-02T13:28:15Z | ---
base_model: microsoft/Phi-3-mini-4k-instruct
language:
- en
license: mit
license_link: https://huggingface.co/microsoft/Phi-3-mini-4k-instruct/resolve/main/LICENSE
pipeline_tag: text-generation
tags:
- nlp
- code
- llama-cpp
- gguf-my-repo
inference:
parameters:
temperature: 0.0
widget:
- messages:
- role: user
content: Can you provide ways to eat combinations of bananas and dragonfruits?
---
# qnixsynapse/Phi-3-mini-4k-instruct-Q4_K_M-GGUF
This model was converted to GGUF format from [`microsoft/Phi-3-mini-4k-instruct`](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo qnixsynapse/Phi-3-mini-4k-instruct-Q4_K_M-GGUF --hf-file phi-3-mini-4k-instruct-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo qnixsynapse/Phi-3-mini-4k-instruct-Q4_K_M-GGUF --hf-file phi-3-mini-4k-instruct-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo qnixsynapse/Phi-3-mini-4k-instruct-Q4_K_M-GGUF --hf-file phi-3-mini-4k-instruct-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo qnixsynapse/Phi-3-mini-4k-instruct-Q4_K_M-GGUF --hf-file phi-3-mini-4k-instruct-q4_k_m.gguf -c 2048
```
|
habulaj/9882774336 | habulaj | 2024-07-02T13:28:29Z | 0 | 0 | null | [
"region:us"
] | null | 2024-07-02T13:28:27Z | Entry not found |
liminerity/Bitnet-Mistral.0.2-330m-v0.2-grokfast-v2 | liminerity | 2024-07-02T14:27:25Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-07-02T13:29:03Z | Entry not found |
truvideo/T5-Corrector | truvideo | 2024-07-02T20:41:35Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:t5-small",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text2text-generation | 2024-07-02T13:29:16Z | ---
license: apache-2.0
tags:
- generated_from_trainer
base_model: t5-small
model-index:
- name: T5-Corrector
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# T5-Corrector
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0112
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 100
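The hyperparameter list above maps naturally onto `transformers.TrainingArguments` field names; a sketch as a plain dict, with the key names assumed to mirror that API:

```python
# The reported hyperparameters, keyed by (assumed) TrainingArguments field names.
training_config = {
    "learning_rate": 5e-5,
    "per_device_train_batch_size": 32,
    "per_device_eval_batch_size": 32,
    "seed": 42,
    "lr_scheduler_type": "linear",
    "warmup_steps": 500,
    "num_train_epochs": 100,
}
```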
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 5.6937 | 5.0 | 100 | 2.5732 |
| 0.8759 | 10.0 | 200 | 0.5970 |
| 0.23 | 15.0 | 300 | 0.1588 |
| 0.12 | 20.0 | 400 | 0.1008 |
| 0.0766 | 25.0 | 500 | 0.0635 |
| 0.0559 | 30.0 | 600 | 0.0439 |
| 0.042 | 35.0 | 700 | 0.0344 |
| 0.0308 | 40.0 | 800 | 0.0283 |
| 0.0283 | 45.0 | 900 | 0.0237 |
| 0.0206 | 50.0 | 1000 | 0.0210 |
| 0.0159 | 55.0 | 1100 | 0.0185 |
| 0.0142 | 60.0 | 1200 | 0.0164 |
| 0.0123 | 65.0 | 1300 | 0.0145 |
| 0.0115 | 70.0 | 1400 | 0.0137 |
| 0.0107 | 75.0 | 1500 | 0.0125 |
| 0.0095 | 80.0 | 1600 | 0.0121 |
| 0.0071 | 85.0 | 1700 | 0.0115 |
| 0.011 | 90.0 | 1800 | 0.0115 |
| 0.0082 | 95.0 | 1900 | 0.0112 |
| 0.0081 | 100.0 | 2000 | 0.0112 |
### Framework versions
- Transformers 4.41.2
- Pytorch 2.1.2
- Datasets 2.19.2
- Tokenizers 0.19.1
|
Kokohahakuku/results | Kokohahakuku | 2024-07-02T13:29:17Z | 0 | 0 | null | [
"region:us"
] | null | 2024-07-02T13:29:17Z | Entry not found |
jose-bustamante/my_awesome_model | jose-bustamante | 2024-07-02T14:47:56Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-07-02T13:29:28Z | ---
license: apache-2.0
base_model: distilbert/distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: my_awesome_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_model
This model is a fine-tuned version of [distilbert/distilbert-base-uncased](https://huggingface.co/distilbert/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2328
- Accuracy: 0.9320
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.22 | 1.0 | 1563 | 0.2134 | 0.9181 |
| 0.1404 | 2.0 | 3126 | 0.2328 | 0.9320 |
### Framework versions
- Transformers 4.41.2
- Pytorch 2.3.0+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1
|
nm-testing/TinyLlama-1.1B-compressed-tensors-kv-cache-scheme | nm-testing | 2024-07-02T15:28:49Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-07-02T13:31:25Z | Entry not found |
MarkBW/sarang-xl | MarkBW | 2024-07-02T13:31:47Z | 0 | 0 | diffusers | [
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"template:sd-lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"region:us"
] | text-to-image | 2024-07-02T13:31:37Z | ---
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
- template:sd-lora
widget:
- text: >-
sarang, <lora:sarang-XL:1.1>a woman wearing Cardigan and relaxed-fit jeans,
busy street, smiling
parameters:
negative_prompt: Bad quality, black and white, dark, 3d, people, caucasian
output:
url: images/00068-92462926.png
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: sarang
---
# sarang-xl
<Gallery />
## Model description
Creates the likeness of a K-Pop singer. Use your favorite realistic model and keep your weight between 0.8 and 1.0
## Trigger words
You should use `sarang` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/MarkBW/sarang-xl/tree/main) them in the Files & versions tab.
|
habulaj/260396231182 | habulaj | 2024-07-02T13:31:43Z | 0 | 0 | null | [
"region:us"
] | null | 2024-07-02T13:31:40Z | Entry not found |
moli2211/phi-3-lora | moli2211 | 2024-07-02T13:42:59Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"phi3",
"text-generation",
"conversational",
"custom_code",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-07-02T13:33:05Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
zaanind/mt5si | zaanind | 2024-07-02T17:03:57Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"mt5",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2024-07-02T13:33:45Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
NikolayKozloff/Viking-13B-Q5_K_M-GGUF | NikolayKozloff | 2024-07-02T13:38:32Z | 0 | 1 | null | [
"gguf",
"llama-cpp",
"gguf-my-repo",
"text-generation-inference",
"fi",
"en",
"da",
"sv",
"no",
"nn",
"is",
"dataset:cerebras/SlimPajama-627B",
"dataset:bigcode/starcoderdata",
"dataset:mc4",
"base_model:LumiOpen/Viking-13B",
"license:apache-2.0",
"region:us"
] | null | 2024-07-02T13:34:58Z | ---
base_model: LumiOpen/Viking-13B
datasets:
- cerebras/SlimPajama-627B
- bigcode/starcoderdata
- mc4
language:
- fi
- en
- da
- sv
- 'no'
- nn
- is
license: apache-2.0
tags:
- llama-cpp
- gguf-my-repo
- text-generation-inference
---
# NikolayKozloff/Viking-13B-Q5_K_M-GGUF
This model was converted to GGUF format from [`LumiOpen/Viking-13B`](https://huggingface.co/LumiOpen/Viking-13B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/LumiOpen/Viking-13B) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo NikolayKozloff/Viking-13B-Q5_K_M-GGUF --hf-file viking-13b-q5_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo NikolayKozloff/Viking-13B-Q5_K_M-GGUF --hf-file viking-13b-q5_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any hardware-specific flags (for example, `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo NikolayKozloff/Viking-13B-Q5_K_M-GGUF --hf-file viking-13b-q5_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo NikolayKozloff/Viking-13B-Q5_K_M-GGUF --hf-file viking-13b-q5_k_m.gguf -c 2048
``` |
stojchet/08ef8b4ed3918a09aeafed5d08385634 | stojchet | 2024-07-02T21:55:44Z | 0 | 0 | peft | [
"peft",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"dataset:generator",
"base_model:deepseek-ai/deepseek-coder-1.3b-base",
"license:other",
"region:us"
] | null | 2024-07-02T13:38:03Z | ---
base_model: deepseek-ai/deepseek-coder-1.3b-base
datasets:
- generator
library_name: peft
license: other
tags:
- trl
- sft
- generated_from_trainer
model-index:
- name: 08ef8b4ed3918a09aeafed5d08385634
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/stojchets/huggingface/runs/f4khasya)
# 08ef8b4ed3918a09aeafed5d08385634
This model is a fine-tuned version of [deepseek-ai/deepseek-coder-1.3b-base](https://huggingface.co/deepseek-ai/deepseek-coder-1.3b-base) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2408
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1.41e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.0354 | 1.0 | 1 | 1.2408 |
### Framework versions
- PEFT 0.10.0
- Transformers 4.43.0.dev0
- Pytorch 2.2.2+cu121
- Datasets 2.19.2
- Tokenizers 0.19.1 |
Brunomaiaagustini1991/A | Brunomaiaagustini1991 | 2024-07-02T13:38:09Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2024-07-02T13:38:08Z | ---
license: apache-2.0
---
|
manbeast3b/ZZZZZZZZdriver134 | manbeast3b | 2024-07-02T13:40:57Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"stablelm",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-07-02T13:38:25Z | Entry not found |
ledigajobb/unified_skill_ner_echo | ledigajobb | 2024-07-02T13:38:55Z | 0 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"token-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2024-07-02T13:38:27Z | ---
tags:
- generated_from_trainer
model-index:
- name: unified_skill_ner_echo
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# unified_skill_ner_echo
This model was trained from scratch on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 4
- eval_batch_size: 16
- seed: 123
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Framework versions
- Transformers 4.34.0
- Pytorch 2.3.0+cu121
- Datasets 2.14.7
- Tokenizers 0.14.1
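The card does not document the label scheme this token-classification model emits. As an illustration of how token-level predictions from a skill-NER model are typically post-processed, here is a minimal sketch assuming standard BIO tags; the `B-SKILL`/`I-SKILL` label names are hypothetical, not taken from this repository.

```python
def merge_bio_spans(tokens, labels):
    """Merge token-level BIO labels (e.g. B-SKILL / I-SKILL / O) into (type, text) spans."""
    spans = []
    current_type, current_tokens = None, []
    for token, label in zip(tokens, labels):
        if label.startswith("B-"):
            # A B- tag closes any open span and starts a new one.
            if current_type is not None:
                spans.append((current_type, " ".join(current_tokens)))
            current_type, current_tokens = label[2:], [token]
        elif label.startswith("I-") and current_type == label[2:]:
            # An I- tag of the same type extends the open span.
            current_tokens.append(token)
        else:
            # "O" or an inconsistent I- tag closes any open span.
            if current_type is not None:
                spans.append((current_type, " ".join(current_tokens)))
            current_type, current_tokens = None, []
    if current_type is not None:
        spans.append((current_type, " ".join(current_tokens)))
    return spans
```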
|
InderV94/type_inference_TD | InderV94 | 2024-07-02T13:38:54Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"gemma",
"trl",
"en",
"base_model:unsloth/gemma-2b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-07-02T13:38:47Z | ---
base_model: unsloth/gemma-2b-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- gemma
- trl
---
# Uploaded model
- **Developed by:** InderV94
- **License:** apache-2.0
- **Finetuned from model :** unsloth/gemma-2b-bnb-4bit
This gemma model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
ferrazzipietro/Llama-2-7b-chat-hfspecialTkn_en.layer1_NoQuant_16_16_0.02_8 | ferrazzipietro | 2024-07-02T13:39:19Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-07-02T13:39:10Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
ahmad543/horror_bandersnatch | ahmad543 | 2024-07-02T13:41:44Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-07-02T13:39:35Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
lazed-outt-for-real24/jm | lazed-outt-for-real24 | 2024-07-02T13:41:17Z | 0 | 0 | null | [
"license:openrail",
"region:us"
] | null | 2024-07-02T13:39:49Z | ---
license: openrail
---
|
impossibleexchange/insertsomethingwitty | impossibleexchange | 2024-07-02T13:39:57Z | 0 | 0 | null | [
"license:mit",
"region:us"
] | null | 2024-07-02T13:39:57Z | ---
license: mit
---
|
Brunomaiaagustini1991/Abc | Brunomaiaagustini1991 | 2024-07-02T13:40:00Z | 0 | 0 | null | [
"license:artistic-2.0",
"region:us"
] | null | 2024-07-02T13:40:00Z | ---
license: artistic-2.0
---
|
ProfEngel/seradeGM0.1_Mistral0.3 | ProfEngel | 2024-07-02T18:50:38Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gguf",
"mistral",
"license:apache-2.0",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-07-02T13:40:44Z | ---
license: apache-2.0
---
|
stablediffusionapi/landscape-realistic-pro | stablediffusionapi | 2024-07-02T13:53:36Z | 0 | 0 | diffusers | [
"diffusers",
"modelslab.com",
"stable-diffusion-api",
"text-to-image",
"ultra-realistic",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2024-07-02T13:40:55Z | ---
license: creativeml-openrail-m
tags:
- modelslab.com
- stable-diffusion-api
- text-to-image
- ultra-realistic
pinned: true
---
# API Inference

## Get API Key
Get API key from [ModelsLab API](http://modelslab.com), No Payment needed.
Replace Key in below code, change **model_id** to "landscape-realistic-pro"
Coding in PHP/Node/Java etc? Have a look at docs for more code examples: [View docs](https://docs.modelslab.com)
Try model for free: [Generate Images](https://modelslab.com/models/landscape-realistic-pro)
Model link: [View model](https://modelslab.com/models/landscape-realistic-pro)
View all models: [View Models](https://modelslab.com/models)
```python
import requests
import json

url = "https://modelslab.com/api/v6/images/text2img"

payload = json.dumps({
    "key": "your_api_key",
    "model_id": "landscape-realistic-pro",
    "prompt": "ultra realistic close up portrait ((beautiful pale cyberpunk female with heavy black eyeliner)), blue eyes, shaved side haircut, hyper detail, cinematic lighting, magic neon, dark red city, Canon EOS R3, nikon, f/1.4, ISO 200, 1/160s, 8K, RAW, unedited, symmetrical balance, in-frame, 8K",
    "negative_prompt": "painting, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, deformed, ugly, blurry, bad anatomy, bad proportions, extra limbs, cloned face, skinny, glitchy, double torso, extra arms, extra hands, mangled fingers, missing lips, ugly face, distorted face, extra legs, anime",
    "width": "512",
    "height": "512",
    "samples": "1",
    "num_inference_steps": "30",
    "safety_checker": "no",
    "enhance_prompt": "yes",
    "seed": None,
    "guidance_scale": 7.5,
    "multi_lingual": "no",
    "panorama": "no",
    "self_attention": "no",
    "upscale": "no",
    "embeddings": "embeddings_model_id",
    "lora": "lora_model_id",
    "webhook": None,
    "track_id": None
})

headers = {
    'Content-Type': 'application/json'
}

response = requests.request("POST", url, headers=headers, data=payload)

print(response.text)
```
> Use this coupon code to get 25% off **DMGG0RBN** |
pastel1010/texp-2000 | pastel1010 | 2024-07-02T14:37:53Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-07-02T13:41:07Z | Entry not found |
Detsutut/Igea-1B-qa-lora | Detsutut | 2024-07-02T13:41:48Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"mistral",
"trl",
"en",
"base_model:Detsutut/Igea-1B-v0.0.1",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-07-02T13:41:40Z | ---
base_model: Detsutut/Igea-1B-v0.0.1
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
---
# Uploaded model
- **Developed by:** Detsutut
- **License:** apache-2.0
- **Finetuned from model:** Detsutut/Igea-1B-v0.0.1
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
irfansayed/firstmodel | irfansayed | 2024-07-02T13:42:20Z | 0 | 0 | null | [
"license:mit",
"region:us"
] | null | 2024-07-02T13:42:20Z | ---
license: mit
---
|
styalai/XT-60M-v0.1 | styalai | 2024-07-02T15:28:07Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"pytorch_model_hub_mixin",
"model_hub_mixin",
"endpoints_compatible",
"region:us"
] | null | 2024-07-02T13:42:28Z | ---
tags:
- pytorch_model_hub_mixin
- model_hub_mixin
---
This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration:
- Library: [More Information Needed]
- Docs: [More Information Needed] |
ledigajobb/unified_en_ner_echo | ledigajobb | 2024-07-02T13:44:26Z | 0 | 0 | transformers | [
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2024-07-02T13:43:19Z | ---
tags:
- generated_from_trainer
model-index:
- name: unified_en_ner_echo
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# unified_en_ner_echo
This model was trained from scratch on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 123
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
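The hyperparameters above can be expressed as 🤗 `TrainingArguments` keyword arguments. The exact training script is not published, so the mapping below is a sketch based on standard `Trainer` usage; `output_dir` in the commented example is a placeholder, not taken from this card.

```python
# The hyperparameters listed above, as a plain dict of TrainingArguments kwargs.
hyperparams = dict(
    learning_rate=4e-05,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=123,
    adam_beta1=0.9,      # Adam betas=(0.9, 0.999)
    adam_beta2=0.999,
    adam_epsilon=1e-08,
    lr_scheduler_type="linear",
    num_train_epochs=2,
)

# Typically consumed as, e.g.:
# from transformers import TrainingArguments, Trainer
# args = TrainingArguments(output_dir="unified_en_ner_echo", **hyperparams)
# trainer = Trainer(model=model, args=args, ...)
```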
### Framework versions
- Transformers 4.34.0
- Pytorch 2.3.0+cu121
- Datasets 2.14.7
- Tokenizers 0.14.1
|
yemen2016/memobert3_NC_01 | yemen2016 | 2024-07-02T13:53:25Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"base_model:MiMe-MeMo/MeMo-BERT-03",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-07-02T13:44:11Z | ---
base_model: MiMe-MeMo/MeMo-BERT-03
tags:
- generated_from_trainer
model-index:
- name: memobert3_NC_01
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# memobert3_NC_01
This model is a fine-tuned version of [MiMe-MeMo/MeMo-BERT-03](https://huggingface.co/MiMe-MeMo/MeMo-BERT-03) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6561
- F1-score: 0.6455
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1-score |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 37 | 0.6982 | 0.4313 |
| No log | 2.0 | 74 | 0.6977 | 0.3910 |
| No log | 3.0 | 111 | 0.6812 | 0.5938 |
| No log | 4.0 | 148 | 0.6833 | 0.5853 |
| No log | 5.0 | 185 | 0.6705 | 0.5887 |
| No log | 6.0 | 222 | 0.6679 | 0.5607 |
| No log | 7.0 | 259 | 0.6589 | 0.5887 |
| No log | 8.0 | 296 | 0.6583 | 0.6061 |
| No log | 9.0 | 333 | 0.6561 | 0.6455 |
| No log | 10.0 | 370 | 0.6590 | 0.6455 |
### Framework versions
- Transformers 4.41.2
- Pytorch 2.3.0+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1
|
Davlan/naija-bert-base | Davlan | 2024-07-02T14:05:33Z | 0 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"fill-mask",
"en",
"yo",
"ig",
"ha",
"pcm",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2024-07-02T13:44:22Z | ---
license: apache-2.0
language:
- en
- yo
- ig
- ha
- pcm
--- |
albrigom/detr-resnet-50-hardhat-finetuned | albrigom | 2024-07-02T17:45:03Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"detr",
"object-detection",
"endpoints_compatible",
"region:us"
] | object-detection | 2024-07-02T13:44:37Z | Entry not found |
yhavinga/Boreas-Qwen2-7B-chat-sft-dpo | yhavinga | 2024-07-02T13:44:56Z | 0 | 0 | null | [
"region:us"
] | null | 2024-07-02T13:44:56Z | Entry not found |
NikolayKozloff/Viking-13B-Q5_K_S-GGUF | NikolayKozloff | 2024-07-02T13:47:31Z | 0 | 1 | null | [
"gguf",
"llama-cpp",
"gguf-my-repo",
"text-generation-inference",
"fi",
"en",
"da",
"sv",
"no",
"nn",
"is",
"dataset:cerebras/SlimPajama-627B",
"dataset:bigcode/starcoderdata",
"dataset:mc4",
"base_model:LumiOpen/Viking-13B",
"license:apache-2.0",
"region:us"
] | null | 2024-07-02T13:45:53Z | ---
base_model: LumiOpen/Viking-13B
datasets:
- cerebras/SlimPajama-627B
- bigcode/starcoderdata
- mc4
language:
- fi
- en
- da
- sv
- 'no'
- nn
- is
license: apache-2.0
tags:
- llama-cpp
- gguf-my-repo
- text-generation-inference
---
# NikolayKozloff/Viking-13B-Q5_K_S-GGUF
This model was converted to GGUF format from [`LumiOpen/Viking-13B`](https://huggingface.co/LumiOpen/Viking-13B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/LumiOpen/Viking-13B) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo NikolayKozloff/Viking-13B-Q5_K_S-GGUF --hf-file viking-13b-q5_k_s.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo NikolayKozloff/Viking-13B-Q5_K_S-GGUF --hf-file viking-13b-q5_k_s.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (e.g. `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo NikolayKozloff/Viking-13B-Q5_K_S-GGUF --hf-file viking-13b-q5_k_s.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo NikolayKozloff/Viking-13B-Q5_K_S-GGUF --hf-file viking-13b-q5_k_s.gguf -c 2048
``` |
bad49wolf/mistral-v3.0-darija-base-lora | bad49wolf | 2024-07-02T13:47:21Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"mistral",
"trl",
"en",
"base_model:unsloth/mistral-7b-v0.3-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-07-02T13:45:59Z | ---
base_model: unsloth/mistral-7b-v0.3-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
---
# Uploaded model
- **Developed by:** bad49wolf
- **License:** apache-2.0
- **Finetuned from model:** unsloth/mistral-7b-v0.3-bnb-4bit
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
woweenie/pretty-girls-next-door-sd3-lora-v1 | woweenie | 2024-07-02T13:57:46Z | 0 | 0 | diffusers | [
"diffusers",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"simpletuner",
"lora",
"template:sd-lora",
"not-for-all-audiences",
"base_model:stabilityai/stable-diffusion-3-medium-diffusers",
"license:creativeml-openrail-m",
"region:us"
] | text-to-image | 2024-07-02T13:46:02Z | ---
license: creativeml-openrail-m
base_model: stabilityai/stable-diffusion-3-medium-diffusers
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- simpletuner
- lora
- template:sd-lora
- not-for-all-audiences
inference: true
widget:
- text: unconditional (blank prompt)
parameters:
negative_prompt: blurry, cropped, ugly
output:
url: ./assets/image_0_0.png
- text: a photo of a naked woman with large breasts
parameters:
negative_prompt: blurry, cropped, ugly
output:
url: ./assets/image_1_0.png
---
# sdxl-training
This is a LoRA derived from [stabilityai/stable-diffusion-3-medium-diffusers](https://huggingface.co/stabilityai/stable-diffusion-3-medium-diffusers).
The main validation prompt used during training was:
```
a naked woman, front view, standing in a room. large breasts, pretty face, pussy. photo, RAW candid cinema, 16mm, color graded portra 400 film, remarkable color, ultra realistic, dry skin, shot with cinematic camera
```
Negative prompt:
```
ugly 3d render, deformed corpse, brushstrokes, painting
```
## Validation settings
- CFG: `4.0`
- CFG Rescale: `0.0`
- Steps: `40`
- Sampler: `euler`
- Seed: `6`
- Resolution: `816x1280`
Note: The validation settings are not necessarily the same as the [training settings](#training-settings).
You can find some example images in the following gallery:
<Gallery />
The text encoder **was not** trained.
You may reuse the base model text encoder for inference.
## Training settings
- Training epochs: 1072
- Training steps: 21450
- Learning rate: 0.0002
- Effective batch size: 20
- Micro-batch size: 5
- Gradient accumulation steps: 4
- Number of GPUs: 1
- Prediction type: epsilon
- Rescaled betas zero SNR: False
- Optimizer: AdamW, stochastic bf16
- Precision: Pure BF16
- Xformers: Enabled
- LoRA Rank: 64
- LoRA Alpha: 64.0
- LoRA Dropout: 0.1
- LoRA initialisation style: default
## Datasets
### curated3
- Repeats: 0
- Total number of images: 400
- Total number of aspect buckets: 1
- Resolution: 0.5 megapixels
- Cropped: False
- Crop style: None
- Crop aspect: None
|
buetnlpbio/birna-tokenizer | buetnlpbio | 2024-07-02T13:46:58Z | 0 | 0 | transformers | [
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-07-02T13:46:57Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
RichardErkhov/quantumaikr_-_llama-2-70b-fb16-korean-gguf | RichardErkhov | 2024-07-03T01:05:08Z | 0 | 0 | null | [
"gguf",
"region:us"
] | null | 2024-07-02T13:46:58Z | Entry not found |
mradermacher/Yi-9B-200K-i1-GGUF | mradermacher | 2024-07-02T17:24:00Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:01-ai/Yi-9B-200K",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-07-02T13:47:12Z | ---
base_model: 01-ai/Yi-9B-200K
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/01-ai/Yi-9B-200K
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Yi-9B-200K-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Yi-9B-200K-i1-GGUF/resolve/main/Yi-9B-200K.i1-IQ1_S.gguf) | i1-IQ1_S | 2.1 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Yi-9B-200K-i1-GGUF/resolve/main/Yi-9B-200K.i1-IQ1_M.gguf) | i1-IQ1_M | 2.3 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Yi-9B-200K-i1-GGUF/resolve/main/Yi-9B-200K.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.6 | |
| [GGUF](https://huggingface.co/mradermacher/Yi-9B-200K-i1-GGUF/resolve/main/Yi-9B-200K.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.8 | |
| [GGUF](https://huggingface.co/mradermacher/Yi-9B-200K-i1-GGUF/resolve/main/Yi-9B-200K.i1-IQ2_S.gguf) | i1-IQ2_S | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/Yi-9B-200K-i1-GGUF/resolve/main/Yi-9B-200K.i1-IQ2_M.gguf) | i1-IQ2_M | 3.2 | |
| [GGUF](https://huggingface.co/mradermacher/Yi-9B-200K-i1-GGUF/resolve/main/Yi-9B-200K.i1-Q2_K.gguf) | i1-Q2_K | 3.5 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Yi-9B-200K-i1-GGUF/resolve/main/Yi-9B-200K.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 3.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Yi-9B-200K-i1-GGUF/resolve/main/Yi-9B-200K.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/Yi-9B-200K-i1-GGUF/resolve/main/Yi-9B-200K.i1-Q3_K_S.gguf) | i1-Q3_K_S | 4.0 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Yi-9B-200K-i1-GGUF/resolve/main/Yi-9B-200K.i1-IQ3_S.gguf) | i1-IQ3_S | 4.0 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Yi-9B-200K-i1-GGUF/resolve/main/Yi-9B-200K.i1-IQ3_M.gguf) | i1-IQ3_M | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/Yi-9B-200K-i1-GGUF/resolve/main/Yi-9B-200K.i1-Q3_K_M.gguf) | i1-Q3_K_M | 4.4 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Yi-9B-200K-i1-GGUF/resolve/main/Yi-9B-200K.i1-Q3_K_L.gguf) | i1-Q3_K_L | 4.8 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Yi-9B-200K-i1-GGUF/resolve/main/Yi-9B-200K.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.9 | |
| [GGUF](https://huggingface.co/mradermacher/Yi-9B-200K-i1-GGUF/resolve/main/Yi-9B-200K.i1-Q4_0.gguf) | i1-Q4_0 | 5.2 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Yi-9B-200K-i1-GGUF/resolve/main/Yi-9B-200K.i1-Q4_K_S.gguf) | i1-Q4_K_S | 5.2 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Yi-9B-200K-i1-GGUF/resolve/main/Yi-9B-200K.i1-Q4_K_M.gguf) | i1-Q4_K_M | 5.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Yi-9B-200K-i1-GGUF/resolve/main/Yi-9B-200K.i1-Q5_K_S.gguf) | i1-Q5_K_S | 6.2 | |
| [GGUF](https://huggingface.co/mradermacher/Yi-9B-200K-i1-GGUF/resolve/main/Yi-9B-200K.i1-Q5_K_M.gguf) | i1-Q5_K_M | 6.4 | |
| [GGUF](https://huggingface.co/mradermacher/Yi-9B-200K-i1-GGUF/resolve/main/Yi-9B-200K.i1-Q6_K.gguf) | i1-Q6_K | 7.3 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his hardware for calculating the imatrix for these quants.
<!-- end -->
|
ABHIiiii1/FineTuned-Trans-oneTomany-llama-2-7b | ABHIiiii1 | 2024-07-02T13:54:02Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-07-02T13:47:12Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
twololing1/TestingChip | twololing1 | 2024-07-02T13:50:19Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"license:unknown",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-07-02T13:47:27Z | ---
license: unknown
---
|
COPA/WL-url-text-class-electra | COPA | 2024-07-02T13:48:54Z | 0 | 0 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"electra",
"setfit",
"text-classification",
"arxiv:2209.11055",
"license:apache-2.0",
"region:us"
] | text-classification | 2024-07-02T13:48:35Z | ---
license: apache-2.0
tags:
- setfit
- sentence-transformers
- text-classification
pipeline_tag: text-classification
---
# COPA/WL-url-text-class-electra
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("COPA/WL-url-text-class-electra")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
|
yiyic/mt5_me5_atlatic_fami_32_2layers_inverter | yiyic | 2024-07-02T13:50:44Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-07-02T13:50:03Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
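The calculator's estimate boils down to energy drawn from the grid times the grid's carbon intensity. A minimal sketch of that formula (every default below is a placeholder assumption, not a measured value for this model):

```python
# Rough CO2 estimate in the spirit of the ML Impact calculator (Lacoste et al., 2019):
# energy drawn from the grid times the grid's carbon intensity. Every default
# below is a placeholder assumption, not a measured value for this model.
def co2_kg(gpu_hours, gpu_watts=300, pue=1.58, kg_co2_per_kwh=0.432):
    """Return an estimated mass of CO2eq in kilograms."""
    kwh = gpu_hours * gpu_watts / 1000 * pue  # total energy incl. datacenter overhead
    return kwh * kg_co2_per_kwh
```

Filling in the actual hardware type, hours used, and compute region above would let this placeholder be replaced with a real estimate.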
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
habulaj/4867638693 | habulaj | 2024-07-02T13:50:07Z | 0 | 0 | null | [
"region:us"
] | null | 2024-07-02T13:50:04Z | Entry not found |
nanotron/bench_cluster | nanotron | 2024-07-03T01:22:37Z | 0 | 1 | null | [
"region:us"
] | null | 2024-07-02T13:50:04Z | Entry not found |
habulaj/5700643453 | habulaj | 2024-07-02T13:50:23Z | 0 | 0 | null | [
"region:us"
] | null | 2024-07-02T13:50:16Z | Entry not found |
mradermacher/Matter-0.2-8x22B-i1-GGUF | mradermacher | 2024-07-02T23:00:21Z | 0 | 0 | transformers | [
"transformers",
"en",
"dataset:0-hero/Matter-0.2-alpha-Slim-A",
"base_model:0-hero/Matter-0.2-8x22B",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-07-02T13:52:04Z | ---
base_model: 0-hero/Matter-0.2-8x22B
datasets:
- 0-hero/Matter-0.2-alpha-Slim-A
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/0-hero/Matter-0.2-8x22B
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Matter-0.2-8x22B-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable to similarly sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [PART 1](https://huggingface.co/mradermacher/Matter-0.2-8x22B-i1-GGUF/resolve/main/Matter-0.2-8x22B.i1-Q2_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Matter-0.2-8x22B-i1-GGUF/resolve/main/Matter-0.2-8x22B.i1-Q2_K.gguf.part2of2) | i1-Q2_K | 52.2 | IQ3_XXS probably better |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his hardware for calculating the imatrix for these quants.
<!-- end -->
|
Detsutut/Igea-1B-qa-GGUF-Q8_0 | Detsutut | 2024-07-02T13:53:27Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"mistral",
"text-generation-inference",
"unsloth",
"en",
"base_model:Detsutut/Igea-1B-v0.0.1",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-07-02T13:52:20Z | ---
base_model: Detsutut/Igea-1B-v0.0.1
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- gguf
---
# Uploaded model
- **Developed by:** Detsutut
- **License:** apache-2.0
- **Finetuned from model:** Detsutut/Igea-1B-v0.0.1
This Mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
YassirFr/lora_model | YassirFr | 2024-07-02T13:53:23Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-07-02T13:52:32Z | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
base_model: unsloth/llama-3-8b-bnb-4bit
---
# Uploaded model
- **Developed by:** YassirFr
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3-8b-bnb-4bit
This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
jose-bustamante/my-awesome-model | jose-bustamante | 2024-07-02T13:53:02Z | 0 | 0 | null | [
"region:us"
] | null | 2024-07-02T13:53:02Z | Entry not found |
Detsutut/Igea-1B-qa-GGUF-Q16 | Detsutut | 2024-07-02T13:55:48Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"mistral",
"text-generation-inference",
"unsloth",
"en",
"base_model:Detsutut/Igea-1B-v0.0.1",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-07-02T13:53:47Z | ---
base_model: Detsutut/Igea-1B-v0.0.1
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- gguf
---
# Uploaded model
- **Developed by:** Detsutut
- **License:** apache-2.0
- **Finetuned from model:** Detsutut/Igea-1B-v0.0.1
This Mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Wenboz/phi-3-dpo-noise-0.4 | Wenboz | 2024-07-02T17:16:17Z | 0 | 0 | transformers | [
"transformers",
"phi3",
"text-generation",
"conversational",
"custom_code",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-07-02T13:54:17Z | Entry not found |
bigbossmonster/model | bigbossmonster | 2024-07-02T14:32:09Z | 0 | 0 | null | [
"region:us"
] | null | 2024-07-02T13:54:23Z | Entry not found |
MeCaTo/1 | MeCaTo | 2024-07-02T13:54:29Z | 0 | 0 | null | [
"license:afl-3.0",
"region:us"
] | null | 2024-07-02T13:54:29Z | ---
license: afl-3.0
---
|
NikolayKozloff/Viking-13B-Q4_K_M-GGUF | NikolayKozloff | 2024-07-02T13:56:12Z | 0 | 1 | null | [
"gguf",
"llama-cpp",
"gguf-my-repo",
"text-generation-inference",
"fi",
"en",
"da",
"sv",
"no",
"nn",
"is",
"dataset:cerebras/SlimPajama-627B",
"dataset:bigcode/starcoderdata",
"dataset:mc4",
"base_model:LumiOpen/Viking-13B",
"license:apache-2.0",
"region:us"
] | null | 2024-07-02T13:54:33Z | ---
base_model: LumiOpen/Viking-13B
datasets:
- cerebras/SlimPajama-627B
- bigcode/starcoderdata
- mc4
language:
- fi
- en
- da
- sv
- 'no'
- nn
- is
license: apache-2.0
tags:
- llama-cpp
- gguf-my-repo
- text-generation-inference
---
# NikolayKozloff/Viking-13B-Q4_K_M-GGUF
This model was converted to GGUF format from [`LumiOpen/Viking-13B`](https://huggingface.co/LumiOpen/Viking-13B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/LumiOpen/Viking-13B) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo NikolayKozloff/Viking-13B-Q4_K_M-GGUF --hf-file viking-13b-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo NikolayKozloff/Viking-13B-Q4_K_M-GGUF --hf-file viking-13b-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (for example, `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo NikolayKozloff/Viking-13B-Q4_K_M-GGUF --hf-file viking-13b-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo NikolayKozloff/Viking-13B-Q4_K_M-GGUF --hf-file viking-13b-q4_k_m.gguf -c 2048
``` |
AngeloFasciani/finetuning-sentiment-model-3000-samples | AngeloFasciani | 2024-07-02T14:23:15Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-07-02T13:55:26Z | ---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: finetuning-sentiment-model-3000-samples
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-3000-samples
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3438
- Accuracy: 0.8633
- F1: 0.8682
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
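The `linear` scheduler above decays the learning rate from its base value down to zero over training. A minimal sketch of that schedule (the `warmup_steps` parameter is an assumption; the card does not report warmup):

```python
# Sketch of the 'linear' learning-rate schedule listed above: linear decay from
# base_lr down to zero over total_steps. warmup_steps is an assumption (the
# card does not report warmup), included because the schedule commonly supports it.
def linear_lr(step, total_steps, base_lr=2e-5, warmup_steps=0):
    """Learning rate at a given optimizer step."""
    if step < warmup_steps:
        return base_lr * step / max(1, warmup_steps)                  # linear warmup
    decay_span = max(1, total_steps - warmup_steps)
    return base_lr * max(0.0, (total_steps - step) / decay_span)      # linear decay
```

With the hyperparameters above, the rate starts at 2e-05 and reaches zero on the final step of the second epoch.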
### Training results
### Framework versions
- Transformers 4.41.2
- Pytorch 2.3.0+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1
|