modelId
string | author
string | last_modified
timestamp[us, tz=UTC] | downloads
int64 | likes
int64 | library_name
string | tags
list | pipeline_tag
string | createdAt
timestamp[us, tz=UTC] | card
string |
|---|---|---|---|---|---|---|---|---|---|
ribesstefano/RuleBert-v0.4-k4
|
ribesstefano
| 2024-01-07T23:12:30Z
| 92
| 0
|
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"xlm-roberta",
"text-classification",
"Italian",
"legal ruling",
"generated_from_trainer",
"base_model:papluca/xlm-roberta-base-language-detection",
"base_model:finetune:papluca/xlm-roberta-base-language-detection",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-01-07T23:04:16Z
|
---
license: mit
base_model: papluca/xlm-roberta-base-language-detection
tags:
- Italian
- legal ruling
- generated_from_trainer
metrics:
- f1
- accuracy
model-index:
- name: ribesstefano/RuleBert-v0.4-k4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ribesstefano/RuleBert-v0.4-k4
This model is a fine-tuned version of [papluca/xlm-roberta-base-language-detection](https://huggingface.co/papluca/xlm-roberta-base-language-detection) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3517
- F1: 0.5190
- Roc Auc: 0.6864
- Accuracy: 0.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 4
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 4000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 | Roc Auc | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|:--------:|
| 0.3447 | 0.12 | 250 | 0.3402 | 0.4810 | 0.6688 | 0.0 |
| 0.3501 | 0.24 | 500 | 0.3548 | 0.4884 | 0.6786 | 0.0 |
| 0.3433 | 0.36 | 750 | 0.3596 | 0.4946 | 0.6885 | 0.0 |
| 0.3521 | 0.48 | 1000 | 0.3762 | 0.4861 | 0.6648 | 0.0 |
| 0.3466 | 0.6 | 1250 | 0.3496 | 0.4861 | 0.6648 | 0.0 |
| 0.3285 | 0.72 | 1500 | 0.3519 | 0.4861 | 0.6648 | 0.0 |
| 0.333 | 0.84 | 1750 | 0.3550 | 0.4861 | 0.6648 | 0.0 |
| 0.3268 | 0.96 | 2000 | 0.3436 | 0.5190 | 0.6864 | 0.0 |
| 0.3376 | 1.08 | 2250 | 0.3637 | 0.4978 | 0.6891 | 0.0 |
| 0.3319 | 1.19 | 2500 | 0.3459 | 0.5190 | 0.6864 | 0.0 |
| 0.3169 | 1.31 | 2750 | 0.3430 | 0.4810 | 0.6688 | 0.0 |
| 0.3293 | 1.43 | 3000 | 0.3480 | 0.4861 | 0.6648 | 0.0 |
| 0.3293 | 1.55 | 3250 | 0.3517 | 0.5190 | 0.6864 | 0.0 |
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|
ntc-ai/SDXL-LoRA-slider.double-exposure
|
ntc-ai
| 2024-01-07T23:11:10Z
| 41
| 0
|
diffusers
|
[
"diffusers",
"text-to-image",
"stable-diffusion-xl",
"lora",
"template:sd-lora",
"template:sdxl-lora",
"sdxl-sliders",
"ntcai.xyz-sliders",
"concept",
"en",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:mit",
"region:us"
] |
text-to-image
| 2024-01-07T23:11:07Z
|
---
language:
- en
thumbnail: "images/evaluate/double exposure.../double exposure_17_3.0.png"
widget:
- text: double exposure
output:
url: images/double exposure_17_3.0.png
- text: double exposure
output:
url: images/double exposure_19_3.0.png
- text: double exposure
output:
url: images/double exposure_20_3.0.png
- text: double exposure
output:
url: images/double exposure_21_3.0.png
- text: double exposure
output:
url: images/double exposure_22_3.0.png
tags:
- text-to-image
- stable-diffusion-xl
- lora
- template:sd-lora
- template:sdxl-lora
- sdxl-sliders
- ntcai.xyz-sliders
- concept
- diffusers
license: "mit"
inference: false
instance_prompt: "double exposure"
base_model: "stabilityai/stable-diffusion-xl-base-1.0"
---
# ntcai.xyz slider - double exposure (SDXL LoRA)
| Strength: -3 | Strength: 0 | Strength: 3 |
| --- | --- | --- |
| <img src="images/double exposure_17_-3.0.png" width=256 height=256 /> | <img src="images/double exposure_17_0.0.png" width=256 height=256 /> | <img src="images/double exposure_17_3.0.png" width=256 height=256 /> |
| <img src="images/double exposure_19_-3.0.png" width=256 height=256 /> | <img src="images/double exposure_19_0.0.png" width=256 height=256 /> | <img src="images/double exposure_19_3.0.png" width=256 height=256 /> |
| <img src="images/double exposure_20_-3.0.png" width=256 height=256 /> | <img src="images/double exposure_20_0.0.png" width=256 height=256 /> | <img src="images/double exposure_20_3.0.png" width=256 height=256 /> |
## Download
Weights for this model are available in Safetensors format.
## Trigger words
You can apply this LoRA with trigger words for additional effect:
```
double exposure
```
## Use in diffusers
```python
from diffusers import StableDiffusionXLPipeline
from diffusers import EulerAncestralDiscreteScheduler
import torch
pipe = StableDiffusionXLPipeline.from_single_file("https://huggingface.co/martyn/sdxl-turbo-mario-merge-top-rated/blob/main/topRatedTurboxlLCM_v10.safetensors")
pipe.to("cuda")
pipe.scheduler = EulerAncestralDiscreteScheduler.from_config(pipe.scheduler.config)
# Load the LoRA
pipe.load_lora_weights('ntc-ai/SDXL-LoRA-slider.double-exposure', weight_name='double exposure.safetensors', adapter_name="double exposure")
# Activate the LoRA
pipe.set_adapters(["double exposure"], adapter_weights=[2.0])
prompt = "medieval rich kingpin sitting in a tavern, double exposure"
negative_prompt = "nsfw"
width = 512
height = 512
num_inference_steps = 10
guidance_scale = 2
image = pipe(prompt, negative_prompt=negative_prompt, width=width, height=height, guidance_scale=guidance_scale, num_inference_steps=num_inference_steps).images[0]
image.save('result.png')
```
## Support the Patreon
If you like this model please consider [joining our Patreon](https://www.patreon.com/NTCAI).
By joining our Patreon, you'll gain access to an ever-growing library of over 920+ unique and diverse LoRAs, covering a wide range of styles and genres. You'll also receive early access to new models and updates, exclusive behind-the-scenes content, and the powerful LoRA slider creator, allowing you to craft your own custom LoRAs and experiment with endless possibilities.
Your support on Patreon will allow us to continue developing and refining new models.
## Other resources
- [CivitAI](https://civitai.com/user/ntc) - Follow ntc on Civit for even more LoRAs
- [ntcai.xyz](https://ntcai.xyz) - See ntcai.xyz to find more articles and LoRAs
|
ribesstefano/RuleBert-v0.4-k3
|
ribesstefano
| 2024-01-07T23:02:26Z
| 92
| 0
|
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"xlm-roberta",
"text-classification",
"Italian",
"legal ruling",
"generated_from_trainer",
"base_model:papluca/xlm-roberta-base-language-detection",
"base_model:finetune:papluca/xlm-roberta-base-language-detection",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-01-07T22:57:47Z
|
---
license: mit
base_model: papluca/xlm-roberta-base-language-detection
tags:
- Italian
- legal ruling
- generated_from_trainer
metrics:
- f1
- accuracy
model-index:
- name: ribesstefano/RuleBert-v0.4-k3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ribesstefano/RuleBert-v0.4-k3
This model is a fine-tuned version of [papluca/xlm-roberta-base-language-detection](https://huggingface.co/papluca/xlm-roberta-base-language-detection) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3407
- F1: 0.4872
- Roc Auc: 0.6726
- Accuracy: 0.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 4
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 4000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 | Roc Auc | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|:--------:|
| 0.3575 | 0.12 | 250 | 0.3463 | 0.5176 | 0.6948 | 0.0 |
| 0.347 | 0.24 | 500 | 0.3424 | 0.4507 | 0.6503 | 0.0714 |
| 0.347 | 0.36 | 750 | 0.3390 | 0.4507 | 0.6503 | 0.0714 |
| 0.3398 | 0.48 | 1000 | 0.3248 | 0.4872 | 0.6726 | 0.0 |
| 0.3485 | 0.6 | 1250 | 0.3322 | 0.5000 | 0.6785 | 0.0 |
| 0.3355 | 0.71 | 1500 | 0.3407 | 0.4872 | 0.6726 | 0.0 |
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|
tsunemoto/LiteLlama-460M-1T-GGUF
|
tsunemoto
| 2024-01-07T22:43:22Z
| 98
| 6
| null |
[
"gguf",
"GGUF",
"en",
"endpoints_compatible",
"region:us"
] | null | 2024-01-07T22:41:51Z
|
---
title: "LiteLlama-460M-1T Quantized in GGUF"
tags:
- GGUF
language: en
---

# Tsunemoto GGUF's of LiteLlama-460M-1T
This is a GGUF quantization of LiteLlama-460M-1T.
## Original Repo Link:
[Original Repository](https://huggingface.co/ahxt/LiteLlama-460M-1T)
## Original Model Card:
---
# LiteLlama: Reduced-Scale Llama
In this series of repos, we present an open-source reproduction of Meta AI's [LLaMa 2](https://ai.meta.com/llama/). However, with significantly reduced model sizes, [LiteLlama-460M-1T](https://huggingface.co/ahxt/LiteLlama-460M-1T) has 460M parameters trained with 1T tokens.
## Dataset and Tokenization
We train our models on part of [RedPajama](https://www.together.xyz/blog/redpajama) dataset. We use the [GPT2Tokenizer](https://huggingface.co/docs/transformers/v4.31.0/en/model_doc/gpt2#transformers.GPT2Tokenizer) to tokenize the text.
## Training Details
The model was trained with ~1T tokens (0.98T). num of tokens = steps*length*batch_size=499679*1024*192=98240888832≈0.98T.
The training curve is at this [WandB project](https://wandb.ai/ahxt/llama2_xs_460M_training_loss/reports/reduced_train_loss-23-09-05-20-25-43---Vmlldzo1MzIwNDUx?accessToken=x2ch3n30jo77p1x8y7q9js4h4d8zpjtz1tzot4xxullyefixp4jwt7au2q37k2q6).
### Using with HuggingFace Transformers
The experimental checkpoints can be directly loaded by [Transformers](https://huggingface.co/transformers/) library. The following code snippet shows how to load the our experimental model and generate text with it.
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
model_path = 'ahxt/LiteLlama-460M-1T'
model = AutoModelForCausalLM.from_pretrained(model_path)
tokenizer = AutoTokenizer.from_pretrained(model_path)
model.eval()
prompt = 'Q: What is the largest bird?\nA:'
input_ids = tokenizer(prompt, return_tensors="pt").input_ids
tokens = model.generate(input_ids, max_length=20)
print( tokenizer.decode(tokens[0].tolist(), skip_special_tokens=True) )
# Q: What is the largest bird?\nA: The largest bird is a black-headed gull.
```
## Evaluation
### We evaluate our models on the MMLU task.
| Models | #parameters |zero-shot | 5-shot |
| --- | --- | --- | --- |
| llama | 7B | 28.46 | 35.05 |
| openllama | 3B | 24.90 | 26.71 |
|TinyLlama-1.1B-step-50K-105b | 1.1B | 19.00 | 26.53 |
| LiteLlama-460M-1T | 0.46B | 21.13 | 26.39 |
### [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_ahxt__llama2_xs_460M_experimental)
| Metric | Value |
|-----------------------|---------------------------|
| Avg. | 26.65 |
| ARC (25-shot) | 24.91 |
| HellaSwag (10-shot) | 38.47 |
| MMLU (5-shot) | 26.17 |
| TruthfulQA (0-shot) | 41.59 |
| Winogrande (5-shot) | 49.88 |
| GSM8K (5-shot) | 0.0 |
| DROP (3-shot) | 5.51 |
## Contact
This model is developed by [Xiaotian Han](https://ahxt.github.io/) from Texas A&M University and released under MIT License.
|
Jorsini/distilroberta-base-finetuned-media-center
|
Jorsini
| 2024-01-07T22:39:44Z
| 173
| 0
|
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"roberta",
"fill-mask",
"generated_from_trainer",
"base_model:distilbert/distilroberta-base",
"base_model:finetune:distilbert/distilroberta-base",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2024-01-07T17:00:15Z
|
---
license: apache-2.0
base_model: distilroberta-base
tags:
- generated_from_trainer
model-index:
- name: distilroberta-base-finetuned-media-center
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilroberta-base-finetuned-media-center
This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3850
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 38 | 1.6960 |
| No log | 2.0 | 76 | 1.5674 |
| No log | 3.0 | 114 | 1.5541 |
| No log | 4.0 | 152 | 1.5729 |
| No log | 5.0 | 190 | 1.5164 |
| No log | 6.0 | 228 | 1.5343 |
| No log | 7.0 | 266 | 1.5232 |
| No log | 8.0 | 304 | 1.4354 |
| No log | 9.0 | 342 | 1.4203 |
| No log | 10.0 | 380 | 1.3526 |
| No log | 11.0 | 418 | 1.3911 |
| No log | 12.0 | 456 | 1.4522 |
| No log | 13.0 | 494 | 1.4324 |
| 1.4666 | 14.0 | 532 | 1.3606 |
| 1.4666 | 15.0 | 570 | 1.3669 |
| 1.4666 | 16.0 | 608 | 1.3420 |
| 1.4666 | 17.0 | 646 | 1.3544 |
| 1.4666 | 18.0 | 684 | 1.3346 |
| 1.4666 | 19.0 | 722 | 1.3747 |
| 1.4666 | 20.0 | 760 | 1.3850 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|
articblue/ppo-Pyramids
|
articblue
| 2024-01-07T22:38:42Z
| 1
| 0
|
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Pyramids",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Pyramids",
"region:us"
] |
reinforcement-learning
| 2024-01-07T22:38:40Z
|
---
library_name: ml-agents
tags:
- Pyramids
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how works ML-Agents:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: articblue/ppo-Pyramids
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
christopheparisse/complexity_80_model
|
christopheparisse
| 2024-01-07T22:37:21Z
| 48
| 0
|
transformers
|
[
"transformers",
"tf",
"flaubert",
"text-classification",
"generated_from_keras_callback",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-01-03T19:20:53Z
|
---
tags:
- generated_from_keras_callback
model-index:
- name: complexity_80_model
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# complexity_80_model
This model was trained from scratch on an unknown dataset.
It achieves the following results on the evaluation set:
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: None
- training_precision: float32
### Training results
### Framework versions
- Transformers 4.36.0
- TensorFlow 2.13.1
- Datasets 2.15.0
- Tokenizers 0.15.0
|
johnatanebonilla/es_pipeline
|
johnatanebonilla
| 2024-01-07T22:36:04Z
| 1
| 0
|
spacy
|
[
"spacy",
"es",
"doi:10.57967/hf/2998",
"model-index",
"region:us"
] | null | 2024-01-07T18:13:11Z
|
---
tags:
- spacy
language:
- es
model-index:
- name: es_pipeline
results:
- task:
name: TAG
type: token-classification
metrics:
- name: TAG (XPOS) Accuracy
type: accuracy
value: 0.8148148148
- task:
name: POS
type: token-classification
metrics:
- name: POS (UPOS) Accuracy
type: accuracy
value: 0.96
- task:
name: MORPH
type: token-classification
metrics:
- name: Morph (UFeats) Accuracy
type: accuracy
value: 0.9407407407
- task:
name: UNLABELED_DEPENDENCIES
type: token-classification
metrics:
- name: Unlabeled Attachment Score (UAS)
type: f_score
value: 0.7908256881
- task:
name: LABELED_DEPENDENCIES
type: token-classification
metrics:
- name: Labeled Attachment Score (LAS)
type: f_score
value: 0.6862385321
- task:
name: SENTS
type: token-classification
metrics:
- name: Sentences F-Score
type: f_score
value: 0.8545454545
---
| Feature | Description |
| --- | --- |
| **Name** | `es_pipeline` |
| **Version** | `0.0.0` |
| **spaCy** | `>=3.6.1,<3.7.0` |
| **Default Pipeline** | `transformer`, `parser`, `morphologizer`, `tagger` |
| **Components** | `transformer`, `parser`, `morphologizer`, `tagger` |
| **Vectors** | 0 keys, 0 unique vectors (0 dimensions) |
| **Sources** | n/a |
| **License** | n/a |
| **Author** | [n/a]() |
### Label Scheme
<details>
<summary>View label scheme (423 labels for 3 components)</summary>
| Component | Labels |
| --- | --- |
| **`parser`** | `ROOT`, `acl`, `advcl`, `advmod`, `amod`, `aux`, `case`, `cc`, `ccomp`, `conj`, `cop`, `dep`, `det`, `discourse`, `expl:pass`, `expl:pv`, `fixed`, `mark`, `nmod`, `nsubj`, `nummod`, `obj`, `obl`, `obl:arg`, `punct`, `reparandum`, `xcomp` |
| **`morphologizer`** | `POS=PUNCT\|PunctSide=Ini\|PunctType=Qest`, `Gender=Masc\|Number=Sing\|POS=NOUN`, `POS=ADP`, `POS=ADV`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin`, `POS=PUNCT\|PunctSide=Fin\|PunctType=Qest`, `POS=SCONJ`, `Case=Dat\|Number=Sing\|POS=PRON\|Person=2\|PrepCase=Npr\|PronType=Prs`, `Case=Acc\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|PrepCase=Npr\|PronType=Prs`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=2\|Tense=Pres\|VerbForm=Fin`, `POS=VERB\|VerbForm=Inf`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Tense=Imp\|VerbForm=Fin`, `Definite=Ind\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Art`, `Gender=Fem\|Number=Sing\|POS=NOUN`, `POS=PUNCT\|PunctType=Comm`, `Definite=Ind\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Art`, `Gender=Masc\|Number=Plur\|POS=NOUN`, `Definite=Ind\|Gender=Fem\|Number=Plur\|POS=DET\|PronType=Art`, `Degree=Dim\|Gender=Fem\|Number=Plur\|POS=NOUN`, `POS=CCONJ`, `Gender=Fem\|Number=Sing\|POS=DET\|PronType=Ind`, `Gender=Fem\|Number=Sing\|POS=ADJ`, `Case=Dat\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `POS=INTJ`, `Case=Nom\|Number=Sing\|POS=PRON\|Person=2\|PronType=Prs`, `POS=PRON\|PronType=Int`, `Mood=Cnd\|Number=Sing\|POS=VERB\|Person=3\|VerbForm=Fin`, `Definite=Def\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Art`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=3\|Tense=Imp\|VerbForm=Fin`, `Definite=Def\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Art`, `Number=Sing\|POS=PRON\|PronType=Dem`, `NumType=Card\|Number=Plur\|POS=NUM`, `POS=PUNCT\|PunctType=Peri`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=3\|Tense=Pres\|VerbForm=Fin`, `Case=Acc,Dat\|POS=PRON\|Person=3\|PrepCase=Npr\|PronType=Prs\|Reflex=Yes`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin`, `POS=VERB\|VerbForm=Ger`, `NumForm=Digit\|POS=NOUN`, `Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Gender=Masc\|Number=Plur\|POS=PRON\|PronType=Tot`, `Gender=Masc\|Number=Sing\|POS=ADJ`, `Number=Plur\|POS=ADJ`, `Definite=Def\|Gender=Fem\|Number=Plur\|POS=DET\|PronType=Art`, `Gender=Fem\|Number=Plur\|POS=NOUN`, `POS=ADV\|Polarity=Neg`, `Definite=Def\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Art`, `Case=Acc\|POS=PRON\|Person=3\|PrepCase=Npr\|PronType=Prs\|Reflex=Yes`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|Tense=Imp\|VerbForm=Fin`, `POS=PUNCT\|PunctSide=Ini\|PunctType=Quot`, `Case=Dat\|Number=Sing\|POS=PRON\|Person=1\|PrepCase=Npr\|PronType=Prs\|Reflex=Yes`, `Case=Acc\|Gender=Fem\|Number=Sing\|POS=PRON\|Person=3\|PrepCase=Npr\|PronType=Prs`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=1\|Tense=Pres\|VerbForm=Fin`, `Gender=Masc\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part`, `POS=PUNCT\|PunctSide=Fin\|PunctType=Quot`, `POS=NOUN`, `Definite=Ind\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Art`, `Gender=Masc\|Number=Sing\|POS=PRON\|PronType=Tot`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=2\|Tense=Imp\|VerbForm=Fin`, `POS=PRON\|PronType=Rel`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=3\|Tense=Pres\|VerbForm=Fin`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Tense=Past\|VerbForm=Fin`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|Tense=Past\|VerbForm=Fin`, `Case=Nom\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs`, `Case=Acc\|Number=Sing\|POS=PRON\|Person=1\|PrepCase=Npr\|PronType=Prs\|Reflex=Yes`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=1\|Tense=Pres\|VerbForm=Fin`, `Mood=Sub\|Number=Sing\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=1\|Tense=Past\|VerbForm=Fin`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=1\|Tense=Past\|VerbForm=Fin`, `POS=PROPN`, `Gender=Fem\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `POS=PUNCT\|PunctType=Colo`, `POS=PUNCT\|PunctSide=Ini\|PunctType=Excl`, `Mood=Imp\|Number=Sing\|POS=VERB\|Person=2\|VerbForm=Fin`, `POS=PUNCT\|PunctSide=Fin\|PunctType=Excl`, `Gender=Fem\|NumType=Card\|Number=Plur\|POS=DET\|PronType=Dem`, `Case=Dat\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Dat\|Number=Plur\|POS=PRON\|Person=1\|PrepCase=Npr\|PronType=Prs`, `Case=Acc,Nom\|Gender=Masc\|Number=Plur\|POS=PRON\|Person=1\|PronType=Prs`, `Number=Sing\|POS=ADJ`, `Mood=Sub\|Number=Sing\|POS=AUX\|Person=3\|Tense=Pres\|VerbForm=Fin`, `Case=Acc,Nom\|Number=Sing\|POS=PRON\|Person=2\|Polite=Form\|PronType=Prs`, `Gender=Fem\|Number=Sing\|POS=DET\|PronType=Dem`, `Gender=Masc\|NumType=Card\|Number=Sing\|POS=DET\|PronType=Ind`, `Gender=Masc\|NumType=Card\|Number=Plur\|POS=DET\|PronType=Ind`, `Gender=Fem\|Number=Sing\|POS=PRON\|PronType=Ind`, `Degree=Cmp\|POS=ADV`, `Gender=Masc\|Number=Sing\|POS=AUX\|Tense=Past\|VerbForm=Part`, `Number=Plur\|Number[psor]=Sing\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Number=Sing\|POS=PRON\|PronType=Neg`, `Gender=Masc\|Number=Sing\|POS=DET\|PronType=Tot`, `Number=Plur\|Number[psor]=Sing\|POS=DET\|Person=2\|Poss=Yes\|PronType=Prs`, `Number=Sing\|POS=PRON\|PronType=Int`, `Number=Sing\|POS=PRON\|PronType=Rel`, `Case=Dat\|Number=Sing\|POS=PRON\|Person=1\|PrepCase=Npr\|PronType=Prs`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=1\|Tense=Imp\|VerbForm=Fin`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Tense=Fut\|VerbForm=Fin`, `POS=X`, `Mood=Sub\|Number=Plur\|POS=VERB\|Person=3\|Tense=Imp\|VerbForm=Fin`, `Gender=Masc\|Number=Sing\|POS=PRON\|PronType=Ind`, `Number=Plur\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Gender=Masc\|Number=Plur\|POS=PRON\|PronType=Ind`, `Gender=Masc\|Number=Sing\|POS=DET\|PronType=Dem`, `Gender=Masc\|Number=Sing\|POS=ADJ\|VerbForm=Part`, `Case=Acc\|Number=Sing\|POS=PRON\|Person=1\|PrepCase=Pre\|PronType=Prs`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=1\|Tense=Pres\|VerbForm=Fin`, `Gender=Fem\|NumType=Card\|Number=Sing\|POS=DET\|PronType=Ind`, `Number=Plur\|POS=PRON\|PronType=Ind`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=1\|Tense=Pres\|VerbForm=Fin`, `Case=Acc,Dat\|Number=Sing\|POS=PRON\|Person=2\|PrepCase=Npr\|PronType=Prs`, `Degree=Abs\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Gender=Fem\|Number=Sing\|POS=DET\|PronType=Tot`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=2\|Tense=Pres\|VerbForm=Fin`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=3\|Tense=Imp\|VerbForm=Fin`, `Gender=Masc\|Number=Sing\|POS=PRON\|PronType=Dem`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=1\|Tense=Imp\|VerbForm=Fin`, `Case=Dat\|POS=PRON\|Person=3\|PrepCase=Npr\|PronType=Prs\|Reflex=Yes`, `Case=Acc,Nom\|Number=Plur\|POS=PRON\|Person=2\|Polite=Form\|PronType=Prs`, `Case=Acc\|Gender=Masc\|Number=Plur\|POS=PRON\|Person=3\|PrepCase=Npr\|PronType=Prs`, `Number=Sing\|POS=PRON\|PronType=Ind`, `Gender=Masc\|Number=Plur\|POS=ADJ`, `Case=Acc,Nom\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Gender=Masc\|NumType=Ord\|Number=Sing\|POS=ADJ`, `Degree=Dim\|Gender=Masc\|Number=Sing\|POS=NOUN`, `Gender=Fem\|Number=Plur\|POS=DET\|PronType=Tot`, `Gender=Fem\|Number=Plur\|POS=DET\|PronType=Dem`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=2\|VerbForm=Fin`, `POS=PUNCT\|PunctType=Semi`, `Degree=Dim\|Gender=Fem\|Number=Plur\|POS=ADJ`, `Gender=Fem\|Number=Plur\|POS=ADJ`, `Mood=Imp\|Number=Plur\|POS=VERB\|Person=1\|VerbForm=Fin`, `Case=Acc,Dat\|Number=Plur\|POS=PRON\|Person=1\|PrepCase=Npr\|PronType=Prs\|Reflex=Yes`, `Mood=Imp\|Number=Sing\|POS=VERB\|Person=3\|VerbForm=Fin`, `Number=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Gender=Fem\|Number=Plur\|POS=PRON\|PronType=Dem`, `Mood=Sub\|Number=Sing\|POS=VERB\|Person=3\|Tense=Imp\|VerbForm=Fin`, `Gender=Fem\|Number=Sing\|POS=ADJ\|VerbForm=Part`, `Case=Acc,Nom\|Gender=Masc\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Gender=Masc\|Number=Plur\|POS=PRON\|PronType=Dem`, `Number=Sing\|POS=NOUN`, `POS=VERB`, `POS=PRON\|PronType=Exc`, `Gender=Masc\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Case=Acc,Nom\|Gender=Fem\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Gender=Masc\|Number=Plur\|POS=DET\|PronType=Tot`, `Gender=Fem\|NumType=Ord\|Number=Sing\|POS=ADJ`, `Case=Acc\|Number=Sing\|POS=PRON\|Person=1\|PrepCase=Npr\|PronType=Prs`, `Case=Acc\|Number=Sing\|POS=PRON\|Person=2\|PrepCase=Npr\|PronType=Prs\|Reflex=Yes`, `Gender=Masc\|NumType=Card\|Number=Plur\|POS=DET\|PronType=Int`, `Case=Acc,Dat\|Number=Sing\|POS=PRON\|Person=1\|PrepCase=Npr\|PronType=Prs`, `Gender=Masc\|Number=Plur\|POS=DET\|PronType=Dem`, `Number=Sing\|POS=DET\|PronType=Tot`, `Gender=Masc\|NumType=Card\|Number=Sing\|POS=DET\|PronType=Dem`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=1\|Tense=Past\|VerbForm=Fin`, `Gender=Masc\|Number=Sing\|POS=DET\|PronType=Ind`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=3\|Tense=Past\|VerbForm=Fin`, `Gender=Masc\|POS=NOUN`, `Gender=Fem\|NumType=Card\|Number=Plur\|POS=DET\|PronType=Rel`, `Gender=Masc\|Number=Sing\|Number[psor]=Plur\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Case=Acc\|Gender=Fem\|Number=Plur\|POS=PRON\|Person=3\|PrepCase=Npr\|PronType=Prs`, `POS=PRON`, `POS=ADJ`, `Gender=Fem\|NumType=Card\|Number=Plur\|POS=PRON\|PronType=Ind`, `Case=Acc,Dat\|Gender=Masc\|Number=Plur\|POS=PRON\|Person=2\|PrepCase=Npr\|PronType=Prs`, `Number=Plur\|POS=DET\|PronType=Ind`, `Case=Acc\|Number=Plur\|POS=PRON\|Person=1\|PrepCase=Npr\|PronType=Prs\|Reflex=Yes`, `POS=AUX\|VerbForm=Inf`, `Degree=Abs\|Gender=Fem\|NumType=Card\|Number=Sing\|POS=DET\|PronType=Ind`, `Gender=Fem\|NumType=Card\|Number=Sing\|POS=PRON\|PronType=Ind`, `Number=Plur\|POS=NOUN`, `Gender=Fem\|Number=Plur\|POS=ADJ\|VerbForm=Part`, `Mood=Sub\|Number=Sing\|POS=VERB\|Person=2\|Tense=Pres\|VerbForm=Fin`, `Gender=Fem\|Number=Plur\|POS=PRON\|PronType=Tot`, `Gender=Masc\|NumType=Card\|Number=Sing\|POS=PRON\|PronType=Ind`, `Gender=Fem\|NumType=Card\|Number=Sing\|POS=DET\|PronType=Dem`, `Gender=Fem\|NumType=Card\|Number=Plur\|POS=DET\|PronType=Ind`, `Gender=Fem\|Number=Sing\|POS=PRON\|PronType=Dem`, `Case=Acc,Dat\|Number=Sing\|POS=PRON\|Person=1\|PrepCase=Npr\|PronType=Prs\|Reflex=Yes`, `Case=Dat\|Number=Plur\|POS=PRON\|Person=1\|PrepCase=Npr\|PronType=Prs\|Reflex=Yes`, `Degree=Dim\|Gender=Fem\|Number=Sing\|POS=NOUN`, `Degree=Cmp\|Number=Sing\|POS=ADJ`, `Case=Acc\|Number=Sing\|POS=PRON\|Person=2\|PrepCase=Npr\|PronType=Prs`, `Gender=Masc\|Number=Plur\|POS=DET\|PronType=Ind`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=2\|Tense=Imp\|VerbForm=Fin`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=2\|Tense=Pres\|VerbForm=Fin`, `Mood=Sub\|Number=Sing\|POS=AUX\|Person=2\|Tense=Pres\|VerbForm=Fin`, `Mood=Sub\|Number=Plur\|POS=AUX\|Person=3\|Tense=Imp\|VerbForm=Fin`, `Degree=Dim\|Gender=Masc\|Number=Plur\|POS=ADJ`, `Case=Acc,Nom\|Gender=Fem\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Gender=Fem\|Number=Plur\|POS=DET\|PronType=Ind`, `Gender=Masc\|Number=Plur\|POS=ADJ\|VerbForm=Part`, `NumForm=Digit\|NumType=Card\|POS=NUM`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=1\|Tense=Fut\|VerbForm=Fin`, `Gender=Fem\|Number=Plur\|POS=PRON\|PronType=Ind`, `Degree=Dim\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=2\|Tense=Pres\|VerbForm=Fin`, `Mood=Sub\|Number=Sing\|POS=AUX\|Person=2\|Tense=Imp\|VerbForm=Fin`, `NumType=Card\|Number=Sing\|POS=NUM`, `Degree=Sup\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Gender=Masc\|NumType=Card\|Number=Plur\|POS=NUM`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=3\|Tense=Past\|VerbForm=Fin`, `Gender=Masc\|Number=Sing\|POS=PRON\|PronType=Neg`, `POS=PUNCT\|PunctType=Quot`, `Gender=Masc\|Number=Sing\|Number[psor]=Sing\|POS=PRON\|Person=1\|Poss=Yes\|PronType=Ind`, `POS=ADV\|PronType=Int`, `Number=Plur\|POS=PRON\|PronType=Int,Rel`, `POS=PRON\|PronType=Int,Rel`, `Case=Acc,Nom\|Number=Sing\|POS=PRON\|Person=2\|PronType=Prs`, `POS=SCONJ\|PronType=Int`, `Number=Sing\|POS=PRON\|PronType=Int,Rel`, `POS=PRON\|Person=3\|PrepCase=Npr\|PronType=Prs\|Reflex=Yes`, `Gender=Masc\|NumType=Card\|Number=Plur\|POS=PRON\|PronType=Int`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=3\|Tense=Fut\|VerbForm=Fin`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Tense=Pres`, `POS=PRON\|PronType=Ind`, `POS=DET\|PronType=Int,Rel`, `Gender=Masc\|Number=Plur\|POS=DET`, `Number=Plur\|POS=PRON\|PronType=Dem`, `NumType=Card\|POS=NUM`, `Gender=Masc\|Number=Sing\|POS=PROPN`, `POS=PUNCT\|PunctSide=Ini\|PunctType=Brck`, `POS=PUNCT\|PunctSide=Fin\|PunctType=Brck`, `Gender=Fem\|NumType=Card\|Number=Sing\|POS=NUM`, `Gender=Masc\|Number=Sing\|POS=DET\|PronType=Neg`, `Gender=Fem\|Number=Sing\|POS=X`, `Case=Acc,Nom\|Number=Plur\|POS=PRON\|Person=2\|PronType=Prs` |
| **`tagger`** | `ADJ`, `ADP`, `ADV`, `AUX`, `CCONJ`, `DET`, `INTJ`, `NOUN`, `NUM`, `PRON`, `PROPN`, `PUNCT`, `SCONJ`, `VERB`, `X`, `ac0fs0`, `ac0ms0`, `ad0ms0`, `ao0fs0`, `ao0ms0`, `ap1fs0`, `ap1ms0`, `aq0cn0`, `aq0cp0`, `aq0cs0`, `aq0fp0`, `aq0fs0`, `aq0mp0`, `aq0ms0`, `aqccs0`, `aqdms0`, `az0000`, `cc`, `cs`, `da0fp0`, `da0fs0`, `da0mp0`, `da0ms0`, `da0ns0`, `dd0fp0`, `dd0fs0`, `dd0mp0`, `dd0ms0`, `de0cn0`, `di0cn0`, `di0cp0`, `di0cs0`, `di0fp0`, `di0fs0`, `di0mp0`, `di0ms0`, `dp1cps`, `dp1css`, `dp1msp`, `dp2cps`, `dp3cp0`, `dp3cs0`, `dt0cn0`, `dt0mp0`, `faa`, `fat`, `fc`, `fd`, `fia`, `fit`, `fp`, `fs`, `fx`, `i`, `nccp000`, `nccs000`, `ncfp000`, `ncfp00d`, `ncfs000`, `ncfs00d`, `ncmc000`, `ncmn000`, `ncmp000`, `ncms000`, `ncms00d`, `np00000`, `pc0cn000`, `pc1cp000`, `pc1cs000`, `pc2cp000`, `pc2cs000`, `pc3cn000`, `pc3cna00`, `pc3cpd00`, `pc3csd00`, `pc3fpa00`, `pc3fsa00`, `pc3mpa00`, `pc3msa00`, `pd0cs000`, `pd0fs000`, `pd0mp000`, `pd0ms000`, `pd0ns000`, `pi0cs000`, `pi0fp000`, `pi0fs000`, `pi0mp000`, `pi0ms000`, `pp1csn00`, `pp1cso00`, `pp1mp000`, `pp2cp00p`, `pp2cs00p`, `pp2csn00`, `pp3fp000`, `pp3fs000`, `pp3mp000`, `pp3ms000`, `pr000000`, `pr0cn000`, `pr0cs000`, `pt000000`, `pt0cn000`, `pt0cs000`, `rg`, `rn`, `spc00`, `sps00`, `vmg0000`, `vmic1s0`, `vmic3s0`, `vmif1s0`, `vmif3s0`, `vmii1p0`, `vmii1s0`, `vmii2s0`, `vmii3p0`, `vmii3s0`, `vmip1p0`, `vmip1s0`, `vmip2p0`, `vmip2s0`, `vmip3p0`, `vmip3s0`, `vmis1p0`, `vmis1s0`, `vmis2s0`, `vmis3p0`, `vmis3s0`, `vmm02s0`, `vmm03s0`, `vmn0000`, `vmp00pf`, `vmp00pm`, `vmp00sf`, `vmp00sm`, `vmsi2s0`, `vmsi3p0`, `vmsi3s0`, `vmsp2s0`, `vmsp3s0`, `vsii1s0`, `vsii2s0`, `vsii3p0`, `vsii3s0`, `vsip2s0`, `vsip3p0`, `vsip3s0`, `vsis1p0`, `vsis1s0`, `vsis3p0`, `vsis3s0`, `vsp00sm`, `vssi3p0`, `vssp3s0` |
</details>
### Accuracy
| Type | Score |
| --- | --- |
| `DEP_UAS` | 79.08 |
| `DEP_LAS` | 68.62 |
| `SENTS_P` | 82.46 |
| `SENTS_R` | 88.68 |
| `SENTS_F` | 85.45 |
| `POS_ACC` | 96.00 |
| `MORPH_ACC` | 94.07 |
| `TAG_ACC` | 81.48 |
| `TRANSFORMER_LOSS` | 702051.96 |
| `PARSER_LOSS` | 1571327.24 |
| `MORPHOLOGIZER_LOSS` | 73250.67 |
| `TAGGER_LOSS` | 51762.61 |
|
vanidalearns/mobythecorgi
|
vanidalearns
| 2024-01-07T22:30:14Z
| 0
| 1
|
diffusers
|
[
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2024-01-07T22:26:13Z
|
---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### mobythecorgi Dreambooth model trained by vanidalearns with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
Sample pictures of this concept:
|
christopheparisse/complexity_90mChildCorpora_model
|
christopheparisse
| 2024-01-07T22:29:13Z
| 47
| 0
|
transformers
|
[
"transformers",
"tf",
"flaubert",
"text-classification",
"generated_from_keras_callback",
"base_model:flaubert/flaubert_base_cased",
"base_model:finetune:flaubert/flaubert_base_cased",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-01-02T11:34:00Z
|
---
license: mit
base_model: flaubert/flaubert_base_cased
tags:
- generated_from_keras_callback
model-index:
- name: complexity_90mChildCorpora_model
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# complexity_90mChildCorpora_model
This model is a fine-tuned version of [flaubert/flaubert_base_cased](https://huggingface.co/flaubert/flaubert_base_cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.1001
- Validation Loss: 0.0943
- Train Accuracy: 0.9661
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 196040, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Accuracy | Epoch |
|:----------:|:---------------:|:--------------:|:-----:|
| 0.1175 | 0.1154 | 0.9661 | 0 |
| 0.1009 | 0.0963 | 0.9661 | 1 |
| 0.1001 | 0.0943 | 0.9661 | 2 |
### Framework versions
- Transformers 4.36.0
- TensorFlow 2.13.1
- Datasets 2.15.0
- Tokenizers 0.15.0
|
GDavila/sdxl-bling
|
GDavila
| 2024-01-07T22:26:11Z
| 12
| 2
|
diffusers
|
[
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"template:sd-lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"region:us"
] |
text-to-image
| 2024-01-07T22:02:16Z
|
---
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
- template:sd-lora
widget:
- text: 'a photo of a diamond watch, in the style of TOK'
output:
url: images/b3.png
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: TOK
---
# sdxl-bling
<Gallery />
## Model description
SDXL lora trained on diamond watches https://huggingface.co/datasets/GDavila/diamondWatches
## Trigger words
You should use `TOK` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/GDavila/sdxl-bling/tree/main) them in the Files & versions tab.
|
jysssacc/roberta-base_IA3_lr0.0005_bs4_epoch20_wd0.01
|
jysssacc
| 2024-01-07T22:19:05Z
| 0
| 0
|
peft
|
[
"peft",
"safetensors",
"generated_from_trainer",
"base_model:FacebookAI/roberta-base",
"base_model:adapter:FacebookAI/roberta-base",
"license:mit",
"region:us"
] | null | 2024-01-07T21:56:55Z
|
---
license: mit
library_name: peft
tags:
- generated_from_trainer
base_model: roberta-base
model-index:
- name: roberta-base_IA3_lr0.0005_bs4_epoch20_wd0.01
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base_IA3_lr0.0005_bs4_epoch20_wd0.01
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3406
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 19.5038 | 1.0 | 157 | 18.4857 |
| 8.9314 | 2.0 | 314 | 5.8495 |
| 5.4493 | 3.0 | 471 | 4.2791 |
| 3.8453 | 4.0 | 628 | 3.4174 |
| 3.4001 | 5.0 | 785 | 2.8767 |
| 2.8518 | 6.0 | 942 | 2.5189 |
| 2.7181 | 7.0 | 1099 | 2.2672 |
| 2.3938 | 8.0 | 1256 | 2.0897 |
| 2.2025 | 9.0 | 1413 | 1.9660 |
| 2.1035 | 10.0 | 1570 | 1.8055 |
| 1.9748 | 11.0 | 1727 | 1.6968 |
| 1.8698 | 12.0 | 1884 | 1.6367 |
| 1.7843 | 13.0 | 2041 | 1.5600 |
| 1.7277 | 14.0 | 2198 | 1.5018 |
| 1.6915 | 15.0 | 2355 | 1.4518 |
| 1.5865 | 16.0 | 2512 | 1.4089 |
| 1.5934 | 17.0 | 2669 | 1.3896 |
| 1.5713 | 18.0 | 2826 | 1.3617 |
| 1.5521 | 19.0 | 2983 | 1.3453 |
| 1.5471 | 20.0 | 3140 | 1.3406 |
### Framework versions
- PEFT 0.7.1
- Transformers 4.36.2
- Pytorch 2.0.1
- Datasets 2.16.1
- Tokenizers 0.15.0
|
rjaiswal/segmind-1B-watches-all-brands-model-lora
|
rjaiswal
| 2024-01-07T22:16:26Z
| 0
| 0
|
diffusers
|
[
"diffusers",
"tensorboard",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"text-to-image",
"lora",
"base_model:segmind/SSD-1B",
"base_model:adapter:segmind/SSD-1B",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2024-01-07T19:30:25Z
|
---
license: creativeml-openrail-m
base_model: segmind/SSD-1B
dataset: rjaiswal/watches_all_brands
tags:
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA text2image fine-tuning - rjaiswal/segmind-1B-watches-all-brands-model-lora
These are LoRA adaption weights for segmind/SSD-1B. The weights were fine-tuned on the rjaiswal/watches_all_brands dataset. You can find some example images in the following.




LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
|
stablediffusionapi/ae-t-pagepal
|
stablediffusionapi
| 2024-01-07T22:14:44Z
| 32
| 0
|
diffusers
|
[
"diffusers",
"modelslab.com",
"stable-diffusion-api",
"text-to-image",
"ultra-realistic",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] |
text-to-image
| 2024-01-07T22:12:53Z
|
---
license: creativeml-openrail-m
tags:
- modelslab.com
- stable-diffusion-api
- text-to-image
- ultra-realistic
pinned: true
---
# AE-t-pagepal API Inference

## Get API Key
Get API key from [ModelsLab API](http://modelslab.com), No Payment needed.
Replace Key in below code, change **model_id** to "ae-t-pagepal"
Coding in PHP/Node/Java etc? Have a look at docs for more code examples: [View docs](https://modelslab.com/docs)
Try model for free: [Generate Images](https://modelslab.com/models/ae-t-pagepal)
Model link: [View model](https://modelslab.com/models/ae-t-pagepal)
View all models: [View Models](https://modelslab.com/models)
import requests
import json
url = "https://modelslab.com/api/v6/images/text2img"
payload = json.dumps({
"key": "your_api_key",
"model_id": "ae-t-pagepal",
"prompt": "ultra realistic close up portrait ((beautiful pale cyberpunk female with heavy black eyeliner)), blue eyes, shaved side haircut, hyper detail, cinematic lighting, magic neon, dark red city, Canon EOS R3, nikon, f/1.4, ISO 200, 1/160s, 8K, RAW, unedited, symmetrical balance, in-frame, 8K",
"negative_prompt": "painting, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, deformed, ugly, blurry, bad anatomy, bad proportions, extra limbs, cloned face, skinny, glitchy, double torso, extra arms, extra hands, mangled fingers, missing lips, ugly face, distorted face, extra legs, anime",
"width": "512",
"height": "512",
"samples": "1",
"num_inference_steps": "30",
"safety_checker": "no",
"enhance_prompt": "yes",
"seed": None,
"guidance_scale": 7.5,
"multi_lingual": "no",
"panorama": "no",
"self_attention": "no",
"upscale": "no",
"embeddings": "embeddings_model_id",
"lora": "lora_model_id",
"webhook": None,
"track_id": None
})
headers = {
'Content-Type': 'application/json'
}
response = requests.request("POST", url, headers=headers, data=payload)
print(response.text)
> Use this coupon code to get 25% off **DMGG0RBN**
|
Felladrin/onnx-TinyMistral-248M
|
Felladrin
| 2024-01-07T22:07:16Z
| 18
| 7
|
transformers.js
|
[
"transformers.js",
"onnx",
"mistral",
"text-generation",
"conversational",
"base_model:Locutusque/TinyMistral-248M",
"base_model:quantized:Locutusque/TinyMistral-248M",
"license:apache-2.0",
"region:us"
] |
text-generation
| 2023-11-16T07:20:42Z
|
---
license: apache-2.0
library_name: "transformers.js"
base_model: Locutusque/TinyMistral-248M
---
INT8 ONNX version of [Locutusque/TinyMistral-248M](https://huggingface.co/Locutusque/TinyMistral-248M) to use with [Transformers.js](https://huggingface.co/docs/transformers.js).
### Example usage
#### Pipeline API
```js
import { pipeline } from '@xenova/transformers';
const generator = await pipeline('text-generation', 'Felladrin/onnx-int8-TinyMistral-248M');
const output = await generator('Once upon a time,', { add_special_tokens: true, max_new_tokens: 60, repetition_penalty: 1.2});
console.log(output);
// 'Once upon a time, the world was in turmoil. The United States had been on an unprecedented hiatus since 1970 and it seemed that America’s role as a global powerhouse would be at risk if we were to continue with our current political system.\n\nThe US has become'
```
#### Auto Classes
```js
import { AutoModelForCausalLM, AutoTokenizer } from '@xenova/transformers';
const model_path = 'Felladrin/onnx-int8-TinyMistral-248M';
const model = await AutoModelForCausalLM.from_pretrained(model_path);
const tokenizer = await AutoTokenizer.from_pretrained(model_path);
const prompt = 'Once upon a time,';
const { input_ids } = tokenizer(prompt);
const tokens = await model.generate(input_ids, { max_new_tokens: 60, repetition_penalty: 1.2});
console.log(tokenizer.decode(tokens[0], { skip_special_tokens: true }));
// 'Once upon a time, the world was in turmoil. The United States had been on an unprecedented hiatus since 1970 and it seemed that America’s role as a global powerhouse would be at risk if we were to continue with our current political system.\n\nThe US has become'
```
|
ribesstefano/RuleBert-v0.3-k2
|
ribesstefano
| 2024-01-07T21:56:49Z
| 92
| 0
|
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"xlm-roberta",
"text-classification",
"Italian",
"legal ruling",
"generated_from_trainer",
"base_model:papluca/xlm-roberta-base-language-detection",
"base_model:finetune:papluca/xlm-roberta-base-language-detection",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-01-07T21:55:23Z
|
---
license: mit
base_model: papluca/xlm-roberta-base-language-detection
tags:
- Italian
- legal ruling
- generated_from_trainer
metrics:
- f1
- accuracy
model-index:
- name: ribesstefano/RuleBert-v0.3-k2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ribesstefano/RuleBert-v0.3-k2
This model is a fine-tuned version of [papluca/xlm-roberta-base-language-detection](https://huggingface.co/papluca/xlm-roberta-base-language-detection) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3031
- F1: 0.5103
- Roc Auc: 0.6747
- Accuracy: 0.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 2
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 8000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 | Roc Auc | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|:--------:|
| 0.452 | 0.06 | 250 | 0.3816 | 0.5227 | 0.6861 | 0.0 |
| 0.3694 | 0.12 | 500 | 0.3191 | 0.5103 | 0.6747 | 0.0 |
| 0.3523 | 0.18 | 750 | 0.3031 | 0.5103 | 0.6747 | 0.0 |
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|
Felladrin/onnx-TinyMistral-248M-SFT-v4
|
Felladrin
| 2024-01-07T21:56:37Z
| 6
| 0
|
transformers.js
|
[
"transformers.js",
"onnx",
"mistral",
"text-generation",
"conversational",
"base_model:Felladrin/TinyMistral-248M-Chat-v2",
"base_model:quantized:Felladrin/TinyMistral-248M-Chat-v2",
"license:apache-2.0",
"region:us"
] |
text-generation
| 2023-12-11T20:30:04Z
|
---
license: apache-2.0
library_name: "transformers.js"
base_model: Felladrin/TinyMistral-248M-SFT-v4
---
INT8 ONNX version of [Felladrin/TinyMistral-248M-SFT-v4](https://huggingface.co/Felladrin/TinyMistral-248M-SFT-v4) to use with [Transformers.js](https://huggingface.co/docs/transformers.js).
## Usage
```js
import { pipeline } from '@xenova/transformers';
const prompt = `<|im_start|>user
Invited some friends to come home today. Give me some ideas for games to play with them!<|im_end|>
<|im_start|>assistant`;
const generator = await pipeline('text-generation', 'Felladrin/onnx-TinyMistral-248M-SFT-v4');
const output = await generator(prompt, { add_special_tokens: true, max_new_tokens: 128, do_sample: true, penalty_alpha: 0.6, top_k: 4, repetition_penalty: 1.07 });
console.log(output[0].generated_text);
// Sure! Here are a few suggestions for activities that could be added to a game night:
// 1. Play board games and puzzle games
// 2. Watch the movie trailer or gameplay videos
// 3. Play a board game tournament or tournament with friends
// 4. Watch movies on video or TV shows like Netflix, Hulu, and IFC.<|im_end|>
```
|
jysssacc/roberta-base_IA3_lr5e-05_bs4_epoch20_wd0.01
|
jysssacc
| 2024-01-07T21:51:05Z
| 0
| 0
|
peft
|
[
"peft",
"safetensors",
"generated_from_trainer",
"base_model:FacebookAI/roberta-base",
"base_model:adapter:FacebookAI/roberta-base",
"license:mit",
"region:us"
] | null | 2024-01-07T17:59:14Z
|
---
license: mit
library_name: peft
tags:
- generated_from_trainer
base_model: roberta-base
model-index:
- name: roberta-base_IA3_lr5e-05_bs4_epoch20_wd0.01
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base_IA3_lr5e-05_bs4_epoch20_wd0.01
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 4.8816
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 19.9021 | 1.0 | 157 | 21.8572 |
| 19.0595 | 2.0 | 314 | 20.8634 |
| 18.1837 | 3.0 | 471 | 18.9005 |
| 14.9492 | 4.0 | 628 | 15.5693 |
| 12.6827 | 5.0 | 785 | 11.7522 |
| 9.1129 | 6.0 | 942 | 8.7107 |
| 7.9692 | 7.0 | 1099 | 6.9363 |
| 6.5195 | 8.0 | 1256 | 6.1572 |
| 5.932 | 9.0 | 1413 | 5.7736 |
| 5.8208 | 10.0 | 1570 | 5.5329 |
| 5.5728 | 11.0 | 1727 | 5.3653 |
| 5.447 | 12.0 | 1884 | 5.2402 |
| 5.2935 | 13.0 | 2041 | 5.1424 |
| 5.243 | 14.0 | 2198 | 5.0646 |
| 5.143 | 15.0 | 2355 | 5.0053 |
| 5.0448 | 16.0 | 2512 | 4.9583 |
| 5.0317 | 17.0 | 2669 | 4.9230 |
| 5.0272 | 18.0 | 2826 | 4.8999 |
| 4.9883 | 19.0 | 2983 | 4.8863 |
| 4.9729 | 20.0 | 3140 | 4.8816 |
### Framework versions
- PEFT 0.7.1
- Transformers 4.36.2
- Pytorch 2.0.1
- Datasets 2.16.1
- Tokenizers 0.15.0
|
s3nh/Sao10K-Winterreise-m7-GGUF
|
s3nh
| 2024-01-07T21:46:47Z
| 0
| 0
|
transformers
|
[
"transformers",
"gguf",
"text-generation",
"zh",
"en",
"license:openrail",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-07T21:12:44Z
|
---
license: openrail
pipeline_tag: text-generation
library_name: transformers
language:
- zh
- en
---
## Original model card
Buy me a coffee if you like this project ;)
<a href="https://www.buymeacoffee.com/s3nh"><img src="https://www.buymeacoffee.com/assets/img/guidelines/download-assets-sm-1.svg" alt=""></a>
#### Description
GGUF Format model files for [This project](https://huggingface.co/Sao10K/Winterreise-m7).
### GGUF Specs
GGUF is a format based on the existing GGJT, but makes a few changes to the format to make it more extensible and easier to use. The following features are desired:
Single-file deployment: they can be easily distributed and loaded, and do not require any external files for additional information.
Extensible: new features can be added to GGML-based executors/new information can be added to GGUF models without breaking compatibility with existing models.
mmap compatibility: models can be loaded using mmap for fast loading and saving.
Easy to use: models can be easily loaded and saved using a small amount of code, with no need for external libraries, regardless of the language used.
Full information: all information needed to load a model is contained in the model file, and no additional information needs to be provided by the user.
The key difference between GGJT and GGUF is the use of a key-value structure for the hyperparameters (now referred to as metadata), rather than a list of untyped values.
This allows for new metadata to be added without breaking compatibility with existing models, and to annotate the model with additional information that may be useful for
inference or for identifying the model.
### Perplexity params
Model Measure Q2_K Q3_K_S Q3_K_M Q3_K_L Q4_0 Q4_1 Q4_K_S Q4_K_M Q5_0 Q5_1 Q5_K_S Q5_K_M Q6_K Q8_0 F16
7B perplexity 6.7764 6.4571 6.1503 6.0869 6.1565 6.0912 6.0215 5.9601 5.9862 5.9481 5.9419 5.9208 5.9110 5.9070 5.9066
13B perplexity 5.8545 5.6033 5.4498 5.4063 5.3860 5.3608 5.3404 5.3002 5.2856 5.2706 5.2785 5.2638 5.2568 5.2548 5.2543
### inference
TODO
# Original model card
|
gputrain/rl_course_vizdoom_health_gathering_supreme
|
gputrain
| 2024-01-07T21:46:19Z
| 0
| 0
|
sample-factory
|
[
"sample-factory",
"tensorboard",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-01-07T21:45:56Z
|
---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: doom_health_gathering_supreme
type: doom_health_gathering_supreme
metrics:
- type: mean_reward
value: 9.44 +/- 3.57
name: mean_reward
verified: false
---
A(n) **APPO** model trained on the **doom_health_gathering_supreme** environment.
This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory.
Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/
## Downloading the model
After installing Sample-Factory, download the model with:
```
python -m sample_factory.huggingface.load_from_hub -r gputrain/rl_course_vizdoom_health_gathering_supreme
```
## Using the model
To run the model after download, use the `enjoy` script corresponding to this environment:
```
python -m .usr.local.lib.python3.10.dist-packages.colab_kernel_launcher --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme
```
You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag.
See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details
## Training with this model
To continue training with this model, use the `train` script corresponding to this environment:
```
python -m .usr.local.lib.python3.10.dist-packages.colab_kernel_launcher --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme --restart_behavior=resume --train_for_env_steps=10000000000
```
Note, you may have to adjust `--train_for_env_steps` to a suitably high number as the experiment will resume at the number of steps it concluded at.
|
s3nh/Sao10K-Stheno-L2-13B-GGUF
|
s3nh
| 2024-01-07T21:46:09Z
| 17
| 1
|
transformers
|
[
"transformers",
"gguf",
"text-generation",
"zh",
"en",
"license:openrail",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-07T20:55:30Z
|
---
license: openrail
pipeline_tag: text-generation
library_name: transformers
language:
- zh
- en
---
## Original model card
Buy me a coffee if you like this project ;)
<a href="https://www.buymeacoffee.com/s3nh"><img src="https://www.buymeacoffee.com/assets/img/guidelines/download-assets-sm-1.svg" alt=""></a>
#### Description
GGUF Format model files for [This project](https://huggingface.co/Sao10K/Stheno-L2-13B).
### GGUF Specs
GGUF is a format based on the existing GGJT, but makes a few changes to the format to make it more extensible and easier to use. The following features are desired:
Single-file deployment: they can be easily distributed and loaded, and do not require any external files for additional information.
Extensible: new features can be added to GGML-based executors/new information can be added to GGUF models without breaking compatibility with existing models.
mmap compatibility: models can be loaded using mmap for fast loading and saving.
Easy to use: models can be easily loaded and saved using a small amount of code, with no need for external libraries, regardless of the language used.
Full information: all information needed to load a model is contained in the model file, and no additional information needs to be provided by the user.
The key difference between GGJT and GGUF is the use of a key-value structure for the hyperparameters (now referred to as metadata), rather than a list of untyped values.
This allows for new metadata to be added without breaking compatibility with existing models, and to annotate the model with additional information that may be useful for
inference or for identifying the model.
### Perplexity params
Model Measure Q2_K Q3_K_S Q3_K_M Q3_K_L Q4_0 Q4_1 Q4_K_S Q4_K_M Q5_0 Q5_1 Q5_K_S Q5_K_M Q6_K Q8_0 F16
7B perplexity 6.7764 6.4571 6.1503 6.0869 6.1565 6.0912 6.0215 5.9601 5.9862 5.9481 5.9419 5.9208 5.9110 5.9070 5.9066
13B perplexity 5.8545 5.6033 5.4498 5.4063 5.3860 5.3608 5.3404 5.3002 5.2856 5.2706 5.2785 5.2638 5.2568 5.2548 5.2543
### inference
TODO
# Original model card
|
vkamenski/Reinforce-cart-pole-v1
|
vkamenski
| 2024-01-07T21:44:42Z
| 0
| 0
| null |
[
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-12-31T16:13:04Z
|
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-cart-pole-v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 482.00 +/- 19.19
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
articblue/ppo-Pyramid
|
articblue
| 2024-01-07T21:37:42Z
| 0
| 0
|
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Pyramids",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Pyramids",
"region:us"
] |
reinforcement-learning
| 2024-01-07T21:37:40Z
|
---
library_name: ml-agents
tags:
- Pyramids
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how works ML-Agents:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: articblue/ppo-Pyramid
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
andlock/ppo-LunarLander-v2
|
andlock
| 2024-01-07T21:31:03Z
| 0
| 0
|
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-01-07T21:30:43Z
|
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 262.43 +/- 19.68
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
kiriku/whisper-base-en-homophone
|
kiriku
| 2024-01-07T21:26:11Z
| 89
| 0
|
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"hf-asr-leaderboard",
"generated_from_trainer",
"en",
"dataset:kiriku/Homophones_filted_dataset",
"base_model:openai/whisper-small",
"base_model:finetune:openai/whisper-small",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2024-01-07T16:28:52Z
|
---
language:
- en
license: apache-2.0
base_model: openai/whisper-small
tags:
- hf-asr-leaderboard
- generated_from_trainer
datasets:
- kiriku/Homophones_filted_dataset
metrics:
- wer
model-index:
- name: Homophones_filted_dataset Whisper base en
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: 'Homophones in Common Voice 15.0 '
type: kiriku/Homophones_filted_dataset
args: 'config: en, split: test'
metrics:
- name: Wer
type: wer
value: 22.51904761904762
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Homophones_filted_dataset Whisper base en
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Homophones in Common Voice 15.0 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4026
- Wer: 22.5190
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.3224 | 0.25 | 1000 | 0.4419 | 24.7690 |
| 0.3445 | 0.5 | 2000 | 0.4162 | 23.0405 |
| 0.3141 | 0.75 | 3000 | 0.4077 | 22.9357 |
| 0.3017 | 1.0 | 4000 | 0.4026 | 22.5190 |
### Framework versions
- Transformers 4.36.0.dev0
- Pytorch 1.12.1+cu113
- Datasets 2.15.0
- Tokenizers 0.15.0
|
NLPProject2023Z/xlnet_regression_corrected
|
NLPProject2023Z
| 2024-01-07T21:24:49Z
| 117
| 0
|
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] | null | 2024-01-07T21:24:30Z
|
---
tags:
- generated_from_trainer
model-index:
- name: xlnet_regression_corrected
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlnet_regression_corrected
This model is a fine-tuned version of [](https://huggingface.co/) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3655
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 25
- eval_batch_size: 25
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 160 | 0.3655 |
| No log | 2.0 | 320 | 0.3655 |
| No log | 3.0 | 480 | 0.3655 |
| 0.3548 | 4.0 | 640 | 0.3655 |
| 0.3548 | 5.0 | 800 | 0.3655 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|
stannisozbov/whisper-small-tr-stann
|
stannisozbov
| 2024-01-07T21:20:29Z
| 93
| 0
|
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"hf-asr-leaderboard",
"generated_from_trainer",
"tr",
"dataset:mozilla-foundation/common_voice_16_0",
"base_model:openai/whisper-small",
"base_model:finetune:openai/whisper-small",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2024-01-07T00:24:25Z
|
---
language:
- tr
license: apache-2.0
base_model: openai/whisper-small
tags:
- hf-asr-leaderboard
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_16_0
model-index:
- name: Whisper Small TR - Stann
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Small TR - Stann
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Common Voice 16.0 dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.2717
- eval_wer: 103.1665
- eval_runtime: 5470.4107
- eval_samples_per_second: 2.017
- eval_steps_per_second: 0.252
- epoch: 0.37
- step: 1000
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
### Framework versions
- Transformers 4.37.0.dev0
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|
AlaGrine/distilgpt2-finetuned-3GPP-5G
|
AlaGrine
| 2024-01-07T21:08:04Z
| 14
| 0
|
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"gpt2",
"text-generation",
"generated_from_trainer",
"base_model:AlaGrine/distilgpt2-finetuned-3GPP-5G",
"base_model:finetune:AlaGrine/distilgpt2-finetuned-3GPP-5G",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-06T18:14:48Z
|
---
license: apache-2.0
base_model: AlaGrine/distilgpt2-finetuned-3GPP-5G
tags:
- generated_from_trainer
model-index:
- name: distilgpt2-finetuned-3GPP-5G
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilgpt2-finetuned-3GPP-5G
This model is a fine-tuned version of [AlaGrine/distilgpt2-finetuned-3GPP-5G](https://huggingface.co/AlaGrine/distilgpt2-finetuned-3GPP-5G) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 7394 | 2.7276 |
### Framework versions
- Transformers 4.36.0
- Pytorch 2.0.0
- Datasets 2.1.0
- Tokenizers 0.15.0
|
Akbartus/Lora-360-model
|
Akbartus
| 2024-01-07T21:06:37Z
| 6
| 1
|
diffusers
|
[
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"region:us"
] |
text-to-image
| 2024-01-07T20:54:54Z
|
---
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: 360, 360 view
widget:
- text: 360 view
---
|
learn3r/longt5_xl_govreport_4096_e40
|
learn3r
| 2024-01-07T20:56:47Z
| 11
| 0
|
transformers
|
[
"transformers",
"pytorch",
"longt5",
"text2text-generation",
"generated_from_trainer",
"dataset:scrolls",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2024-01-06T13:52:27Z
|
---
tags:
- generated_from_trainer
datasets:
- scrolls
model-index:
- name: longt5_xl_govreport_4096_e40
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# longt5_xl_govreport_4096_e40
This model was trained from scratch on the scrolls dataset.
It achieves the following results on the evaluation set:
- Loss: 3.8241
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 64
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- num_epochs: 5.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.0366 | 1.0 | 68 | 3.8434 |
| 0.0343 | 1.99 | 136 | 3.9506 |
| 0.0347 | 2.99 | 204 | 3.9071 |
| 0.039 | 3.99 | 272 | 3.6371 |
| 0.0367 | 4.99 | 340 | 3.8241 |
### Framework versions
- Transformers 4.34.0.dev0
- Pytorch 2.0.1+cu117
- Datasets 2.14.5
- Tokenizers 0.13.3
|
NLPProject2023Z/roberta_regression_corrected
|
NLPProject2023Z
| 2024-01-07T20:50:01Z
| 173
| 0
|
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] | null | 2024-01-07T20:49:35Z
|
---
tags:
- generated_from_trainer
model-index:
- name: roberta_regression_corrected
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta_regression_corrected
This model is a fine-tuned version of [](https://huggingface.co/) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5899
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 25
- eval_batch_size: 25
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 160 | 0.5899 |
| No log | 2.0 | 320 | 0.5899 |
| No log | 3.0 | 480 | 0.5899 |
| 0.5781 | 4.0 | 640 | 0.5899 |
| 0.5781 | 5.0 | 800 | 0.5899 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|
spolivin/alz-mri-vit
|
spolivin
| 2024-01-07T20:44:50Z
| 18
| 0
|
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:Falah/Alzheimer_MRI",
"base_model:google/vit-base-patch16-224-in21k",
"base_model:finetune:google/vit-base-patch16-224-in21k",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2024-01-07T19:54:17Z
|
---
license: apache-2.0
base_model: google/vit-base-patch16-224-in21k
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: alz-mri-vit
results:
- task:
name: image-classification
type: image-classification
dataset:
name: Falah/Alzheimer_MRI
type: Falah/Alzheimer_MRI
config: default
split: train
args: default
metrics:
- name: f1
type: f1
value: 0.930865
datasets:
- Falah/Alzheimer_MRI
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# alz-mri-vit
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on Falah/Alzheimer_MRI dataset (fine-tuning procedure is described [here](https://huggingface.co/spolivin/alz-mri-vit/blob/main/vit_finetuning.ipynb)).
It achieves the following results on the evaluation set:
- Loss: 0.1875
- F1: 0.9309
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 1.1218 | 1.0 | 64 | 0.9419 | 0.5742 |
| 0.94 | 2.0 | 128 | 0.9054 | 0.6029 |
| 0.9123 | 3.0 | 192 | 0.9019 | 0.5262 |
| 0.8625 | 4.0 | 256 | 0.8465 | 0.6029 |
| 0.8104 | 5.0 | 320 | 0.7810 | 0.6319 |
| 0.7244 | 6.0 | 384 | 0.7278 | 0.7037 |
| 0.697 | 7.0 | 448 | 0.6300 | 0.7480 |
| 0.5865 | 8.0 | 512 | 0.5659 | 0.7662 |
| 0.5199 | 9.0 | 576 | 0.5445 | 0.7721 |
| 0.4734 | 10.0 | 640 | 0.6750 | 0.7185 |
| 0.4399 | 11.0 | 704 | 0.4893 | 0.8274 |
| 0.3817 | 12.0 | 768 | 0.5578 | 0.7844 |
| 0.3318 | 13.0 | 832 | 0.4699 | 0.8228 |
| 0.3096 | 14.0 | 896 | 0.4460 | 0.8399 |
| 0.2787 | 15.0 | 960 | 0.4105 | 0.8399 |
| 0.2517 | 16.0 | 1024 | 0.3488 | 0.8578 |
| 0.2346 | 17.0 | 1088 | 0.3877 | 0.8773 |
| 0.2286 | 18.0 | 1152 | 0.3420 | 0.8575 |
| 0.1914 | 19.0 | 1216 | 0.4123 | 0.8682 |
| 0.1844 | 20.0 | 1280 | 0.2894 | 0.8913 |
| 0.173 | 21.0 | 1344 | 0.3197 | 0.8887 |
| 0.1687 | 22.0 | 1408 | 0.2626 | 0.9075 |
| 0.1601 | 23.0 | 1472 | 0.2951 | 0.9068 |
| 0.1466 | 24.0 | 1536 | 0.2666 | 0.9049 |
| 0.1468 | 25.0 | 1600 | 0.2136 | 0.9103 |
| 0.1226 | 26.0 | 1664 | 0.2387 | 0.9127 |
| 0.1186 | 27.0 | 1728 | 0.2131 | 0.9271 |
| 0.0951 | 28.0 | 1792 | 0.2520 | 0.9130 |
| 0.1049 | 29.0 | 1856 | 0.2096 | 0.9259 |
| 0.0936 | 30.0 | 1920 | 0.1875 | 0.9309 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|
yy0514/llama2-7b-chat-qlora-lek-train-4-epochs-run2
|
yy0514
| 2024-01-07T20:44:24Z
| 0
| 0
| null |
[
"safetensors",
"generated_from_trainer",
"base_model:meta-llama/Llama-2-7b-chat-hf",
"base_model:finetune:meta-llama/Llama-2-7b-chat-hf",
"region:us"
] | null | 2024-01-07T19:52:03Z
|
---
base_model: meta-llama/Llama-2-7b-chat-hf
tags:
- generated_from_trainer
model-index:
- name: llama2-7b-chat-qlora-lek-train-4-epochs-recheck
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# llama2-7b-chat-qlora-lek-train-4-epochs-recheck
This model is a fine-tuned version of [meta-llama/Llama-2-7b-chat-hf](https://huggingface.co/meta-llama/Llama-2-7b-chat-hf) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2
- num_epochs: 4
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|
AswanthCManoj/azma-tinyllama-instruct-v2-adapter
|
AswanthCManoj
| 2024-01-07T20:37:10Z
| 0
| 0
|
peft
|
[
"peft",
"safetensors",
"llama",
"arxiv:1910.09700",
"base_model:TinyLlama/TinyLlama-1.1B-Chat-v1.0",
"base_model:adapter:TinyLlama/TinyLlama-1.1B-Chat-v1.0",
"region:us"
] | null | 2024-01-07T20:33:08Z
|
---
library_name: peft
base_model: TinyLlama/TinyLlama-1.1B-Chat-v1.0
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.2.dev0
|
ostapeno/newt_adaNeo1B_wiqa_what_is_the_final_step_of_the_following_process_sbs0.5_svdemb_sgd_fu
|
ostapeno
| 2024-01-07T20:36:50Z
| 0
| 0
| null |
[
"region:us"
] | null | 2024-01-07T11:22:13Z
|
Number of experts present in the library: 5
| Expert Name | Base Model | Trained on | Adapter Type |
| --- | --- | --- | --- |
| wiqa_what_is_the_final_step_of_the_following_process_v3 | EleutherAI/gpt-neo-1.3B | ostapeno/adauni-v3-10k-flat/wiqa_what_is_the_final_step_of_the_following_process | lora |
| wiqa_what_is_the_final_step_of_the_following_process_v2 | EleutherAI/gpt-neo-1.3B | ostapeno/adauni-v3-10k-flat/wiqa_what_is_the_final_step_of_the_following_process | lora |
| wiqa_what_is_the_final_step_of_the_following_process | EleutherAI/gpt-neo-1.3B | ostapeno/adauni-v3-10k-flat/wiqa_what_is_the_final_step_of_the_following_process | lora |
| wiqa_what_is_the_final_step_of_the_following_process_v4 | EleutherAI/gpt-neo-1.3B | ostapeno/adauni-v3-10k-flat/wiqa_what_is_the_final_step_of_the_following_process | lora |
| wiqa_what_is_the_final_step_of_the_following_process_v1 | EleutherAI/gpt-neo-1.3B | ostapeno/adauni-v3-10k-flat/wiqa_what_is_the_final_step_of_the_following_process | lora |
Last updated on: 2024-01-07 20:36:50+00:00
|
s3nh/BEE-spoke-data-TinyLlama-3T-1.1bee-GGUF
|
s3nh
| 2024-01-07T20:34:46Z
| 0
| 0
|
transformers
|
[
"transformers",
"text-generation",
"zh",
"en",
"license:openrail",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-07T20:34:46Z
|
---
license: openrail
pipeline_tag: text-generation
library_name: transformers
language:
- zh
- en
---
## Original model card
Buy me a coffee if you like this project ;)
<a href="https://www.buymeacoffee.com/s3nh"><img src="https://www.buymeacoffee.com/assets/img/guidelines/download-assets-sm-1.svg" alt=""></a>
#### Description
GGUF Format model files for [This project](https://huggingface.co/BEE-spoke-data/TinyLlama-3T-1.1bee).
### GGUF Specs
GGUF is a format based on the existing GGJT, but makes a few changes to the format to make it more extensible and easier to use. The following features are desired:
Single-file deployment: they can be easily distributed and loaded, and do not require any external files for additional information.
Extensible: new features can be added to GGML-based executors/new information can be added to GGUF models without breaking compatibility with existing models.
mmap compatibility: models can be loaded using mmap for fast loading and saving.
Easy to use: models can be easily loaded and saved using a small amount of code, with no need for external libraries, regardless of the language used.
Full information: all information needed to load a model is contained in the model file, and no additional information needs to be provided by the user.
The key difference between GGJT and GGUF is the use of a key-value structure for the hyperparameters (now referred to as metadata), rather than a list of untyped values.
This allows for new metadata to be added without breaking compatibility with existing models, and to annotate the model with additional information that may be useful for
inference or for identifying the model.
### Perplexity params
Model Measure Q2_K Q3_K_S Q3_K_M Q3_K_L Q4_0 Q4_1 Q4_K_S Q4_K_M Q5_0 Q5_1 Q5_K_S Q5_K_M Q6_K Q8_0 F16
7B perplexity 6.7764 6.4571 6.1503 6.0869 6.1565 6.0912 6.0215 5.9601 5.9862 5.9481 5.9419 5.9208 5.9110 5.9070 5.9066
13B perplexity 5.8545 5.6033 5.4498 5.4063 5.3860 5.3608 5.3404 5.3002 5.2856 5.2706 5.2785 5.2638 5.2568 5.2548 5.2543
### inference
TODO
# Original model card
|
LoneStriker/MistralTrix-v1-4.0bpw-h6-exl2
|
LoneStriker
| 2024-01-07T20:26:54Z
| 5
| 0
|
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"en",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-07T20:24:51Z
|
---
license: apache-2.0
language:
- en
pipeline_tag: text-generation
dtype: bfloat16
---
# Results:
T: 🟦
Model: CultriX/MistralTrix-v1 📑
Average: 73.39
ARC: 72.27
HellaSwag: 88.33
MMLU: 65.24
TruthfulQA: 70.73
Winogrande: 80.98
GSM8K: 62.77
# Edit/Disclaimer:
Currently the #1 ranked 7B LLM on the LLM Leaderboards, woah!
I did not expect that result at all and am in no way a professional when it comes to LLM's or computer science in general,
just a guy that likes to nerd about and tinker around.
For those wondering how I achieved this, the answer is that I simply attempted to apply the techniques outlined in this amazing article myself: https://towardsdatascience.com/fine-tune-a-mistral-7b-model-with-direct-preference-optimization-708042745aac
Therefore, all credit basically goes to the guy who wrote that.
He offers the exact Colab notebook I used to train this model for free, as well as a really nice GitHub page I hope he doesn't mind me sharing: https://github.com/mlabonne/llm-course/
So huge thank you to him for sharing his knowledge and learning me a thing or two in the process!
# GGUF
I attempted to quantisize the model myself, which again I pretty much have no clue about, but it seems to run fine for me when I test them:
https://huggingface.co/CultriX/MistralTrix-v1-GGUF
I'll say it one more time though:
"I am a complete beginner to all of this, so if these do end up sucking don't be surprised."
You have been warned :)
# Description:
(trained on a single Colab GPU in less than a few hours)
MistralTrix-v1 is an zyh3826/GML-Mistral-merged-v1 model that has been further fine-tuned with Direct Preference Optimization (DPO) using Intel's dataset for neural-chat-7b-v3-1.
It surpasses the original model on several benchmarks (see results).
It is directly inspired by the RLHF process described by Intel/neural-chat-7b-v3-1's authors to improve performance.
I used the same dataset and reformatted it to apply the ChatML template.
The code to train this model is available on Google Colab and GitHub.
Fine-tuning took about an hour on Google Colab A-1000 GPU with 40GB VRAM.
# TRAINING SPECIFICATIONS
> LoRA configuration
peft_config = LoraConfig(
r=16,
lora_alpha=16,
lora_dropout=0.05,
bias="none",
task_type="CAUSAL_LM",
target_modules=['k_proj', 'gate_proj', 'v_proj', 'up_proj', 'q_proj', 'o_proj', 'down_proj']
)
> Model to fine-tune
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype=torch.float16,
load_in_4bit=True
)
model.config.use_cache = False
> Reference model
ref_model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype=torch.float16,
load_in_4bit=True
)
> Training arguments
training_args = TrainingArguments(
per_device_train_batch_size=4,
gradient_accumulation_steps=4,
gradient_checkpointing=True,
learning_rate=5e-5,
lr_scheduler_type="cosine",
max_steps=200,
save_strategy="no",
logging_steps=1,
output_dir=new_model,
optim="paged_adamw_32bit",
warmup_steps=100,
bf16=True,
report_to="wandb",
)
> Create DPO trainer
dpo_trainer = DPOTrainer(
model,
ref_model,
args=training_args,
train_dataset=dataset,
tokenizer=tokenizer,
peft_config=peft_config,
beta=0.1,
max_prompt_length=1024,
max_length=1536,
)
|
LoneStriker/MistralTrix-v1-3.0bpw-h6-exl2
|
LoneStriker
| 2024-01-07T20:19:53Z
| 5
| 0
|
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"en",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-07T20:18:18Z
|
---
license: apache-2.0
language:
- en
pipeline_tag: text-generation
dtype: bfloat16
---
# Results:
T: 🟦
Model: CultriX/MistralTrix-v1 📑
Average: 73.39
ARC: 72.27
HellaSwag: 88.33
MMLU: 65.24
TruthfulQA: 70.73
Winogrande: 80.98
GSM8K: 62.77
# Edit/Disclaimer:
Currently the #1 ranked 7B LLM on the LLM Leaderboards, woah!
I did not expect that result at all and am in no way a professional when it comes to LLM's or computer science in general,
just a guy that likes to nerd about and tinker around.
For those wondering how I achieved this, the answer is that I simply attempted to apply the techniques outlined in this amazing article myself: https://towardsdatascience.com/fine-tune-a-mistral-7b-model-with-direct-preference-optimization-708042745aac
Therefore, all credit basically goes to the guy who wrote that.
He offers the exact Colab notebook I used to train this model for free, as well as a really nice GitHub page I hope he doesn't mind me sharing: https://github.com/mlabonne/llm-course/
So huge thank you to him for sharing his knowledge and learning me a thing or two in the process!
# GGUF
I attempted to quantisize the model myself, which again I pretty much have no clue about, but it seems to run fine for me when I test them:
https://huggingface.co/CultriX/MistralTrix-v1-GGUF
I'll say it one more time though:
"I am a complete beginner to all of this, so if these do end up sucking don't be surprised."
You have been warned :)
# Description:
(trained on a single Colab GPU in less than a few hours)
MistralTrix-v1 is an zyh3826/GML-Mistral-merged-v1 model that has been further fine-tuned with Direct Preference Optimization (DPO) using Intel's dataset for neural-chat-7b-v3-1.
It surpasses the original model on several benchmarks (see results).
It is directly inspired by the RLHF process described by Intel/neural-chat-7b-v3-1's authors to improve performance.
I used the same dataset and reformatted it to apply the ChatML template.
The code to train this model is available on Google Colab and GitHub.
Fine-tuning took about an hour on Google Colab A-1000 GPU with 40GB VRAM.
# TRAINING SPECIFICATIONS
> LoRA configuration
peft_config = LoraConfig(
r=16,
lora_alpha=16,
lora_dropout=0.05,
bias="none",
task_type="CAUSAL_LM",
target_modules=['k_proj', 'gate_proj', 'v_proj', 'up_proj', 'q_proj', 'o_proj', 'down_proj']
)
> Model to fine-tune
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype=torch.float16,
load_in_4bit=True
)
model.config.use_cache = False
> Reference model
ref_model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype=torch.float16,
load_in_4bit=True
)
> Training arguments
training_args = TrainingArguments(
per_device_train_batch_size=4,
gradient_accumulation_steps=4,
gradient_checkpointing=True,
learning_rate=5e-5,
lr_scheduler_type="cosine",
max_steps=200,
save_strategy="no",
logging_steps=1,
output_dir=new_model,
optim="paged_adamw_32bit",
warmup_steps=100,
bf16=True,
report_to="wandb",
)
> Create DPO trainer
dpo_trainer = DPOTrainer(
model,
ref_model,
args=training_args,
train_dataset=dataset,
tokenizer=tokenizer,
peft_config=peft_config,
beta=0.1,
max_prompt_length=1024,
max_length=1536,
)
|
Yntec/ResidentCNZCartoon3D
|
Yntec
| 2024-01-07T20:17:34Z
| 1,502
| 2
|
diffusers
|
[
"diffusers",
"safetensors",
"Style",
"Cartoon",
"Disney",
"Pixar",
"Dreamworks",
"residentchiefnz",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2024-01-07T19:25:13Z
|
---
license: creativeml-openrail-m
library_name: diffusers
pipeline_tag: text-to-image
tags:
- Style
- Cartoon
- Disney
- Pixar
- Dreamworks
- residentchiefnz
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
---
# RCNZ Cartoon 3D v2
Original page: https://civitai.com/models/66347?modelVersionId=82547
Samples and prompts (in no particular order):

(Click for larger)
highquality, masterpiece, 1girl, Chi-Chi, close up, arms up, pink helmet, black hair, black eyes, blush, bikini armor, enjoying, aqua cape, pink gloves, pink boots, cleavage. cave, rock, mountain. blue collar
ultra high res, a cute girl, detailed, cg, zelda princess, AS Younger, cute pose
cute sister playing chess, DETAILED CHIBI, Magazine ad, iconic, 1940, from the movie, Cartoon, sharp focus. acrylic art on canvas by Disney and Clay Mann and ROSSDRAWS.
Highly detailed, High Quality, Masterpiece, beautiful, cute girl as toon plumber, blue cap, red overalls, Peach
|
ryusangwon/7243_Llama-2-13b-hf
|
ryusangwon
| 2024-01-07T20:10:43Z
| 1
| 0
|
peft
|
[
"peft",
"safetensors",
"generated_from_trainer",
"dataset:cnn_dailymail",
"base_model:meta-llama/Llama-2-13b-hf",
"base_model:adapter:meta-llama/Llama-2-13b-hf",
"region:us"
] | null | 2024-01-07T20:10:35Z
|
---
base_model: meta-llama/Llama-2-13b-hf
tags:
- generated_from_trainer
datasets:
- cnn_dailymail
model-index:
- name: 7243_Llama-2-13b-hf
results: []
library_name: peft
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 7243_Llama-2-13b-hf
This model is a fine-tuned version of [meta-llama/Llama-2-13b-hf](https://huggingface.co/meta-llama/Llama-2-13b-hf) on the cnn_dailymail dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- PEFT 0.4.0
- Transformers 4.36.2
- Pytorch 2.1.0+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0
|
EscVM/gpt2-alpaca-single-gpu-train
|
EscVM
| 2024-01-07T20:09:02Z
| 91
| 0
|
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"gpt2",
"text-generation",
"en",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-07T18:51:27Z
|
---
license: apache-2.0
language:
- en
metrics:
- perplexity
library_name: transformers
widget:
- text: "Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n### Instruction:\nWhat is life?\n\n### Response:\n"
example_title: "Example: what is life?"
---
# GPT-2 Instruction Tuning with a Single GPU 😋
The model and all artifacts in this repository are obtained by full fine-tuning the base GPT-2 model on the [Stanford Alpaca dataset](https://huggingface.co/datasets/tatsu-lab/alpaca) with a single GPU with less than 8 GB of memory. The dataset was filtered using the `max_length` hyperparameter (maximum number of tokens per example), and gradient accumulation was applied to significantly reduce the memory footprint.
Results with the base model (124M parameters) are acceptable 😊, but further hyperparameters search and tricks could most probably lead to better results.
Surely, it is an “affordable” playground to play with this important step of the pipeline that transforms a model from an LLM to a usable and querable model.
## 1.0 Fine-Tuning Hyperparameters
| Parameter | Value |
|-----------------------------|------------------------|
| num_epochs | 35 |
| lr | 1e-5 |
| num_warmup_steps | 300 |
| weight_decay | 0.0 |
| batch_size | 16 |
| gradient_accumulation_steps | 8 |
| max_grad_norm | 2.0 |
| checkpoint_path | 'modelstore' |
| logs_path | 'logs' |
| max_length | 140 |
| eval_split | 0.1 |
| seed | 9 |
## 2.0 Examples
| Key | Value |
| -------------- | ------------ |
| temperature | 0.7 |
| do_sample | True |
| max_new_tokens | 150 |
| top_p | 0.92 |
| top_k | 0 |
### 2.1 First example
**Instruction:**
```
What is the capital of France?
```
**Response:**
```
The capital of France is Paris.
```
### 2.2 Second example
**Instruction:**
```
Give three tips to improve my writing.
```
**Response:**
```
1. Break up the sentences into smaller, more manageable chunks.
2. Use regular and non-consecutive paragraphs.
3. Include a clear goal in your writing.
```
### 2.3 Third example
**Instruction:**
```
What is life?
```
**Response:**
```
Life is like a rollercoaster ride, filled with ups and downs and twists and turns. It is a journey full of highs and lows, but there are also rewards and punishments to be found along the way. Life can be unpredictable and unpredictable, so it is important to have a plan in place to stay on track and make the most of your opportunities.
```
## 3.0 References
- **🚀 Dive into the Code:** [GitHub Repository](https://github.com/EscVM/EscVM_YT/blob/master/Notebooks/2%20-%20PT1.X%20DeepAI-Quickie/pt_1_instruction_tuning.ipynb)
- **📚 Explore the Dataset:** [Stanford Alpaca Dataset](https://huggingface.co/datasets/tatsu-lab/alpaca)
|
debisoft/stamen-wc-gan
|
debisoft
| 2024-01-07T20:04:14Z
| 0
| 0
| null |
[
"pytorch",
"huggan",
"gan",
"license:mit",
"region:us"
] | null | 2024-01-06T18:51:00Z
|
---
tags:
- huggan
- gan
# See a list of available tags here:
# https://github.com/huggingface/hub-docs/blob/main/js/src/lib/interfaces/Types.ts#L12
# task: unconditional-image-generation or conditional-image-generation or image-to-image
license: mit
---
# MyModelName
## Model description
Describe the model here (what it does, what it's used for, etc.)
## Intended uses & limitations
#### How to use
```python
# You can include sample code which will be formatted
```
#### Limitations and bias
Provide examples of latent issues and potential remediations.
## Training data
Describe the data you used to train the model.
If you initialized it with pre-trained weights, add a link to the pre-trained model card or repository with description of the pre-training data.
## Training procedure
Preprocessing, hardware used, hyperparameters...
## Eval results
## Generated Images
You can embed local or remote images using ``
### BibTeX entry and citation info
```bibtex
@inproceedings{...,
year={2020}
}
```
|
LoneStriker/Mixtral-8x7B-Instruct-v0.1-LimaRP-ZLoss-7.0bpw-h6-exl2
|
LoneStriker
| 2024-01-07T19:49:15Z
| 8
| 1
|
transformers
|
[
"transformers",
"safetensors",
"mixtral",
"text-generation",
"conversational",
"en",
"dataset:lemonilia/LimaRP",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] |
text-generation
| 2024-01-07T18:52:55Z
|
---
inference: false
language:
- en
library_name: transformers
pipeline_tag: text-generation
tags:
- mixtral
license: apache-2.0
datasets:
- lemonilia/LimaRP
---
# Mixtral-8x7B-Instruct-v0.1-LimaRP-ZLoss
Experimental model, using a limarp qlora trained at 10k ctx length (greater than size of the longest limarp sample when tokenized via mistral's tokenizer) on [mistralai/Mixtral-8x7B-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-v0.1) using [Charles Goddard](https://huggingface.co/chargoddard)'s ZLoss and Megablocks-based fork of transformers, and then fused to [mistralai/Mixtral-8x7B-Instruct-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1) at 0.5 weight.
Would try with temp ~1.5-2 and min-p of ~0.03-0.05 since mixtral does appear to be highly confident on its responses and can enter repetition loops after several thousand tokens of responses.
[Peft Adapter](https://huggingface.co/Doctor-Shotgun/limarp-zloss-mixtral-8x7b-qlora)
## Usage:
The intended prompt format is the Alpaca instruction format of LimaRP v3:
```
### Instruction:
Character's Persona: {bot character description}
User's Persona: {user character description}
Scenario: {what happens in the story}
Play the role of Character. Taking the above information into consideration, you must engage in a roleplaying chat with User below this line. Do not write dialogues and narration for User.
### Input:
User: {utterance}
### Response:
Character: {utterance}
### Input:
User: {utterance}
### Response:
Character: {utterance}
(etc.)
```
## Message length control
Due to the inclusion of LimaRP v3, it is possible to append a length modifier to the response instruction sequence, like this:
```
### Input
User: {utterance}
### Response: (length = medium)
Character: {utterance}
```
This has an immediately noticeable effect on bot responses. The available lengths are: `micro, tiny, short, medium, long, massive, huge, enormous, humongous, unlimited`. The recommended starting length is `medium`. Keep in mind that the AI may ramble or impersonate the user with very long messages.
## Bias, Risks, and Limitations
The model will show biases similar to those observed in niche roleplaying forums on the Internet, besides those exhibited by the base model. It is not intended for supplying factual information or advice in any form.
## Training Details
This model is a merge. Please refer to the link repositories of the merged models for details.
|
Buseak/canine_vowelizer_0706_v2_retrain_on_0701
|
Buseak
| 2024-01-07T19:46:45Z
| 75
| 0
|
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"canine",
"token-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2024-01-07T18:57:02Z
|
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: canine_vowelizer_0706_v2_retrain_on_0701
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# canine_vowelizer_0706_v2_retrain_on_0701
This model is a fine-tuned version of [Buseak/canine_vowelizer_0706_v2](https://huggingface.co/Buseak/canine_vowelizer_0706_v2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0015
- Precision: 1.0000
- Recall: 1.0000
- F1: 1.0000
- Accuracy: 0.9996
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 488 | 0.0264 | 1.0000 | 1.0000 | 1.0000 | 0.9921 |
| 0.0548 | 2.0 | 976 | 0.0165 | 1.0000 | 1.0000 | 1.0000 | 0.9950 |
| 0.0396 | 3.0 | 1464 | 0.0109 | 1.0000 | 1.0000 | 1.0000 | 0.9966 |
| 0.0308 | 4.0 | 1952 | 0.0070 | 1.0000 | 1.0000 | 1.0000 | 0.9978 |
| 0.0243 | 5.0 | 2440 | 0.0047 | 1.0000 | 1.0000 | 1.0000 | 0.9986 |
| 0.0199 | 6.0 | 2928 | 0.0034 | 1.0000 | 1.0000 | 1.0000 | 0.9991 |
| 0.0167 | 7.0 | 3416 | 0.0024 | 1.0000 | 1.0000 | 1.0000 | 0.9993 |
| 0.0136 | 8.0 | 3904 | 0.0020 | 1.0000 | 1.0000 | 1.0000 | 0.9995 |
| 0.0121 | 9.0 | 4392 | 0.0016 | 1.0000 | 1.0000 | 1.0000 | 0.9996 |
| 0.0109 | 10.0 | 4880 | 0.0015 | 1.0000 | 1.0000 | 1.0000 | 0.9996 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.13.3
|
digiplay/PolyMorphMix
|
digiplay
| 2024-01-07T19:33:36Z
| 40,601
| 5
|
diffusers
|
[
"diffusers",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2024-01-07T19:19:32Z
|
---
license: other
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
inference: true
---
Model info:
https://civitai.com/models/84887?modelVersionId=90238
|
daniel-gordon/Reinforce-Pixelcopter-PLE-v0
|
daniel-gordon
| 2024-01-07T19:30:47Z
| 0
| 0
| null |
[
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-01-07T19:30:42Z
|
---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-Pixelcopter-PLE-v0
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 19.90 +/- 18.44
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
AlbelTec/mistral-dpo-old
|
AlbelTec
| 2024-01-07T19:26:54Z
| 0
| 0
|
peft
|
[
"peft",
"safetensors",
"trl",
"dpo",
"generated_from_trainer",
"base_model:TheBloke/Mistral-7B-v0.1-GPTQ",
"base_model:adapter:TheBloke/Mistral-7B-v0.1-GPTQ",
"license:apache-2.0",
"region:us"
] | null | 2024-01-07T19:18:43Z
|
---
license: apache-2.0
library_name: peft
tags:
- trl
- dpo
- generated_from_trainer
base_model: TheBloke/Mistral-7B-v0.1-GPTQ
model-index:
- name: mistral-dpo
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mistral-dpo
This model is a fine-tuned version of [TheBloke/Mistral-7B-v0.1-GPTQ](https://huggingface.co/TheBloke/Mistral-7B-v0.1-GPTQ) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0000
- Rewards/chosen: -2.0502
- Rewards/rejected: -28.3632
- Rewards/accuracies: 1.0
- Rewards/margins: 26.3129
- Logps/rejected: -399.8283
- Logps/chosen: -35.7179
- Logits/rejected: -2.1171
- Logits/chosen: -1.8480
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2
- training_steps: 250
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen |
|:-------------:|:-----:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:|
| 0.6453 | 0.2 | 10 | 0.4086 | 0.1393 | -0.7001 | 1.0 | 0.8394 | -123.1976 | -13.8225 | -2.5461 | -2.5162 |
| 0.1759 | 0.4 | 20 | 0.0051 | 0.3963 | -6.4413 | 1.0 | 6.8376 | -180.6101 | -11.2527 | -2.5253 | -2.4045 |
| 0.0015 | 0.6 | 30 | 0.0000 | 0.2885 | -20.7441 | 1.0 | 21.0326 | -323.6376 | -12.3309 | -2.2440 | -1.8851 |
| 0.0 | 0.8 | 40 | 0.0000 | -0.6913 | -26.5964 | 1.0 | 25.9051 | -382.1607 | -22.1282 | -1.9054 | -1.5507 |
| 0.0 | 1.0 | 50 | 0.0000 | -1.6661 | -28.8376 | 1.0 | 27.1715 | -404.5731 | -31.8766 | -1.7581 | -1.4145 |
| 0.0 | 1.2 | 60 | 0.0000 | -2.1659 | -29.6823 | 1.0 | 27.5164 | -413.0200 | -36.8745 | -1.7071 | -1.3649 |
| 0.0 | 1.4 | 70 | 0.0000 | -2.0973 | -30.0476 | 1.0 | 27.9503 | -416.6729 | -36.1886 | -1.6955 | -1.3541 |
| 0.0 | 1.6 | 80 | 0.0000 | -2.0065 | -30.1726 | 1.0 | 28.1661 | -417.9230 | -35.2805 | -1.6941 | -1.3519 |
| 0.0 | 1.8 | 90 | 0.0000 | -1.9541 | -30.2266 | 1.0 | 28.2724 | -418.4622 | -34.7568 | -1.6935 | -1.3518 |
| 0.0023 | 2.0 | 100 | 0.0000 | -0.7061 | -30.2814 | 1.0 | 29.5753 | -419.0107 | -22.2763 | -1.7664 | -1.4215 |
| 0.0 | 2.2 | 110 | 0.0000 | -1.6234 | -29.4682 | 1.0 | 27.8448 | -410.8783 | -31.4494 | -2.0371 | -1.7164 |
| 0.0 | 2.4 | 120 | 0.0000 | -1.9528 | -28.6154 | 1.0 | 26.6626 | -402.3507 | -34.7431 | -2.0991 | -1.8126 |
| 0.0 | 2.6 | 130 | 0.0000 | -2.0210 | -28.3739 | 1.0 | 26.3529 | -399.9358 | -35.4253 | -2.1141 | -1.8394 |
| 0.0 | 2.8 | 140 | 0.0000 | -2.0443 | -28.2878 | 1.0 | 26.2435 | -399.0752 | -35.6588 | -2.1185 | -1.8487 |
| 0.0 | 3.0 | 150 | 0.0000 | -2.0504 | -28.2651 | 1.0 | 26.2147 | -398.8474 | -35.7192 | -2.1201 | -1.8510 |
| 0.0 | 3.2 | 160 | 0.0000 | -2.0500 | -28.2657 | 1.0 | 26.2157 | -398.8541 | -35.7157 | -2.1202 | -1.8519 |
| 0.0 | 3.4 | 170 | 0.0000 | -2.0530 | -28.2687 | 1.0 | 26.2157 | -398.8837 | -35.7460 | -2.1205 | -1.8521 |
| 0.0 | 3.6 | 180 | 0.0000 | -2.0529 | -28.2660 | 1.0 | 26.2131 | -398.8570 | -35.7444 | -2.1202 | -1.8515 |
| 0.0 | 3.8 | 190 | 0.0000 | -2.0531 | -28.2649 | 1.0 | 26.2119 | -398.8461 | -35.7464 | -2.1202 | -1.8519 |
| 0.0 | 4.0 | 200 | 0.0000 | -2.0579 | -28.3150 | 1.0 | 26.2571 | -399.3466 | -35.7943 | -2.1191 | -1.8507 |
| 0.0 | 4.2 | 210 | 0.0000 | -2.0509 | -28.3341 | 1.0 | 26.2832 | -399.5381 | -35.7246 | -2.1178 | -1.8487 |
| 0.0 | 4.4 | 220 | 0.0000 | -2.0516 | -28.3405 | 1.0 | 26.2889 | -399.6018 | -35.7316 | -2.1178 | -1.8490 |
| 0.0 | 4.6 | 230 | 0.0000 | -2.0516 | -28.3495 | 1.0 | 26.2979 | -399.6917 | -35.7317 | -2.1176 | -1.8489 |
| 0.0 | 4.8 | 240 | 0.0000 | -2.0508 | -28.3684 | 1.0 | 26.3176 | -399.8806 | -35.7236 | -2.1173 | -1.8488 |
| 0.0 | 5.0 | 250 | 0.0000 | -2.0502 | -28.3632 | 1.0 | 26.3129 | -399.8283 | -35.7179 | -2.1171 | -1.8480 |
### Framework versions
- PEFT 0.7.1
- Transformers 4.36.2
- Pytorch 2.0.1+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
|
Ilhamben/sarcasm_detection_french_camembert
|
Ilhamben
| 2024-01-07T19:23:12Z
| 101
| 0
|
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"camembert",
"text-classification",
"generated_from_trainer",
"base_model:almanach/camembert-base",
"base_model:finetune:almanach/camembert-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-01-07T00:21:03Z
|
---
license: mit
base_model: camembert-base
tags:
- generated_from_trainer
model-index:
- name: sarcasm_detection_french_camembert
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sarcasm_detection_french_camembert
This model is a fine-tuned version of [camembert-base](https://huggingface.co/camembert-base) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.2+cpu
- Datasets 2.16.1
- Tokenizers 0.15.0
|
ostapeno/newt_adaNeo1B_niv2_dialogue_act_recognition_lora_sim_sgd_full_ft_CG
|
ostapeno
| 2024-01-07T19:20:54Z
| 0
| 0
| null |
[
"region:us"
] | null | 2024-01-07T17:21:40Z
|
Number of experts present in the library: 4
| Expert Name | Base Model | Trained on | Adapter Type |
| --- | --- | --- | --- |
| niv2_dialogue_act_recognition_v1 | EleutherAI/gpt-neo-1.3B | ostapeno/adauni-v3-10k-flat/niv2_dialogue_act_recognition | lora |
| niv2_dialogue_act_recognition | EleutherAI/gpt-neo-1.3B | ostapeno/adauni-v3-10k-flat/niv2_dialogue_act_recognition | lora |
| niv2_dialogue_act_recognition_v2 | EleutherAI/gpt-neo-1.3B | ostapeno/adauni-v3-10k-flat/niv2_dialogue_act_recognition | lora |
| niv2_dialogue_act_recognition_v3 | EleutherAI/gpt-neo-1.3B | ostapeno/adauni-v3-10k-flat/niv2_dialogue_act_recognition | lora |
Last updated on: 2024-01-07 19:20:51+00:00
|
LoneStriker/Mixtral-8x7B-Instruct-v0.1-LimaRP-ZLoss-8.0bpw-h8-exl2
|
LoneStriker
| 2024-01-07T19:16:38Z
| 3
| 0
|
transformers
|
[
"transformers",
"safetensors",
"mixtral",
"text-generation",
"conversational",
"en",
"dataset:lemonilia/LimaRP",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] |
text-generation
| 2024-01-07T18:52:55Z
|
---
inference: false
language:
- en
library_name: transformers
pipeline_tag: text-generation
tags:
- mixtral
license: apache-2.0
datasets:
- lemonilia/LimaRP
---
# Mixtral-8x7B-Instruct-v0.1-LimaRP-ZLoss
Experimental model, using a limarp qlora trained at 10k ctx length (greater than size of the longest limarp sample when tokenized via mistral's tokenizer) on [mistralai/Mixtral-8x7B-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-v0.1) using [Charles Goddard](https://huggingface.co/chargoddard)'s ZLoss and Megablocks-based fork of transformers, and then fused to [mistralai/Mixtral-8x7B-Instruct-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1) at 0.5 weight.
Would try with temp ~1.5-2 and min-p of ~0.03-0.05 since mixtral does appear to be highly confident on its responses and can enter repetition loops after several thousand tokens of responses.
[Peft Adapter](https://huggingface.co/Doctor-Shotgun/limarp-zloss-mixtral-8x7b-qlora)
## Usage:
The intended prompt format is the Alpaca instruction format of LimaRP v3:
```
### Instruction:
Character's Persona: {bot character description}
User's Persona: {user character description}
Scenario: {what happens in the story}
Play the role of Character. Taking the above information into consideration, you must engage in a roleplaying chat with User below this line. Do not write dialogues and narration for User.
### Input:
User: {utterance}
### Response:
Character: {utterance}
### Input:
User: {utterance}
### Response:
Character: {utterance}
(etc.)
```
## Message length control
Due to the inclusion of LimaRP v3, it is possible to append a length modifier to the response instruction sequence, like this:
```
### Input
User: {utterance}
### Response: (length = medium)
Character: {utterance}
```
This has an immediately noticeable effect on bot responses. The available lengths are: `micro, tiny, short, medium, long, massive, huge, enormous, humongous, unlimited`. The recommended starting length is `medium`. Keep in mind that the AI may ramble or impersonate the user with very long messages.
## Bias, Risks, and Limitations
The model will show biases similar to those observed in niche roleplaying forums on the Internet, besides those exhibited by the base model. It is not intended for supplying factual information or advice in any form.
## Training Details
This model is a merge. Please refer to the link repositories of the merged models for details.
|
Felladrin/mlc-chat-llama-160m-q4f32_1
|
Felladrin
| 2024-01-07T19:13:40Z
| 0
| 1
| null |
[
"en",
"dataset:wikipedia",
"license:apache-2.0",
"region:us"
] | null | 2023-12-14T15:12:16Z
|
---
license: apache-2.0
language:
- en
datasets:
- wikipedia
---
# JackFram's llama-160m for Web-LLM
This is a compiled version of [JackFram/llama-160m](https://huggingface.co/JackFram/llama-160m) for [MLC Web-LLM](https://webllm.mlc.ai/), using `q4f32_1` quantization.
|
LeytonC/my_awesome_qa_model
|
LeytonC
| 2024-01-07T19:10:28Z
| 109
| 0
|
transformers
|
[
"transformers",
"safetensors",
"distilbert",
"question-answering",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2023-12-31T15:24:39Z
|
---
tags:
- generated_from_trainer
model-index:
- name: my_awesome_qa_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_qa_model
This model was trained from scratch on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0620
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 2 | 1.1613 |
| No log | 2.0 | 4 | 1.1001 |
| No log | 3.0 | 6 | 1.0620 |
### Framework versions
- Transformers 4.37.0.dev0
- Pytorch 2.1.2+cu118
- Datasets 2.16.1
- Tokenizers 0.15.0
|
ostapeno/newt_adaNeo1B_high_school_psychology_svdemb_sgd_full_ft_CG
|
ostapeno
| 2024-01-07T19:10:18Z
| 0
| 0
| null |
[
"region:us"
] | null | 2024-01-07T15:57:30Z
|
Number of experts present in the library: 2
| Expert Name | Base Model | Trained on | Adapter Type |
| --- | --- | --- | --- |
| high_school_psychology | EleutherAI/gpt-neo-1.3B | ostapeno/adauni-v3-10k-flat/high_school_psychology | lora |
| high_school_psychology_v1 | EleutherAI/gpt-neo-1.3B | ostapeno/adauni-v3-10k-flat/high_school_psychology | lora |
Last updated on: 2024-01-07 19:10:16+00:00
|
daniel-gordon/PolicyGradient-Pixelcopter-PLE-v0
|
daniel-gordon
| 2024-01-07T18:51:21Z
| 0
| 0
| null |
[
"region:us"
] | null | 2024-01-07T18:47:23Z
|
5000 steps
---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: PolicyGradient-Pixelcopter-PLE-v0
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 4.00 +/- 5.25
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
Nerdofdot/nickprock_mmarco-bert-base-italian-uncased_E_5_nli
|
Nerdofdot
| 2024-01-07T18:48:35Z
| 49
| 0
|
sentence-transformers
|
[
"sentence-transformers",
"safetensors",
"bert",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2023-12-19T13:10:15Z
|
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 7975 with parameters:
```
{'batch_size': 10, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters:
```
{'scale': 20.0, 'similarity_fct': 'cos_sim'}
```
Parameters of the fit()-Method:
```
{
"epochs": 5,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 3987,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
wayaway/test_m
|
wayaway
| 2024-01-07T18:38:59Z
| 7
| 0
|
transformers
|
[
"transformers",
"deberta-v2",
"text-classification",
"generated_from_trainer",
"en",
"de",
"dataset:deepset/prompt-injections",
"base_model:microsoft/deberta-v3-base",
"base_model:finetune:microsoft/deberta-v3-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-01-07T18:36:42Z
|
---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
base_model: microsoft/deberta-v3-base
model-index:
- name: deberta-v3-base-injection
results: []
datasets:
- deepset/prompt-injections
language:
- en
- de
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# deberta-v3-base-injection
This model is a fine-tuned version of [microsoft/deberta-v3-base](https://huggingface.co/microsoft/deberta-v3-base) on the [promp-injection](https://huggingface.co/datasets/JasperLS/prompt-injections) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0673
- Accuracy: 0.9914
## Model description
This model detects prompt injection attempts and classifies them as "INJECTION". Legitimate requests are classified as "LEGIT". The dataset assumes that legitimate requests are either all sorts of questions of key word searches.
## Intended uses & limitations
If you are using this model to secure your system and it is overly "trigger-happy" to classify requests as injections, consider collecting legitimate examples and retraining the model with the [promp-injection](https://huggingface.co/datasets/JasperLS/prompt-injections) dataset.
## Training and evaluation data
Based in the [promp-injection](https://huggingface.co/datasets/JasperLS/prompt-injections) dataset.
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 69 | 0.2353 | 0.9741 |
| No log | 2.0 | 138 | 0.0894 | 0.9741 |
| No log | 3.0 | 207 | 0.0673 | 0.9914 |
### Framework versions
- Transformers 4.29.1
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
li-ping/trained_model
|
li-ping
| 2024-01-07T18:32:47Z
| 1
| 0
|
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:intfloat/e5-mistral-7b-instruct",
"base_model:adapter:intfloat/e5-mistral-7b-instruct",
"region:us"
] | null | 2024-01-07T18:25:11Z
|
---
library_name: peft
base_model: intfloat/e5-mistral-7b-instruct
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.2.dev0
|
rikhoffbauer2/naomi-makkelie-seaweed-painting-style
|
rikhoffbauer2
| 2024-01-07T18:28:07Z
| 5
| 2
|
diffusers
|
[
"diffusers",
"tensorboard",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"text-to-image",
"lora",
"template:sd-lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] |
text-to-image
| 2024-01-07T18:28:03Z
|
---
tags:
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- text-to-image
- diffusers
- lora
- template:sd-lora
widget:
- text: 'in the style of a seaweed painting'
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: in the style of a seaweed painting
license: openrail++
---
# SDXL LoRA DreamBooth - naomi-makkelie-seaweed-painting-style
<Gallery />
## Model description
### These are naomi-makkelie-seaweed-painting-style LoRA adaption weights for stabilityai/stable-diffusion-xl-base-1.0.
## Download model
### Use it with UIs such as AUTOMATIC1111, Comfy UI, SD.Next, Invoke
- **LoRA**: download **[`naomi-makkelie-seaweed-painting-style.safetensors` here 💾](/naomi-makkelie-seaweed-painting-style/blob/main/naomi-makkelie-seaweed-painting-style.safetensors)**.
- Place it on your `models/Lora` folder.
- On AUTOMATIC1111, load the LoRA by adding `<lora:naomi-makkelie-seaweed-painting-style:1>` to your prompt. On ComfyUI just [load it as a regular LoRA](https://comfyanonymous.github.io/ComfyUI_examples/lora/).
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('stabilityai/stable-diffusion-xl-base-1.0', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('naomi-makkelie-seaweed-painting-style', weight_name='pytorch_lora_weights.safetensors')
image = pipeline('in the style of a seaweed painting').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Trigger words
You should use in the style of a seaweed painting to trigger the image generation.
## Details
All [Files & versions](/naomi-makkelie-seaweed-painting-style/tree/main).
The weights were trained using [🧨 diffusers Advanced Dreambooth Training Script](https://github.com/huggingface/diffusers/blob/main/examples/advanced_diffusion_training/train_dreambooth_lora_sdxl_advanced.py).
LoRA for the text encoder was enabled. True.
Pivotal tuning was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
|
LoneStriker/Mixtral_34Bx2_MoE_60B-4.0bpw-h6-exl2
|
LoneStriker
| 2024-01-07T18:21:11Z
| 7
| 2
|
transformers
|
[
"transformers",
"safetensors",
"mixtral",
"text-generation",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-07T18:08:33Z
|
---
license: cc-by-nc-4.0
---
# Mixtral MOE 2x34B
This is my first English & Chinese MoE Model based on
* [jondurbin/bagel-dpo-34b-v0.2]
* [SUSTech/SUS-Chat-34B]
gpu code example
```
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
import math
## v2 models
model_path = "cloudyu/Mixtral_34Bx2_MoE_60B"
tokenizer = AutoTokenizer.from_pretrained(model_path, use_default_system_prompt=False)
model = AutoModelForCausalLM.from_pretrained(
model_path, torch_dtype=torch.float32, device_map='auto',local_files_only=False, load_in_4bit=True
)
print(model)
prompt = input("please input prompt:")
while len(prompt) > 0:
input_ids = tokenizer(prompt, return_tensors="pt").input_ids.to("cuda")
generation_output = model.generate(
input_ids=input_ids, max_new_tokens=500,repetition_penalty=1.2
)
print(tokenizer.decode(generation_output[0]))
prompt = input("please input prompt:")
```
CPU example
```
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
import math
## v2 models
model_path = "cloudyu/Mixtral_34Bx2_MoE_60B"
tokenizer = AutoTokenizer.from_pretrained(model_path, use_default_system_prompt=False)
model = AutoModelForCausalLM.from_pretrained(
model_path, torch_dtype=torch.bfloat16, device_map='cpu'
)
print(model)
prompt = input("please input prompt:")
while len(prompt) > 0:
input_ids = tokenizer(prompt, return_tensors="pt").input_ids
generation_output = model.generate(
input_ids=input_ids, max_new_tokens=500,repetition_penalty=1.2
)
print(tokenizer.decode(generation_output[0]))
prompt = input("please input prompt:")
```
Output Examples:
```
please input prompt:write a story about yosemite
write a story about yosemite national park
Yosemite National Park is located in the Sierra Nevada Mountains of California, USA. It was established on October 1st, 1890 and covers an area of approximately 747,956 acres (302,687 hectares). The park boasts some of America's most iconic natural wonders such as Yosemite Valley, Half Dome, El Capitan, Bridalveil Fall, Tuolumne Meadows, Glacier Point, Mariposa Grove, and many more breathtaking landscapes that attract millions of visitors each year.
The history of Yosemite dates back to over seven million years ago when glaciers carved out its stunning granite cliffs and valleys. Native American tribes like Miwok and Paiute have lived here for thousands of years before European explorers arrived during the mid-nineteenth century. In fact, it was John Muir - one of America’s greatest conservationists who helped establish this region as a protected wilderness area by advocating for its preservation through his writings and activism.
Today, Yosemite offers various recreational activities including hiking, rock climbing, camping, fishing, horseback riding, wildlife watching, photography, and winter sports like skiing and snowshoeing. Visitors can also enjoy ranger programs, guided tours, educational exhibits at visitor centers, or simply take time to appreciate nature while strolling along scenic trails surrounded by towering sequoia trees, cascading waterfalls, and crystal clear lakes.
In addition to preserving these awe-inspiring vistas, Yosemite plays a crucial role in protecting numerous plant and animal species found within its boundaries. Some notable inhabitants include black bears, mountain lions, mule deer, coyotes, bobcats, golden eagles, peregrine falcons, bighorn sheep, and several types of fish native to the Merced River which runs through the heart of the valley.
As we continue our journey into the future, let us remember the importance of safeguarding places like Yosemite so they may remain pristine sanctuaries where both humans and animals alike can thrive together amidst unspoiled beauty.</s>
please input prompt:李开复是谁?
李开复是谁?
他是一个在人工智能领域有着卓越贡献的科学家,也是一位成功的企业家。他的名字与谷歌、微软等科技巨头紧密相连,他是创新工场的创始人之一,更是无数创业者心中的偶像和导师。然而,除了这些耀眼的光环之外,李开复还有着怎样的故事呢?让我们一起来揭秘这位传奇人物的人生历程吧!</s>
```
|
anismahmahi/checkpoint-177-epoch-3
|
anismahmahi
| 2024-01-07T18:16:53Z
| 89
| 0
|
transformers
|
[
"transformers",
"safetensors",
"roberta",
"text-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-01-07T18:15:07Z
|
---
tags:
- generated_from_trainer
model-index:
- name: checkpoint-177-epoch-3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# checkpoint-177-epoch-3
This model was trained from scratch on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|
sefaburak/dinov2-small-onnx
|
sefaburak
| 2024-01-07T18:16:17Z
| 0
| 0
| null |
[
"onnx",
"dino",
"vision",
"dinov2",
"feature-extraction",
"arxiv:2304.07193",
"license:apache-2.0",
"region:us"
] |
feature-extraction
| 2024-01-07T15:12:35Z
|
---
license: apache-2.0
tags:
- dino
- vision
- onnx
- dinov2
pipeline_tag: feature-extraction
---
# ONNX Vision Transformer (small-sized model) trained using DINOv2
Vision Transformer (ViT) model trained using the DINOv2 method. It was introduced in the paper [DINOv2: Learning Robust Visual Features without Supervision](https://arxiv.org/abs/2304.07193) by Oquab et al. and first released in [this repository](https://github.com/facebookresearch/dinov2).
The model has been exported to ONNX format.
## Model description
The Vision Transformer (ViT) is a transformer encoder model (BERT-like) pretrained on a large collection of images in a self-supervised fashion.
Images are presented to the model as a sequence of fixed-size patches, which are linearly embedded. One also adds a [CLS] token to the beginning of a sequence to use it for classification tasks. One also adds absolute position embeddings before feeding the sequence to the layers of the Transformer encoder.
Note that this model does not include any fine-tuned heads.
By pre-training the model, it learns an inner representation of images that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled images for instance, you can train a standard classifier by placing a linear layer on top of the pre-trained encoder. One typically places a linear layer on top of the [CLS] token, as the last hidden state of this token can be seen as a representation of an entire image.
**Model Properties**:
* Input name: input
* Output name: output
* Input size: 224x224
* Output size: 384
## Intended uses & limitations
You can use the ONNX model for feature extraction using ONNXRuntime.
### How to use
Models can be executed using ONNXRuntime. Example usage can be found in the [dinov2_onnx repository](https://github.com/sefaburakokcu/dinov2_onnx).
```bash
git clone [email protected]:sefaburakokcu/dinov2_onnx.git
cd dinov2_onnx
python onnx_inference.py
```
Optional arguments:
- `--onnx_model`: Path to the ONNX model file (default: `./dinov2_vits14.onnx`).
- `--image_folder`: Path to the folder containing input images (default: `./inputs/`).
### BibTeX entry and citation info
```bibtex
misc{oquab2023dinov2,
title={DINOv2: Learning Robust Visual Features without Supervision},
author={Maxime Oquab and Timothée Darcet and Théo Moutakanni and Huy Vo and Marc Szafraniec and Vasil Khalidov and Pierre Fernandez and Daniel Haziza and Francisco Massa and Alaaeldin El-Nouby and Mahmoud Assran and Nicolas Ballas and Wojciech Galuba and Russell Howes and Po-Yao Huang and Shang-Wen Li and Ishan Misra and Michael Rabbat and Vasu Sharma and Gabriel Synnaeve and Hu Xu and Hervé Jegou and Julien Mairal and Patrick Labatut and Armand Joulin and Piotr Bojanowski},
year={2023},
eprint={2304.07193},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
```
|
mikolaj-mialkowski/ppo-MountainCar-v0
|
mikolaj-mialkowski
| 2024-01-07T17:47:22Z
| 1
| 0
|
stable-baselines3
|
[
"stable-baselines3",
"MountainCar-v0",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-01-07T16:48:34Z
|
---
library_name: stable-baselines3
tags:
- MountainCar-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: MountainCar-v0
type: MountainCar-v0
metrics:
- type: mean_reward
value: -117.00 +/- 3.00
name: mean_reward
verified: false
---
# **PPO** Agent playing **MountainCar-v0**
This is a trained model of a **PPO** agent playing **MountainCar-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
darkavenger00/roberta-finetuned-subjqa-movies_2
|
darkavenger00
| 2024-01-07T17:45:27Z
| 97
| 0
|
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"roberta",
"question-answering",
"generated_from_trainer",
"base_model:deepset/roberta-base-squad2",
"base_model:finetune:deepset/roberta-base-squad2",
"license:cc-by-4.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2024-01-07T17:30:16Z
|
---
license: cc-by-4.0
base_model: deepset/roberta-base-squad2
tags:
- generated_from_trainer
model-index:
- name: roberta-finetuned-subjqa-movies_2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-finetuned-subjqa-movies_2
This model is a fine-tuned version of [deepset/roberta-base-squad2](https://huggingface.co/deepset/roberta-base-squad2) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|
alexandrualexandru/text-to-sparql-disertatie-t5-large-2024-01-07_16-12
|
alexandrualexandru
| 2024-01-07T17:25:08Z
| 8
| 0
|
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:google-t5/t5-large",
"base_model:finetune:google-t5/t5-large",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2024-01-07T16:16:29Z
|
---
license: apache-2.0
base_model: t5-large
tags:
- generated_from_trainer
model-index:
- name: text-to-sparql-disertatie-t5-large-2024-01-07_16-12
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# text-to-sparql-disertatie-t5-large-2024-01-07_16-12
This model is a fine-tuned version of [t5-large](https://huggingface.co/t5-large) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|
dapa93/dqn-SpaceInvadersNoFrameskip-v4
|
dapa93
| 2024-01-07T17:19:33Z
| 0
| 0
|
stable-baselines3
|
[
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-01-07T17:18:55Z
|
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 621.50 +/- 158.32
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga dapa93 -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga dapa93 -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga dapa93
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
ntc-ai/SDXL-LoRA-slider.bride
|
ntc-ai
| 2024-01-07T17:10:41Z
| 46
| 1
|
diffusers
|
[
"diffusers",
"text-to-image",
"stable-diffusion-xl",
"lora",
"template:sd-lora",
"template:sdxl-lora",
"sdxl-sliders",
"ntcai.xyz-sliders",
"concept",
"en",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:mit",
"region:us"
] |
text-to-image
| 2024-01-07T17:10:38Z
|
---
language:
- en
thumbnail: "images/evaluate/bride.../bride_17_3.0.png"
widget:
- text: bride
output:
url: images/bride_17_3.0.png
- text: bride
output:
url: images/bride_19_3.0.png
- text: bride
output:
url: images/bride_20_3.0.png
- text: bride
output:
url: images/bride_21_3.0.png
- text: bride
output:
url: images/bride_22_3.0.png
tags:
- text-to-image
- stable-diffusion-xl
- lora
- template:sd-lora
- template:sdxl-lora
- sdxl-sliders
- ntcai.xyz-sliders
- concept
- diffusers
license: "mit"
inference: false
instance_prompt: "bride"
base_model: "stabilityai/stable-diffusion-xl-base-1.0"
---
# ntcai.xyz slider - bride (SDXL LoRA)
| Strength: -3 | Strength: 0 | Strength: 3 |
| --- | --- | --- |
| <img src="images/bride_17_-3.0.png" width=256 height=256 /> | <img src="images/bride_17_0.0.png" width=256 height=256 /> | <img src="images/bride_17_3.0.png" width=256 height=256 /> |
| <img src="images/bride_19_-3.0.png" width=256 height=256 /> | <img src="images/bride_19_0.0.png" width=256 height=256 /> | <img src="images/bride_19_3.0.png" width=256 height=256 /> |
| <img src="images/bride_20_-3.0.png" width=256 height=256 /> | <img src="images/bride_20_0.0.png" width=256 height=256 /> | <img src="images/bride_20_3.0.png" width=256 height=256 /> |
## Download
Weights for this model are available in Safetensors format.
## Trigger words
You can apply this LoRA with trigger words for additional effect:
```
bride
```
## Use in diffusers
```python
from diffusers import StableDiffusionXLPipeline
from diffusers import EulerAncestralDiscreteScheduler
import torch
pipe = StableDiffusionXLPipeline.from_single_file("https://huggingface.co/martyn/sdxl-turbo-mario-merge-top-rated/blob/main/topRatedTurboxlLCM_v10.safetensors")
pipe.to("cuda")
pipe.scheduler = EulerAncestralDiscreteScheduler.from_config(pipe.scheduler.config)
# Load the LoRA
pipe.load_lora_weights('ntc-ai/SDXL-LoRA-slider.bride', weight_name='bride.safetensors', adapter_name="bride")
# Activate the LoRA
pipe.set_adapters(["bride"], adapter_weights=[2.0])
prompt = "medieval rich kingpin sitting in a tavern, bride"
negative_prompt = "nsfw"
width = 512
height = 512
num_inference_steps = 10
guidance_scale = 2
image = pipe(prompt, negative_prompt=negative_prompt, width=width, height=height, guidance_scale=guidance_scale, num_inference_steps=num_inference_steps).images[0]
image.save('result.png')
```
## Support the Patreon
If you like this model please consider [joining our Patreon](https://www.patreon.com/NTCAI).
By joining our Patreon, you'll gain access to an ever-growing library of over 920+ unique and diverse LoRAs, covering a wide range of styles and genres. You'll also receive early access to new models and updates, exclusive behind-the-scenes content, and the powerful LoRA slider creator, allowing you to craft your own custom LoRAs and experiment with endless possibilities.
Your support on Patreon will allow us to continue developing and refining new models.
## Other resources
- [CivitAI](https://civitai.com/user/ntc) - Follow ntc on Civit for even more LoRAs
- [ntcai.xyz](https://ntcai.xyz) - See ntcai.xyz to find more articles and LoRAs
|
Doctor-Shotgun/Mixtral-8x7B-Instruct-v0.1-LimaRP-ZLoss
|
Doctor-Shotgun
| 2024-01-07T17:08:32Z
| 27
| 18
|
transformers
|
[
"transformers",
"safetensors",
"mixtral",
"text-generation",
"conversational",
"en",
"dataset:lemonilia/LimaRP",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] |
text-generation
| 2024-01-04T04:54:43Z
|
---
inference: false
language:
- en
library_name: transformers
pipeline_tag: text-generation
tags:
- mixtral
license: apache-2.0
datasets:
- lemonilia/LimaRP
---
# Mixtral-8x7B-Instruct-v0.1-LimaRP-ZLoss
Experimental model, using a limarp qlora trained at 10k ctx length (greater than size of the longest limarp sample when tokenized via mistral's tokenizer) on [mistralai/Mixtral-8x7B-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-v0.1) using [Charles Goddard](https://huggingface.co/chargoddard)'s ZLoss and Megablocks-based fork of transformers, and then fused to [mistralai/Mixtral-8x7B-Instruct-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1) at 0.5 weight.
My current generation settings are:
```
Temperature: 1.25
Min-p: 0.05
Repetition penalty: 1.05
Repetition penalty: range 1024
```
And this seems to avoid the Mixtral looping pitfalls for me so far. Play around with it and see what works well for you.
[Peft Adapter](https://huggingface.co/Doctor-Shotgun/limarp-zloss-mixtral-8x7b-qlora)
Quants courtesy of TheBloke:
- [GPTQ](https://huggingface.co/TheBloke/Mixtral-8x7B-Instruct-v0.1-LimaRP-ZLoss-GPTQ)
- [GGUF](https://huggingface.co/TheBloke/Mixtral-8x7B-Instruct-v0.1-LimaRP-ZLoss-GGUF)
- [AWQ](https://huggingface.co/TheBloke/Mixtral-8x7B-Instruct-v0.1-LimaRP-ZLoss-AWQ)
Exl2 Quants courtesy of LoneStriker:
- [2.4bpw](https://huggingface.co/LoneStriker/Mixtral-8x7B-Instruct-v0.1-LimaRP-ZLoss-2.4bpw-h6-exl2)
- [3.0bpw](https://huggingface.co/LoneStriker/Mixtral-8x7B-Instruct-v0.1-LimaRP-ZLoss-3.0bpw-h6-exl2)
- [3.5bpw](https://huggingface.co/LoneStriker/Mixtral-8x7B-Instruct-v0.1-LimaRP-ZLoss-3.5bpw-h6-exl2)
- [3.75bpw](https://huggingface.co/LoneStriker/Mixtral-8x7B-Instruct-v0.1-LimaRP-ZLoss-3.75bpw-h6-exl2)
- [4.0bpw](https://huggingface.co/LoneStriker/Mixtral-8x7B-Instruct-v0.1-LimaRP-ZLoss-4.0bpw-h6-exl2)
- [5.0bpw](https://huggingface.co/LoneStriker/Mixtral-8x7B-Instruct-v0.1-LimaRP-ZLoss-5.0bpw-h6-exl2)
- [6.0bpw](https://huggingface.co/LoneStriker/Mixtral-8x7B-Instruct-v0.1-LimaRP-ZLoss-6.0bpw-h6-exl2)
## Usage:
The intended prompt format is the Alpaca instruction format of LimaRP v3:
```
### Instruction:
Character's Persona: {bot character description}
User's Persona: {user character description}
Scenario: {what happens in the story}
Play the role of Character. Taking the above information into consideration, you must engage in a roleplaying chat with User below this line. Do not write dialogues and narration for User.
### Input:
User: {utterance}
### Response:
Character: {utterance}
### Input:
User: {utterance}
### Response:
Character: {utterance}
(etc.)
```
My current templates have been uploaded to a [folder](https://huggingface.co/Doctor-Shotgun/Mixtral-8x7B-Instruct-v0.1-LimaRP-ZLoss/tree/main/ST%20Templates).
## Message length control
Due to the inclusion of LimaRP v3, it is possible to append a length modifier to the response instruction sequence, like this:
```
### Input
User: {utterance}
### Response: (length = medium)
Character: {utterance}
```
This has an immediately noticeable effect on bot responses. The available lengths are: `micro, tiny, short, medium, long, massive, huge, enormous, humongous, unlimited`. The recommended starting length is `medium`. Keep in mind that the AI may ramble or impersonate the user with very long messages.
## Bias, Risks, and Limitations
The model will show biases similar to those observed in niche roleplaying forums on the Internet, besides those exhibited by the base model. It is not intended for supplying factual information or advice in any form.
## Training Details
This model is a merge. Please refer to the link repositories of the merged models for details.
|
ribesstefano/RuleBert-v0.1-k3
|
ribesstefano
| 2024-01-07T16:55:45Z
| 92
| 0
|
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"xlm-roberta",
"text-classification",
"Italian",
"legal ruling",
"generated_from_trainer",
"base_model:classla/xlm-roberta-base-multilingual-text-genre-classifier",
"base_model:finetune:classla/xlm-roberta-base-multilingual-text-genre-classifier",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-01-07T16:47:41Z
|
---
license: mit
base_model: classla/xlm-roberta-base-multilingual-text-genre-classifier
tags:
- Italian
- legal ruling
- generated_from_trainer
metrics:
- f1
- accuracy
model-index:
- name: ribesstefano/RuleBert-v0.1-k3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ribesstefano/RuleBert-v0.1-k3
This model is a fine-tuned version of [classla/xlm-roberta-base-multilingual-text-genre-classifier](https://huggingface.co/classla/xlm-roberta-base-multilingual-text-genre-classifier) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3285
- F1: 0.4638
- Roc Auc: 0.6576
- Accuracy: 0.0714
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 4000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 | Roc Auc | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|:--------:|
| 0.3423 | 0.13 | 250 | 0.3539 | 0.4497 | 0.6562 | 0.0670 |
| 0.3231 | 0.27 | 500 | 0.3425 | 0.4596 | 0.6594 | 0.0670 |
| 0.3248 | 0.4 | 750 | 0.3364 | 0.4495 | 0.6541 | 0.0714 |
| 0.3283 | 0.54 | 1000 | 0.3351 | 0.4529 | 0.6555 | 0.0714 |
| 0.3237 | 0.67 | 1250 | 0.3315 | 0.4600 | 0.6581 | 0.0625 |
| 0.325 | 0.81 | 1500 | 0.3313 | 0.4681 | 0.6624 | 0.0312 |
| 0.3316 | 0.94 | 1750 | 0.3290 | 0.4595 | 0.6564 | 0.0714 |
| 0.3239 | 1.08 | 2000 | 0.3310 | 0.4592 | 0.6572 | 0.0625 |
| 0.3085 | 1.21 | 2250 | 0.3280 | 0.4614 | 0.6567 | 0.0670 |
| 0.3161 | 1.35 | 2500 | 0.3303 | 0.4623 | 0.6574 | 0.0670 |
| 0.314 | 1.48 | 2750 | 0.3289 | 0.4613 | 0.6566 | 0.0714 |
| 0.3187 | 1.62 | 3000 | 0.3293 | 0.4594 | 0.6554 | 0.0714 |
| 0.3145 | 1.75 | 3250 | 0.3295 | 0.4629 | 0.6569 | 0.0714 |
| 0.3128 | 1.89 | 3500 | 0.3285 | 0.4629 | 0.6569 | 0.0714 |
| 0.3135 | 2.02 | 3750 | 0.3285 | 0.4615 | 0.6566 | 0.0714 |
| 0.3171 | 2.16 | 4000 | 0.3285 | 0.4638 | 0.6576 | 0.0714 |
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|
r2429/Lua_Tiny
|
r2429
| 2024-01-07T16:51:06Z
| 0
| 0
|
peft
|
[
"peft",
"safetensors",
"llama",
"arxiv:1910.09700",
"base_model:TinyLlama/TinyLlama-1.1B-Chat-v1.0",
"base_model:adapter:TinyLlama/TinyLlama-1.1B-Chat-v1.0",
"region:us"
] | null | 2024-01-07T16:41:12Z
|
---
library_name: peft
base_model: TinyLlama/TinyLlama-1.1B-Chat-v1.0
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.1
|
adsazad/sikhchat-transformer
|
adsazad
| 2024-01-07T16:48:18Z
| 173
| 0
|
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-cased",
"base_model:finetune:google-bert/bert-base-cased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-01-07T16:45:38Z
|
---
license: apache-2.0
base_model: bert-base-cased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: sikhchat-transformer
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sikhchat-transformer
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4581
- Accuracy: 0.5
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 1 | 1.6371 | 0.0 |
| No log | 2.0 | 2 | 1.4761 | 0.5 |
| No log | 3.0 | 3 | 1.4581 | 0.5 |
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.0
- Datasets 2.16.1
- Tokenizers 0.15.0
|
ribesstefano/RuleBert-v0.1-k2
|
ribesstefano
| 2024-01-07T16:46:05Z
| 91
| 0
|
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"xlm-roberta",
"text-classification",
"Italian",
"legal ruling",
"generated_from_trainer",
"base_model:classla/xlm-roberta-base-multilingual-text-genre-classifier",
"base_model:finetune:classla/xlm-roberta-base-multilingual-text-genre-classifier",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-01-07T16:39:07Z
|
---
license: mit
base_model: classla/xlm-roberta-base-multilingual-text-genre-classifier
tags:
- Italian
- legal ruling
- generated_from_trainer
metrics:
- f1
- accuracy
model-index:
- name: ribesstefano/RuleBert-v0.1-k2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ribesstefano/RuleBert-v0.1-k2
This model is a fine-tuned version of [classla/xlm-roberta-base-multilingual-text-genre-classifier](https://huggingface.co/classla/xlm-roberta-base-multilingual-text-genre-classifier) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3049
- F1: 0.5103
- Roc Auc: 0.6747
- Accuracy: 0.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 4000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 | Roc Auc | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|:--------:|
| 0.3592 | 0.14 | 250 | 0.3131 | 0.5179 | 0.6796 | 0.0 |
| 0.3369 | 0.27 | 500 | 0.3063 | 0.5109 | 0.6758 | 0.0 |
| 0.3352 | 0.41 | 750 | 0.3087 | 0.5110 | 0.6750 | 0.0 |
| 0.3283 | 0.54 | 1000 | 0.3042 | 0.5105 | 0.6749 | 0.0 |
| 0.3246 | 0.68 | 1250 | 0.3068 | 0.5101 | 0.6747 | 0.0 |
| 0.3264 | 0.82 | 1500 | 0.3028 | 0.5152 | 0.6771 | 0.0 |
| 0.3365 | 0.95 | 1750 | 0.3051 | 0.5103 | 0.6747 | 0.0 |
| 0.3269 | 1.09 | 2000 | 0.3042 | 0.5103 | 0.6747 | 0.0 |
| 0.3173 | 1.22 | 2250 | 0.3059 | 0.5103 | 0.6747 | 0.0 |
| 0.3127 | 1.36 | 2500 | 0.3053 | 0.5110 | 0.6750 | 0.0 |
| 0.3211 | 1.49 | 2750 | 0.3067 | 0.5103 | 0.6747 | 0.0 |
| 0.3155 | 1.63 | 3000 | 0.3067 | 0.5103 | 0.6747 | 0.0 |
| 0.319 | 1.77 | 3250 | 0.3051 | 0.5103 | 0.6747 | 0.0 |
| 0.3286 | 1.9 | 3500 | 0.3042 | 0.5103 | 0.6747 | 0.0 |
| 0.3243 | 2.04 | 3750 | 0.3051 | 0.5103 | 0.6747 | 0.0 |
| 0.3111 | 2.17 | 4000 | 0.3049 | 0.5103 | 0.6747 | 0.0 |
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|
BallisticAI/Ballistic-CodeLlama-34B-v1
|
BallisticAI
| 2024-01-07T16:41:09Z
| 19
| 1
|
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"code llama",
"base_model:BallisticAI/Ballistic-CodeLlama-34B-v1",
"base_model:finetune:BallisticAI/Ballistic-CodeLlama-34B-v1",
"license:llama2",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] |
text-generation
| 2023-12-21T00:22:16Z
|
---
license: llama2
tags:
- code llama
base_model: BallisticAI/Ballistic-CodeLlama-34B-v1
inference: false
model_creator: BallisticAI
model_type: llama
prompt_template: '### System Prompt
{system_message}
### User Message
{prompt}
### Assistant
'
quantized_by: BallisticAI
model-index:
- name: Ballistic-CodeLlama-34B-v1
results:
- task:
type: text-generation
dataset:
name: HumanEval
type: openai_humaneval
metrics:
- type: n/a
value: n/a
name: n/a
verified: false
---
# CodeLlama 34B v1
- Model creator: [BallisticAI](https://huggingface.co/BallisticAI)
- Based on: [CodeLlama 34B hf](https://huggingface.co/codellama/CodeLlama-34b-hf)
- Merged with: [CodeLlama 34B v2](https://huggingface.co/Phind/Phind-CodeLlama-34B-v2) && [speechless-codellama-34b-v2](https://huggingface.co/uukuguy/speechless-codellama-34b-v2.0)
- Additional training with: [jondurbin/airoboros-2.2](https://huggingface.co/datasets/jondurbin/airoboros-2.2)
<!-- description start -->
## Description
This repo contains model for [Ballistic-CodeLlama-34B-v1](https://huggingface.co/BallisticAI/Ballistic-CodeLlama-34B-v1).
<!-- description end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model for GPU inference.](https://huggingface.co/BallisticAI/Ballistic-CodeLlama-34B-v1-AWQ)
* [GGUF model for CPU inference.](https://huggingface.co/BallisticAI/Ballistic-CodeLlama-34B-v1-GGUF)
<!-- repositories-available end -->
<!-- prompt-template start -->
## How to Prompt the Model
This model accepts the Alpaca/Vicuna instruction format.
For example:
```
### System Prompt
You are an intelligent programming assistant.
### User Message
Implement a linked list in C++
### Assistant
...
```
<!-- prompt-template end -->
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
This model has undergone very limited testing. Additional safety testing should be performed before any real-world deployments.
## Thanks
Thanks to:
- The Original Llama team
- [Phind](https://huggingface.co/phind)
- [uukuguy](https://huggingface.co/uukuguy)
- [jondurbin](https://huggingface.co/jondurbin)
- And everyone else who's involved in the Open Source AI/ML Community.
|
ribesstefano/RuleBert-v0.1-k1
|
ribesstefano
| 2024-01-07T16:37:33Z
| 92
| 0
|
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"xlm-roberta",
"text-classification",
"Italian",
"legal ruling",
"generated_from_trainer",
"base_model:classla/xlm-roberta-base-multilingual-text-genre-classifier",
"base_model:finetune:classla/xlm-roberta-base-multilingual-text-genre-classifier",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-01-07T16:29:33Z
|
---
license: mit
base_model: classla/xlm-roberta-base-multilingual-text-genre-classifier
tags:
- Italian
- legal ruling
- generated_from_trainer
metrics:
- f1
- accuracy
model-index:
- name: ribesstefano/RuleBert-v0.1-k1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ribesstefano/RuleBert-v0.1-k1
This model is a fine-tuned version of [classla/xlm-roberta-base-multilingual-text-genre-classifier](https://huggingface.co/classla/xlm-roberta-base-multilingual-text-genre-classifier) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3207
- F1: 0.4762
- Roc Auc: 0.6657
- Accuracy: 0.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 4000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 | Roc Auc | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|:--------:|
| 0.3316 | 0.14 | 250 | 0.3375 | 0.4771 | 0.6730 | 0.0 |
| 0.3343 | 0.28 | 500 | 0.3277 | 0.4724 | 0.6700 | 0.0 |
| 0.3328 | 0.41 | 750 | 0.3235 | 0.4624 | 0.6623 | 0.0 |
| 0.3365 | 0.55 | 1000 | 0.3227 | 0.4663 | 0.6635 | 0.0 |
| 0.3257 | 0.69 | 1250 | 0.3236 | 0.4669 | 0.6633 | 0.0 |
| 0.3194 | 0.83 | 1500 | 0.3243 | 0.4912 | 0.6768 | 0.0 |
| 0.3232 | 0.97 | 1750 | 0.3223 | 0.4714 | 0.6645 | 0.0 |
| 0.3151 | 1.11 | 2000 | 0.3216 | 0.4727 | 0.6650 | 0.0 |
| 0.3229 | 1.24 | 2250 | 0.3217 | 0.4756 | 0.6665 | 0.0 |
| 0.323 | 1.38 | 2500 | 0.3237 | 0.4736 | 0.6651 | 0.0 |
| 0.3175 | 1.52 | 2750 | 0.3222 | 0.4731 | 0.6647 | 0.0 |
| 0.3133 | 1.66 | 3000 | 0.3203 | 0.4739 | 0.6651 | 0.0 |
| 0.3089 | 1.8 | 3250 | 0.3205 | 0.4751 | 0.6654 | 0.0 |
| 0.3285 | 1.94 | 3500 | 0.3208 | 0.4759 | 0.6657 | 0.0 |
| 0.3119 | 2.07 | 3750 | 0.3207 | 0.4768 | 0.6660 | 0.0 |
| 0.3169 | 2.21 | 4000 | 0.3207 | 0.4762 | 0.6657 | 0.0 |
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|
lilianz/ppo-MountainCar-v0
|
lilianz
| 2024-01-07T16:24:13Z
| 2
| 0
|
stable-baselines3
|
[
"stable-baselines3",
"MountainCar-v0",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-01-07T16:22:14Z
|
---
library_name: stable-baselines3
tags:
- MountainCar-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: MountainCar-v0
type: MountainCar-v0
metrics:
- type: mean_reward
value: -152.80 +/- 38.72
name: mean_reward
verified: false
---
# **PPO** Agent playing **MountainCar-v0**
This is a trained model of a **PPO** agent playing **MountainCar-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
ostapeno/newt_adaNeo1B_ultrachat_25_sbs0.5_svdemb_sgd_full_ft_coarsegrained
|
ostapeno
| 2024-01-07T16:23:45Z
| 0
| 0
| null |
[
"region:us"
] | null | 2024-01-06T19:49:47Z
|
Number of experts present in the library: 4
| Expert Name | Base Model | Trained on | Adapter Type |
| --- | --- | --- | --- |
| ultrachat_25_v1 | EleutherAI/gpt-neo-1.3B | ostapeno/adauni-v3-10k-flat/ultrachat_25 | lora |
| ultrachat_25 | EleutherAI/gpt-neo-1.3B | ostapeno/adauni-v3-10k-flat/ultrachat_25 | lora |
| ultrachat_25_v2 | EleutherAI/gpt-neo-1.3B | ostapeno/adauni-v3-10k-flat/ultrachat_25 | lora |
| ultrachat_25_v3 | EleutherAI/gpt-neo-1.3B | ostapeno/adauni-v3-10k-flat/ultrachat_25 | lora |
Last updated on: 2024-01-07 16:23:45+00:00
|
skverma2009/Llama-2-7b-chat-hf-fine-tuned-adapters
|
skverma2009
| 2024-01-07T16:23:02Z
| 0
| 0
|
peft
|
[
"peft",
"arxiv:1910.09700",
"base_model:meta-llama/Llama-2-7b-chat-hf",
"base_model:adapter:meta-llama/Llama-2-7b-chat-hf",
"region:us"
] | null | 2024-01-07T16:22:49Z
|
---
library_name: peft
base_model: meta-llama/Llama-2-7b-chat-hf
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.2.dev0
|
spellingdragon/whisper-small-zh-foriegn
|
spellingdragon
| 2024-01-07T16:20:25Z
| 64
| 0
|
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"zh",
"dataset:mozilla-foundation/common_voice_9_0",
"base_model:openai/whisper-small",
"base_model:finetune:openai/whisper-small",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-12-03T15:55:56Z
|
---
language:
- zh
license: apache-2.0
base_model: openai/whisper-small
tags:
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_9_0
model-index:
- name: Whisper small zh - foriegn
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper small zh - foriegn
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Common Voice 9 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9294
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 50
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.3178 | 1.54 | 3000 | 0.9294 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|
mikolaj-mialkowski/ppo-LunarLander-v2
|
mikolaj-mialkowski
| 2024-01-07T16:15:53Z
| 4
| 0
|
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-01-03T19:38:31Z
|
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 277.38 +/- 25.03
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
LoneStriker/Mixtral-8x7B-Instruct-v0.1-LimaRP-ZLoss-5.0bpw-h6-exl2
|
LoneStriker
| 2024-01-07T16:11:58Z
| 5
| 0
|
transformers
|
[
"transformers",
"safetensors",
"mixtral",
"text-generation",
"conversational",
"en",
"dataset:lemonilia/LimaRP",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] |
text-generation
| 2024-01-07T16:00:02Z
|
---
inference: false
language:
- en
library_name: transformers
pipeline_tag: text-generation
tags:
- mixtral
license: apache-2.0
datasets:
- lemonilia/LimaRP
---
# Mixtral-8x7B-Instruct-v0.1-LimaRP-ZLoss
Experimental model, using a limarp qlora trained at 10k ctx length (greater than size of the longest limarp sample when tokenized via mistral's tokenizer) on [mistralai/Mixtral-8x7B-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-v0.1) using [Charles Goddard](https://huggingface.co/chargoddard)'s ZLoss and Megablocks-based fork of transformers, and then fused to [mistralai/Mixtral-8x7B-Instruct-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1) at 0.5 weight.
Would try with temp ~1.5-2 and min-p of ~0.03-0.05 since mixtral does appear to be highly confident on its responses and can enter repetition loops after several thousand tokens of responses.
[Peft Adapter](https://huggingface.co/Doctor-Shotgun/limarp-zloss-mixtral-8x7b-qlora)
## Usage:
The intended prompt format is the Alpaca instruction format of LimaRP v3:
```
### Instruction:
Character's Persona: {bot character description}
User's Persona: {user character description}
Scenario: {what happens in the story}
Play the role of Character. Taking the above information into consideration, you must engage in a roleplaying chat with User below this line. Do not write dialogues and narration for User.
### Input:
User: {utterance}
### Response:
Character: {utterance}
### Input:
User: {utterance}
### Response:
Character: {utterance}
(etc.)
```
## Message length control
Due to the inclusion of LimaRP v3, it is possible to append a length modifier to the response instruction sequence, like this:
```
### Input
User: {utterance}
### Response: (length = medium)
Character: {utterance}
```
This has an immediately noticeable effect on bot responses. The available lengths are: `micro, tiny, short, medium, long, massive, huge, enormous, humongous, unlimited`. The recommended starting length is `medium`. Keep in mind that the AI may ramble or impersonate the user with very long messages.
## Bias, Risks, and Limitations
The model will show biases similar to those observed in niche roleplaying forums on the Internet, besides those exhibited by the base model. It is not intended for supplying factual information or advice in any form.
## Training Details
This model is a merge. Please refer to the link repositories of the merged models for details.
|
elijahww/mistral-7b-unmerged
|
elijahww
| 2024-01-07T16:08:10Z
| 0
| 0
|
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:mistralai/Mistral-7B-v0.1",
"base_model:adapter:mistralai/Mistral-7B-v0.1",
"region:us"
] | null | 2024-01-07T16:06:42Z
|
---
library_name: peft
base_model: mistralai/Mistral-7B-v0.1
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.1
|
LoneStriker/Mixtral-8x7B-Instruct-v0.1-LimaRP-ZLoss-4.0bpw-h6-exl2
|
LoneStriker
| 2024-01-07T16:00:00Z
| 3
| 0
|
transformers
|
[
"transformers",
"safetensors",
"mixtral",
"text-generation",
"conversational",
"en",
"dataset:lemonilia/LimaRP",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] |
text-generation
| 2024-01-07T15:49:26Z
|
---
inference: false
language:
- en
library_name: transformers
pipeline_tag: text-generation
tags:
- mixtral
license: apache-2.0
datasets:
- lemonilia/LimaRP
---
# Mixtral-8x7B-Instruct-v0.1-LimaRP-ZLoss
Experimental model, using a limarp qlora trained at 10k ctx length (greater than size of the longest limarp sample when tokenized via mistral's tokenizer) on [mistralai/Mixtral-8x7B-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-v0.1) using [Charles Goddard](https://huggingface.co/chargoddard)'s ZLoss and Megablocks-based fork of transformers, and then fused to [mistralai/Mixtral-8x7B-Instruct-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1) at 0.5 weight.
Would try with temp ~1.5-2 and min-p of ~0.03-0.05 since mixtral does appear to be highly confident on its responses and can enter repetition loops after several thousand tokens of responses.
[Peft Adapter](https://huggingface.co/Doctor-Shotgun/limarp-zloss-mixtral-8x7b-qlora)
## Usage:
The intended prompt format is the Alpaca instruction format of LimaRP v3:
```
### Instruction:
Character's Persona: {bot character description}
User's Persona: {user character description}
Scenario: {what happens in the story}
Play the role of Character. Taking the above information into consideration, you must engage in a roleplaying chat with User below this line. Do not write dialogues and narration for User.
### Input:
User: {utterance}
### Response:
Character: {utterance}
### Input:
User: {utterance}
### Response:
Character: {utterance}
(etc.)
```
## Message length control
Due to the inclusion of LimaRP v3, it is possible to append a length modifier to the response instruction sequence, like this:
```
### Input
User: {utterance}
### Response: (length = medium)
Character: {utterance}
```
This has an immediately noticeable effect on bot responses. The available lengths are: `micro, tiny, short, medium, long, massive, huge, enormous, humongous, unlimited`. The recommended starting length is `medium`. Keep in mind that the AI may ramble or impersonate the user with very long messages.
## Bias, Risks, and Limitations
The model will show biases similar to those observed in niche roleplaying forums on the Internet, besides those exhibited by the base model. It is not intended for supplying factual information or advice in any form.
## Training Details
This model is a merge. Please refer to the link repositories of the merged models for details.
|
Nerdofdot/nickprock_mmarco-bert-base-italian-uncased_E_1_nli
|
Nerdofdot
| 2024-01-07T15:57:53Z
| 48
| 0
|
sentence-transformers
|
[
"sentence-transformers",
"safetensors",
"bert",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2024-01-07T15:57:37Z
|
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
LoneStriker/Mixtral-8x7B-Instruct-v0.1-LimaRP-ZLoss-3.75bpw-h6-exl2
|
LoneStriker
| 2024-01-07T15:49:23Z
| 5
| 0
|
transformers
|
[
"transformers",
"safetensors",
"mixtral",
"text-generation",
"conversational",
"en",
"dataset:lemonilia/LimaRP",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] |
text-generation
| 2024-01-07T15:36:30Z
|
---
inference: false
language:
- en
library_name: transformers
pipeline_tag: text-generation
tags:
- mixtral
license: apache-2.0
datasets:
- lemonilia/LimaRP
---
# Mixtral-8x7B-Instruct-v0.1-LimaRP-ZLoss
Experimental model, using a limarp qlora trained at 10k ctx length (greater than size of the longest limarp sample when tokenized via mistral's tokenizer) on [mistralai/Mixtral-8x7B-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-v0.1) using [Charles Goddard](https://huggingface.co/chargoddard)'s ZLoss and Megablocks-based fork of transformers, and then fused to [mistralai/Mixtral-8x7B-Instruct-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1) at 0.5 weight.
Would try with temp ~1.5-2 and min-p of ~0.03-0.05 since mixtral does appear to be highly confident on its responses and can enter repetition loops after several thousand tokens of responses.
[Peft Adapter](https://huggingface.co/Doctor-Shotgun/limarp-zloss-mixtral-8x7b-qlora)
## Usage:
The intended prompt format is the Alpaca instruction format of LimaRP v3:
```
### Instruction:
Character's Persona: {bot character description}
User's Persona: {user character description}
Scenario: {what happens in the story}
Play the role of Character. Taking the above information into consideration, you must engage in a roleplaying chat with User below this line. Do not write dialogues and narration for User.
### Input:
User: {utterance}
### Response:
Character: {utterance}
### Input:
User: {utterance}
### Response:
Character: {utterance}
(etc.)
```
## Message length control
Due to the inclusion of LimaRP v3, it is possible to append a length modifier to the response instruction sequence, like this:
```
### Input
User: {utterance}
### Response: (length = medium)
Character: {utterance}
```
This has an immediately noticeable effect on bot responses. The available lengths are: `micro, tiny, short, medium, long, massive, huge, enormous, humongous, unlimited`. The recommended starting length is `medium`. Keep in mind that the AI may ramble or impersonate the user with very long messages.
## Bias, Risks, and Limitations
The model will show biases similar to those observed in niche roleplaying forums on the Internet, besides those exhibited by the base model. It is not intended for supplying factual information or advice in any form.
## Training Details
This model is a merge. Please refer to the link repositories of the merged models for details.
|
ddpr/Didi
|
ddpr
| 2024-01-07T15:41:12Z
| 22
| 0
|
diffusers
|
[
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"template:sd-lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"region:us"
] |
text-to-image
| 2024-01-07T15:22:51Z
|
---
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
- template:sd-lora
widget:
- text: 'Fullbody photorealistic, best quality, Handsome Men, Sexy Muscular, real life, best shadow, RAW,without noise,clear ,high_resolution,8K masterpiece, photograph, 1man, handsome , minimal background, soft lighting, shadows accentuating muscles, full body, realistic skin, photographic, (best quality):1.5, Sexy Muscular, high resolution, high detailed,Masterpiece, intricate details, Best Quality), high resolution, 8k, sweating profusely, wet skin, muscular, extremely detailed, ((entire image)), soft natural light, bounce light, reflected light, highly detailed,dark_skin,handsome,pumpkins,Halloween, pumpkin head'
parameters:
negative_prompt: duplicated, cloned, duplicate body parts, extra limbs, twisted, distorted, malformed hands, malformed fingers, conjoined, missing limbs, logo, mutated, artifacts, gross, misshapen eyes, BadDreamFastNegativeV2, EasynegativeV2, UnrealisticDream, JuggernautNegative, bad-artist-anime, bad-hands-5, lowres, text, cropped, normal quality, jpeg artifacts, signature, watermark, username, blurry, bad anatomy, bad hands, error, missing fingers, extra digits, fewer digits, bad feet, bad colours, missing arms, water print, logo, 3d, anime, out of frame, small nipples, Unperfected body, unperfected face, unperfected hand, unperfected fingers, unperfected eyes, unperfected legs, ugly face, cartoon, low resolution, poorly drawn hand, poorly drawn face, body out of frame, grain, over saturated, malformed hands, extra limbs, missing limbs, mutations, two faces, duplicate, multi, two, disfigured, low-res, deformed, mutation, mutated, out of focus, long body, Dripping paint, 2 heads, 2 faces, cropped image, out of frame, deformed hands, twisted fingers, double image, malformed hands, multiple heads, extra limb, cut off, floating limbs, disconnected limbs, disgusting, poorly drawn, mutilated, mangled, extra fingers, duplicate artifacts, missing arms, mutated hands, mutilated hands, cloned face, malformed, wordmark,extra nipples, three nipples, writing, heading, verybadimagenegative_v1.3, negative_hand-neg, modern, UnrealisticDream, bad-hands-5, badhandv4, Dark shadows,ugly face, cartoon,girl,naked
output:
url: images/1704519855443.png.jpg
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: didi
---
# Didi
<Gallery />
## Trigger words
You should use `didi` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/ddpr/Didi/tree/main) them in the Files & versions tab.
|
LoneStriker/Mixtral-8x7B-Instruct-v0.1-LimaRP-ZLoss-3.5bpw-h6-exl2
|
LoneStriker
| 2024-01-07T15:36:29Z
| 6
| 0
|
transformers
|
[
"transformers",
"safetensors",
"mixtral",
"text-generation",
"conversational",
"en",
"dataset:lemonilia/LimaRP",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] |
text-generation
| 2024-01-07T15:27:22Z
|
---
inference: false
language:
- en
library_name: transformers
pipeline_tag: text-generation
tags:
- mixtral
license: apache-2.0
datasets:
- lemonilia/LimaRP
---
# Mixtral-8x7B-Instruct-v0.1-LimaRP-ZLoss
Experimental model, using a limarp qlora trained at 10k ctx length (greater than size of the longest limarp sample when tokenized via mistral's tokenizer) on [mistralai/Mixtral-8x7B-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-v0.1) using [Charles Goddard](https://huggingface.co/chargoddard)'s ZLoss and Megablocks-based fork of transformers, and then fused to [mistralai/Mixtral-8x7B-Instruct-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1) at 0.5 weight.
Would try with temp ~1.5-2 and min-p of ~0.03-0.05 since mixtral does appear to be highly confident on its responses and can enter repetition loops after several thousand tokens of responses.
[Peft Adapter](https://huggingface.co/Doctor-Shotgun/limarp-zloss-mixtral-8x7b-qlora)
## Usage:
The intended prompt format is the Alpaca instruction format of LimaRP v3:
```
### Instruction:
Character's Persona: {bot character description}
User's Persona: {user character description}
Scenario: {what happens in the story}
Play the role of Character. Taking the above information into consideration, you must engage in a roleplaying chat with User below this line. Do not write dialogues and narration for User.
### Input:
User: {utterance}
### Response:
Character: {utterance}
### Input:
User: {utterance}
### Response:
Character: {utterance}
(etc.)
```
## Message length control
Due to the inclusion of LimaRP v3, it is possible to append a length modifier to the response instruction sequence, like this:
```
### Input
User: {utterance}
### Response: (length = medium)
Character: {utterance}
```
This has an immediately noticeable effect on bot responses. The available lengths are: `micro, tiny, short, medium, long, massive, huge, enormous, humongous, unlimited`. The recommended starting length is `medium`. Keep in mind that the AI may ramble or impersonate the user with very long messages.
## Bias, Risks, and Limitations
The model will show biases similar to those observed in niche roleplaying forums on the Internet, besides those exhibited by the base model. It is not intended for supplying factual information or advice in any form.
## Training Details
This model is a merge. Please refer to the link repositories of the merged models for details.
|
anismahmahi/doubt_repetition_with_noPropaganda_SetFit
|
anismahmahi
| 2024-01-07T15:36:18Z
| 46
| 0
|
setfit
|
[
"setfit",
"safetensors",
"mpnet",
"sentence-transformers",
"text-classification",
"generated_from_setfit_trainer",
"arxiv:2209.11055",
"base_model:sentence-transformers/paraphrase-mpnet-base-v2",
"base_model:finetune:sentence-transformers/paraphrase-mpnet-base-v2",
"model-index",
"region:us"
] |
text-classification
| 2024-01-07T15:35:50Z
|
---
library_name: setfit
tags:
- setfit
- sentence-transformers
- text-classification
- generated_from_setfit_trainer
metrics:
- accuracy
widget:
- text: 'Texas: Cop Walks Into Home She Thought Was Hers, Kills Innocent Homeowner—Not
Arrested'
- text: Ellison subsequently agreed to dismiss his restraining order against her if
she no longer contacted him.
- text: Gina Haspel will become the new Director of the CIA, and the first woman so
chosen.
- text: At some point, the officer fired her weapon striking the victim.
- text: Ronaldo Rauseo-Ricupero, a lawyer for the Indonesians, argued they should
have 90 days to move to reopen their cases after receiving copies of their administrative
case files and time to appeal any decision rejecting those motions.
pipeline_tag: text-classification
inference: false
base_model: sentence-transformers/paraphrase-mpnet-base-v2
model-index:
- name: SetFit with sentence-transformers/paraphrase-mpnet-base-v2
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: Unknown
type: unknown
split: test
metrics:
- type: accuracy
value: 0.8151016456921588
name: Accuracy
---
# SetFit with sentence-transformers/paraphrase-mpnet-base-v2
This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification. This SetFit model uses [sentence-transformers/paraphrase-mpnet-base-v2](https://huggingface.co/sentence-transformers/paraphrase-mpnet-base-v2) as the Sentence Transformer embedding model. A OneVsRestClassifier instance is used for classification.
The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Model Details
### Model Description
- **Model Type:** SetFit
- **Sentence Transformer body:** [sentence-transformers/paraphrase-mpnet-base-v2](https://huggingface.co/sentence-transformers/paraphrase-mpnet-base-v2)
- **Classification head:** a OneVsRestClassifier instance
- **Maximum Sequence Length:** 512 tokens
<!-- - **Number of Classes:** Unknown -->
<!-- - **Training Dataset:** [Unknown](https://huggingface.co/datasets/unknown) -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit)
- **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055)
- **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit)
## Evaluation
### Metrics
| Label | Accuracy |
|:--------|:---------|
| **all** | 0.8151 |
## Uses
### Direct Use for Inference
First install the SetFit library:
```bash
pip install setfit
```
Then you can load this model and run inference.
```python
from setfit import SetFitModel
# Download from the 🤗 Hub
model = SetFitModel.from_pretrained("anismahmahi/doubt_repetition_with_noPropaganda_SetFit")
# Run inference
preds = model("At some point, the officer fired her weapon striking the victim.")
```
<!--
### Downstream Use
*List how someone could finetune this model on their own dataset.*
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Set Metrics
| Training set | Min | Median | Max |
|:-------------|:----|:--------|:----|
| Word count | 1 | 20.8138 | 129 |
### Training Hyperparameters
- batch_size: (16, 16)
- num_epochs: (2, 2)
- max_steps: -1
- sampling_strategy: oversampling
- num_iterations: 5
- body_learning_rate: (2e-05, 1e-05)
- head_learning_rate: 0.01
- loss: CosineSimilarityLoss
- distance_metric: cosine_distance
- margin: 0.25
- end_to_end: False
- use_amp: False
- warmup_proportion: 0.1
- seed: 42
- eval_max_steps: -1
- load_best_model_at_end: True
### Training Results
| Epoch | Step | Training Loss | Validation Loss |
|:-------:|:--------:|:-------------:|:---------------:|
| 0.0004 | 1 | 0.3567 | - |
| 0.0209 | 50 | 0.3286 | - |
| 0.0419 | 100 | 0.2663 | - |
| 0.0628 | 150 | 0.2378 | - |
| 0.0838 | 200 | 0.1935 | - |
| 0.1047 | 250 | 0.2549 | - |
| 0.1257 | 300 | 0.2654 | - |
| 0.1466 | 350 | 0.1668 | - |
| 0.1676 | 400 | 0.1811 | - |
| 0.1885 | 450 | 0.1884 | - |
| 0.2095 | 500 | 0.157 | - |
| 0.2304 | 550 | 0.1237 | - |
| 0.2514 | 600 | 0.1318 | - |
| 0.2723 | 650 | 0.1334 | - |
| 0.2933 | 700 | 0.1067 | - |
| 0.3142 | 750 | 0.1189 | - |
| 0.3351 | 800 | 0.135 | - |
| 0.3561 | 850 | 0.0782 | - |
| 0.3770 | 900 | 0.0214 | - |
| 0.3980 | 950 | 0.0511 | - |
| 0.4189 | 1000 | 0.0924 | - |
| 0.4399 | 1050 | 0.1418 | - |
| 0.4608 | 1100 | 0.0132 | - |
| 0.4818 | 1150 | 0.0018 | - |
| 0.5027 | 1200 | 0.0706 | - |
| 0.5237 | 1250 | 0.1502 | - |
| 0.5446 | 1300 | 0.133 | - |
| 0.5656 | 1350 | 0.0207 | - |
| 0.5865 | 1400 | 0.0589 | - |
| 0.6075 | 1450 | 0.0771 | - |
| 0.6284 | 1500 | 0.0241 | - |
| 0.6494 | 1550 | 0.0905 | - |
| 0.6703 | 1600 | 0.0106 | - |
| 0.6912 | 1650 | 0.0451 | - |
| 0.7122 | 1700 | 0.0011 | - |
| 0.7331 | 1750 | 0.0075 | - |
| 0.7541 | 1800 | 0.0259 | - |
| 0.7750 | 1850 | 0.0052 | - |
| 0.7960 | 1900 | 0.0464 | - |
| 0.8169 | 1950 | 0.0039 | - |
| 0.8379 | 2000 | 0.0112 | - |
| 0.8588 | 2050 | 0.0061 | - |
| 0.8798 | 2100 | 0.0143 | - |
| 0.9007 | 2150 | 0.0886 | - |
| 0.9217 | 2200 | 0.2225 | - |
| 0.9426 | 2250 | 0.0022 | - |
| 0.9636 | 2300 | 0.0035 | - |
| 0.9845 | 2350 | 0.002 | - |
| **1.0** | **2387** | **-** | **0.2827** |
| 1.0054 | 2400 | 0.0315 | - |
| 1.0264 | 2450 | 0.0049 | - |
| 1.0473 | 2500 | 0.0305 | - |
| 1.0683 | 2550 | 0.0334 | - |
| 1.0892 | 2600 | 0.0493 | - |
| 1.1102 | 2650 | 0.0424 | - |
| 1.1311 | 2700 | 0.0011 | - |
| 1.1521 | 2750 | 0.0109 | - |
| 1.1730 | 2800 | 0.0009 | - |
| 1.1940 | 2850 | 0.0005 | - |
| 1.2149 | 2900 | 0.0171 | - |
| 1.2359 | 2950 | 0.0004 | - |
| 1.2568 | 3000 | 0.0717 | - |
| 1.2778 | 3050 | 0.0019 | - |
| 1.2987 | 3100 | 0.062 | - |
| 1.3196 | 3150 | 0.0003 | - |
| 1.3406 | 3200 | 0.0018 | - |
| 1.3615 | 3250 | 0.0011 | - |
| 1.3825 | 3300 | 0.0005 | - |
| 1.4034 | 3350 | 0.0208 | - |
| 1.4244 | 3400 | 0.0004 | - |
| 1.4453 | 3450 | 0.001 | - |
| 1.4663 | 3500 | 0.0003 | - |
| 1.4872 | 3550 | 0.0015 | - |
| 1.5082 | 3600 | 0.0004 | - |
| 1.5291 | 3650 | 0.0473 | - |
| 1.5501 | 3700 | 0.0092 | - |
| 1.5710 | 3750 | 0.032 | - |
| 1.5920 | 3800 | 0.0016 | - |
| 1.6129 | 3850 | 0.0623 | - |
| 1.6339 | 3900 | 0.0291 | - |
| 1.6548 | 3950 | 0.0386 | - |
| 1.6757 | 4000 | 0.002 | - |
| 1.6967 | 4050 | 0.0006 | - |
| 1.7176 | 4100 | 0.0005 | - |
| 1.7386 | 4150 | 0.0004 | - |
| 1.7595 | 4200 | 0.0004 | - |
| 1.7805 | 4250 | 0.0007 | - |
| 1.8014 | 4300 | 0.033 | - |
| 1.8224 | 4350 | 0.0001 | - |
| 1.8433 | 4400 | 0.0489 | - |
| 1.8643 | 4450 | 0.0754 | - |
| 1.8852 | 4500 | 0.0086 | - |
| 1.9062 | 4550 | 0.0092 | - |
| 1.9271 | 4600 | 0.0591 | - |
| 1.9481 | 4650 | 0.0013 | - |
| 1.9690 | 4700 | 0.0043 | - |
| 1.9899 | 4750 | 0.0338 | - |
| 2.0 | 4774 | - | 0.3304 |
* The bold row denotes the saved checkpoint.
### Framework Versions
- Python: 3.10.12
- SetFit: 1.0.1
- Sentence Transformers: 2.2.2
- Transformers: 4.35.2
- PyTorch: 2.1.0+cu121
- Datasets: 2.16.1
- Tokenizers: 0.15.0
## Citation
### BibTeX
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
cibernicola/FLOR-6.3B-xat-Q5_K
|
cibernicola
| 2024-01-07T15:28:29Z
| 11
| 0
|
transformers
|
[
"transformers",
"safetensors",
"gguf",
"bloom",
"text-generation",
"finetune",
"chatml",
"gpt4",
"catalan",
"Q8",
"ca",
"en",
"es",
"dataset:xaviviro/oasst2_ca_gpt",
"base_model:projecte-aina/FLOR-6.3B",
"base_model:quantized:projecte-aina/FLOR-6.3B",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-07T15:24:09Z
|
---
license: apache-2.0
base_model: projecte-aina/FLOR-6.3B
datasets:
- xaviviro/oasst2_ca_gpt
tags:
- finetune
- chatml
- gpt4
- catalan
- Q8
model-index:
- name: FLOR-6.3B-xat
results: []
library_name: transformers
widget:
- text: |
<|im_start|>user
Qui va ser Isaac Newton?<|im_end|>
<|im_start|>assistant
language:
- ca
- en
- es
---
# FLOR-6.3B-xat-Q5_K
FLOR-6.3B-xat-Q5_K és la versió quantitzada del model [FLOR-6.3B-xat](/xaviviro/FLOR-6.3B-xat) d'en [xaviviro](/xaviviro)
# Prompt Template
FLOR-6.3B-xat-Q5_K usa **ChatML** com a prompt template:
```
<|im_start|>user
Qui va ser Isaac Newton?<|im_end|>
<|im_start|>assistant\n
```
|
LoneStriker/Mixtral-8x7B-Instruct-v0.1-LimaRP-ZLoss-3.0bpw-h6-exl2
|
LoneStriker
| 2024-01-07T15:27:19Z
| 3
| 0
|
transformers
|
[
"transformers",
"safetensors",
"mixtral",
"text-generation",
"conversational",
"en",
"dataset:lemonilia/LimaRP",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] |
text-generation
| 2024-01-07T15:19:25Z
|
---
inference: false
language:
- en
library_name: transformers
pipeline_tag: text-generation
tags:
- mixtral
license: apache-2.0
datasets:
- lemonilia/LimaRP
---
# Mixtral-8x7B-Instruct-v0.1-LimaRP-ZLoss
Experimental model, using a limarp qlora trained at 10k ctx length (greater than size of the longest limarp sample when tokenized via mistral's tokenizer) on [mistralai/Mixtral-8x7B-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-v0.1) using [Charles Goddard](https://huggingface.co/chargoddard)'s ZLoss and Megablocks-based fork of transformers, and then fused to [mistralai/Mixtral-8x7B-Instruct-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1) at 0.5 weight.
Would try with temp ~1.5-2 and min-p of ~0.03-0.05 since mixtral does appear to be highly confident on its responses and can enter repetition loops after several thousand tokens of responses.
[Peft Adapter](https://huggingface.co/Doctor-Shotgun/limarp-zloss-mixtral-8x7b-qlora)
## Usage:
The intended prompt format is the Alpaca instruction format of LimaRP v3:
```
### Instruction:
Character's Persona: {bot character description}
User's Persona: {user character description}
Scenario: {what happens in the story}
Play the role of Character. Taking the above information into consideration, you must engage in a roleplaying chat with User below this line. Do not write dialogues and narration for User.
### Input:
User: {utterance}
### Response:
Character: {utterance}
### Input:
User: {utterance}
### Response:
Character: {utterance}
(etc.)
```
## Message length control
Due to the inclusion of LimaRP v3, it is possible to append a length modifier to the response instruction sequence, like this:
```
### Input
User: {utterance}
### Response: (length = medium)
Character: {utterance}
```
This has an immediately noticeable effect on bot responses. The available lengths are: `micro, tiny, short, medium, long, massive, huge, enormous, humongous, unlimited`. The recommended starting length is `medium`. Keep in mind that the AI may ramble or impersonate the user with very long messages.
## Bias, Risks, and Limitations
The model will show biases similar to those observed in niche roleplaying forums on the Internet, besides those exhibited by the base model. It is not intended for supplying factual information or advice in any form.
## Training Details
This model is a merge. Please refer to the link repositories of the merged models for details.
|
christinacdl/Mistral-LoRA-Hate-Target-Detection-new
|
christinacdl
| 2024-01-07T15:24:42Z
| 1
| 0
|
peft
|
[
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mistralai/Mistral-7B-v0.1",
"base_model:adapter:mistralai/Mistral-7B-v0.1",
"license:apache-2.0",
"region:us"
] | null | 2024-01-07T00:56:30Z
|
---
license: apache-2.0
library_name: peft
tags:
- generated_from_trainer
metrics:
- accuracy
base_model: mistralai/Mistral-7B-v0.1
model-index:
- name: Mistral-LoRA-Hate-Target-Detection-new
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Mistral-LoRA-Hate-Target-Detection-new
This model is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7969
- Micro F1: 0.8409
- Macro F1: 0.6542
- Accuracy: 0.8409
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- num_epochs: 6
### Training results
### Framework versions
- Transformers 4.36.1
- Pytorch 2.1.0+cu121
- Datasets 2.13.1
- Tokenizers 0.15.0
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: QuantizationMethod.BITS_AND_BYTES
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
### Framework versions
- PEFT 0.6.2
|
ribesstefano/RuleBert-v0.0-k0
|
ribesstefano
| 2024-01-07T15:24:36Z
| 5
| 0
|
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"xlm-roberta",
"text-classification",
"Italian",
"legal ruling",
"generated_from_trainer",
"base_model:classla/xlm-roberta-base-multilingual-text-genre-classifier",
"base_model:finetune:classla/xlm-roberta-base-multilingual-text-genre-classifier",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-01-07T15:09:17Z
|
---
license: mit
base_model: classla/xlm-roberta-base-multilingual-text-genre-classifier
tags:
- Italian
- legal ruling
- generated_from_trainer
metrics:
- f1
- accuracy
model-index:
- name: ribesstefano/RuleBert-v0.0-k0
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ribesstefano/RuleBert-v0.0-k0
This model is a fine-tuned version of [classla/xlm-roberta-base-multilingual-text-genre-classifier](https://huggingface.co/classla/xlm-roberta-base-multilingual-text-genre-classifier) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3777
- F1: 0.5004
- Roc Auc: 0.6722
- Accuracy: 0.0375
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- training_steps: 500
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 | Roc Auc | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|:--------:|
| 0.3811 | 0.88 | 50 | 0.3716 | 0.4904 | 0.6685 | 0.0583 |
| 0.3257 | 1.77 | 100 | 0.3708 | 0.4953 | 0.6701 | 0.0583 |
| 0.3178 | 2.65 | 150 | 0.3745 | 0.4977 | 0.6712 | 0.0417 |
| 0.3091 | 3.54 | 200 | 0.3750 | 0.4989 | 0.6719 | 0.0417 |
| 0.3115 | 4.42 | 250 | 0.3768 | 0.5007 | 0.6724 | 0.0417 |
| 0.3092 | 5.31 | 300 | 0.3762 | 0.5021 | 0.6727 | 0.0458 |
| 0.3057 | 6.19 | 350 | 0.3772 | 0.5005 | 0.6723 | 0.0375 |
| 0.3062 | 7.08 | 400 | 0.3777 | 0.5002 | 0.6721 | 0.0417 |
| 0.3086 | 7.96 | 450 | 0.3777 | 0.5005 | 0.6723 | 0.0417 |
| 0.3075 | 8.85 | 500 | 0.3777 | 0.5004 | 0.6722 | 0.0375 |
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|
Sharathhebbar24/convo_bot_gpt2
|
Sharathhebbar24
| 2024-01-07T15:16:01Z
| 176
| 0
|
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"en",
"dataset:vicgalle/alpaca-gpt4",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-12-29T11:23:02Z
|
---
license: apache-2.0
datasets:
- vicgalle/alpaca-gpt4
language:
- en
---
This model is a finetuned version of ```gpt2``` using ```vicgalle/alpaca-gpt4```
## Model description
GPT-2 is a transformers model pre-trained on a very large corpus of English data in a self-supervised fashion. This
means it was pre-trained on the raw texts only, with no humans labeling them in any way (which is why it can use lots
of publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely,
it was trained to guess the next word in sentences.
More precisely, inputs are sequences of continuous text of a certain length and the targets are the same sequence,
shifting one token (word or piece of word) to the right. The model uses a masking mechanism to make sure the
predictions for the token `i` only use the inputs from `1` to `i` but not the future tokens.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks. The model is best at what it was trained for, however, which is generating texts from a
prompt.
### To use this model
```python
>>> from transformers import AutoTokenizer, AutoModelForCausalLM
>>> model_name = "Sharathhebbar24/convo_bot_gpt2"
>>> model = AutoModelForCausalLM.from_pretrained(model_name)
>>> tokenizer = AutoTokenizer.from_pretrained("gpt2")
>>> def generate_text(prompt):
>>> inputs = tokenizer.encode(prompt, return_tensors='pt')
>>> outputs = mod1.generate(inputs, max_length=64, pad_token_id=tokenizer.eos_token_id)
>>> generated = tokenizer.decode(outputs[0], skip_special_tokens=True)
>>> return generated
>>> prompt = """
>>> Below is an instruction that describes a task. Write a response that appropriately completes the request.
>>> ### Instruction: Who is the world's most famous painter?
>>> ###
>>> """
>>> res = generate_text(prompt)
>>> res
```
|
cibernicola/FLOR-1.3B-xat-Q8
|
cibernicola
| 2024-01-07T15:01:52Z
| 3
| 0
|
transformers
|
[
"transformers",
"gguf",
"bloom",
"text-generation",
"finetune",
"chatml",
"gpt4",
"catalan",
"Q8",
"ca",
"en",
"es",
"dataset:xaviviro/oasst2_ca_gpt",
"base_model:projecte-aina/FLOR-6.3B",
"base_model:quantized:projecte-aina/FLOR-6.3B",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-07T14:57:46Z
|
---
license: apache-2.0
base_model: projecte-aina/FLOR-6.3B
datasets:
- xaviviro/oasst2_ca_gpt
tags:
- finetune
- chatml
- gpt4
- catalan
- Q8
model-index:
- name: FLOR-6.3B-xat
results: []
library_name: transformers
widget:
- text: |
<|im_start|>user
Qui va ser Isaac Newton?<|im_end|>
<|im_start|>assistant
language:
- ca
- en
- es
---
# FLOR-6.3B-xat-Q8_0
FLOR-1.3B-xat és la versió quantitzada del model [FLOR-1.3B-xat](/xaviviro/FLOR-1.3B-xat) d'en [xaviviro](/xaviviro)
# Prompt Template
FLOR-1.3B-xat usa **ChatML** com a prompt template:
```
<|im_start|>user
Qui va ser Isaac Newton?<|im_end|>
<|im_start|>assistant\n
```
|
spasis/ppo-LunarLander-v2
|
spasis
| 2024-01-07T14:55:13Z
| 0
| 0
|
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-01-07T14:54:52Z
|
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 278.67 +/- 18.28
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
Destiny0621/dqn-SpaceInvadersNoFrameskip-v4
|
Destiny0621
| 2024-01-07T14:38:55Z
| 0
| 0
|
stable-baselines3
|
[
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-01-07T14:38:28Z
|
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 373.00 +/- 93.15
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga Destiny0621 -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga Destiny0621 -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga Destiny0621
```
## Hyperparameters
```python
OrderedDict([('batch_size', 64),
('buffer_size', 10000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
Vishu7797/bert-finetuned-ner
|
Vishu7797
| 2024-01-07T14:38:15Z
| 91
| 0
|
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"bert",
"token-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-cased",
"base_model:finetune:google-bert/bert-base-cased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2024-01-07T14:16:21Z
|
---
license: apache-2.0
base_model: bert-base-cased
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-finetuned-ner
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0618
- Precision: 0.9377
- Recall: 0.9517
- F1: 0.9446
- Accuracy: 0.9863
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0701 | 1.0 | 1756 | 0.0678 | 0.9101 | 0.9349 | 0.9223 | 0.9814 |
| 0.033 | 2.0 | 3512 | 0.0646 | 0.9353 | 0.9505 | 0.9428 | 0.9863 |
| 0.0213 | 3.0 | 5268 | 0.0618 | 0.9377 | 0.9517 | 0.9446 | 0.9863 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|
mtc/mistralai-Mistral-7B-v0.1-arxiv-summarization-5000-v3-qlora-4bit
|
mtc
| 2024-01-07T14:36:03Z
| 0
| 0
|
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:mistralai/Mistral-7B-v0.1",
"base_model:adapter:mistralai/Mistral-7B-v0.1",
"region:us"
] | null | 2024-01-07T14:35:21Z
|
---
library_name: peft
base_model: mistralai/Mistral-7B-v0.1
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.1
|
Floyd93/Grammar_Jan_07
|
Floyd93
| 2024-01-07T14:33:44Z
| 174
| 0
|
transformers
|
[
"transformers",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:google-t5/t5-base",
"base_model:finetune:google-t5/t5-base",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2024-01-07T13:24:11Z
|
---
license: apache-2.0
base_model: t5-base
tags:
- generated_from_trainer
model-index:
- name: Grammar_Jan_07
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Grammar_Jan_07
This model is a fine-tuned version of [t5-base](https://huggingface.co/t5-base) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 1 | 13.5291 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|
daniel123321/whisper-small-de-colab
|
daniel123321
| 2024-01-07T14:27:47Z
| 7
| 0
|
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"hf-asr-leaderboard",
"generated_from_trainer",
"ger",
"base_model:openai/whisper-small",
"base_model:finetune:openai/whisper-small",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2024-01-06T00:56:38Z
|
---
language:
- ger
license: apache-2.0
base_model: openai/whisper-small
tags:
- hf-asr-leaderboard
- generated_from_trainer
model-index:
- name: Whisper Small Ger - Daniel Dumschat
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Small Ger - Daniel Dumschat
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Common Voice 11.0 dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.5311
- eval_wer: 91.3258
- eval_runtime: 990.1974
- eval_samples_per_second: 1.624
- eval_steps_per_second: 0.203
- epoch: 0.29
- step: 900
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 1000
### Framework versions
- Transformers 4.37.0.dev0
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|
s3nh/Sao10K-Sensualize-Solar-10.7B-GGUF
|
s3nh
| 2024-01-07T14:26:48Z
| 4
| 0
|
transformers
|
[
"transformers",
"gguf",
"text-generation",
"zh",
"en",
"license:openrail",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-07T13:31:50Z
|
---
license: openrail
pipeline_tag: text-generation
library_name: transformers
language:
- zh
- en
---
## Original model card
Buy me a coffee if you like this project ;)
<a href="https://www.buymeacoffee.com/s3nh"><img src="https://www.buymeacoffee.com/assets/img/guidelines/download-assets-sm-1.svg" alt=""></a>
#### Description
GGUF Format model files for [This project](https://huggingface.co/Sao10K/Sensualize-Solar-10.7B).
### GGUF Specs
GGUF is a format based on the existing GGJT, but makes a few changes to the format to make it more extensible and easier to use. The following features are desired:
Single-file deployment: they can be easily distributed and loaded, and do not require any external files for additional information.
Extensible: new features can be added to GGML-based executors/new information can be added to GGUF models without breaking compatibility with existing models.
mmap compatibility: models can be loaded using mmap for fast loading and saving.
Easy to use: models can be easily loaded and saved using a small amount of code, with no need for external libraries, regardless of the language used.
Full information: all information needed to load a model is contained in the model file, and no additional information needs to be provided by the user.
The key difference between GGJT and GGUF is the use of a key-value structure for the hyperparameters (now referred to as metadata), rather than a list of untyped values.
This allows for new metadata to be added without breaking compatibility with existing models, and to annotate the model with additional information that may be useful for
inference or for identifying the model.
### Perplexity params
Model Measure Q2_K Q3_K_S Q3_K_M Q3_K_L Q4_0 Q4_1 Q4_K_S Q4_K_M Q5_0 Q5_1 Q5_K_S Q5_K_M Q6_K Q8_0 F16
7B perplexity 6.7764 6.4571 6.1503 6.0869 6.1565 6.0912 6.0215 5.9601 5.9862 5.9481 5.9419 5.9208 5.9110 5.9070 5.9066
13B perplexity 5.8545 5.6033 5.4498 5.4063 5.3860 5.3608 5.3404 5.3002 5.2856 5.2706 5.2785 5.2638 5.2568 5.2548 5.2543
### inference
TODO
# Original model card
|
s3nh/Edentns-DataVortexM-7B-Instruct-v0.1-GGUF
|
s3nh
| 2024-01-07T14:25:12Z
| 0
| 0
|
transformers
|
[
"transformers",
"gguf",
"text-generation",
"zh",
"en",
"license:openrail",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-07T13:15:06Z
|
---
license: openrail
pipeline_tag: text-generation
library_name: transformers
language:
- zh
- en
---
## Original model card
Buy me a coffee if you like this project ;)
<a href="https://www.buymeacoffee.com/s3nh"><img src="https://www.buymeacoffee.com/assets/img/guidelines/download-assets-sm-1.svg" alt=""></a>
#### Description
GGUF Format model files for [This project](https://huggingface.co/Edentns/DataVortexM-7B-Instruct-v0.1).
### GGUF Specs
GGUF is a format based on the existing GGJT, but makes a few changes to the format to make it more extensible and easier to use. The following features are desired:
Single-file deployment: they can be easily distributed and loaded, and do not require any external files for additional information.
Extensible: new features can be added to GGML-based executors/new information can be added to GGUF models without breaking compatibility with existing models.
mmap compatibility: models can be loaded using mmap for fast loading and saving.
Easy to use: models can be easily loaded and saved using a small amount of code, with no need for external libraries, regardless of the language used.
Full information: all information needed to load a model is contained in the model file, and no additional information needs to be provided by the user.
The key difference between GGJT and GGUF is the use of a key-value structure for the hyperparameters (now referred to as metadata), rather than a list of untyped values.
This allows for new metadata to be added without breaking compatibility with existing models, and to annotate the model with additional information that may be useful for
inference or for identifying the model.
### Perplexity params
Model Measure Q2_K Q3_K_S Q3_K_M Q3_K_L Q4_0 Q4_1 Q4_K_S Q4_K_M Q5_0 Q5_1 Q5_K_S Q5_K_M Q6_K Q8_0 F16
7B perplexity 6.7764 6.4571 6.1503 6.0869 6.1565 6.0912 6.0215 5.9601 5.9862 5.9481 5.9419 5.9208 5.9110 5.9070 5.9066
13B perplexity 5.8545 5.6033 5.4498 5.4063 5.3860 5.3608 5.3404 5.3002 5.2856 5.2706 5.2785 5.2638 5.2568 5.2548 5.2543
### inference
TODO
# Original model card
|
Subsets and Splits
Filtered Qwen2.5 Distill Models
Identifies specific configurations of models by filtering cards that contain 'distill', 'qwen2.5', '7b' while excluding certain base models and incorrect model ID patterns, uncovering unique model variants.
Filtered Model Cards Count
Finds the count of entries with specific card details that include 'distill', 'qwen2.5', '7b' but exclude certain base models, revealing valuable insights about the dataset's content distribution.
Filtered Distill Qwen 7B Models
Filters for specific card entries containing 'distill', 'qwen', and '7b', excluding certain strings and patterns, to identify relevant model configurations.
Filtered Qwen-7b Model Cards
The query performs a detailed filtering based on specific keywords and excludes certain entries, which could be useful for identifying a specific subset of cards but does not provide deeper insights or trends.
Filtered Qwen 7B Model Cards
The query filters for specific terms related to "distilled" or "distill", "qwen", and "7b" in the 'card' column but excludes certain base models, providing a limited set of entries for further inspection.
Qwen 7B Distilled Models
The query provides a basic filtering of records to find specific card names that include keywords related to distilled Qwen 7b models, excluding a particular base model, which gives limited insight but helps in focusing on relevant entries.
Qwen 7B Distilled Model Cards
The query filters data based on specific keywords in the modelId and card fields, providing limited insight primarily useful for locating specific entries rather than revealing broad patterns or trends.
Qwen 7B Distilled Models
Finds all entries containing the terms 'distilled', 'qwen', and '7b' in a case-insensitive manner, providing a filtered set of records but without deeper analysis.
Distilled Qwen 7B Models
The query filters for specific model IDs containing 'distilled', 'qwen', and '7b', providing a basic retrieval of relevant entries but without deeper analysis or insight.
Filtered Model Cards with Distill Qwen2.
Filters and retrieves records containing specific keywords in the card description while excluding certain phrases, providing a basic count of relevant entries.
Filtered Model Cards with Distill Qwen 7
The query filters specific variations of card descriptions containing 'distill', 'qwen', and '7b' while excluding a particular base model, providing limited but specific data retrieval.
Distill Qwen 7B Model Cards
The query filters and retrieves rows where the 'card' column contains specific keywords ('distill', 'qwen', and '7b'), providing a basic filter result that can help in identifying specific entries.