| modelId | author | last_modified | downloads | likes | library_name | tags | pipeline_tag | createdAt | card |
|---|---|---|---|---|---|---|---|---|---|
casque/Micro_Bikini_By_Stable_Yogi | casque | 2024-06-06T03:01:21Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2024-06-06T03:00:53Z | ---
license: creativeml-openrail-m
---
|
blackhole33/llama3-uzbv1 | blackhole33 | 2024-06-06T02:54:54Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-06-05T08:44:21Z | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
base_model: llama-2-7b-chat-bnb-4bit
---
# Uploaded model
- **Developed by:** blackhole33
- **License:** apache-2.0
- **Finetuned from model:** llama-2-7b-chat-bnb-4bit
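A minimal loading sketch for this checkpoint, assuming it was saved in Unsloth's usual format (the `max_seq_length` value is an assumption, not taken from the card):

```python
from unsloth import FastLanguageModel

# Load the fine-tuned weights in 4-bit; max_seq_length is assumed.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="blackhole33/llama3-uzbv1",
    max_seq_length=2048,
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # switch Unsloth to inference mode
```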
|
tsavage68/UTI_M2_1000steps_1e7rate_SFT | tsavage68 | 2024-06-06T02:47:28Z | 7 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"trl",
"sft",
"generated_from_trainer",
"conversational",
"base_model:mistralai/Mistral-7B-Instruct-v0.2",
"base_model:finetune:mistralai/Mistral-7B-Instruct-v0.2",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-29T15:05:10Z | ---
license: apache-2.0
base_model: mistralai/Mistral-7B-Instruct-v0.2
tags:
- trl
- sft
- generated_from_trainer
model-index:
- name: UTI_M2_1000steps_1e7rate_SFT
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# UTI_M2_1000steps_1e7rate_SFT
This model is a fine-tuned version of [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1047
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-07
- train_batch_size: 2
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- training_steps: 1000
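These values map directly onto `transformers.TrainingArguments`; a sketch of the equivalent configuration (not the author's actual script; `output_dir` is a placeholder, and the Adam settings above are the library defaults):

```python
from transformers import TrainingArguments

# Hyperparameters transcribed from the list above.
args = TrainingArguments(
    output_dir="UTI_M2_1000steps_1e7rate_SFT",  # placeholder
    learning_rate=1e-7,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=1,
    seed=42,
    gradient_accumulation_steps=2,  # total train batch size 4
    lr_scheduler_type="cosine",
    warmup_steps=100,
    max_steps=1000,
)
```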
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-------:|:----:|:---------------:|
| 2.4021 | 0.3333 | 25 | 2.3941 |
| 2.3235 | 0.6667 | 50 | 2.2471 |
| 2.0863 | 1.0 | 75 | 1.9386 |
| 1.6662 | 1.3333 | 100 | 1.5791 |
| 1.2956 | 1.6667 | 125 | 1.2544 |
| 1.214 | 2.0 | 150 | 1.2116 |
| 1.202 | 2.3333 | 175 | 1.1861 |
| 1.1813 | 2.6667 | 200 | 1.1668 |
| 1.1696 | 3.0 | 225 | 1.1528 |
| 1.1052 | 3.3333 | 250 | 1.1412 |
| 1.0614 | 3.6667 | 275 | 1.1329 |
| 1.1106 | 4.0 | 300 | 1.1271 |
| 1.1019 | 4.3333 | 325 | 1.1228 |
| 1.0691 | 4.6667 | 350 | 1.1212 |
| 1.0947 | 5.0 | 375 | 1.1153 |
| 1.0689 | 5.3333 | 400 | 1.1134 |
| 1.0598 | 5.6667 | 425 | 1.1116 |
| 1.0459 | 6.0 | 450 | 1.1111 |
| 1.0518 | 6.3333 | 475 | 1.1097 |
| 1.045 | 6.6667 | 500 | 1.1092 |
| 1.0658 | 7.0 | 525 | 1.1066 |
| 1.0706 | 7.3333 | 550 | 1.1067 |
| 1.0514 | 7.6667 | 575 | 1.1057 |
| 1.0412 | 8.0 | 600 | 1.1063 |
| 1.0455 | 8.3333 | 625 | 1.1052 |
| 0.9657 | 8.6667 | 650 | 1.1057 |
| 1.1015 | 9.0 | 675 | 1.1052 |
| 1.0294 | 9.3333 | 700 | 1.1051 |
| 1.0399 | 9.6667 | 725 | 1.1052 |
| 1.1125 | 10.0 | 750 | 1.1047 |
| 1.0219 | 10.3333 | 775 | 1.1046 |
| 0.9862 | 10.6667 | 800 | 1.1048 |
| 1.0682 | 11.0 | 825 | 1.1049 |
| 1.0587 | 11.3333 | 850 | 1.1049 |
| 1.0217 | 11.6667 | 875 | 1.1051 |
| 1.0547 | 12.0 | 900 | 1.1047 |
| 1.0047 | 12.3333 | 925 | 1.1047 |
| 1.021 | 12.6667 | 950 | 1.1047 |
| 1.0528 | 13.0 | 975 | 1.1047 |
| 1.0385 | 13.3333 | 1000 | 1.1047 |
### Framework versions
- Transformers 4.41.2
- Pytorch 2.0.0+cu117
- Datasets 2.19.2
- Tokenizers 0.19.1
|
oproprioaquino/Stable_LoRA | oproprioaquino | 2024-06-06T02:46:42Z | 3 | 0 | diffusers | [
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"template:sd-lora",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:mit",
"region:us"
] | text-to-image | 2024-06-06T01:05:15Z | ---
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
- template:sd-lora
widget:
- text: Pimentão color model
parameters:
negative_prompt: oiticica
output:
url: images/prancheta001.png
base_model: runwayml/stable-diffusion-v1-5
instance_prompt: Modelo Aqq
license: mit
---
# LoRA Aqq
<Gallery />
## Model description
A model for testing the tool
## Trigger words
You should use `Modelo Aqq` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/oproprioaquino/Stable_LoRA/tree/main) them in the Files & versions tab.
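A minimal diffusers sketch for using these weights (assumes `load_lora_weights` picks up the Safetensors file in this repo automatically and that a CUDA device is available):

```python
import torch
from diffusers import StableDiffusionPipeline

# Load the base model named in the card, then attach this repo's LoRA weights.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("oproprioaquino/Stable_LoRA")

# "Modelo Aqq" is the trigger phrase; the negative prompt mirrors the card's widget.
image = pipe("Modelo Aqq", negative_prompt="oiticica").images[0]
image.save("modelo_aqq.png")
```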
|
Mikask/bdc2024-indobert-2 | Mikask | 2024-06-06T02:46:38Z | 183 | 0 | transformers | [
"transformers",
"pytorch",
"mbart",
"text-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-06-06T02:46:03Z | ---
tags:
- generated_from_trainer
metrics:
- accuracy
- precision
- recall
- f1
model-index:
- name: bdc2024-indobert-2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bdc2024-indobert-2
This model was trained from scratch on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5554
- Accuracy: 0.9331
- Balanced Accuracy: 0.8724
- Precision: 0.9353
- Recall: 0.9331
- F1: 0.9289
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Balanced Accuracy | Precision | Recall | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:-----------------:|:---------:|:------:|:------:|
| No log | 1.0 | 483 | 0.5108 | 0.9197 | 0.8454 | 0.9195 | 0.9197 | 0.9132 |
| 0.0773 | 2.0 | 966 | 0.5374 | 0.9235 | 0.8668 | 0.9266 | 0.9235 | 0.9196 |
| 0.0374 | 3.0 | 1449 | 0.5451 | 0.9331 | 0.8689 | 0.9359 | 0.9331 | 0.9281 |
| 0.0242 | 4.0 | 1932 | 0.5567 | 0.9331 | 0.8726 | 0.9353 | 0.9331 | 0.9288 |
| 0.0162 | 5.0 | 2415 | 0.5554 | 0.9331 | 0.8724 | 0.9353 | 0.9331 | 0.9289 |
### Framework versions
- Transformers 4.33.1
- Pytorch 2.3.0+cu121
- Datasets 2.19.2
- Tokenizers 0.13.3
|
tsavage68/UTI_M2_1000steps_1e8rate_SFT | tsavage68 | 2024-06-06T02:45:33Z | 4 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"trl",
"sft",
"generated_from_trainer",
"conversational",
"base_model:mistralai/Mistral-7B-Instruct-v0.2",
"base_model:finetune:mistralai/Mistral-7B-Instruct-v0.2",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-29T15:07:27Z | ---
license: apache-2.0
base_model: mistralai/Mistral-7B-Instruct-v0.2
tags:
- trl
- sft
- generated_from_trainer
model-index:
- name: UTI_M2_1000steps_1e8rate_SFT
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# UTI_M2_1000steps_1e8rate_SFT
This model is a fine-tuned version of [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.6170
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-08
- train_batch_size: 2
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- training_steps: 1000
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-------:|:----:|:---------------:|
| 2.6263 | 0.3333 | 25 | 2.6302 |
| 2.6415 | 0.6667 | 50 | 2.6306 |
| 2.6336 | 1.0 | 75 | 2.6298 |
| 2.6804 | 1.3333 | 100 | 2.6284 |
| 2.5885 | 1.6667 | 125 | 2.6291 |
| 2.542 | 2.0 | 150 | 2.6286 |
| 2.6601 | 2.3333 | 175 | 2.6249 |
| 2.6936 | 2.6667 | 200 | 2.6240 |
| 2.6593 | 3.0 | 225 | 2.6238 |
| 2.6088 | 3.3333 | 250 | 2.6237 |
| 2.542 | 3.6667 | 275 | 2.6208 |
| 2.5635 | 4.0 | 300 | 2.6185 |
| 2.6825 | 4.3333 | 325 | 2.6183 |
| 2.5987 | 4.6667 | 350 | 2.6198 |
| 2.6421 | 5.0 | 375 | 2.6196 |
| 2.6426 | 5.3333 | 400 | 2.6183 |
| 2.6517 | 5.6667 | 425 | 2.6189 |
| 2.5762 | 6.0 | 450 | 2.6190 |
| 2.6801 | 6.3333 | 475 | 2.6190 |
| 2.5918 | 6.6667 | 500 | 2.6169 |
| 2.6354 | 7.0 | 525 | 2.6163 |
| 2.6433 | 7.3333 | 550 | 2.6156 |
| 2.7024 | 7.6667 | 575 | 2.6172 |
| 2.575 | 8.0 | 600 | 2.6176 |
| 2.6109 | 8.3333 | 625 | 2.6165 |
| 2.6088 | 8.6667 | 650 | 2.6195 |
| 2.698 | 9.0 | 675 | 2.6185 |
| 2.612 | 9.3333 | 700 | 2.6166 |
| 2.615 | 9.6667 | 725 | 2.6188 |
| 2.681 | 10.0 | 750 | 2.6165 |
| 2.6118 | 10.3333 | 775 | 2.6190 |
| 2.5875 | 10.6667 | 800 | 2.6168 |
| 2.6859 | 11.0 | 825 | 2.6170 |
| 2.5953 | 11.3333 | 850 | 2.6171 |
| 2.5884 | 11.6667 | 875 | 2.6169 |
| 2.6721 | 12.0 | 900 | 2.6170 |
| 2.5951 | 12.3333 | 925 | 2.6170 |
| 2.6112 | 12.6667 | 950 | 2.6170 |
| 2.5842 | 13.0 | 975 | 2.6170 |
| 2.5962 | 13.3333 | 1000 | 2.6170 |
### Framework versions
- Transformers 4.41.2
- Pytorch 2.0.0+cu117
- Datasets 2.19.2
- Tokenizers 0.19.1
|
richardkelly/google-gemma-7b-1717613194 | richardkelly | 2024-06-06T02:44:55Z | 6 | 0 | transformers | [
"transformers",
"safetensors",
"gemma",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-06-05T18:46:36Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
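In the absence of card-specific instructions, a generic transformers text-generation sketch (everything beyond the repo id is assumed):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("richardkelly/google-gemma-7b-1717613194")
model = AutoModelForCausalLM.from_pretrained("richardkelly/google-gemma-7b-1717613194")

inputs = tokenizer("Hello, my name is", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```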
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
UnclearPWR/SPARK-mini-instruct-Q6_K-GGUF | UnclearPWR | 2024-06-06T02:44:39Z | 1 | 0 | transformers | [
"transformers",
"gguf",
"climate",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"en",
"base_model:NuclearnAI/SPARK-mini-instruct",
"base_model:quantized:NuclearnAI/SPARK-mini-instruct",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2024-06-06T02:44:29Z | ---
language:
- en
library_name: transformers
tags:
- climate
- llama-cpp
- gguf-my-repo
base_model: NuclearnAI/SPARK-mini-instruct
pipeline_tag: text-generation
---
# UnclearPWR/SPARK-mini-instruct-Q6_K-GGUF
This model was converted to GGUF format from [`NuclearnAI/SPARK-mini-instruct`](https://huggingface.co/NuclearnAI/SPARK-mini-instruct) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/NuclearnAI/SPARK-mini-instruct) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama --hf-repo UnclearPWR/SPARK-mini-instruct-Q6_K-GGUF --hf-file spark-mini-instruct-q6_k.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo UnclearPWR/SPARK-mini-instruct-Q6_K-GGUF --hf-file spark-mini-instruct-q6_k.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (e.g., `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./main --hf-repo UnclearPWR/SPARK-mini-instruct-Q6_K-GGUF --hf-file spark-mini-instruct-q6_k.gguf -p "The meaning to life and the universe is"
```
or
```
./server --hf-repo UnclearPWR/SPARK-mini-instruct-Q6_K-GGUF --hf-file spark-mini-instruct-q6_k.gguf -c 2048
```
|
OrcinusOrca/q-Taxi-v3 | OrcinusOrca | 2024-06-06T02:43:06Z | 0 | 0 | null | [
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2024-06-06T02:43:03Z | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.54 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
import gym

# load_from_hub is the Hugging Face Deep RL course helper that downloads and unpickles the saved Q-table
model = load_from_hub(repo_id="OrcinusOrca/q-Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
tsavage68/UTI_M2_1000steps_1e6rate_SFT | tsavage68 | 2024-06-06T02:42:22Z | 7 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"trl",
"sft",
"generated_from_trainer",
"conversational",
"base_model:mistralai/Mistral-7B-Instruct-v0.2",
"base_model:finetune:mistralai/Mistral-7B-Instruct-v0.2",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-29T13:16:01Z | ---
license: apache-2.0
base_model: mistralai/Mistral-7B-Instruct-v0.2
tags:
- trl
- sft
- generated_from_trainer
model-index:
- name: UTI_M2_1000steps_1e6rate_SFT
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# UTI_M2_1000steps_1e6rate_SFT
This model is a fine-tuned version of [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7960
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-06
- train_batch_size: 2
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- training_steps: 1000
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-------:|:----:|:---------------:|
| 2.2167 | 0.3333 | 25 | 1.1865 |
| 0.9806 | 0.6667 | 50 | 0.9618 |
| 0.936 | 1.0 | 75 | 0.9371 |
| 0.8294 | 1.3333 | 100 | 0.9512 |
| 0.8273 | 1.6667 | 125 | 0.9369 |
| 0.7851 | 2.0 | 150 | 0.9036 |
| 0.5263 | 2.3333 | 175 | 0.9990 |
| 0.5512 | 2.6667 | 200 | 0.9589 |
| 0.5272 | 3.0 | 225 | 0.9576 |
| 0.2888 | 3.3333 | 250 | 1.1371 |
| 0.2968 | 3.6667 | 275 | 1.1164 |
| 0.3381 | 4.0 | 300 | 1.1144 |
| 0.1802 | 4.3333 | 325 | 1.1697 |
| 0.2025 | 4.6667 | 350 | 1.1946 |
| 0.2273 | 5.0 | 375 | 1.2614 |
| 0.1417 | 5.3333 | 400 | 1.3260 |
| 0.1524 | 5.6667 | 425 | 1.3343 |
| 0.136 | 6.0 | 450 | 1.3735 |
| 0.117 | 6.3333 | 475 | 1.3843 |
| 0.1284 | 6.6667 | 500 | 1.3742 |
| 0.1172 | 7.0 | 525 | 1.4114 |
| 0.0905 | 7.3333 | 550 | 1.5000 |
| 0.1027 | 7.6667 | 575 | 1.5142 |
| 0.097 | 8.0 | 600 | 1.4912 |
| 0.0837 | 8.3333 | 625 | 1.5974 |
| 0.0832 | 8.6667 | 650 | 1.6185 |
| 0.0781 | 9.0 | 675 | 1.6203 |
| 0.0698 | 9.3333 | 700 | 1.6833 |
| 0.0722 | 9.6667 | 725 | 1.6960 |
| 0.0681 | 10.0 | 750 | 1.7139 |
| 0.0635 | 10.3333 | 775 | 1.7732 |
| 0.0654 | 10.6667 | 800 | 1.7704 |
| 0.0663 | 11.0 | 825 | 1.7647 |
| 0.0604 | 11.3333 | 850 | 1.7840 |
| 0.0628 | 11.6667 | 875 | 1.7916 |
| 0.0627 | 12.0 | 900 | 1.7947 |
| 0.061 | 12.3333 | 925 | 1.7962 |
| 0.062 | 12.6667 | 950 | 1.7967 |
| 0.0607 | 13.0 | 975 | 1.7960 |
| 0.0605 | 13.3333 | 1000 | 1.7960 |
### Framework versions
- Transformers 4.41.2
- Pytorch 2.0.0+cu117
- Datasets 2.19.2
- Tokenizers 0.19.1
|
suthanhcong/distilbert-base-uncased-finetuned-DouRC | suthanhcong | 2024-06-06T02:41:18Z | 128 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"question-answering",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | question-answering | 2024-06-05T07:12:29Z | ---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: distilbert-base-uncased-finetuned-DouRC
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-DouRC
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 4.6662
- Exact Match: 0.05
- F1: 0.0270
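For reference, extractive question answering with this checkpoint follows the standard pipeline pattern (a sketch; the question and context here are made up):

```python
from transformers import pipeline

qa = pipeline("question-answering", model="suthanhcong/distilbert-base-uncased-finetuned-DouRC")
result = qa(
    question="Who wrote the report?",
    context="The report was written by Dr. Lee and published in 2021.",
)
print(result)  # {'score': ..., 'start': ..., 'end': ..., 'answer': ...}
```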
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 50
- eval_batch_size: 50
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Exact Match | F1 |
|:-------------:|:-----:|:----:|:---------------:|:-----------:|:------:|
| 5.6355 | 1.0 | 10 | 5.3658 | 0.05 | 0.0264 |
| 5.083 | 2.0 | 20 | 4.9659 | 0.035 | 0.0164 |
| 4.6944 | 3.0 | 30 | 4.7939 | 0.03 | 0.0129 |
| 4.4803 | 4.0 | 40 | 4.6992 | 0.04 | 0.0207 |
| 4.389 | 5.0 | 50 | 4.6662 | 0.05 | 0.0270 |
### Framework versions
- Transformers 4.41.2
- Pytorch 2.3.0+cu121
- Datasets 2.19.2
- Tokenizers 0.19.1
|
tsavage68/UTI_M2_1000steps_1e5rate_SFT | tsavage68 | 2024-06-06T02:40:49Z | 10 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"trl",
"sft",
"generated_from_trainer",
"conversational",
"base_model:mistralai/Mistral-7B-Instruct-v0.2",
"base_model:finetune:mistralai/Mistral-7B-Instruct-v0.2",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-29T14:06:56Z | ---
license: apache-2.0
base_model: mistralai/Mistral-7B-Instruct-v0.2
tags:
- trl
- sft
- generated_from_trainer
model-index:
- name: UTI_M2_1000steps_1e5rate_SFT
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# UTI_M2_1000steps_1e5rate_SFT
This model is a fine-tuned version of [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9597
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 2
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- training_steps: 1000
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-------:|:----:|:---------------:|
| 1.0555 | 0.3333 | 25 | 0.9919 |
| 1.0461 | 0.6667 | 50 | 1.0778 |
| 1.1438 | 1.0 | 75 | 1.4514 |
| 0.925 | 1.3333 | 100 | 1.1656 |
| 0.9875 | 1.6667 | 125 | 1.1640 |
| 0.9859 | 2.0 | 150 | 1.6609 |
| 0.7898 | 2.3333 | 175 | 1.2420 |
| 0.7246 | 2.6667 | 200 | 1.2557 |
| 0.7078 | 3.0 | 225 | 1.1831 |
| 0.4316 | 3.3333 | 250 | 1.3381 |
| 0.4621 | 3.6667 | 275 | 1.3760 |
| 0.5094 | 4.0 | 300 | 1.3134 |
| 0.2873 | 4.3333 | 325 | 1.3968 |
| 0.267 | 4.6667 | 350 | 1.5584 |
| 0.292 | 5.0 | 375 | 1.4604 |
| 0.1967 | 5.3333 | 400 | 1.5440 |
| 0.2125 | 5.6667 | 425 | 1.5934 |
| 0.2141 | 6.0 | 450 | 1.5512 |
| 0.1391 | 6.3333 | 475 | 1.6320 |
| 0.1735 | 6.6667 | 500 | 1.6144 |
| 0.1688 | 7.0 | 525 | 1.6714 |
| 0.1265 | 7.3333 | 550 | 1.6959 |
| 0.1334 | 7.6667 | 575 | 1.6998 |
| 0.1245 | 8.0 | 600 | 1.7298 |
| 0.1066 | 8.3333 | 625 | 1.7505 |
| 0.0982 | 8.6667 | 650 | 1.7773 |
| 0.1014 | 9.0 | 675 | 1.8197 |
| 0.0829 | 9.3333 | 700 | 1.8606 |
| 0.0774 | 9.6667 | 725 | 1.8651 |
| 0.0846 | 10.0 | 750 | 1.8653 |
| 0.0739 | 10.3333 | 775 | 1.9064 |
| 0.0786 | 10.6667 | 800 | 1.9323 |
| 0.0691 | 11.0 | 825 | 1.9367 |
| 0.0648 | 11.3333 | 850 | 1.9448 |
| 0.0649 | 11.6667 | 875 | 1.9546 |
| 0.0672 | 12.0 | 900 | 1.9559 |
| 0.06 | 12.3333 | 925 | 1.9592 |
| 0.0606 | 12.6667 | 950 | 1.9597 |
| 0.0606 | 13.0 | 975 | 1.9597 |
| 0.0601 | 13.3333 | 1000 | 1.9597 |
### Framework versions
- Transformers 4.41.2
- Pytorch 2.0.0+cu117
- Datasets 2.19.2
- Tokenizers 0.19.1
|
v8karlo/UNCENSORED-TinyDolphin-3x-MoE-Q4_K_M-GGUF | v8karlo | 2024-06-06T02:36:55Z | 7 | 1 | null | [
"gguf",
"moe",
"frankenmoe",
"merge",
"mergekit",
"lazymergekit",
"cognitivecomputations/TinyDolphin-2.8.1-1.1b",
"llama-cpp",
"gguf-my-repo",
"base_model:jtatman/TinyDolphin-3x-MoE",
"base_model:quantized:jtatman/TinyDolphin-3x-MoE",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-06-06T02:32:24Z | ---
license: apache-2.0
tags:
- moe
- frankenmoe
- merge
- mergekit
- lazymergekit
- cognitivecomputations/TinyDolphin-2.8.1-1.1b
- llama-cpp
- gguf-my-repo
base_model: jtatman/TinyDolphin-3x-MoE
---
# v8karlo/TinyDolphin-3x-MoE-Q4_K_M-GGUF
This model was converted to GGUF format from [`jtatman/TinyDolphin-3x-MoE`](https://huggingface.co/jtatman/TinyDolphin-3x-MoE) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/jtatman/TinyDolphin-3x-MoE) for more details on the model.
Convert safetensors to GGUF: https://huggingface.co/spaces/ggml-org/gguf-my-repo

## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama --hf-repo v8karlo/TinyDolphin-3x-MoE-Q4_K_M-GGUF --hf-file tinydolphin-3x-moe-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo v8karlo/TinyDolphin-3x-MoE-Q4_K_M-GGUF --hf-file tinydolphin-3x-moe-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (e.g., `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./main --hf-repo v8karlo/TinyDolphin-3x-MoE-Q4_K_M-GGUF --hf-file tinydolphin-3x-moe-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./server --hf-repo v8karlo/TinyDolphin-3x-MoE-Q4_K_M-GGUF --hf-file tinydolphin-3x-moe-q4_k_m.gguf -c 2048
```
|
UnclearPWR/SPARK-mini-instruct-Q8_0-GGUF | UnclearPWR | 2024-06-06T02:35:46Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"climate",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"en",
"base_model:NuclearnAI/SPARK-mini-instruct",
"base_model:quantized:NuclearnAI/SPARK-mini-instruct",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2024-06-06T02:35:36Z | ---
language:
- en
library_name: transformers
tags:
- climate
- llama-cpp
- gguf-my-repo
base_model: NuclearnAI/SPARK-mini-instruct
pipeline_tag: text-generation
---
# UnclearPWR/SPARK-mini-instruct-Q8_0-GGUF
This model was converted to GGUF format from [`NuclearnAI/SPARK-mini-instruct`](https://huggingface.co/NuclearnAI/SPARK-mini-instruct) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/NuclearnAI/SPARK-mini-instruct) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama --hf-repo UnclearPWR/SPARK-mini-instruct-Q8_0-GGUF --hf-file spark-mini-instruct-q8_0.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo UnclearPWR/SPARK-mini-instruct-Q8_0-GGUF --hf-file spark-mini-instruct-q8_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (e.g., `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./main --hf-repo UnclearPWR/SPARK-mini-instruct-Q8_0-GGUF --hf-file spark-mini-instruct-q8_0.gguf -p "The meaning to life and the universe is"
```
or
```
./server --hf-repo UnclearPWR/SPARK-mini-instruct-Q8_0-GGUF --hf-file spark-mini-instruct-q8_0.gguf -c 2048
```
|
haihuynh/IMDB-Bert-CLSModel-v1 | haihuynh | 2024-06-06T02:31:53Z | 108 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-06-06T02:31:40Z | ---
license: apache-2.0
base_model: bert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: IMDB-Bert-CLSModel-v1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# IMDB-Bert-CLSModel-v1
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4872
- Accuracy: 0.8126
- F1: 0.8125
- Precision: 0.8135
- Recall: 0.8127
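For reference, inference with this checkpoint follows the standard text-classification pipeline pattern (a sketch; label names depend on the checkpoint's config):

```python
from transformers import pipeline

classifier = pipeline("text-classification", model="haihuynh/IMDB-Bert-CLSModel-v1")
print(classifier("This movie was an absolute delight from start to finish."))
```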
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 32
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:------:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|
| 0.5671 | 0.6394 | 500 | 0.7079 | 0.4995 | 0.3347 | 0.7495 | 0.5009 |
| 0.7017 | 1.2788 | 1000 | 0.6467 | 0.5744 | 0.4837 | 0.7578 | 0.5755 |
| 0.5909 | 1.9182 | 1500 | 0.4872 | 0.8126 | 0.8125 | 0.8135 | 0.8127 |
### Framework versions
- Transformers 4.41.2
- Pytorch 2.1.2
- Datasets 2.19.2
- Tokenizers 0.19.1
|
bella05/pogny-16-0.001 | bella05 | 2024-06-06T02:30:13Z | 108 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"text-classification",
"generated_from_trainer",
"base_model:klue/roberta-large",
"base_model:finetune:klue/roberta-large",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-06-06T02:09:36Z | ---
base_model: klue/roberta-large
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: pogny-16-0.001
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pogny-16-0.001
This model is a fine-tuned version of [klue/roberta-large](https://huggingface.co/klue/roberta-large) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6847
- Accuracy: 0.4376
- F1: 0.2665
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:|
| 1.7729 | 1.0 | 4818 | 1.8008 | 0.4376 | 0.2665 |
| 1.7377 | 2.0 | 9636 | 1.7499 | 0.4376 | 0.2665 |
| 1.6614 | 3.0 | 14454 | 1.6847 | 0.4376 | 0.2665 |
### Framework versions
- Transformers 4.34.1
- Pytorch 2.1.0a0+b5021ba
- Datasets 2.6.2
- Tokenizers 0.14.1
|
ari2002/lora_model | ari2002 | 2024-06-06T02:27:49Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"base_model:finetune:unsloth/llama-3-8b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-06-06T02:27:25Z | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
base_model: unsloth/llama-3-8b-bnb-4bit
---
# Uploaded model
- **Developed by:** ari2002
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
suthanhcong/results | suthanhcong | 2024-06-06T02:27:41Z | 135 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"question-answering",
"generated_from_trainer",
"base_model:google-bert/bert-base-cased",
"base_model:finetune:google-bert/bert-base-cased",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | question-answering | 2024-06-06T02:17:01Z | ---
license: apache-2.0
base_model: bert-base-cased
tags:
- generated_from_trainer
model-index:
- name: results
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# results
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 50
- eval_batch_size: 50
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Framework versions
- Transformers 4.41.2
- Pytorch 2.3.0+cu121
- Datasets 2.19.2
- Tokenizers 0.19.1
|
wwe180/L3-8B-LingYang-v1-Q6_K-GGUF | wwe180 | 2024-06-06T02:22:33Z | 3 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"Llama3",
"llama-cpp",
"gguf-my-repo",
"base_model:wwe180/L3-8B-LingYang-v1",
"base_model:quantized:wwe180/L3-8B-LingYang-v1",
"license:other",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-06-06T02:20:32Z | ---
license:
- other
library_name: transformers
tags:
- mergekit
- merge
- Llama3
- llama-cpp
- gguf-my-repo
base_model: wwe180/L3-8B-LingYang-v1
---
# wwe180/L3-8B-LingYang-v1-Q6_K-GGUF
This model was converted to GGUF format from [`wwe180/L3-8B-LingYang-v1`](https://huggingface.co/wwe180/L3-8B-LingYang-v1) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/wwe180/L3-8B-LingYang-v1) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama --hf-repo wwe180/L3-8B-LingYang-v1-Q6_K-GGUF --hf-file l3-8b-lingyang-v1-q6_k.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo wwe180/L3-8B-LingYang-v1-Q6_K-GGUF --hf-file l3-8b-lingyang-v1-q6_k.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (e.g., `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./main --hf-repo wwe180/L3-8B-LingYang-v1-Q6_K-GGUF --hf-file l3-8b-lingyang-v1-q6_k.gguf -p "The meaning to life and the universe is"
```
or
```
./server --hf-repo wwe180/L3-8B-LingYang-v1-Q6_K-GGUF --hf-file l3-8b-lingyang-v1-q6_k.gguf -c 2048
```
|
RichardErkhov/Locutusque_-_gpt2-conversational-or-qa-gguf | RichardErkhov | 2024-06-06T02:16:23Z | 50 | 0 | null | [
"gguf",
"endpoints_compatible",
"region:us"
] | null | 2024-06-06T02:05:31Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
gpt2-conversational-or-qa - GGUF
- Model creator: https://huggingface.co/Locutusque/
- Original model: https://huggingface.co/Locutusque/gpt2-conversational-or-qa/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [gpt2-conversational-or-qa.Q2_K.gguf](https://huggingface.co/RichardErkhov/Locutusque_-_gpt2-conversational-or-qa-gguf/blob/main/gpt2-conversational-or-qa.Q2_K.gguf) | Q2_K | 0.08GB |
| [gpt2-conversational-or-qa.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/Locutusque_-_gpt2-conversational-or-qa-gguf/blob/main/gpt2-conversational-or-qa.IQ3_XS.gguf) | IQ3_XS | 0.08GB |
| [gpt2-conversational-or-qa.IQ3_S.gguf](https://huggingface.co/RichardErkhov/Locutusque_-_gpt2-conversational-or-qa-gguf/blob/main/gpt2-conversational-or-qa.IQ3_S.gguf) | IQ3_S | 0.08GB |
| [gpt2-conversational-or-qa.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/Locutusque_-_gpt2-conversational-or-qa-gguf/blob/main/gpt2-conversational-or-qa.Q3_K_S.gguf) | Q3_K_S | 0.08GB |
| [gpt2-conversational-or-qa.IQ3_M.gguf](https://huggingface.co/RichardErkhov/Locutusque_-_gpt2-conversational-or-qa-gguf/blob/main/gpt2-conversational-or-qa.IQ3_M.gguf) | IQ3_M | 0.09GB |
| [gpt2-conversational-or-qa.Q3_K.gguf](https://huggingface.co/RichardErkhov/Locutusque_-_gpt2-conversational-or-qa-gguf/blob/main/gpt2-conversational-or-qa.Q3_K.gguf) | Q3_K | 0.09GB |
| [gpt2-conversational-or-qa.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/Locutusque_-_gpt2-conversational-or-qa-gguf/blob/main/gpt2-conversational-or-qa.Q3_K_M.gguf) | Q3_K_M | 0.09GB |
| [gpt2-conversational-or-qa.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/Locutusque_-_gpt2-conversational-or-qa-gguf/blob/main/gpt2-conversational-or-qa.Q3_K_L.gguf) | Q3_K_L | 0.1GB |
| [gpt2-conversational-or-qa.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/Locutusque_-_gpt2-conversational-or-qa-gguf/blob/main/gpt2-conversational-or-qa.IQ4_XS.gguf) | IQ4_XS | 0.1GB |
| [gpt2-conversational-or-qa.Q4_0.gguf](https://huggingface.co/RichardErkhov/Locutusque_-_gpt2-conversational-or-qa-gguf/blob/main/gpt2-conversational-or-qa.Q4_0.gguf) | Q4_0 | 0.1GB |
| [gpt2-conversational-or-qa.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/Locutusque_-_gpt2-conversational-or-qa-gguf/blob/main/gpt2-conversational-or-qa.IQ4_NL.gguf) | IQ4_NL | 0.1GB |
| [gpt2-conversational-or-qa.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/Locutusque_-_gpt2-conversational-or-qa-gguf/blob/main/gpt2-conversational-or-qa.Q4_K_S.gguf) | Q4_K_S | 0.1GB |
| [gpt2-conversational-or-qa.Q4_K.gguf](https://huggingface.co/RichardErkhov/Locutusque_-_gpt2-conversational-or-qa-gguf/blob/main/gpt2-conversational-or-qa.Q4_K.gguf) | Q4_K | 0.11GB |
| [gpt2-conversational-or-qa.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/Locutusque_-_gpt2-conversational-or-qa-gguf/blob/main/gpt2-conversational-or-qa.Q4_K_M.gguf) | Q4_K_M | 0.11GB |
| [gpt2-conversational-or-qa.Q4_1.gguf](https://huggingface.co/RichardErkhov/Locutusque_-_gpt2-conversational-or-qa-gguf/blob/main/gpt2-conversational-or-qa.Q4_1.gguf) | Q4_1 | 0.11GB |
| [gpt2-conversational-or-qa.Q5_0.gguf](https://huggingface.co/RichardErkhov/Locutusque_-_gpt2-conversational-or-qa-gguf/blob/main/gpt2-conversational-or-qa.Q5_0.gguf) | Q5_0 | 0.11GB |
| [gpt2-conversational-or-qa.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/Locutusque_-_gpt2-conversational-or-qa-gguf/blob/main/gpt2-conversational-or-qa.Q5_K_S.gguf) | Q5_K_S | 0.11GB |
| [gpt2-conversational-or-qa.Q5_K.gguf](https://huggingface.co/RichardErkhov/Locutusque_-_gpt2-conversational-or-qa-gguf/blob/main/gpt2-conversational-or-qa.Q5_K.gguf) | Q5_K | 0.12GB |
| [gpt2-conversational-or-qa.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/Locutusque_-_gpt2-conversational-or-qa-gguf/blob/main/gpt2-conversational-or-qa.Q5_K_M.gguf) | Q5_K_M | 0.12GB |
| [gpt2-conversational-or-qa.Q5_1.gguf](https://huggingface.co/RichardErkhov/Locutusque_-_gpt2-conversational-or-qa-gguf/blob/main/gpt2-conversational-or-qa.Q5_1.gguf) | Q5_1 | 0.12GB |
| [gpt2-conversational-or-qa.Q6_K.gguf](https://huggingface.co/RichardErkhov/Locutusque_-_gpt2-conversational-or-qa-gguf/blob/main/gpt2-conversational-or-qa.Q6_K.gguf) | Q6_K | 0.13GB |
| [gpt2-conversational-or-qa.Q8_0.gguf](https://huggingface.co/RichardErkhov/Locutusque_-_gpt2-conversational-or-qa-gguf/blob/main/gpt2-conversational-or-qa.Q8_0.gguf) | Q8_0 | 0.17GB |
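Any of these files can be fetched programmatically with `huggingface_hub` (a sketch; substitute the filename of the quant you want from the table above):

```python
from huggingface_hub import hf_hub_download

# Download one quantization level from this repo to the local cache.
path = hf_hub_download(
    repo_id="RichardErkhov/Locutusque_-_gpt2-conversational-or-qa-gguf",
    filename="gpt2-conversational-or-qa.Q4_K_M.gguf",
)
print(path)
```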
Original model description:
---
license: openrail
datasets:
- Locutusque/ColumnedChatCombined
language:
- en
metrics:
- bleu
- perplexity
- loss
- reward
- penalty
widget:
- text: '<|USER|> Hello! <|ASSISTANT|> '
pipeline_tag: conversational
inference:
parameters:
temperature: 0.5
do_sample: True
top_p: 0.5
top_k: 30
max_new_tokens: 250
repetition_penalty: 1.15
---
# Model Card
*This model is deprecated; please see https://huggingface.co/Locutusque/gpt2-conversational-retrain for a better-performing model.*
## Model Details
- Model Name: gpt2-conversational-or-qa (prototype)
- Model Type: Language Modeling
- Task: Generating Conversational Responses
- Hardware: 1x RTX 3060
- Description: This model is trained on a dataset of conversations between a user and an AI assistant, with the goal of generating a coherent and relevant response to the user's input. It uses the GPT-2 architecture, a state-of-the-art transformer-based language model that is capable of generating high-quality text with a wide range of styles and tones. The model is fine-tuned on the conversational data using maximum likelihood estimation, and is evaluated based on its ability to generate responses that are both grammatically correct and semantically relevant to the user's input. I've also trained larger models such as https://huggingface.co/Locutusque/gpt2-medium-conversational and https://huggingface.co/Locutusque/gpt2-large-conversational
## Intended Use
This model is intended to be used for generating conversational responses in a variety of contexts, such as chatbots, virtual assistants, and customer service applications. It is designed to provide natural and engaging responses to user input, with a focus on maintaining a consistent tone and style throughout the conversation. The model is suitable for use in both text-based and voice-based interfaces, and can be easily integrated into existing applications using the PyTorch and Transformers frameworks.
## Training Data
The model is trained on a large dataset of conversational data, consisting of interactions between users and an AI assistant. The data is preprocessed to remove any sensitive information and is formatted in a way that is suitable for training a language model. The training data is split into a training set and a validation set, with the training set used to update the model parameters and the validation set used to evaluate the model performance. The model was trained on 245,000 examples over 1,225,000 steps and achieved decent metrics.
This model outperformed the base GPT-2 model significantly on a new conversational dataset during a fine-tuning session. Here is a side-by-side comparison of the two models during the first steps of training:
```python
# Base GPT-2
"""
Epoch 1/5, Batch 1/10000: Loss - 64.9255, Reward - 260.0000, Penalty - 624.0000, BLEU - 0.0000
Epoch 1/5, Batch 2/10000: Loss - 57.4635, Reward - 303.0000, Penalty - 870.0000, BLEU - 0.0000
Epoch 1/5, Batch 3/10000: Loss - 67.8061, Reward - 295.0000, Penalty - 908.0000, BLEU - 0.0000
Epoch 1/5, Batch 4/10000: Loss - 59.6118, Reward - 800.0000, Penalty - 740.0000, BLEU - 0.0000
Epoch 1/5, Batch 5/10000: Loss - 67.4855, Reward - 402.0000, Penalty - 806.0000, BLEU - 0.0000
Epoch 1/5, Batch 6/10000: Loss - 29.3718, Reward - 937.0000, Penalty - 760.0000, BLEU - 0.0000
Epoch 1/5, Batch 7/10000: Loss - 79.0709, Reward - 390.0000, Penalty - 1114.0000, BLEU - 0.0000
Epoch 1/5, Batch 8/10000: Loss - 61.4583, Reward - 385.0000, Penalty - 760.0000, BLEU - 0.0000
Epoch 1/5, Batch 9/10000: Loss - 56.3084, Reward - 741.0000, Penalty - 560.0000, BLEU - 3.5500
Epoch 1/5, Batch 10/10000: Loss - 80.0192, Reward - 838.0000, Penalty - 1424.0000, BLEU - 0.0000
Epoch 1/5, Batch 11/10000: Loss - 51.8236, Reward - 228.0000, Penalty - 812.0000, BLEU - 0.0001
Epoch 1/5, Batch 12/10000: Loss - 71.4071, Reward - 541.0000, Penalty - 982.0000, BLEU - 0.0000
Epoch 1/5, Batch 13/10000: Loss - 33.3624, Reward - 910.0000, Penalty - 1002.0000, BLEU - 0.0027
Epoch 1/5, Batch 14/10000: Loss - 55.9721, Reward - 808.0000, Penalty - 798.0000, BLEU - 0.0005
Epoch 1/5, Batch 15/10000: Loss - 67.0336, Reward - 517.0000, Penalty - 764.0000, BLEU - 0.0000
"""
# Conversational GPT-2
"""
Epoch 1/5, Batch 1/10000: Loss - 6.1980, Reward - 887.0000, Penalty - 1500.0000, BLEU - 0.0648
Epoch 1/5, Batch 2/10000: Loss - 4.5750, Reward - 245.0000, Penalty - 1618.0000, BLEU - 0.0008
Epoch 1/5, Batch 3/10000: Loss - 5.1264, Reward - 600.0000, Penalty - 642.0000, BLEU - 5.7981
Epoch 1/5, Batch 4/10000: Loss - 0.2995, Reward - 1020.0000, Penalty - 74.0000, BLEU - 13.8469
Epoch 1/5, Batch 5/10000: Loss - 7.9377, Reward - 203.0000, Penalty - 1700.0000, BLEU - 0.3218
Epoch 1/5, Batch 6/10000: Loss - 5.0522, Reward - 1020.0000, Penalty - 2034.0000, BLEU - 0.1946
Epoch 1/5, Batch 7/10000: Loss - 2.0585, Reward - 925.0000, Penalty - 526.0000, BLEU - 16.1298
Epoch 1/5, Batch 8/10000: Loss - 5.9736, Reward - 1009.0000, Penalty - 1844.0000, BLEU - 0.0085
Epoch 1/5, Batch 9/10000: Loss - 6.0867, Reward - 245.0000, Penalty - 1690.0000, BLEU - 1.9342
Epoch 1/5, Batch 10/10000: Loss - 7.8497, Reward - 155.0000, Penalty - 1780.0000, BLEU - 0.0115
Epoch 1/5, Batch 11/10000: Loss - 3.8887, Reward - 1012.0000, Penalty - 2010.0000, BLEU - 0.6957
Epoch 1/5, Batch 12/10000: Loss - 6.6133, Reward - 216.0000, Penalty - 1638.0000, BLEU - 1.7853
Epoch 1/5, Batch 13/10000: Loss - 1.3319, Reward - 945.0000, Penalty - 374.0000, BLEU - 0.0075
Epoch 1/5, Batch 14/10000: Loss - 2.6296, Reward - 956.0000, Penalty - 414.0000, BLEU - 3.2207
Epoch 1/5, Batch 15/10000: Loss - 6.8827, Reward - 1013.0000, Penalty - 1970.0000, BLEU - 3.7418
"""
```
## Model Architecture
The model architecture used in this model is GPT-2, a transformer-based language model that is capable of generating high-quality text with a wide range of styles and tones. The GPT-2 architecture consists of a multi-layered decoder-only transformer, with self-attention mechanisms that allow the model to capture long-term dependencies and generate coherent text.
## Evaluation Metrics
The model is evaluated based on several metrics, including loss, reward, penalty, BLEU score, and perplexity. The loss metric is calculated during training and reflects the difference between the predicted output and the actual output. The reward metric is based on the number of correct words generated by the model, while the penalty metric penalizes the model for repeating words consecutively. The BLEU score measures the similarity between the generated text and the ground truth text, while the perplexity metric measures how well the model is able to predict the next word in a sequence. During validation, the model achieved the following metrics:
- BLEU Score: 9
- Perplexity: 19
- Loss: 1.7
## Limitations and Bias
This model is not suitable for all use cases due to its limited training time on a weak computer. As a result, it may produce irrelevant or nonsensical responses. Additionally, it has not been fine-tuned to remember the chat history, is unable to provide follow-up responses, and it does not know the answer to many questions (it was only fine-tuned to respond in a conversational way). For optimal performance, we recommend using a GPU with at least 4GB of VRAM and downloading the model manually instead of using the Transformers library or deploying it on the Inference API. Here's how you should deploy the model:
```python
import torch
from transformers import GPT2Tokenizer, GPT2LMHeadModel

tokenizer = GPT2Tokenizer.from_pretrained('gpt2')
model = GPT2LMHeadModel.from_pretrained('gpt2')
tokenizer.add_special_tokens({'pad_token': '[PAD]'})
tokenizer.add_special_tokens({'eos_token': '<|End|>'})
special_tokens = {
    "additional_special_tokens": ["<|USER|>", "<|SYSTEM|>", "<|ASSISTANT|>"]
}
tokenizer.add_special_tokens(special_tokens)
model.resize_token_embeddings(len(tokenizer))
model.load_state_dict(torch.load("path/to/model"))
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model.to(device)

def generate_text(model, tokenizer, prompt, max_length=1024):
    prompt = f'<|USER|> {prompt} <|ASSISTANT|> '
    input_ids = tokenizer.encode(prompt, add_special_tokens=True, return_tensors="pt").to(device)
    attention_mask = torch.ones_like(input_ids).to(device)
    output = model.generate(input_ids,
                            max_length=max_length,
                            do_sample=True,
                            top_k=35,
                            top_p=0.80,
                            pad_token_id=tokenizer.pad_token_id,
                            eos_token_id=tokenizer.eos_token_id,
                            attention_mask=attention_mask)
    output_ids = tokenizer.decode(output[0], skip_special_tokens=False)
    # Slice out just the assistant's reply between the special tokens
    assistant_token_index = output_ids.index('<|ASSISTANT|>') + len('<|ASSISTANT|>')
    next_token_index = output_ids.find('<|', assistant_token_index)
    output_ids = output_ids[assistant_token_index:next_token_index]
    return output_ids

# Loop to interact with the model
while True:
    prompt = input("Enter a prompt (or 'q' to quit): ")
    if prompt == "q":
        break
    output_text = generate_text(model, tokenizer, prompt)
    print(output_text)
```
## Deploying and training the model
The model has been fine-tuned on a specific input format that goes like this: ```<|USER|> {user prompt} <|ASSISTANT|> {model prediction} <|End|>```. For the best performance from the model, the input text should be ```<|USER|> {user prompt} <|ASSISTANT|> ``` and the target/label should be ```<|USER|> {user prompt} <|ASSISTANT|> {dataset output} <|End|>```.
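As a concrete sketch of assembling one training pair in this format (the function and field names are placeholders, not from the original training code):

```python
def build_example(user_prompt: str, dataset_output: str) -> dict:
    # Inference-time input: everything up to and including the assistant tag.
    text = f"<|USER|> {user_prompt} <|ASSISTANT|> "
    # Training target: the same prefix plus the reference output and end token.
    label = f"<|USER|> {user_prompt} <|ASSISTANT|> {dataset_output} <|End|>"
    return {"text": text, "label": label}
```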
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_Locutusque__gpt2-conversational-or-qa)
| Metric | Value |
|-----------------------|---------------------------|
| Avg. | 25.09 |
| ARC (25-shot) | 21.42 |
| HellaSwag (10-shot) | 27.61 |
| MMLU (5-shot) | 26.51 |
| TruthfulQA (0-shot) | 47.31 |
| Winogrande (5-shot) | 51.14 |
| GSM8K (5-shot) | 0.08 |
| DROP (3-shot) | 1.55 |
|
Invitrace/I-live-well-foodai | Invitrace | 2024-06-06T02:15:41Z | 14 | 1 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:google/vit-base-patch16-224-in21k",
"base_model:finetune:google/vit-base-patch16-224-in21k",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2024-06-06T00:59:22Z | ---
license: apache-2.0
base_model: google/vit-base-patch16-224-in21k
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: invitrace-ilivewell
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.7232590808749749
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# invitrace-ilivewell
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3020
- Accuracy: 0.7233
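For reference, inference follows the standard image-classification pipeline pattern (a sketch; `meal.jpg` is a placeholder for a local food photo):

```python
from transformers import pipeline

classifier = pipeline("image-classification", model="Invitrace/I-live-well-foodai")
print(classifier("meal.jpg")[:3])  # top-3 predicted labels with scores
```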
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
- mixed_precision_training: Native AMP
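This is a minimal sketch of the equivalent configuration, assuming the standard 🤗 `Trainer` API (the card does not state which training loop was used); the values mirror the list above and everything else is left at defaults:
```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="invitrace-ilivewell",
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=8,
    fp16=True,  # "Native AMP" mixed precision
)
```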
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:------:|:-----:|:---------------:|:--------:|
| 5.0735 | 0.0803 | 200 | 5.0486 | 0.0462 |
| 4.947 | 0.1605 | 400 | 4.9254 | 0.1136 |
| 4.8134 | 0.2408 | 600 | 4.7931 | 0.2101 |
| 4.653 | 0.3210 | 800 | 4.6667 | 0.2133 |
| 4.5977 | 0.4013 | 1000 | 4.5528 | 0.2777 |
| 4.557 | 0.4815 | 1200 | 4.4413 | 0.2940 |
| 4.2948 | 0.5618 | 1400 | 4.3298 | 0.3233 |
| 4.2969 | 0.6421 | 1600 | 4.2270 | 0.3502 |
| 4.0779 | 0.7223 | 1800 | 4.1218 | 0.3743 |
| 4.0473 | 0.8026 | 2000 | 4.0337 | 0.3897 |
| 4.0347 | 0.8828 | 2200 | 3.9426 | 0.3879 |
| 3.9185 | 0.9631 | 2400 | 3.8602 | 0.4158 |
| 3.5221 | 1.0433 | 2600 | 3.7700 | 0.4275 |
| 3.5359 | 1.1236 | 2800 | 3.6728 | 0.4553 |
| 3.4622 | 1.2039 | 3000 | 3.5906 | 0.4541 |
| 3.431 | 1.2841 | 3200 | 3.5025 | 0.4732 |
| 3.3443 | 1.3644 | 3400 | 3.4168 | 0.4869 |
| 3.4344 | 1.4446 | 3600 | 3.3382 | 0.5025 |
| 3.249 | 1.5249 | 3800 | 3.2703 | 0.5160 |
| 3.2028 | 1.6051 | 4000 | 3.2017 | 0.4927 |
| 3.0678 | 1.6854 | 4200 | 3.1264 | 0.5152 |
| 3.0626 | 1.7657 | 4400 | 3.0487 | 0.5410 |
| 2.953 | 1.8459 | 4600 | 2.9699 | 0.5414 |
| 3.0011 | 1.9262 | 4800 | 2.9165 | 0.5503 |
| 2.9428 | 2.0064 | 5000 | 2.8433 | 0.5665 |
| 2.6717 | 2.0867 | 5200 | 2.7818 | 0.5569 |
| 2.6253 | 2.1669 | 5400 | 2.7195 | 0.5715 |
| 2.3998 | 2.2472 | 5600 | 2.6458 | 0.5874 |
| 2.2518 | 2.3274 | 5800 | 2.5901 | 0.5922 |
| 2.4229 | 2.4077 | 6000 | 2.5301 | 0.5912 |
| 2.592 | 2.4880 | 6200 | 2.4855 | 0.5984 |
| 2.0625 | 2.5682 | 6400 | 2.4277 | 0.6002 |
| 2.22 | 2.6485 | 6600 | 2.3784 | 0.6087 |
| 2.3326 | 2.7287 | 6800 | 2.3250 | 0.6123 |
| 2.1592 | 2.8090 | 7000 | 2.2958 | 0.6095 |
| 2.1491 | 2.8892 | 7200 | 2.2226 | 0.6303 |
| 2.0644 | 2.9695 | 7400 | 2.1923 | 0.6231 |
| 1.916 | 3.0498 | 7600 | 2.1510 | 0.6352 |
| 1.7435 | 3.1300 | 7800 | 2.0985 | 0.6388 |
| 1.761 | 3.2103 | 8000 | 2.0753 | 0.6404 |
| 1.5321 | 3.2905 | 8200 | 2.0396 | 0.6426 |
| 1.6117 | 3.3708 | 8400 | 1.9855 | 0.6530 |
| 1.5593 | 3.4510 | 8600 | 1.9805 | 0.6352 |
| 1.9288 | 3.5313 | 8800 | 1.9188 | 0.6564 |
| 1.5736 | 3.6116 | 9000 | 1.9141 | 0.6556 |
| 1.5544 | 3.6918 | 9200 | 1.8633 | 0.6619 |
| 1.3811 | 3.7721 | 9400 | 1.8466 | 0.6621 |
| 1.608 | 3.8523 | 9600 | 1.8116 | 0.6687 |
| 1.533 | 3.9326 | 9800 | 1.7784 | 0.6733 |
| 1.5496 | 4.0128 | 10000 | 1.7532 | 0.6755 |
| 1.3532 | 4.0931 | 10200 | 1.7399 | 0.6779 |
| 1.3787 | 4.1734 | 10400 | 1.6996 | 0.6795 |
| 1.4278 | 4.2536 | 10600 | 1.6893 | 0.6771 |
| 1.3531 | 4.3339 | 10800 | 1.6629 | 0.6759 |
| 1.2811 | 4.4141 | 11000 | 1.6493 | 0.6801 |
| 1.3787 | 4.4944 | 11200 | 1.6278 | 0.6855 |
| 1.2663 | 4.5746 | 11400 | 1.6101 | 0.6926 |
| 1.0892 | 4.6549 | 11600 | 1.5842 | 0.6887 |
| 1.3045 | 4.7352 | 11800 | 1.5758 | 0.6911 |
| 1.4239 | 4.8154 | 12000 | 1.5647 | 0.6930 |
| 1.065 | 4.8957 | 12200 | 1.5403 | 0.6905 |
| 1.1467 | 4.9759 | 12400 | 1.5257 | 0.6986 |
| 0.8755 | 5.0562 | 12600 | 1.5075 | 0.6964 |
| 1.0427 | 5.1364 | 12800 | 1.4977 | 0.7074 |
| 1.264 | 5.2167 | 13000 | 1.4951 | 0.6956 |
| 0.9822 | 5.2970 | 13200 | 1.4787 | 0.6990 |
| 1.1234 | 5.3772 | 13400 | 1.4673 | 0.7008 |
| 0.9394 | 5.4575 | 13600 | 1.4632 | 0.6998 |
| 0.9231 | 5.5377 | 13800 | 1.4346 | 0.7074 |
| 1.1829 | 5.6180 | 14000 | 1.4364 | 0.7092 |
| 0.9687 | 5.6982 | 14200 | 1.4231 | 0.7080 |
| 0.8915 | 5.7785 | 14400 | 1.4166 | 0.7104 |
| 1.013 | 5.8587 | 14600 | 1.4056 | 0.7110 |
| 1.0437 | 5.9390 | 14800 | 1.3840 | 0.7186 |
| 0.8936 | 6.0193 | 15000 | 1.3896 | 0.7142 |
| 0.8968 | 6.0995 | 15200 | 1.3853 | 0.7118 |
| 0.8978 | 6.1798 | 15400 | 1.3748 | 0.7154 |
| 0.8638 | 6.2600 | 15600 | 1.3686 | 0.7190 |
| 0.7187 | 6.3403 | 15800 | 1.3664 | 0.7186 |
| 0.7554 | 6.4205 | 16000 | 1.3672 | 0.7124 |
| 0.7664 | 6.5008 | 16200 | 1.3484 | 0.7192 |
| 0.9791 | 6.5811 | 16400 | 1.3500 | 0.7178 |
| 0.8325 | 6.6613 | 16600 | 1.3387 | 0.7184 |
| 1.0476 | 6.7416 | 16800 | 1.3390 | 0.7174 |
| 0.7053 | 6.8218 | 17000 | 1.3268 | 0.7217 |
| 0.9869 | 6.9021 | 17200 | 1.3270 | 0.7204 |
| 0.8179 | 6.9823 | 17400 | 1.3169 | 0.7297 |
| 0.9584 | 7.0626 | 17600 | 1.3119 | 0.7271 |
| 0.6394 | 7.1429 | 17800 | 1.3158 | 0.7243 |
| 0.9094 | 7.2231 | 18000 | 1.3056 | 0.7231 |
| 0.7837 | 7.3034 | 18200 | 1.3174 | 0.7239 |
| 0.7168 | 7.3836 | 18400 | 1.3088 | 0.7265 |
| 0.8603 | 7.4639 | 18600 | 1.3149 | 0.7204 |
| 0.6326 | 7.5441 | 18800 | 1.3041 | 0.7253 |
| 0.8656 | 7.6244 | 19000 | 1.3075 | 0.7253 |
| 0.7517 | 7.7047 | 19200 | 1.3181 | 0.7227 |
| 0.8719 | 7.7849 | 19400 | 1.2977 | 0.7273 |
| 0.6939 | 7.8652 | 19600 | 1.2965 | 0.7249 |
| 0.8371 | 7.9454 | 19800 | 1.3020 | 0.7233 |
### Framework versions
- Transformers 4.41.2
- Pytorch 2.2.1+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
|
khaled123/hf | khaled123 | 2024-06-06T02:15:30Z | 81 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"unsloth",
"trl",
"sft",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | 2024-06-06T02:09:15Z | ---
library_name: transformers
tags:
- unsloth
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
tyzhu/lmind_hotpot_train8000_eval7405_v1_doc_qa_Qwen_Qwen1.5-4B_3e-5_lora2 | tyzhu | 2024-06-06T02:15:00Z | 3 | 0 | peft | [
"peft",
"safetensors",
"generated_from_trainer",
"dataset:tyzhu/lmind_hotpot_train8000_eval7405_v1_doc_qa",
"base_model:Qwen/Qwen1.5-4B",
"base_model:adapter:Qwen/Qwen1.5-4B",
"license:other",
"model-index",
"region:us"
] | null | 2024-06-05T12:07:26Z | ---
license: other
base_model: Qwen/Qwen1.5-4B
tags:
- generated_from_trainer
datasets:
- tyzhu/lmind_hotpot_train8000_eval7405_v1_doc_qa
metrics:
- accuracy
model-index:
- name: lmind_hotpot_train8000_eval7405_v1_doc_qa_Qwen_Qwen1.5-4B_3e-5_lora2
results:
- task:
name: Causal Language Modeling
type: text-generation
dataset:
name: tyzhu/lmind_hotpot_train8000_eval7405_v1_doc_qa
type: tyzhu/lmind_hotpot_train8000_eval7405_v1_doc_qa
metrics:
- name: Accuracy
type: accuracy
value: 0.5000634920634921
library_name: peft
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# lmind_hotpot_train8000_eval7405_v1_doc_qa_Qwen_Qwen1.5-4B_3e-5_lora2
This model is a fine-tuned version of [Qwen/Qwen1.5-4B](https://huggingface.co/Qwen/Qwen1.5-4B) on the tyzhu/lmind_hotpot_train8000_eval7405_v1_doc_qa dataset.
It achieves the following results on the evaluation set:
- Loss: 3.5856
- Accuracy: 0.5001
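The card omits a loading example, so here is a minimal PEFT sketch. It assumes the adapter is applied on top of the stated base model; the repo id comes from this listing's metadata:
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained("Qwen/Qwen1.5-4B")
model = PeftModel.from_pretrained(
    base, "tyzhu/lmind_hotpot_train8000_eval7405_v1_doc_qa_Qwen_Qwen1.5-4B_3e-5_lora2"
)
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen1.5-4B")
```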
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 1
- eval_batch_size: 2
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- total_eval_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.05
- num_epochs: 20.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-------:|:-----:|:---------------:|:--------:|
| 1.7766 | 0.9998 | 1089 | 2.3352 | 0.5155 |
| 1.7291 | 1.9995 | 2178 | 2.3047 | 0.5177 |
| 1.6609 | 2.9993 | 3267 | 2.3113 | 0.5187 |
| 1.6108 | 4.0 | 4357 | 2.3487 | 0.5180 |
| 1.5489 | 4.9998 | 5446 | 2.3843 | 0.5161 |
| 1.5093 | 5.9995 | 6535 | 2.4637 | 0.5138 |
| 1.4364 | 6.9993 | 7624 | 2.5600 | 0.5108 |
| 1.4038 | 8.0 | 8714 | 2.6166 | 0.5105 |
| 1.3439 | 8.9998 | 9803 | 2.7111 | 0.5092 |
| 1.2928 | 9.9995 | 10892 | 2.8851 | 0.5062 |
| 1.2284 | 10.9993 | 11981 | 2.9595 | 0.5042 |
| 1.2142 | 12.0 | 13071 | 3.0894 | 0.5034 |
| 1.1791 | 12.9998 | 14160 | 3.1358 | 0.5024 |
| 1.1272 | 13.9995 | 15249 | 3.2272 | 0.5017 |
| 1.0934 | 14.9993 | 16338 | 3.3488 | 0.5001 |
| 1.0586 | 16.0 | 17428 | 3.4072 | 0.5015 |
| 1.0415 | 16.9998 | 18517 | 3.4943 | 0.5009 |
| 1.011 | 17.9995 | 19606 | 3.5465 | 0.5001 |
| 1.0036 | 18.9993 | 20695 | 3.5320 | 0.5011 |
| 0.9595 | 19.9954 | 21780 | 3.5856 | 0.5001 |
### Framework versions
- PEFT 0.5.0
- Transformers 4.40.2
- Pytorch 2.3.0
- Datasets 2.19.1
- Tokenizers 0.19.1
|
henrymark/biobert-biomedical-ner | henrymark | 2024-06-06T02:12:03Z | 123 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"token-classification",
"NER",
"Biomedical",
"en",
"dataset:nmitchko/i2b2-query-data-1.0",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2024-06-04T00:07:42Z | ---
language: en
datasets: nmitchko/i2b2-query-data-1.0
license: apache-2.0
tags:
- token-classification
- NER
- Biomedical
---
BioBERT model pre-trained on the nmitchko/i2b2-query-data-1.0 corpus.
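A hedged usage sketch follows; the repo id comes from this listing's metadata, and the entity labels depend on the training data:
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="henrymark/biobert-biomedical-ner",
    aggregation_strategy="simple",  # merge word pieces into entity spans
)
print(ner("The patient was prescribed metformin for type 2 diabetes."))
```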
|
hdve/Qwen-Qwen1.5-1.8B-1717639775 | hdve | 2024-06-06T02:11:50Z | 147 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-06-06T02:10:06Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
shenmi175/sd-class-butterflies-32 | shenmi175 | 2024-06-06T02:10:36Z | 44 | 0 | diffusers | [
"diffusers",
"safetensors",
"pytorch",
"unconditional-image-generation",
"diffusion-models-class",
"license:mit",
"diffusers:DDPMPipeline",
"region:us"
] | unconditional-image-generation | 2024-06-06T02:05:59Z | ---
license: mit
tags:
- pytorch
- diffusers
- unconditional-image-generation
- diffusion-models-class
---
# Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class)
This model is a diffusion model for unconditional image generation of cute 🦋.
## Usage
```python
from diffusers import DDPMPipeline

# Load the trained unconditional DDPM and sample one butterfly image.
pipeline = DDPMPipeline.from_pretrained('shenmi175/sd-class-butterflies-32')
image = pipeline().images[0]
image
```
|
cgihlstorf/finetuned_pythia70M_nondeduped_cp_14300032_1_0.0003_alternate | cgihlstorf | 2024-06-06T02:07:28Z | 0 | 0 | peft | [
"peft",
"arxiv:1910.09700",
"base_model:EleutherAI/pythia-70m",
"base_model:adapter:EleutherAI/pythia-70m",
"region:us"
] | null | 2024-06-06T02:07:05Z | ---
library_name: peft
base_model: EleutherAI/pythia-70m
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
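Since this section is still a placeholder, here is a hedged sketch: it assumes the adapter targets causal language modeling on the base model named in the frontmatter, and that this repository id hosts the adapter weights:

```python
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

# Loads EleutherAI/pythia-70m and applies this LoRA adapter on top.
model = AutoPeftModelForCausalLM.from_pretrained(
    "cgihlstorf/finetuned_pythia70M_nondeduped_cp_14300032_1_0.0003_alternate"
)
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/pythia-70m")
```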
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.10.0 |
khaled123/chess-GGUF | khaled123 | 2024-06-06T02:03:06Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-06-03T13:31:17Z | ---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Franz2001/openai-whisper-large-v3-samromour_7epochs | Franz2001 | 2024-06-06T02:02:51Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-06-06T02:02:46Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
coconana/Qwen-Qwen1.5-1.8B-1717638358 | coconana | 2024-06-06T02:01:48Z | 147 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-06-06T01:45:58Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
kroonen/stablelm-2-12b-GGUF | kroonen | 2024-06-06T02:01:30Z | 21 | 0 | null | [
"gguf",
"causal-lm",
"en",
"de",
"es",
"fr",
"it",
"nl",
"pt",
"dataset:tiiuae/falcon-refinedweb",
"dataset:togethercomputer/RedPajama-Data-1T",
"dataset:uonlp/CulturaX",
"dataset:CarperAI/pilev2-dev",
"dataset:bigcode/starcoderdata",
"dataset:DataProvenanceInitiative/Commercially-Verified-Licenses",
"arxiv:2402.17834",
"arxiv:2104.09864",
"arxiv:2204.06745",
"arxiv:1607.06450",
"arxiv:2302.05442",
"arxiv:2309.14322",
"arxiv:2305.14201",
"arxiv:2101.00027",
"arxiv:2305.06161",
"arxiv:2309.09400",
"arxiv:2206.11147",
"arxiv:1910.02054",
"license:other",
"endpoints_compatible",
"region:us"
] | null | 2024-06-06T01:37:10Z | ---
language:
- en
- de
- es
- fr
- it
- nl
- pt
license: other
tags:
- causal-lm
datasets:
- tiiuae/falcon-refinedweb
- togethercomputer/RedPajama-Data-1T
- uonlp/CulturaX
- CarperAI/pilev2-dev
- bigcode/starcoderdata
- DataProvenanceInitiative/Commercially-Verified-Licenses
---
# `Stable LM 2 12B`
## Model Description
`Stable LM 2 12B` is a 12.1 billion parameter decoder-only language model pre-trained on 2 trillion tokens of diverse multilingual and code datasets for two epochs.
Please note: For commercial use, please refer to https://stability.ai/membership.
## Usage
**NOTE**: This model requires `transformers>=4.40.0`
Get started generating text with `Stable LM 2 12B` by using the following code snippet:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("stabilityai/stablelm-2-12b")
model = AutoModelForCausalLM.from_pretrained(
"stabilityai/stablelm-2-12b",
torch_dtype="auto",
)
model.cuda()
inputs = tokenizer("The weather is always wonderful", return_tensors="pt").to(model.device)
tokens = model.generate(
**inputs,
max_new_tokens=64,
temperature=0.70,
top_p=0.95,
do_sample=True,
)
print(tokenizer.decode(tokens[0], skip_special_tokens=True))
```
### Run with Flash Attention 2 ⚡️
<details>
<summary> Click to expand </summary>
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("stabilityai/stablelm-2-12b")
model = AutoModelForCausalLM.from_pretrained(
"stabilityai/stablelm-2-12b",
torch_dtype="auto",
attn_implementation="flash_attention_2",
)
model.cuda()
inputs = tokenizer("The weather is always wonderful", return_tensors="pt").to(model.device)
tokens = model.generate(
**inputs,
max_new_tokens=64,
temperature=0.70,
top_p=0.95,
do_sample=True,
)
print(tokenizer.decode(tokens[0], skip_special_tokens=True))
```
</details>
## Model Details
* **Developed by**: [Stability AI](https://stability.ai/)
* **Model type**: `Stable LM 2 12B` models are auto-regressive language models based on the transformer decoder architecture.
* **Language(s)**: English
* **Paper**: [Stable LM 2 Technical Report](https://arxiv.org/abs/2402.17834)
* **Library**: [GPT-NeoX](https://github.com/EleutherAI/gpt-neox)
* **License**: [Stability AI Non-Commercial Research Community License](https://huggingface.co/stabilityai/stablelm-2-12b/blob/main/LICENSE).
* **Commercial License**: to use this model commercially, please refer to https://stability.ai/membership
* **Contact**: For questions and comments about the model, please email `[email protected]`
### Model Architecture
The model is a decoder-only transformer with the following architecture:
| Parameters | Hidden Size | Layers | Heads | KV Heads | Sequence Length |
|----------------|-------------|--------|-------|----------|-----------------|
| 12,143,605,760 | 5120 | 40 | 32 | 8 | 4096 |
* **Position Embeddings**: Rotary Position Embeddings ([Su et al., 2021](https://arxiv.org/abs/2104.09864)) applied to the first 25% of head embedding dimensions for improved throughput following [Black et al. (2022)](https://arxiv.org/pdf/2204.06745.pdf).
* **Parallel Layers**: Parallel attention and feed-forward residual layers with a single input LayerNorm ([Wang, 2021](https://github.com/kingoflolz/mesh-transformer-jax)).
* **Normalization**: LayerNorm ([Ba et al., 2016](https://arxiv.org/abs/1607.06450)) without biases. Furthermore, we apply per-head QK normalization ([Dehghani et al., 2023](https://arxiv.org/abs/2302.05442), [Wortsman et al., 2023](https://arxiv.org/abs/2309.14322)).
* **Biases**: We remove all bias terms from the feed-forward networks and grouped-query self-attention layers.
* **Tokenizer**: We use Arcade100k, a BPE tokenizer extended from OpenAI's [`tiktoken.cl100k_base`](https://github.com/openai/tiktoken). We split digits into individual tokens following findings by [Liu & Low (2023)](https://arxiv.org/abs/2305.14201); see the short check after this list.
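To see the digit splitting concretely, here is a small hedged check; the exact token strings are illustrative and depend on the shipped tokenizer files:
```python
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("stabilityai/stablelm-2-12b")
# Digits are split individually, e.g. "12345" should yield five single-digit tokens.
print(tok.tokenize("12345"))
```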
## Training
### Training Dataset
The dataset is comprised of a filtered mixture of open-source large-scale datasets available on the [HuggingFace Hub](https://huggingface.co/datasets): Falcon RefinedWeb extract ([Penedo et al., 2023](https://huggingface.co/datasets/tiiuae/falcon-refinedweb)), RedPajama-Data ([Together Computer., 2023](https://github.com/togethercomputer/RedPajama-Data)) and The Pile ([Gao et al., 2020](https://arxiv.org/abs/2101.00027)) both without the *Books3* subset, and StarCoder ([Li et al., 2023](https://arxiv.org/abs/2305.06161)). We further supplement our training with multi-lingual data from CulturaX ([Nguyen et al., 2023](https://arxiv.org/abs/2309.09400)) and, in particular, from its OSCAR corpora, as well as restructured data in the style of [Yuan & Liu (2022)](https://arxiv.org/abs/2206.11147).
* Given the large amount of web data, we recommend fine-tuning the base `Stable LM 2 12B` for your downstream tasks.
### Training Procedure
The model is pre-trained on the aforementioned datasets in `bfloat16` precision, optimized with AdamW, and trained using the Arcade100k tokenizer with a vocabulary size of 100,352. We outline the complete hyperparameter choices in the project's [GitHub repository - config*](https://github.com/Stability-AI/StableLM/blob/main/configs/stablelm-2-12b.yml).
### Training Infrastructure
* **Hardware**: `Stable LM 2 12B` was trained on the Stability AI cluster across 384 NVIDIA H100 GPUs (AWS P5 instances).
* **Software**: We use a fork of `gpt-neox` ([EleutherAI, 2021](https://github.com/EleutherAI/gpt-neox)), train under 2D parallelism (Data and Tensor Parallel) with ZeRO-1 ([Rajbhandari et al., 2019](https://arxiv.org/abs/1910.02054v3)), and rely on flash-attention as well as SwiGLU and Rotary Embedding kernels from FlashAttention-2 ([Dao et al., 2023](https://tridao.me/publications/flash2/flash2.pdf))
## Use and Limitations
### Intended Use
The model is intended to be used as a foundational base model for application-specific fine-tuning. Developers must evaluate and fine-tune the model for safe performance in downstream applications. For commercial use, please refer to https://stability.ai/membership.
### Limitations and Bias
As a base model, this model may exhibit unreliable, unsafe, or other undesirable behaviors that must be corrected through evaluation and fine-tuning prior to deployment. The pre-training dataset may have contained offensive or inappropriate content, even after applying data cleansing filters, which can be reflected in the model-generated text. We recommend that users exercise caution when using these models in production systems. Do not use the models if they are unsuitable for your application, or for any applications that may cause deliberate or unintentional harm to others.
## How to Cite
```bibtex
@article{bellagente2024stable,
title={Stable LM 2 1.6 B Technical Report},
author={Bellagente, Marco and Tow, Jonathan and Mahan, Dakota and Phung, Duy and Zhuravinskyi, Maksym and Adithyan, Reshinth and Baicoianu, James and Brooks, Ben and Cooper, Nathan and Datta, Ashish and others},
journal={arXiv preprint arXiv:2402.17834},
year={2024}
}
```
|
mradermacher/MythoMist-7b-GGUF | mradermacher | 2024-06-06T02:00:59Z | 72 | 1 | transformers | [
"transformers",
"gguf",
"en",
"base_model:Gryphe/MythoMist-7b",
"base_model:quantized:Gryphe/MythoMist-7b",
"license:other",
"endpoints_compatible",
"region:us"
] | null | 2024-06-05T20:27:44Z | ---
base_model: Gryphe/MythoMist-7b
language:
- en
library_name: transformers
license: other
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/Gryphe/MythoMist-7b
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/MythoMist-7b-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar-sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/MythoMist-7b-GGUF/resolve/main/MythoMist-7b.Q2_K.gguf) | Q2_K | 2.8 | |
| [GGUF](https://huggingface.co/mradermacher/MythoMist-7b-GGUF/resolve/main/MythoMist-7b.IQ3_XS.gguf) | IQ3_XS | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/MythoMist-7b-GGUF/resolve/main/MythoMist-7b.Q3_K_S.gguf) | Q3_K_S | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/MythoMist-7b-GGUF/resolve/main/MythoMist-7b.IQ3_S.gguf) | IQ3_S | 3.3 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/MythoMist-7b-GGUF/resolve/main/MythoMist-7b.IQ3_M.gguf) | IQ3_M | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/MythoMist-7b-GGUF/resolve/main/MythoMist-7b.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/MythoMist-7b-GGUF/resolve/main/MythoMist-7b.Q3_K_L.gguf) | Q3_K_L | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/MythoMist-7b-GGUF/resolve/main/MythoMist-7b.IQ4_XS.gguf) | IQ4_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/MythoMist-7b-GGUF/resolve/main/MythoMist-7b.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/MythoMist-7b-GGUF/resolve/main/MythoMist-7b.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/MythoMist-7b-GGUF/resolve/main/MythoMist-7b.Q5_K_S.gguf) | Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/MythoMist-7b-GGUF/resolve/main/MythoMist-7b.Q5_K_M.gguf) | Q5_K_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/MythoMist-7b-GGUF/resolve/main/MythoMist-7b.Q6_K.gguf) | Q6_K | 6.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/MythoMist-7b-GGUF/resolve/main/MythoMist-7b.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/MythoMist-7b-GGUF/resolve/main/MythoMist-7b.f16.gguf) | f16 | 14.6 | 16 bpw, overkill |
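For convenience, a hedged download sketch using `huggingface_hub`; the file name is taken from the table above:
```python
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="mradermacher/MythoMist-7b-GGUF",
    filename="MythoMist-7b.Q4_K_M.gguf",  # "fast, recommended" quant from the table
)
print(path)
```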
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
BEEMER330I/MentalHealthSupportChatbot | BEEMER330I | 2024-06-06T01:59:13Z | 0 | 0 | null | [
"license:cc-by-nc-4.0",
"region:us"
] | null | 2024-06-06T01:59:13Z | ---
license: cc-by-nc-4.0
---
|
v8karlo/UNCENSORED-Phi-3-mini-4k-geminified-Q4_K_M-GGUF | v8karlo | 2024-06-06T01:57:06Z | 201 | 5 | null | [
"gguf",
"nlp",
"code",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"multilingual",
"base_model:failspy/Phi-3-mini-4k-geminified",
"base_model:quantized:failspy/Phi-3-mini-4k-geminified",
"license:mit",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2024-06-06T01:53:19Z | ---
language:
- multilingual
license: mit
tags:
- nlp
- code
- llama-cpp
- gguf-my-repo
base_model: failspy/Phi-3-mini-4k-geminified
license_link: https://huggingface.co/microsoft/Phi-3-medium-4k-instruct/resolve/main/LICENSE
pipeline_tag: text-generation
inference:
parameters:
temperature: 0.7
widget:
- messages:
- role: user
content: What's the difference between a banana and a strawberry?
---
# v8karlo/Phi-3-mini-4k-geminified-Q4_K_M-GGUF
This model was converted to GGUF format from [`failspy/Phi-3-mini-4k-geminified`](https://huggingface.co/failspy/Phi-3-mini-4k-geminified) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/failspy/Phi-3-mini-4k-geminified) for more details on the model.
Convert Safetensors to GGUF: https://huggingface.co/spaces/ggml-org/gguf-my-repo.

## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama --hf-repo v8karlo/Phi-3-mini-4k-geminified-Q4_K_M-GGUF --hf-file phi-3-mini-4k-geminified-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo v8karlo/Phi-3-mini-4k-geminified-Q4_K_M-GGUF --hf-file phi-3-mini-4k-geminified-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./main --hf-repo v8karlo/Phi-3-mini-4k-geminified-Q4_K_M-GGUF --hf-file phi-3-mini-4k-geminified-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./server --hf-repo v8karlo/Phi-3-mini-4k-geminified-Q4_K_M-GGUF --hf-file phi-3-mini-4k-geminified-q4_k_m.gguf -c 2048
```
|
wwe180/Llama3-18B-lingyang-v1-Q6_K-GGUF | wwe180 | 2024-06-06T01:40:52Z | 4 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"Llama3",
"llama-cpp",
"gguf-my-repo",
"base_model:wwe180/Llama3-18B-lingyang-v1",
"base_model:quantized:wwe180/Llama3-18B-lingyang-v1",
"license:other",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-06-06T01:40:11Z | ---
license:
- other
library_name: transformers
tags:
- mergekit
- merge
- Llama3
- llama-cpp
- gguf-my-repo
base_model: wwe180/Llama3-18B-lingyang-v1
---
# wwe180/Llama3-18B-lingyang-v1-Q6_K-GGUF
This model was converted to GGUF format from [`wwe180/Llama3-18B-lingyang-v1`](https://huggingface.co/wwe180/Llama3-18B-lingyang-v1) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/wwe180/Llama3-18B-lingyang-v1) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama --hf-repo wwe180/Llama3-18B-lingyang-v1-Q6_K-GGUF --hf-file llama3-18b-lingyang-v1-q6_k.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo wwe180/Llama3-18B-lingyang-v1-Q6_K-GGUF --hf-file llama3-18b-lingyang-v1-q6_k.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./main --hf-repo wwe180/Llama3-18B-lingyang-v1-Q6_K-GGUF --hf-file llama3-18b-lingyang-v1-q6_k.gguf -p "The meaning to life and the universe is"
```
or
```
./server --hf-repo wwe180/Llama3-18B-lingyang-v1-Q6_K-GGUF --hf-file llama3-18b-lingyang-v1-q6_k.gguf -c 2048
```
|
v8karlo/UNCENSORED-Wukong-Phi-3-Instruct-Ablated-Q4_K_M-GGUF | v8karlo | 2024-06-06T01:37:01Z | 29 | 4 | null | [
"gguf",
"llama-cpp",
"gguf-my-repo",
"en",
"dataset:cognitivecomputations/Dolphin-2.9",
"base_model:RESMPDEV/Wukong-Phi-3-Instruct-Ablated",
"base_model:quantized:RESMPDEV/Wukong-Phi-3-Instruct-Ablated",
"license:mit",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-06-06T01:03:38Z | ---
language:
- en
license: mit
tags:
- llama-cpp
- gguf-my-repo
base_model: RESMPDEV/Wukong-Phi-3-Instruct-Ablated
datasets:
- cognitivecomputations/Dolphin-2.9
uncensored:
- yes
---
# v8karlo/Wukong-Phi-3-Instruct-Ablated-Q4_K_M-GGUF
UNCENSORED Phi-3 model.

This model was converted to GGUF format from [`RESMPDEV/Wukong-Phi-3-Instruct-Ablated`](https://huggingface.co/RESMPDEV/Wukong-Phi-3-Instruct-Ablated) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/RESMPDEV/Wukong-Phi-3-Instruct-Ablated) for more details on the model.
Convert Safetensors to GGUF: https://huggingface.co/spaces/ggml-org/gguf-my-repo
<video controls autoplay src="https://cdn-uploads.huggingface.co/production/uploads/662c3116277765660783ca6d/qPHdaxOccIFcpmcewfa9r.mp4"></video>
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama --hf-repo v8karlo/Wukong-Phi-3-Instruct-Ablated-Q4_K_M-GGUF --hf-file wukong-phi-3-instruct-ablated-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo v8karlo/Wukong-Phi-3-Instruct-Ablated-Q4_K_M-GGUF --hf-file wukong-phi-3-instruct-ablated-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./main --hf-repo v8karlo/Wukong-Phi-3-Instruct-Ablated-Q4_K_M-GGUF --hf-file wukong-phi-3-instruct-ablated-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./server --hf-repo v8karlo/Wukong-Phi-3-Instruct-Ablated-Q4_K_M-GGUF --hf-file wukong-phi-3-instruct-ablated-q4_k_m.gguf -c 2048
```
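As an alternative to the CLI, a minimal sketch using the `llama-cpp-python` bindings; it assumes `pip install llama-cpp-python` and that the GGUF file from this repo sits in the working directory:
```python
from llama_cpp import Llama

# Load the quantized model with a 2048-token context window
llm = Llama(model_path="wukong-phi-3-instruct-ablated-q4_k_m.gguf", n_ctx=2048)

out = llm("The meaning to life and the universe is", max_tokens=64)
print(out["choices"][0]["text"])
```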
|
wwe180/Llama3-18B-lingyang-v1-Q4_K_M-GGUF | wwe180 | 2024-06-06T01:34:08Z | 9 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"Llama3",
"llama-cpp",
"gguf-my-repo",
"base_model:wwe180/Llama3-18B-lingyang-v1",
"base_model:quantized:wwe180/Llama3-18B-lingyang-v1",
"license:other",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-06-06T01:33:39Z | ---
license:
- other
library_name: transformers
tags:
- mergekit
- merge
- Llama3
- llama-cpp
- gguf-my-repo
base_model: wwe180/Llama3-18B-lingyang-v1
---
# wwe180/Llama3-18B-lingyang-v1-Q4_K_M-GGUF
This model was converted to GGUF format from [`wwe180/Llama3-18B-lingyang-v1`](https://huggingface.co/wwe180/Llama3-18B-lingyang-v1) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/wwe180/Llama3-18B-lingyang-v1) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama --hf-repo wwe180/Llama3-18B-lingyang-v1-Q4_K_M-GGUF --hf-file llama3-18b-lingyang-v1-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo wwe180/Llama3-18B-lingyang-v1-Q4_K_M-GGUF --hf-file llama3-18b-lingyang-v1-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./main --hf-repo wwe180/Llama3-18B-lingyang-v1-Q4_K_M-GGUF --hf-file llama3-18b-lingyang-v1-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./server --hf-repo wwe180/Llama3-18B-lingyang-v1-Q4_K_M-GGUF --hf-file llama3-18b-lingyang-v1-q4_k_m.gguf -c 2048
```
|
bella05/pogny-16-0.005 | bella05 | 2024-06-06T01:34:00Z | 108 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"roberta",
"text-classification",
"generated_from_trainer",
"base_model:klue/roberta-large",
"base_model:finetune:klue/roberta-large",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-06-05T04:16:02Z | ---
base_model: klue/roberta-large
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: pogny-16-0.005
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pogny-16-0.005
This model is a fine-tuned version of [klue/roberta-large](https://huggingface.co/klue/roberta-large) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6843
- Accuracy: 0.4376
- F1: 0.2665
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.005
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:|
| 2.2723 | 1.0 | 4818 | 2.0798 | 0.4376 | 0.2665 |
| 1.9852 | 2.0 | 9636 | 2.1720 | 0.0643 | 0.0078 |
| 1.7066 | 3.0 | 14454 | 1.6843 | 0.4376 | 0.2665 |
### Framework versions
- Transformers 4.34.1
- Pytorch 2.1.0a0+b5021ba
- Datasets 2.6.2
- Tokenizers 0.14.1
|
mradermacher/Tiamat-7b-1.1-DPO-GGUF | mradermacher | 2024-06-06T01:33:11Z | 19 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:Gryphe/Tiamat-7b-1.1-DPO",
"base_model:quantized:Gryphe/Tiamat-7b-1.1-DPO",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-06-05T18:47:31Z | ---
base_model: Gryphe/Tiamat-7b-1.1-DPO
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/Gryphe/Tiamat-7b-1.1-DPO
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Tiamat-7b-1.1-DPO-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
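As a reference, concatenating a multi-part quant is a plain byte-wise `cat` (none of the files in this repo are split; the names below are hypothetical):
```bash
cat model.Q8_0.gguf.part1of2 model.Q8_0.gguf.part2of2 > model.Q8_0.gguf
```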
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Tiamat-7b-1.1-DPO-GGUF/resolve/main/Tiamat-7b-1.1-DPO.Q2_K.gguf) | Q2_K | 2.8 | |
| [GGUF](https://huggingface.co/mradermacher/Tiamat-7b-1.1-DPO-GGUF/resolve/main/Tiamat-7b-1.1-DPO.IQ3_XS.gguf) | IQ3_XS | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/Tiamat-7b-1.1-DPO-GGUF/resolve/main/Tiamat-7b-1.1-DPO.Q3_K_S.gguf) | Q3_K_S | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/Tiamat-7b-1.1-DPO-GGUF/resolve/main/Tiamat-7b-1.1-DPO.IQ3_S.gguf) | IQ3_S | 3.3 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Tiamat-7b-1.1-DPO-GGUF/resolve/main/Tiamat-7b-1.1-DPO.IQ3_M.gguf) | IQ3_M | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/Tiamat-7b-1.1-DPO-GGUF/resolve/main/Tiamat-7b-1.1-DPO.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Tiamat-7b-1.1-DPO-GGUF/resolve/main/Tiamat-7b-1.1-DPO.Q3_K_L.gguf) | Q3_K_L | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/Tiamat-7b-1.1-DPO-GGUF/resolve/main/Tiamat-7b-1.1-DPO.IQ4_XS.gguf) | IQ4_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/Tiamat-7b-1.1-DPO-GGUF/resolve/main/Tiamat-7b-1.1-DPO.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Tiamat-7b-1.1-DPO-GGUF/resolve/main/Tiamat-7b-1.1-DPO.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Tiamat-7b-1.1-DPO-GGUF/resolve/main/Tiamat-7b-1.1-DPO.Q5_K_S.gguf) | Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/Tiamat-7b-1.1-DPO-GGUF/resolve/main/Tiamat-7b-1.1-DPO.Q5_K_M.gguf) | Q5_K_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/Tiamat-7b-1.1-DPO-GGUF/resolve/main/Tiamat-7b-1.1-DPO.Q6_K.gguf) | Q6_K | 6.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Tiamat-7b-1.1-DPO-GGUF/resolve/main/Tiamat-7b-1.1-DPO.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Tiamat-7b-1.1-DPO-GGUF/resolve/main/Tiamat-7b-1.1-DPO.f16.gguf) | f16 | 14.6 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
aksquarry/Model1 | aksquarry | 2024-06-06T01:30:37Z | 0 | 0 | null | [
"arxiv:1910.09700",
"region:us"
] | null | 2024-06-06T01:28:06Z | ---
# For reference on model card metadata, see the spec: https://github.com/huggingface/hub-docs/blob/main/modelcard.md?plain=1
# Doc / guide: https://huggingface.co/docs/hub/model-cards
{}
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
wwe180/L3-8B-LingYang-v1-Q4_K_M-GGUF | wwe180 | 2024-06-06T01:26:57Z | 4 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"Llama3",
"llama-cpp",
"gguf-my-repo",
"base_model:wwe180/L3-8B-LingYang-v1",
"base_model:quantized:wwe180/L3-8B-LingYang-v1",
"license:other",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-06-06T01:15:34Z | ---
license:
- other
library_name: transformers
tags:
- mergekit
- merge
- Llama3
- llama-cpp
- gguf-my-repo
base_model: wwe180/L3-8B-LingYang-v1
---
# wwe180/L3-8B-LingYang-v1-Q4_K_M-GGUF
This model was converted to GGUF format from [`wwe180/L3-8B-LingYang-v1`](https://huggingface.co/wwe180/L3-8B-LingYang-v1) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/wwe180/L3-8B-LingYang-v1) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama --hf-repo wwe180/L3-8B-LingYang-v1-Q4_K_M-GGUF --hf-file l3-8b-lingyang-v1-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo wwe180/L3-8B-LingYang-v1-Q4_K_M-GGUF --hf-file l3-8b-lingyang-v1-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./main --hf-repo wwe180/L3-8B-LingYang-v1-Q4_K_M-GGUF --hf-file l3-8b-lingyang-v1-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./server --hf-repo wwe180/L3-8B-LingYang-v1-Q4_K_M-GGUF --hf-file l3-8b-lingyang-v1-q4_k_m.gguf -c 2048
```
|
wwe180/Llama3-18B-lingyang-v1 | wwe180 | 2024-06-06T01:25:36Z | 9 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"Llama3",
"conversational",
"base_model:wwe180/Llama3-18B-lingyang-v1",
"base_model:finetune:wwe180/Llama3-18B-lingyang-v1",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-06-05T23:47:21Z | ---
base_model:
- wwe180/Llama3-18B-lingyang-v1
library_name: transformers
tags:
- mergekit
- merge
- Llama3
license:
- other
---
# After simple testing, the results are good: stronger than Llama-3-8B!
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## 💻 Usage
```python
!pip install -qU transformers accelerate

from transformers import AutoTokenizer
import transformers
import torch

model = "wwe180/Llama3-18B-lingyang-v1"
messages = [{"role": "user", "content": "What is a large language model?"}]

tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",
)

# Generate a completion from the chat-formatted prompt
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_p=0.95)
print(outputs[0]["generated_text"])
```
## Statement:
Llama3-18B-lingyang-v1 does not represent the views and positions of the model developers. We will not be liable for any problems arising from the use of the Llama3-18B-lingyang-v1 open-source model, including but not limited to data security issues, risks of public opinion, or any risks and problems arising from the misdirection, misuse, dissemination, or abuse of the model.
|
phongtintruong/misjava-api-060524-api-extend-test-NEFTune-duplicate | phongtintruong | 2024-06-06T01:22:49Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-06-06T01:21:49Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
tyzhu/lmind_nq_train6000_eval6489_v1_docidx_v3_Qwen_Qwen1.5-4B_5e-5_lora2 | tyzhu | 2024-06-06T01:17:38Z | 3 | 0 | peft | [
"peft",
"safetensors",
"generated_from_trainer",
"dataset:tyzhu/lmind_nq_train6000_eval6489_v1_docidx_v3",
"base_model:Qwen/Qwen1.5-4B",
"base_model:adapter:Qwen/Qwen1.5-4B",
"license:other",
"model-index",
"region:us"
] | null | 2024-06-05T20:43:32Z | ---
license: other
base_model: Qwen/Qwen1.5-4B
tags:
- generated_from_trainer
datasets:
- tyzhu/lmind_nq_train6000_eval6489_v1_docidx_v3
metrics:
- accuracy
model-index:
- name: lmind_nq_train6000_eval6489_v1_docidx_v3_Qwen_Qwen1.5-4B_5e-5_lora2
results:
- task:
name: Causal Language Modeling
type: text-generation
dataset:
name: tyzhu/lmind_nq_train6000_eval6489_v1_docidx_v3
type: tyzhu/lmind_nq_train6000_eval6489_v1_docidx_v3
metrics:
- name: Accuracy
type: accuracy
value: 0.42728205128205127
library_name: peft
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# lmind_nq_train6000_eval6489_v1_docidx_v3_Qwen_Qwen1.5-4B_5e-5_lora2
This model is a fine-tuned version of [Qwen/Qwen1.5-4B](https://huggingface.co/Qwen/Qwen1.5-4B) on the tyzhu/lmind_nq_train6000_eval6489_v1_docidx_v3 dataset.
It achieves the following results on the evaluation set:
- Loss: 5.0355
- Accuracy: 0.4273
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 2
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- total_eval_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.05
- num_epochs: 20.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-------:|:----:|:---------------:|:--------:|
| 1.9626 | 0.9985 | 341 | 2.9919 | 0.4727 |
| 1.9158 | 2.0 | 683 | 2.9864 | 0.4737 |
| 1.8622 | 2.9985 | 1024 | 3.0420 | 0.4710 |
| 1.786 | 4.0 | 1366 | 3.1527 | 0.4662 |
| 1.7019 | 4.9985 | 1707 | 3.3819 | 0.4634 |
| 1.6036 | 6.0 | 2049 | 3.4969 | 0.4589 |
| 1.5175 | 6.9985 | 2390 | 3.6412 | 0.4577 |
| 1.4007 | 8.0 | 2732 | 3.8310 | 0.4537 |
| 1.326 | 8.9985 | 3073 | 3.9177 | 0.4487 |
| 1.231 | 10.0 | 3415 | 4.0665 | 0.4451 |
| 1.1298 | 10.9985 | 3756 | 4.1773 | 0.44 |
| 1.0276 | 12.0 | 4098 | 4.2875 | 0.4378 |
| 0.9525 | 12.9985 | 4439 | 4.4273 | 0.4352 |
| 0.8616 | 14.0 | 4781 | 4.4484 | 0.4324 |
| 0.7799 | 14.9985 | 5122 | 4.6228 | 0.4313 |
| 0.7084 | 16.0 | 5464 | 4.7239 | 0.4303 |
| 0.6478 | 16.9985 | 5805 | 4.8167 | 0.4310 |
| 0.5862 | 18.0 | 6147 | 4.8510 | 0.4303 |
| 0.5189 | 18.9985 | 6488 | 4.9265 | 0.4243 |
| 0.4767 | 19.9707 | 6820 | 5.0355 | 0.4273 |
### Framework versions
- PEFT 0.5.0
- Transformers 4.40.2
- Pytorch 2.3.0
- Datasets 2.19.1
- Tokenizers 0.19.1
|
tyzhu/lmind_hotpot_train8000_eval7405_v1_reciteonly_qa_Qwen_Qwen1.5-4B_3e-4_lora2 | tyzhu | 2024-06-06T01:13:30Z | 0 | 0 | peft | [
"peft",
"safetensors",
"generated_from_trainer",
"dataset:tyzhu/lmind_hotpot_train8000_eval7405_v1_reciteonly_qa",
"base_model:Qwen/Qwen1.5-4B",
"base_model:adapter:Qwen/Qwen1.5-4B",
"license:other",
"model-index",
"region:us"
] | null | 2024-06-05T21:49:02Z | ---
license: other
base_model: Qwen/Qwen1.5-4B
tags:
- generated_from_trainer
datasets:
- tyzhu/lmind_hotpot_train8000_eval7405_v1_reciteonly_qa
metrics:
- accuracy
model-index:
- name: lmind_hotpot_train8000_eval7405_v1_reciteonly_qa_Qwen_Qwen1.5-4B_3e-4_lora2
results:
- task:
name: Causal Language Modeling
type: text-generation
dataset:
name: tyzhu/lmind_hotpot_train8000_eval7405_v1_reciteonly_qa
type: tyzhu/lmind_hotpot_train8000_eval7405_v1_reciteonly_qa
metrics:
- name: Accuracy
type: accuracy
value: 0.6608966521106259
library_name: peft
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# lmind_hotpot_train8000_eval7405_v1_reciteonly_qa_Qwen_Qwen1.5-4B_3e-4_lora2
This model is a fine-tuned version of [Qwen/Qwen1.5-4B](https://huggingface.co/Qwen/Qwen1.5-4B) on the tyzhu/lmind_hotpot_train8000_eval7405_v1_reciteonly_qa dataset.
It achieves the following results on the evaluation set:
- Loss: 2.7813
- Accuracy: 0.6609
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 1
- eval_batch_size: 2
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- total_eval_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.05
- num_epochs: 20.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.4488 | 1.0 | 250 | 1.4958 | 0.6770 |
| 1.3142 | 2.0 | 500 | 1.5007 | 0.6772 |
| 1.1176 | 3.0 | 750 | 1.5507 | 0.6756 |
| 0.9253 | 4.0 | 1000 | 1.6442 | 0.6728 |
| 0.7213 | 5.0 | 1250 | 1.7736 | 0.6701 |
| 0.5718 | 6.0 | 1500 | 1.8863 | 0.6682 |
| 0.4232 | 7.0 | 1750 | 2.0245 | 0.6660 |
| 0.3334 | 8.0 | 2000 | 2.1773 | 0.6642 |
| 0.2433 | 9.0 | 2250 | 2.2681 | 0.6632 |
| 0.2076 | 10.0 | 2500 | 2.3732 | 0.6629 |
| 0.1632 | 11.0 | 2750 | 2.4368 | 0.6623 |
| 0.1491 | 12.0 | 3000 | 2.5182 | 0.6617 |
| 0.1275 | 13.0 | 3250 | 2.5680 | 0.6619 |
| 0.1273 | 14.0 | 3500 | 2.6412 | 0.6613 |
| 0.1129 | 15.0 | 3750 | 2.6497 | 0.6617 |
| 0.1129 | 16.0 | 4000 | 2.6932 | 0.6614 |
| 0.102 | 17.0 | 4250 | 2.7003 | 0.6612 |
| 0.1109 | 18.0 | 4500 | 2.7033 | 0.6614 |
| 0.0997 | 19.0 | 4750 | 2.7139 | 0.6613 |
| 0.1012 | 20.0 | 5000 | 2.7813 | 0.6609 |
### Framework versions
- PEFT 0.5.0
- Transformers 4.40.2
- Pytorch 2.3.0
- Datasets 2.19.1
- Tokenizers 0.19.1
|
LyliaEngine/zPDXL2 | LyliaEngine | 2024-06-06T01:13:19Z | 33 | 1 | diffusers | [
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"template:sd-lora",
"base_model:LyliaEngine/Pony_Diffusion_V6_XL",
"base_model:adapter:LyliaEngine/Pony_Diffusion_V6_XL",
"license:cdla-permissive-2.0",
"region:us"
] | text-to-image | 2024-06-06T01:12:17Z | ---
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
- template:sd-lora
widget:
- text: >-
(zPDXL2), from below 1girl, 1boy, sitting heads together, on a tropical
beach, smiling, wearing shirt and shorts, morning sunlight, tiki hut, palm
trees, cloudy sky
parameters:
negative_prompt: (zPDXL2-neg), headband, jewelry, nsfw, nude
output:
url: images/372295-239849650-1-recondiff_v10.jpeg
- text: >-
(zPDXL2), 1girl, standing on the bridge of a spaceship wearing a uniform,
from side from front, looking into space, wires, cables, monitor, controls,
panel
parameters:
negative_prompt: (zPDXL2-neg), cleavage, breasts, nsfw, nude
output:
url: images/372249-535364377-1-zavyfantasiaxlPDXL_v10.jpeg
base_model: LyliaEngine/Pony_Diffusion_V6_XL
instance_prompt: zPDXL2, zPDXL2-neg
license: cdla-permissive-2.0
---
# zPDXL2
<Gallery />
## Model description
No need to use "score this" and "score that" tags; just use the embeddings instead.
Be sure to download both positive and negative embeddings and use both.
A set of quality-enhancing embeddings for Pony SDXL and other Pony-adjacent models. You can mix and match any of the embeddings that you feel you need. They don't duplicate each other, so feel free to use as many together as needed.
- High Quality V2: designed to give higher-quality results and remove censoring.
- XXX Rating: tells the Pony model of your choice to allow NSFW content.
- PG Rating: tells the Pony model of your choice to try to remove NSFW content for safe images.
- Photo Real: tells the Pony model of your choice to use realistic or near-realistic renders instead of cartoon or anime.
Doesn't work with non-Pony models. Check your model.
Recommended use: both positive and negative embeddings together at strength 1.0 to 2.0. If you can't use both, the positive version should take priority.
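A minimal diffusers sketch for loading both embeddings into an SDXL-class pipeline; the local file names and the `clip_l`/`clip_g` key layout are assumptions, so check the files you downloaded:
```python
import torch
from diffusers import StableDiffusionXLPipeline
from safetensors.torch import load_file

pipe = StableDiffusionXLPipeline.from_pretrained(
    "LyliaEngine/Pony_Diffusion_V6_XL", torch_dtype=torch.float16
).to("cuda")

# Register each embedding with both SDXL text encoders
for path, token in [("zPDXL2.safetensors", "zPDXL2"), ("zPDXL2-neg.safetensors", "zPDXL2-neg")]:
    state = load_file(path)
    pipe.load_textual_inversion(state["clip_l"], token=token,
                                text_encoder=pipe.text_encoder, tokenizer=pipe.tokenizer)
    pipe.load_textual_inversion(state["clip_g"], token=token,
                                text_encoder=pipe.text_encoder_2, tokenizer=pipe.tokenizer_2)

image = pipe("(zPDXL2), 1girl, tropical beach, morning sunlight",
             negative_prompt="(zPDXL2-neg), nsfw").images[0]
image.save("out.png")
```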
## Source
https://civitai.com/models/332646?modelVersionId=509253
## Credit
https://civitai.com/user/Zovya
## Trigger words
You should use `zPDXL2` to trigger the image generation.
You should use `zPDXL2-neg` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/LyliaEngine/zPDXL2/tree/main) them in the Files & versions tab.
|
Azazelle/L3-Decent-Peter-Griffin-8B | Azazelle | 2024-06-06T00:58:19Z | 9 | 1 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"conversational",
"base_model:ChaoticNeutrals/Poppy_Porpoise-1.4-L3-8B",
"base_model:merge:ChaoticNeutrals/Poppy_Porpoise-1.4-L3-8B",
"base_model:Sao10K/L3-8B-Stheno-v3.2",
"base_model:merge:Sao10K/L3-8B-Stheno-v3.2",
"base_model:elyn-dev/Llama-3-Soliloquy-8B-v2",
"base_model:merge:elyn-dev/Llama-3-Soliloquy-8B-v2",
"base_model:failspy/Llama-3-8B-Instruct-MopeyMule",
"base_model:merge:failspy/Llama-3-8B-Instruct-MopeyMule",
"base_model:maldv/badger-kappa-llama-3-8b",
"base_model:merge:maldv/badger-kappa-llama-3-8b",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-06-06T00:50:33Z | ---
base_model:
- Nitral-AI/Poppy_Porpoise-1.4-L3-8B
- maldv/badger-kappa-llama-3-8b
- Sao10K/L3-8B-Stheno-v3.2
- openlynn/Llama-3-Soliloquy-8B-v2
- failspy/Llama-3-8B-Instruct-MopeyMule
- Hastagaras/Jamet-8B-L3-MK.II
library_name: transformers
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the breadcrumbs_ties merge method using [failspy/Llama-3-8B-Instruct-MopeyMule](https://huggingface.co/failspy/Llama-3-8B-Instruct-MopeyMule) as a base.
### Models Merged
The following models were included in the merge:
* [Nitral-AI/Poppy_Porpoise-1.4-L3-8B](https://huggingface.co/Nitral-AI/Poppy_Porpoise-1.4-L3-8B)
* [maldv/badger-kappa-llama-3-8b](https://huggingface.co/maldv/badger-kappa-llama-3-8b)
* [Sao10K/L3-8B-Stheno-v3.2](https://huggingface.co/Sao10K/L3-8B-Stheno-v3.2)
* [openlynn/Llama-3-Soliloquy-8B-v2](https://huggingface.co/openlynn/Llama-3-Soliloquy-8B-v2)
* [Hastagaras/Jamet-8B-L3-MK.II](https://huggingface.co/Hastagaras/Jamet-8B-L3-MK.II)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: failspy/Llama-3-8B-Instruct-MopeyMule
- model: maldv/badger-kappa-llama-3-8b # 7/10
parameters:
density: 0.4
weight: 0.14
- model: Nitral-AI/Poppy_Porpoise-1.4-L3-8B # 7/10
parameters:
density: 0.5
weight: 0.18
- model: openlynn/Llama-3-Soliloquy-8B-v2 # 8/10
parameters:
density: 0.5
weight: 0.18
- model: Hastagaras/Jamet-8B-L3-MK.II # 6/10
parameters:
density: 0.3
weight: 0.1
- model: Sao10K/L3-8B-Stheno-v3.2 # 9/10
parameters:
density: 0.6
weight: 0.23
merge_method: breadcrumbs_ties
base_model: failspy/Llama-3-8B-Instruct-MopeyMule
parameters:
normalize: false
rescale: true
gamma: 0.01
dtype: float16
```
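To reproduce a merge from a config like this, a minimal sketch with the mergekit CLI; the output path is arbitrary and `--cuda` is optional:
```bash
pip install mergekit
# Assumes the YAML above is saved as config.yaml
mergekit-yaml config.yaml ./L3-Decent-Peter-Griffin-8B --cuda
```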
|
TTTXXX01/Ber_shift-zephyr-7b-sft-full | TTTXXX01 | 2024-06-06T00:49:04Z | 8 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"alignment-handbook",
"trl",
"dpo",
"generated_from_trainer",
"conversational",
"dataset:HuggingFaceH4/ultrafeedback_binarized",
"base_model:alignment-handbook/zephyr-7b-sft-full",
"base_model:finetune:alignment-handbook/zephyr-7b-sft-full",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-06-05T20:40:14Z | ---
license: apache-2.0
base_model: alignment-handbook/zephyr-7b-sft-full
tags:
- alignment-handbook
- trl
- dpo
- generated_from_trainer
datasets:
- HuggingFaceH4/ultrafeedback_binarized
model-index:
- name: Ber_shift-zephyr-7b-sft-full
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Ber_shift-zephyr-7b-sft-full
This model is a fine-tuned version of [alignment-handbook/zephyr-7b-sft-full](https://huggingface.co/alignment-handbook/zephyr-7b-sft-full) on the HuggingFaceH4/ultrafeedback_binarized dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-07
- train_batch_size: 6
- eval_batch_size: 4
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 2
- total_train_batch_size: 48
- total_eval_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
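For reference, a minimal sketch of how these values map onto a TRL `DPOConfig` (available in recent TRL versions; with older versions the same fields go into `TrainingArguments`):
```python
from trl import DPOConfig

# Mirrors the per-device batch sizes and schedule listed above
args = DPOConfig(
    output_dir="Ber_shift-zephyr-7b-sft-full",
    learning_rate=5e-7,
    per_device_train_batch_size=6,
    per_device_eval_batch_size=4,
    gradient_accumulation_steps=2,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    num_train_epochs=1,
)
```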
### Training results
### Framework versions
- Transformers 4.41.2
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
|
fayez94/whisper-tiny-minds14 | fayez94 | 2024-06-06T00:41:56Z | 78 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:PolyAI/minds14",
"base_model:openai/whisper-tiny",
"base_model:finetune:openai/whisper-tiny",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2024-06-05T21:39:43Z | ---
license: apache-2.0
base_model: openai/whisper-tiny
tags:
- generated_from_trainer
datasets:
- PolyAI/minds14
metrics:
- wer
model-index:
- name: whisper-tiny-minds14
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: PolyAI/minds14
type: PolyAI/minds14
config: en-US
split: train[450:]
args: en-US
metrics:
- name: Wer
type: wer
value: 0.32113341204250295
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-tiny-minds14
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on the PolyAI/minds14 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5499
- Wer Ortho: 0.3331
- Wer: 0.3211
## Model description
More information needed
## Intended uses & limitations
More information needed
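A minimal transcription sketch with the transformers pipeline; the audio path is a placeholder:
```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="fayez94/whisper-tiny-minds14")
print(asr("sample.wav")["text"])
```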
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant_with_warmup
- lr_scheduler_warmup_steps: 50
- training_steps: 500
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer Ortho | Wer |
|:-------------:|:------:|:----:|:---------------:|:---------:|:------:|
| 0.028 | 4.4643 | 500 | 0.5499 | 0.3331 | 0.3211 |
### Framework versions
- Transformers 4.41.2
- Pytorch 2.3.0+cu121
- Datasets 2.19.2
- Tokenizers 0.19.1
|
RichardErkhov/vicgalle_-_gpt2-alpaca-gpt4-gguf | RichardErkhov | 2024-06-06T00:24:40Z | 55 | 0 | null | [
"gguf",
"endpoints_compatible",
"region:us"
] | null | 2024-06-06T00:13:33Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
gpt2-alpaca-gpt4 - GGUF
- Model creator: https://huggingface.co/vicgalle/
- Original model: https://huggingface.co/vicgalle/gpt2-alpaca-gpt4/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [gpt2-alpaca-gpt4.Q2_K.gguf](https://huggingface.co/RichardErkhov/vicgalle_-_gpt2-alpaca-gpt4-gguf/blob/main/gpt2-alpaca-gpt4.Q2_K.gguf) | Q2_K | 0.08GB |
| [gpt2-alpaca-gpt4.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/vicgalle_-_gpt2-alpaca-gpt4-gguf/blob/main/gpt2-alpaca-gpt4.IQ3_XS.gguf) | IQ3_XS | 0.08GB |
| [gpt2-alpaca-gpt4.IQ3_S.gguf](https://huggingface.co/RichardErkhov/vicgalle_-_gpt2-alpaca-gpt4-gguf/blob/main/gpt2-alpaca-gpt4.IQ3_S.gguf) | IQ3_S | 0.08GB |
| [gpt2-alpaca-gpt4.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/vicgalle_-_gpt2-alpaca-gpt4-gguf/blob/main/gpt2-alpaca-gpt4.Q3_K_S.gguf) | Q3_K_S | 0.08GB |
| [gpt2-alpaca-gpt4.IQ3_M.gguf](https://huggingface.co/RichardErkhov/vicgalle_-_gpt2-alpaca-gpt4-gguf/blob/main/gpt2-alpaca-gpt4.IQ3_M.gguf) | IQ3_M | 0.09GB |
| [gpt2-alpaca-gpt4.Q3_K.gguf](https://huggingface.co/RichardErkhov/vicgalle_-_gpt2-alpaca-gpt4-gguf/blob/main/gpt2-alpaca-gpt4.Q3_K.gguf) | Q3_K | 0.09GB |
| [gpt2-alpaca-gpt4.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/vicgalle_-_gpt2-alpaca-gpt4-gguf/blob/main/gpt2-alpaca-gpt4.Q3_K_M.gguf) | Q3_K_M | 0.09GB |
| [gpt2-alpaca-gpt4.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/vicgalle_-_gpt2-alpaca-gpt4-gguf/blob/main/gpt2-alpaca-gpt4.Q3_K_L.gguf) | Q3_K_L | 0.1GB |
| [gpt2-alpaca-gpt4.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/vicgalle_-_gpt2-alpaca-gpt4-gguf/blob/main/gpt2-alpaca-gpt4.IQ4_XS.gguf) | IQ4_XS | 0.1GB |
| [gpt2-alpaca-gpt4.Q4_0.gguf](https://huggingface.co/RichardErkhov/vicgalle_-_gpt2-alpaca-gpt4-gguf/blob/main/gpt2-alpaca-gpt4.Q4_0.gguf) | Q4_0 | 0.1GB |
| [gpt2-alpaca-gpt4.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/vicgalle_-_gpt2-alpaca-gpt4-gguf/blob/main/gpt2-alpaca-gpt4.IQ4_NL.gguf) | IQ4_NL | 0.1GB |
| [gpt2-alpaca-gpt4.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/vicgalle_-_gpt2-alpaca-gpt4-gguf/blob/main/gpt2-alpaca-gpt4.Q4_K_S.gguf) | Q4_K_S | 0.1GB |
| [gpt2-alpaca-gpt4.Q4_K.gguf](https://huggingface.co/RichardErkhov/vicgalle_-_gpt2-alpaca-gpt4-gguf/blob/main/gpt2-alpaca-gpt4.Q4_K.gguf) | Q4_K | 0.11GB |
| [gpt2-alpaca-gpt4.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/vicgalle_-_gpt2-alpaca-gpt4-gguf/blob/main/gpt2-alpaca-gpt4.Q4_K_M.gguf) | Q4_K_M | 0.11GB |
| [gpt2-alpaca-gpt4.Q4_1.gguf](https://huggingface.co/RichardErkhov/vicgalle_-_gpt2-alpaca-gpt4-gguf/blob/main/gpt2-alpaca-gpt4.Q4_1.gguf) | Q4_1 | 0.11GB |
| [gpt2-alpaca-gpt4.Q5_0.gguf](https://huggingface.co/RichardErkhov/vicgalle_-_gpt2-alpaca-gpt4-gguf/blob/main/gpt2-alpaca-gpt4.Q5_0.gguf) | Q5_0 | 0.11GB |
| [gpt2-alpaca-gpt4.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/vicgalle_-_gpt2-alpaca-gpt4-gguf/blob/main/gpt2-alpaca-gpt4.Q5_K_S.gguf) | Q5_K_S | 0.11GB |
| [gpt2-alpaca-gpt4.Q5_K.gguf](https://huggingface.co/RichardErkhov/vicgalle_-_gpt2-alpaca-gpt4-gguf/blob/main/gpt2-alpaca-gpt4.Q5_K.gguf) | Q5_K | 0.12GB |
| [gpt2-alpaca-gpt4.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/vicgalle_-_gpt2-alpaca-gpt4-gguf/blob/main/gpt2-alpaca-gpt4.Q5_K_M.gguf) | Q5_K_M | 0.12GB |
| [gpt2-alpaca-gpt4.Q5_1.gguf](https://huggingface.co/RichardErkhov/vicgalle_-_gpt2-alpaca-gpt4-gguf/blob/main/gpt2-alpaca-gpt4.Q5_1.gguf) | Q5_1 | 0.12GB |
| [gpt2-alpaca-gpt4.Q6_K.gguf](https://huggingface.co/RichardErkhov/vicgalle_-_gpt2-alpaca-gpt4-gguf/blob/main/gpt2-alpaca-gpt4.Q6_K.gguf) | Q6_K | 0.13GB |
| [gpt2-alpaca-gpt4.Q8_0.gguf](https://huggingface.co/RichardErkhov/vicgalle_-_gpt2-alpaca-gpt4-gguf/blob/main/gpt2-alpaca-gpt4.Q8_0.gguf) | Q8_0 | 0.17GB |
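To fetch a single quant programmatically, a minimal sketch with `huggingface_hub` (file name taken from the table above):
```python
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="RichardErkhov/vicgalle_-_gpt2-alpaca-gpt4-gguf",
    filename="gpt2-alpaca-gpt4.Q4_K_M.gguf",
)
print(path)  # local cache path of the downloaded GGUF
```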
Original model description:
---
license: mit
datasets:
- vicgalle/alpaca-gpt4
language:
- en
tags:
- instruction-tuning
- alpaca
- gpt4
---
# GPT-2 finetuned with Alpaca-GPT4
### Examples
The following samples were generated with `{"temperature" : 0.7, "top_p" : 0.92, "top_k" : 0}`:
##### **Give three tips for a good meal**
```
Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
Give three tips for a good meal.
### Response:
1. **Serve fresh, locally grown produce:** Fresh produce is a great way to eat healthy and to enjoy a healthy meal. Try to choose a variety of fruits, vegetables, and whole grains, and experiment with the variety to find the perfect balance. Make sure to bring plenty of fresh fruits, vegetables, and whole grains throughout the meal.
2. **Strive for a quick and easy meal:** The key is to make a meal that is quick and easy to prepare. Take a break from the food, grab some snacks, and enjoy a delicious meal. Make sure to cook the ingredients beforehand, and avoid overcooking or filling the food up.
3. **Determine the time and place:** If you're eating a lot of food, it can be tempting to make a time and place for it. This will help you stay organized and get your ingredients in the right order. Make sure to set up a timer and avoid rushing to the grocery store. If possible, make sure to follow the recipe for your specific dish and avoid leaving the refrigerator unattended for too long.
```
##### **Write a poem about a delicious night**
```
Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
Write a poem about a delicious night.
### Response:
As the sky above me fluttered with the gentle breeze,
The moon was in its soft, gentle embrace,
The night was full of cheer and joy,
A perfect night's night.
The sun shone bright, the stars twinkled,
The colors were pure and true,
As if in the presence of a master.
But the night was not without its troubles.
A stormy night, it rained heavily,
A dark and stormy night, it raged with fire.
But one thing was certain, it was in the air,
The scent of the sweet scent of a sweet sweet night.
The night was filled with joy, with a sense of peace and satisfaction.
The stars twinkled, the moon was in its soft, gentle embrace,
The night was full of cheer and joy,
A perfect night's night.
The night was full of laughter, with the gentle breeze,
The colors were pure and true,
As if in the presence of a master.
But the night was full of worries, with the fierce tempest.
But the night was filled with hope, with a sense of hope that could not be ignored.
```
##### **Write a tweet describing your capabilities**
```
Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
Write a tweet describing your capabilities.
### Response:
"Empowering your creativity, skills, and abilities through my technology-driven approach. My team of professionals will help you achieve your goals, stay ahead of the curve, and contribute to your success. #Diversity"
```
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_vicgalle__gpt2-alpaca-gpt4)
| Metric | Value |
|-----------------------|---------------------------|
| Avg. | 24.98 |
| ARC (25-shot) | 22.61 |
| HellaSwag (10-shot) | 31.17 |
| MMLU (5-shot) | 25.76 |
| TruthfulQA (0-shot) | 38.04 |
| Winogrande (5-shot) | 52.17 |
| GSM8K (5-shot) | 0.3 |
| DROP (3-shot) | 4.83 |
|
LaferriereJC/Phi-3-mini-4k-instruct-FOL-pyreft | LaferriereJC | 2024-06-06T00:19:58Z | 4 | 0 | transformers | [
"transformers",
"license:mit",
"endpoints_compatible",
"region:us"
] | null | 2024-06-04T23:11:51Z | ---
license: mit
---
code used to train
- https://gist.github.com/thistleknot/398466a4bd75a1315825d7f04db635a6/91194e69c5220e536f45536e6b7dd66995c847b1
currently set up to translate individual premises (sentences)
datasets
- Yale-LILY/FOLIO
- apergo-ai/text2log (1661 records)
how to load
```
import torch
import transformers
import pyreft

device = "cuda"
model_name_or_path = "microsoft/Phi-3-mini-4k-instruct"
model = transformers.AutoModelForCausalLM.from_pretrained(
    model_name_or_path, torch_dtype=torch.bfloat16, device_map=device)
reft_model = pyreft.ReftModel.load(
    "LaferriereJC/Phi-3-mini-4k-instruct-FOL-pyreft", model
)
```
how to use
```
!git clone https://huggingface.co/LaferriereJC/Phi-3-mini-4k-instruct-FOL-pyreft
from transformers import AutoModelForCausalLM
import torch
import pyreft
import os
import transformers
device = 'cuda'
model_name_or_path = "microsoft/Phi-3-mini-4k-instruct"
attn_implementation = "eager"
torch_dtype = torch.float16
#"microsoft/Phi-3-mini-4k-instruct"
model = transformers.AutoModelForCausalLM.from_pretrained(
model_name_or_path, torch_dtype=torch.bfloat16, device_map=device,trust_remote_code=True)
# Define the PyReFT configuration
layers = range(model.config.num_hidden_layers)
representations = [{
"component": f"model.layers[{l}].output",
"intervention": pyreft.LoreftIntervention(
embed_dim=model.config.hidden_size,
low_rank_dimension=16
)
} for l in layers]
reft_config = pyreft.ReftConfig(representations=representations)
# Initialize the PyReFT model
reft_model = pyreft.get_reft_model(model, reft_config)
# Load the saved PyReFT model
local_directory = "./Phi-3-mini-4k-instruct-FOL-pyreft"
interventions = {}
for l in layers:
component = f"model.layers[{l}].output"
file_path = os.path.join(local_directory, f"intkey_comp.{component}.unit.pos.nunit.1#0.bin")
if os.path.exists(file_path):
with open(file_path, "rb") as f:
adjusted_key = f"comp.{component}.unit.pos.nunit.1#0"
interventions[adjusted_key] = torch.load(f)
# Apply the loaded weights to the model
for component, state_dict in interventions.items():
if component in reft_model.interventions:
reft_model.interventions[component][0].load_state_dict(state_dict)
else:
print(f"Key mismatch: {component} not found in reft_model.interventions")
# Set the device to CUDA
reft_model.set_device("cuda")
# Verify the model
reft_model.print_trainable_parameters()
#model.half()
# get tokenizer
tokenizer = transformers.AutoTokenizer.from_pretrained(
model_name_or_path, model_max_length=216,
padding_side="right", use_fast=True,
attn_implementation=attn_implementation
#, add_eos_token=True, add_bos_token=True
)
tokenizer.pad_token = tokenizer.eos_token
# position info about the interventions
share_weights = True # whether the prefix and suffix interventions sharing weights.
positions="f3+l3" # the intervening positions of prefix tokens (f[irst]1) and suffix tokens (l[ast]1).
first_n, last_n = pyreft.parse_positions(positions)
terminators = [
tokenizer.eos_token_id,
]
prompt_no_input_template = """\n<|user|>:%s</s>\n<|assistant|>:"""
test_instruction = f"""tell me something I don't know"""
# tokenize and prepare the input
prompt = prompt_no_input_template % test_instruction
prompt = tokenizer(prompt, return_tensors="pt").to(device)
unit_locations = torch.IntTensor([pyreft.get_intervention_locations(
last_position=prompt["input_ids"].shape[-1],
first_n=first_n,
last_n=last_n,
pad_mode="last",
num_interventions=len(reft_config.representations),
share_weights=share_weights
)]).permute(1, 0, 2).tolist()
_, reft_response = reft_model.generate(
prompt, unit_locations={"sources->base": (None, unit_locations)},
intervene_on_prompt=True, max_new_tokens=537, do_sample=True, top_k=50,temperature=0.7,
eos_token_id=terminators, early_stopping=True
)
print(tokenizer.decode(reft_response[0], skip_special_tokens=True))
```
response
```
:tell me something I don't know</s> :exists x1.(_thing(x1) & _donknow(x1))
```
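The output appears to follow NLTK-style first-order logic syntax (as in text2log), so a quick well-formedness check is possible; a sketch, assuming `nltk` is installed:
```python
from nltk.sem.logic import LogicParser

# Parse the generated FOL string; a malformed expression raises an error
expr = LogicParser().parse(r"exists x1.(_thing(x1) & _donknow(x1))")
print(expr, expr.free())
```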
training settings
```
per_device_train_batch_size=6,
logging_steps=1,
optim='paged_lion_8bit',
gradient_checkpointing_kwargs={"use_reentrant": False},
learning_rate=0.0003,
warmup_ratio=.1,
adam_beta2=0.95,
adam_epsilon=0.00001,
save_strategy='epoch',
max_grad_norm=1.0,
lr_scheduler_type='cosine',
```
Evaluation:
I kept tweaking the model until I got confirmations from ChatGPT-4; the final training error (1 epoch) came in consistently under 0.5 (10-point EMA with alpha of 0.42).
Loss

To spot-check outputs such as `:tell me something I don't know</s> :exists x1.(_thing(x1) & _donknow(x1))`, I asked ChatGPT-4 "Does the FOL expression fit?" The verdict depended on phrasing; it would almost always suggest revisions if I asked instead:
- Is the FOL expression adequate?
- How faithful is the FOL expression?
|
hfl/llama-3-chinese-8b-instruct-v3 | hfl | 2024-06-06T00:19:44Z | 539 | 62 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"zh",
"en",
"base_model:meta-llama/Meta-Llama-3-8B-Instruct",
"base_model:finetune:meta-llama/Meta-Llama-3-8B-Instruct",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-28T02:58:21Z | ---
license: apache-2.0
language:
- zh
- en
base_model: [hfl/Llama-3-Chinese-8B-Instruct, hfl/Llama-3-Chinese-8B-Instruct-v2, meta-llama/Meta-Llama-3-8B-Instruct]
---
# Llama-3-Chinese-8B-Instruct-v3
<p align="center">
<a href="https://github.com/ymcui/Chinese-LLaMA-Alpaca-3"><img src="https://ymcui.com/images/chinese-llama-alpaca-3-banner.png" width="600"/></a>
</p>
[**[👉👉👉 Chat with Llama-3-Chinese-8B-Instruct-v3 @ HF Space]**](https://huggingface.co/spaces/hfl-rc/llama-3-chinese-8b-instruct-demo)
This repository contains **Llama-3-Chinese-8B-Instruct-v3**, which is further finetuned based on a mix of the following models:
- [hfl/Llama-3-Chinese-8B-Instruct](https://huggingface.co/hfl/llama-3-chinese-8b-instruct)
- [hfl/Llama-3-Chinese-8B-Instruct-v2](https://huggingface.co/hfl/llama-3-chinese-8b-instruct-v2)
- [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct)
**Note: This is an instruction (chat) model, which can be used for conversation, QA, etc.**
For further details (performance, usage, etc.), please refer to the GitHub project page: https://github.com/ymcui/Chinese-LLaMA-Alpaca-3
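For a quick local test outside the Space, a minimal transformers chat sketch; generation settings are illustrative only, and the recommended setup lives in the GitHub project:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "hfl/llama-3-chinese-8b-instruct-v3"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [{"role": "user", "content": "Hello! Please introduce yourself."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```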
## Others
- For GGUF model (llama.cpp compatible), please see: https://huggingface.co/hfl/llama-3-chinese-8b-instruct-v3-gguf
- If you have questions/issues regarding this model, please submit an issue through https://github.com/ymcui/Chinese-LLaMA-Alpaca-3 |
powermove72/Stealth-FusionGrit-7b-Slerp-Exp-Q4_K_M-GGUF | powermove72 | 2024-06-06T00:18:21Z | 1 | 0 | null | [
"gguf",
"merge",
"mergekit",
"lazymergekit",
"jan-hq/stealth-v1.3",
"powermove72/FusionGrit-7B",
"llama-cpp",
"gguf-my-repo",
"base_model:powermove72/Stealth-FusionGrit-7b-Slerp-Exp",
"base_model:quantized:powermove72/Stealth-FusionGrit-7b-Slerp-Exp",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-06-06T00:18:05Z | ---
tags:
- merge
- mergekit
- lazymergekit
- jan-hq/stealth-v1.3
- powermove72/FusionGrit-7B
- llama-cpp
- gguf-my-repo
base_model: powermove72/Stealth-FusionGrit-7b-Slerp-Exp
---
# powermove72/Stealth-FusionGrit-7b-Slerp-Exp-Q4_K_M-GGUF
This model was converted to GGUF format from [`powermove72/Stealth-FusionGrit-7b-Slerp-Exp`](https://huggingface.co/powermove72/Stealth-FusionGrit-7b-Slerp-Exp) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/powermove72/Stealth-FusionGrit-7b-Slerp-Exp) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama --hf-repo powermove72/Stealth-FusionGrit-7b-Slerp-Exp-Q4_K_M-GGUF --hf-file stealth-fusiongrit-7b-slerp-exp-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo powermove72/Stealth-FusionGrit-7b-Slerp-Exp-Q4_K_M-GGUF --hf-file stealth-fusiongrit-7b-slerp-exp-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./main --hf-repo powermove72/Stealth-FusionGrit-7b-Slerp-Exp-Q4_K_M-GGUF --hf-file stealth-fusiongrit-7b-slerp-exp-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./server --hf-repo powermove72/Stealth-FusionGrit-7b-Slerp-Exp-Q4_K_M-GGUF --hf-file stealth-fusiongrit-7b-slerp-exp-q4_k_m.gguf -c 2048
```
|
tyzhu/lmind_nq_train6000_eval6489_v1_docidx_v3_Qwen_Qwen1.5-4B_lora2 | tyzhu | 2024-06-06T00:18:03Z | 3 | 0 | peft | [
"peft",
"safetensors",
"generated_from_trainer",
"dataset:tyzhu/lmind_nq_train6000_eval6489_v1_docidx_v3",
"base_model:Qwen/Qwen1.5-4B",
"base_model:adapter:Qwen/Qwen1.5-4B",
"license:other",
"model-index",
"region:us"
] | null | 2024-06-04T13:59:21Z | ---
license: other
base_model: Qwen/Qwen1.5-4B
tags:
- generated_from_trainer
datasets:
- tyzhu/lmind_nq_train6000_eval6489_v1_docidx_v3
metrics:
- accuracy
model-index:
- name: lmind_nq_train6000_eval6489_v1_docidx_v3_Qwen_Qwen1.5-4B_lora2
results:
- task:
name: Causal Language Modeling
type: text-generation
dataset:
name: tyzhu/lmind_nq_train6000_eval6489_v1_docidx_v3
type: tyzhu/lmind_nq_train6000_eval6489_v1_docidx_v3
metrics:
- name: Accuracy
type: accuracy
value: 0.4286153846153846
library_name: peft
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# lmind_nq_train6000_eval6489_v1_docidx_v3_Qwen_Qwen1.5-4B_lora2
This model is a fine-tuned version of [Qwen/Qwen1.5-4B](https://huggingface.co/Qwen/Qwen1.5-4B) on the tyzhu/lmind_nq_train6000_eval6489_v1_docidx_v3 dataset.
It achieves the following results on the evaluation set:
- Loss: 5.3392
- Accuracy: 0.4286
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 1
- eval_batch_size: 2
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- total_eval_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.05
- num_epochs: 20.0
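A sketch of how these settings map onto 🤗 `TrainingArguments` (illustrative; the actual training script is not published here, and the output directory name is hypothetical):

```python
from transformers import TrainingArguments

# Mirrors the hyperparameters above; the effective batch size is
# 1 (per device) x 4 (GPUs) x 8 (gradient accumulation) = 32.
args = TrainingArguments(
    output_dir="lmind_nq_qwen1.5-4b_lora2",  # hypothetical
    learning_rate=1e-4,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=2,
    gradient_accumulation_steps=8,
    lr_scheduler_type="constant",
    warmup_ratio=0.05,
    num_train_epochs=20,
    seed=42,
)
```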
### Training results
| Training Loss | Epoch | Step | Accuracy | Validation Loss |
|:-------------:|:-------:|:----:|:--------:|:---------------:|
| 1.9569 | 0.9985 | 341 | 0.4736 | 3.0300 |
| 1.8799        | 2.0     | 683  | 0.4680   | 3.0993          |
| 1.7649 | 2.9985 | 1024 | 0.4650 | 3.2750 |
| 1.6077 | 4.0 | 1366 | 0.4625 | 3.4406 |
| 1.4321 | 4.9985 | 1707 | 0.4586 | 3.6500 |
| 1.2382 | 6.0 | 2049 | 0.4562 | 3.8598 |
| 1.0525 | 6.9985 | 2390 | 0.4541 | 4.0638 |
| 0.8607 | 8.0 | 2732 | 0.4515 | 4.2389 |
| 0.7099 | 8.9985 | 3073 | 0.4516 | 4.3484 |
| 0.5823 | 9.9854 | 3410 | 0.4488 | 4.5794 |
| 0.4641        | 10.9985 | 3751 | 0.4495   | 4.7090          |
| 0.3755        | 12.0    | 4093 | 0.4354   | 4.9454          |
| 0.3235        | 12.9985 | 4434 | 0.4379   | 5.0624          |
| 0.2691        | 14.0    | 4776 | 0.4345   | 5.0957          |
| 0.2394        | 14.9985 | 5117 | 0.4368   | 5.1831          |
| 0.2112        | 16.0    | 5459 | 0.4326   | 5.3223          |
| 0.1994        | 16.9985 | 5800 | 0.4301   | 5.3839          |
| 0.1834        | 18.0    | 6142 | 0.4286   | 5.4236          |
| 0.1709        | 18.9985 | 6483 | 0.4291   | 5.4840          |
| 0.166         | 19.9854 | 6820 | 0.4286   | 5.3392          |
### Framework versions
- PEFT 0.5.0
- Transformers 4.40.2
- Pytorch 2.3.0
- Datasets 2.19.1
- Tokenizers 0.19.1
|
hfl/llama-3-chinese-8b-instruct-v3-gguf | hfl | 2024-06-06T00:17:51Z | 2,112 | 72 | null | [
"gguf",
"zh",
"en",
"base_model:hfl/llama-3-chinese-8b-instruct-v3",
"base_model:quantized:hfl/llama-3-chinese-8b-instruct-v3",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-05-28T02:59:33Z | ---
license: apache-2.0
language:
- zh
- en
base_model:
- hfl/llama-3-chinese-8b-instruct-v3
---
# Llama-3-Chinese-8B-Instruct-v3-GGUF
<p align="center">
<a href="https://github.com/ymcui/Chinese-LLaMA-Alpaca-3"><img src="https://ymcui.com/images/chinese-llama-alpaca-3-banner.png" width="600"/></a>
</p>
[**[👉👉👉 Chat with Llama-3-Chinese-8B-Instruct-v3 @ HF Space]**](https://huggingface.co/spaces/hfl-rc/llama-3-chinese-8b-instruct-demo)
This repository contains **Llama-3-Chinese-8B-Instruct-v3-GGUF** (llama.cpp/ollama/tgw, etc. compatible), which is the quantized version of [Llama-3-Chinese-8B-Instruct-v3](https://huggingface.co/hfl/llama-3-chinese-8b-instruct-v3).
**Note: this is an instruction (chat) model, which can be used for conversation, QA, etc.**
For further details (performance, usage, etc.), please refer to the GitHub project page: https://github.com/ymcui/Chinese-LLaMA-Alpaca-3
## Performance
Metric: PPL, lower is better
*Note: Unless constrained by memory, we suggest using Q8_0 or Q6_K for better performance.*
| Quant | Size | PPL |
| :---: | -------: | ------------------: |
| Q2_K | 2.96 GB | 10.0534 +/- 0.13135 |
| Q3_K | 3.74 GB | 6.3295 +/- 0.07816 |
| Q4_0 | 4.34 GB | 6.3200 +/- 0.07893 |
| Q4_K | 4.58 GB | 6.0042 +/- 0.07431 |
| Q5_0 | 5.21 GB | 6.0437 +/- 0.07526 |
| Q5_K | 5.34 GB | 5.9484 +/- 0.07399 |
| Q6_K | 6.14 GB | 5.9469 +/- 0.07404 |
| Q8_0 | 7.95 GB | 5.8933 +/- 0.07305 |
| F16 | 14.97 GB | 5.8902 +/- 0.07303 |
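For context, perplexity is the exponential of the mean per-token negative log-likelihood; a minimal sketch (the NLL values are placeholders, not measurements from this model):

```python
import math

nlls = [1.82, 1.75, 1.79, 1.77]  # placeholder per-token negative log-likelihoods
ppl = math.exp(sum(nlls) / len(nlls))
print(f"PPL = {ppl:.4f}")
```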
## Others
- For full model, please see: https://huggingface.co/hfl/llama-3-chinese-8b-instruct-v3
- If you have questions/issues regarding this model, please submit an issue through https://github.com/ymcui/Chinese-LLaMA-Alpaca-3 |
RichardErkhov/Weyaxi_-_HelpSteer-filtered-7B-gguf | RichardErkhov | 2024-06-06T00:10:06Z | 15 | 0 | null | [
"gguf",
"endpoints_compatible",
"region:us"
] | null | 2024-06-05T21:23:18Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
HelpSteer-filtered-7B - GGUF
- Model creator: https://huggingface.co/Weyaxi/
- Original model: https://huggingface.co/Weyaxi/HelpSteer-filtered-7B/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [HelpSteer-filtered-7B.Q2_K.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_HelpSteer-filtered-7B-gguf/blob/main/HelpSteer-filtered-7B.Q2_K.gguf) | Q2_K | 2.53GB |
| [HelpSteer-filtered-7B.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_HelpSteer-filtered-7B-gguf/blob/main/HelpSteer-filtered-7B.IQ3_XS.gguf) | IQ3_XS | 2.81GB |
| [HelpSteer-filtered-7B.IQ3_S.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_HelpSteer-filtered-7B-gguf/blob/main/HelpSteer-filtered-7B.IQ3_S.gguf) | IQ3_S | 2.96GB |
| [HelpSteer-filtered-7B.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_HelpSteer-filtered-7B-gguf/blob/main/HelpSteer-filtered-7B.Q3_K_S.gguf) | Q3_K_S | 2.95GB |
| [HelpSteer-filtered-7B.IQ3_M.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_HelpSteer-filtered-7B-gguf/blob/main/HelpSteer-filtered-7B.IQ3_M.gguf) | IQ3_M | 3.06GB |
| [HelpSteer-filtered-7B.Q3_K.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_HelpSteer-filtered-7B-gguf/blob/main/HelpSteer-filtered-7B.Q3_K.gguf) | Q3_K | 3.28GB |
| [HelpSteer-filtered-7B.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_HelpSteer-filtered-7B-gguf/blob/main/HelpSteer-filtered-7B.Q3_K_M.gguf) | Q3_K_M | 3.28GB |
| [HelpSteer-filtered-7B.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_HelpSteer-filtered-7B-gguf/blob/main/HelpSteer-filtered-7B.Q3_K_L.gguf) | Q3_K_L | 3.56GB |
| [HelpSteer-filtered-7B.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_HelpSteer-filtered-7B-gguf/blob/main/HelpSteer-filtered-7B.IQ4_XS.gguf) | IQ4_XS | 3.67GB |
| [HelpSteer-filtered-7B.Q4_0.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_HelpSteer-filtered-7B-gguf/blob/main/HelpSteer-filtered-7B.Q4_0.gguf) | Q4_0 | 3.83GB |
| [HelpSteer-filtered-7B.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_HelpSteer-filtered-7B-gguf/blob/main/HelpSteer-filtered-7B.IQ4_NL.gguf) | IQ4_NL | 3.87GB |
| [HelpSteer-filtered-7B.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_HelpSteer-filtered-7B-gguf/blob/main/HelpSteer-filtered-7B.Q4_K_S.gguf) | Q4_K_S | 3.86GB |
| [HelpSteer-filtered-7B.Q4_K.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_HelpSteer-filtered-7B-gguf/blob/main/HelpSteer-filtered-7B.Q4_K.gguf) | Q4_K | 4.07GB |
| [HelpSteer-filtered-7B.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_HelpSteer-filtered-7B-gguf/blob/main/HelpSteer-filtered-7B.Q4_K_M.gguf) | Q4_K_M | 4.07GB |
| [HelpSteer-filtered-7B.Q4_1.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_HelpSteer-filtered-7B-gguf/blob/main/HelpSteer-filtered-7B.Q4_1.gguf) | Q4_1 | 4.24GB |
| [HelpSteer-filtered-7B.Q5_0.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_HelpSteer-filtered-7B-gguf/blob/main/HelpSteer-filtered-7B.Q5_0.gguf) | Q5_0 | 4.65GB |
| [HelpSteer-filtered-7B.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_HelpSteer-filtered-7B-gguf/blob/main/HelpSteer-filtered-7B.Q5_K_S.gguf) | Q5_K_S | 4.65GB |
| [HelpSteer-filtered-7B.Q5_K.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_HelpSteer-filtered-7B-gguf/blob/main/HelpSteer-filtered-7B.Q5_K.gguf) | Q5_K | 4.78GB |
| [HelpSteer-filtered-7B.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_HelpSteer-filtered-7B-gguf/blob/main/HelpSteer-filtered-7B.Q5_K_M.gguf) | Q5_K_M | 4.78GB |
| [HelpSteer-filtered-7B.Q5_1.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_HelpSteer-filtered-7B-gguf/blob/main/HelpSteer-filtered-7B.Q5_1.gguf) | Q5_1 | 5.07GB |
| [HelpSteer-filtered-7B.Q6_K.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_HelpSteer-filtered-7B-gguf/blob/main/HelpSteer-filtered-7B.Q6_K.gguf) | Q6_K | 5.53GB |
| [HelpSteer-filtered-7B.Q8_0.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_HelpSteer-filtered-7B-gguf/blob/main/HelpSteer-filtered-7B.Q8_0.gguf) | Q8_0 | 7.17GB |
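To fetch a single quant from the table, something like the following should work (repo and file names are taken from the table above):

```python
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="RichardErkhov/Weyaxi_-_HelpSteer-filtered-7B-gguf",
    filename="HelpSteer-filtered-7B.Q4_K_M.gguf",
)
print(path)  # local cache path of the downloaded GGUF file
```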
Original model description:
---
license: cc-by-4.0
datasets:
- Weyaxi/HelpSteer-filtered
language:
- en
tags:
- mistral
- instruct
---

# HelpSteer-filtered-7B
Original weights of [HelpSteer-filtered-7B](https://huggingface.co/Weyaxi/HelpSteer-filtered-7B). Finetuned from [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1).
## Lora Weights
You can access lora weights from here:
[Weyaxi/HelpSteer-filtered-7B-Lora](https://huggingface.co/Weyaxi/HelpSteer-filtered-7B-Lora)
|
tyzhu/lmind_hotpot_train8000_eval7405_v1_recite_qa_Qwen_Qwen1.5-4B_5e-4_lora2 | tyzhu | 2024-06-06T00:09:21Z | 5 | 0 | peft | [
"peft",
"safetensors",
"generated_from_trainer",
"dataset:tyzhu/lmind_hotpot_train8000_eval7405_v1_recite_qa",
"base_model:Qwen/Qwen1.5-4B",
"base_model:adapter:Qwen/Qwen1.5-4B",
"license:other",
"model-index",
"region:us"
] | null | 2024-06-05T09:47:45Z | ---
license: other
base_model: Qwen/Qwen1.5-4B
tags:
- generated_from_trainer
datasets:
- tyzhu/lmind_hotpot_train8000_eval7405_v1_recite_qa
metrics:
- accuracy
model-index:
- name: lmind_hotpot_train8000_eval7405_v1_recite_qa_Qwen_Qwen1.5-4B_5e-4_lora2
results:
- task:
name: Causal Language Modeling
type: text-generation
dataset:
name: tyzhu/lmind_hotpot_train8000_eval7405_v1_recite_qa
type: tyzhu/lmind_hotpot_train8000_eval7405_v1_recite_qa
metrics:
- name: Accuracy
type: accuracy
value: 0.7611848617176128
library_name: peft
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# lmind_hotpot_train8000_eval7405_v1_recite_qa_Qwen_Qwen1.5-4B_5e-4_lora2
This model is a fine-tuned version of [Qwen/Qwen1.5-4B](https://huggingface.co/Qwen/Qwen1.5-4B) on the tyzhu/lmind_hotpot_train8000_eval7405_v1_recite_qa dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5757
- Accuracy: 0.7612
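Since this is a PEFT (LoRA) checkpoint, inference requires loading the adapter on top of the base model; a minimal sketch (the prompt and generation settings are assumptions):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("Qwen/Qwen1.5-4B", device_map="auto")
model = PeftModel.from_pretrained(
    base, "tyzhu/lmind_hotpot_train8000_eval7405_v1_recite_qa_Qwen_Qwen1.5-4B_5e-4_lora2"
)
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen1.5-4B")

inputs = tokenizer("Question: Who wrote Hamlet?\nAnswer:", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=32)[0], skip_special_tokens=True))
```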
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 1
- eval_batch_size: 2
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- total_eval_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.05
- num_epochs: 20.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-------:|:-----:|:---------------:|:--------:|
| 1.5463 | 0.9998 | 1089 | 1.3539 | 0.6872 |
| 1.3199 | 1.9995 | 2178 | 1.1632 | 0.7022 |
| 1.1039 | 2.9993 | 3267 | 1.0347 | 0.7134 |
| 0.9356 | 4.0 | 4357 | 0.9234 | 0.7237 |
| 0.8312 | 4.9998 | 5446 | 0.8529 | 0.7307 |
| 0.7565 | 5.9995 | 6535 | 0.7860 | 0.7372 |
| 0.6985 | 6.9993 | 7624 | 0.7415 | 0.7415 |
| 0.6623 | 8.0 | 8714 | 0.7111 | 0.7457 |
| 0.6281 | 8.9998 | 9803 | 0.6775 | 0.7481 |
| 0.5885 | 9.9995 | 10892 | 0.6689 | 0.7496 |
| 0.5721 | 10.9993 | 11981 | 0.6364 | 0.7530 |
| 0.5504 | 12.0 | 13071 | 0.6319 | 0.7541 |
| 0.5406 | 12.9998 | 14160 | 0.6185 | 0.7549 |
| 0.536 | 13.9995 | 15249 | 0.6158 | 0.7565 |
| 0.5205 | 14.9993 | 16338 | 0.5976 | 0.7578 |
| 0.5175 | 16.0 | 17428 | 0.5922 | 0.7590 |
| 0.5068 | 16.9998 | 18517 | 0.5823 | 0.7593 |
| 0.5023 | 17.9995 | 19606 | 0.5754 | 0.7607 |
| 0.4848 | 18.9993 | 20695 | 0.5781 | 0.7608 |
| 0.4767 | 19.9954 | 21780 | 0.5757 | 0.7612 |
### Framework versions
- PEFT 0.5.0
- Transformers 4.40.2
- Pytorch 2.3.0
- Datasets 2.19.1
- Tokenizers 0.19.1
|
TDN-M/ViSD-TDNM | TDN-M | 2024-06-06T00:08:22Z | 3 | 0 | diffusers | [
"diffusers",
"vietnam",
"sd",
"text-to-image",
"arxiv:1910.09700",
"doi:10.57967/hf/2321",
"license:mit",
"region:us"
] | text-to-image | 2024-05-27T07:12:44Z | ---
license: mit
pipeline_tag: text-to-image
tags:
- vietnam
- sd
metrics:
- accuracy
library_name: diffusers
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This model card aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
shkna1368/mawlawi | shkna1368 | 2024-06-05T23:57:15Z | 107 | 0 | transformers | [
"transformers",
"safetensors",
"mt5",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2024-06-05T23:55:53Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
JoshuaAAX/beto-finetuned-ner | JoshuaAAX | 2024-06-05T23:53:18Z | 9 | 1 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"token-classification",
"generated_from_trainer",
"dataset:conll2002",
"base_model:NazaGara/NER-fine-tuned-BETO",
"base_model:finetune:NazaGara/NER-fine-tuned-BETO",
"license:cc-by-4.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2024-05-25T04:34:27Z | ---
license: cc-by-4.0
base_model: NazaGara/NER-fine-tuned-BETO
tags:
- generated_from_trainer
datasets:
- conll2002
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: beto-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2002
type: conll2002
config: es
split: validation
args: es
metrics:
- name: Precision
type: precision
value: 0.8402527075812274
- name: Recall
type: recall
value: 0.8556985294117647
- name: F1
type: f1
value: 0.8479052823315117
- name: Accuracy
type: accuracy
value: 0.9701834862385321
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# beto-finetuned-ner
This model is a fine-tuned version of [NazaGara/NER-fine-tuned-BETO](https://huggingface.co/NazaGara/NER-fine-tuned-BETO) on the conll2002 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2248
- Precision: 0.8403
- Recall: 0.8557
- F1: 0.8479
- Accuracy: 0.9702
## Model description
This model is based on BETO, a pretrained Spanish language model similar to BERT. BETO was initially trained on large amounts of Spanish text.
This model takes BETO's architecture and pretrained weights and fine-tunes them further on the Named Entity Recognition (NER) task using the conll2002 dataset.
The fine-tuned model can be used to automatically annotate new Spanish texts, assigning named-entity labels.
## How to Use
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification, pipeline
tokenizer = AutoTokenizer.from_pretrained("JoshuaAAX/beto-finetuned-ner")
model = AutoModelForTokenClassification.from_pretrained("JoshuaAAX/beto-finetuned-ner")
text = "La Federación nacional de cafeteros de Colombia es una entidad del estado. El primer presidente el Dr Augusto Guerra contó con el aval de la Asociación Colombiana de Aviación."
ner_pipeline= pipeline("ner", model=model, tokenizer=tokenizer, aggregation_strategy="max")
ner_pipeline(text)
```
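With `aggregation_strategy="max"`, the pipeline returns one dict per detected entity span. The sketch below shows the output shape; the scores and offsets are illustrative, not real predictions:

```python
# Illustrative output shape (values are made up):
entities = [
    {"entity_group": "ORG", "score": 0.99,
     "word": "Federación nacional de cafeteros de Colombia", "start": 3, "end": 48},
    {"entity_group": "PER", "score": 0.97, "word": "Augusto Guerra", "start": 103, "end": 117},
]
for e in entities:
    print(f'{e["entity_group"]}: {e["word"]} ({e["score"]:.2f})')
```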
## Training data
| Abbreviation | Description |
|:-------------:|:-------------:|
| O | Outside of NE |
| PER | Person’s name |
| ORG | Organization |
| LOC | Location |
| MISC | Miscellaneous |
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0512 | 1.0 | 521 | 0.1314 | 0.8328 | 0.8562 | 0.8443 | 0.9703 |
| 0.0305 | 2.0 | 1042 | 0.1549 | 0.8318 | 0.8442 | 0.8380 | 0.9688 |
| 0.0193 | 3.0 | 1563 | 0.1498 | 0.8513 | 0.8578 | 0.8545 | 0.9708 |
| 0.0148 | 4.0 | 2084 | 0.1810 | 0.8363 | 0.8442 | 0.8403 | 0.9682 |
| 0.0112 | 5.0 | 2605 | 0.1904 | 0.8412 | 0.8529 | 0.8470 | 0.9703 |
| 0.0078 | 6.0 | 3126 | 0.1831 | 0.8364 | 0.8539 | 0.8450 | 0.9708 |
| 0.0058 | 7.0 | 3647 | 0.2060 | 0.8419 | 0.8543 | 0.8481 | 0.9701 |
| 0.0049 | 8.0 | 4168 | 0.2111 | 0.8357 | 0.8541 | 0.8448 | 0.9697 |
| 0.0037 | 9.0 | 4689 | 0.2255 | 0.8371 | 0.8504 | 0.8437 | 0.9692 |
| 0.0031 | 10.0 | 5210 | 0.2248 | 0.8403 | 0.8557 | 0.8479 | 0.9702 |
### Framework versions
- Transformers 4.41.0
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
|
McLuian/FT-Mistral-7b-Instruct-GSM8K-100.Q4_0.gguf | McLuian | 2024-06-05T23:39:16Z | 30 | 0 | transformers | [
"transformers",
"gguf",
"mistral",
"text-generation-inference",
"unsloth",
"en",
"base_model:unsloth/mistral-7b-instruct-v0.3-bnb-4bit",
"base_model:quantized:unsloth/mistral-7b-instruct-v0.3-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-06-05T08:13:10Z | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- gguf
base_model: unsloth/mistral-7b-instruct-v0.3-bnb-4bit
---
# Uploaded model
- **Developed by:** LuianMC
- **License:** apache-2.0
- **Finetuned from model :** unsloth/mistral-7b-instruct-v0.3-bnb-4bit
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
phongtintruong/misjava-api-060524-api-extend-test-merged-4bit | phongtintruong | 2024-06-05T23:38:33Z | 78 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"trl",
"sft",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | 2024-06-05T23:36:27Z | ---
library_name: transformers
tags:
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Azazelle/SlimMelodicMaid | Azazelle | 2024-06-05T23:38:01Z | 1,530 | 1 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"license:cc-by-4.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-12-30T00:58:39Z | ---
license: cc-by-4.0
tags:
- mistral
- merge
pipeline_tag: text-generation
model-index:
- name: SlimMelodicMaid
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 67.15
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Azazelle/SlimMelodicMaid
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 86.01
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Azazelle/SlimMelodicMaid
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 64.75
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Azazelle/SlimMelodicMaid
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 60.88
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Azazelle/SlimMelodicMaid
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 78.61
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Azazelle/SlimMelodicMaid
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 60.8
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Azazelle/SlimMelodicMaid
name: Open LLM Leaderboard
---
# Model Card for SlimMelodicMaid
<!-- Provide a quick summary of what the model is/does. -->
Slerp merge of Silicon-Maid-7B, piano-medley-7b, xDAN-L1-Chat-RL-v1, and mistral-7b-slimorcaboros.
.yaml file for mergekit
```.yaml:
slices:
- sources:
- model: Azazelle/Silicon-Medley
layer_range: [0, 32]
- model: Azazelle/xDAN-SlimOrca
layer_range: [0, 32]
merge_method: slerp
base_model: mistralai/Mistral-7B-v0.1
parameters:
t:
- filter: self_attn
value: [0.19, 0.59, 0.43, 0.76, 1]
- filter: mlp
value: [0.81, 0.41, 0.57, 0.24, 0]
- value: 0.5 # fallback for rest of tensors
dtype: float16
```
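For reference, spherical linear interpolation (the `slerp` merge method above) blends two weight tensors $p_0$ and $p_1$ as follows, where $\Omega$ is the angle between the flattened tensors (standard slerp formula, shown here for context):

$$\mathrm{slerp}(p_0, p_1; t) = \frac{\sin((1-t)\,\Omega)}{\sin\Omega}\, p_0 + \frac{\sin(t\,\Omega)}{\sin\Omega}\, p_1, \qquad \cos\Omega = \frac{p_0 \cdot p_1}{\lVert p_0 \rVert\, \lVert p_1 \rVert}$$

The per-filter `t` schedules above interpolate the attention and MLP weights differently across the 32 layers.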
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_Azazelle__SlimMelodicMaid)
| Metric |Value|
|---------------------------------|----:|
|Avg. |69.70|
|AI2 Reasoning Challenge (25-Shot)|67.15|
|HellaSwag (10-Shot) |86.01|
|MMLU (5-Shot) |64.75|
|TruthfulQA (0-shot) |60.88|
|Winogrande (5-shot) |78.61|
|GSM8k (5-shot) |60.80|
|
thdangtr/blip_recipe1m_ingredients_v4_2 | thdangtr | 2024-06-05T23:36:54Z | 67 | 0 | transformers | [
"transformers",
"safetensors",
"blip",
"visual-question-answering",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | visual-question-answering | 2024-06-05T23:36:11Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
uirev/MLX_OpenELM_1.1b_UiRev_finetuned_V3 | uirev | 2024-06-05T23:32:33Z | 6 | 0 | mlx | [
"mlx",
"safetensors",
"openelm",
"trl",
"sft",
"generated_from_trainer",
"custom_code",
"base_model:apple/OpenELM-1_1B-Instruct",
"base_model:finetune:apple/OpenELM-1_1B-Instruct",
"license:other",
"region:us"
] | null | 2024-06-05T23:31:43Z | ---
license: other
tags:
- trl
- sft
- generated_from_trainer
- mlx
base_model: apple/OpenELM-1_1B-Instruct
model-index:
- name: OpenELM_1.1b_UiRev_finetuned_V2
results: []
---
# Paramstr/MLX_OpenELM_1.1b_UiRev_finetuned_V3
The Model [Paramstr/MLX_OpenELM_1.1b_UiRev_finetuned_V3](https://huggingface.co/Paramstr/MLX_OpenELM_1.1b_UiRev_finetuned_V3) was converted to MLX format from [Paramstr/OpenELM_1.1b_UiRev_finetuned_V2](https://huggingface.co/Paramstr/OpenELM_1.1b_UiRev_finetuned_V2) using mlx-lm version **0.14.2**.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate
model, tokenizer = load("Paramstr/MLX_OpenELM_1.1b_UiRev_finetuned_V3")
response = generate(model, tokenizer, prompt="hello", verbose=True)
```
|
mrovejaxd/FNST_trad_l | mrovejaxd | 2024-06-05T23:21:49Z | 12 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:dccuchile/bert-base-spanish-wwm-cased",
"base_model:finetune:dccuchile/bert-base-spanish-wwm-cased",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-06-05T20:49:33Z | ---
base_model: dccuchile/bert-base-spanish-wwm-cased
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: FNST_trad_l
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# FNST_trad_l
This model is a fine-tuned version of [dccuchile/bert-base-spanish-wwm-cased](https://huggingface.co/dccuchile/bert-base-spanish-wwm-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 4.0024
- Accuracy: 0.5983
- F1: 0.5955
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-06
- train_batch_size: 6
- eval_batch_size: 6
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 42
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:|
| 1.127 | 1.0 | 2000 | 1.1060 | 0.5133 | 0.4590 |
| 0.9786 | 2.0 | 4000 | 1.0075 | 0.5642 | 0.5524 |
| 0.9072 | 3.0 | 6000 | 0.9958 | 0.5733 | 0.5697 |
| 0.869 | 4.0 | 8000 | 0.9776 | 0.5917 | 0.5957 |
| 0.8243 | 5.0 | 10000 | 0.9760 | 0.5817 | 0.5860 |
| 0.7659 | 6.0 | 12000 | 0.9826 | 0.595 | 0.5993 |
| 0.7414 | 7.0 | 14000 | 1.0055 | 0.5933 | 0.6001 |
| 0.7023 | 8.0 | 16000 | 1.0113 | 0.5908 | 0.5959 |
| 0.6745 | 9.0 | 18000 | 1.0527 | 0.5933 | 0.5949 |
| 0.6161 | 10.0 | 20000 | 1.1227 | 0.5883 | 0.5920 |
| 0.5863 | 11.0 | 22000 | 1.1571 | 0.5883 | 0.5896 |
| 0.5406 | 12.0 | 24000 | 1.1883 | 0.5908 | 0.5954 |
| 0.5185 | 13.0 | 26000 | 1.2686 | 0.5917 | 0.5957 |
| 0.4796 | 14.0 | 28000 | 1.3313 | 0.5992 | 0.6067 |
| 0.4379 | 15.0 | 30000 | 1.4234 | 0.595 | 0.5970 |
| 0.3883 | 16.0 | 32000 | 1.5582 | 0.5958 | 0.5994 |
| 0.3934 | 17.0 | 34000 | 1.6591 | 0.595 | 0.6012 |
| 0.359 | 18.0 | 36000 | 1.8129 | 0.595 | 0.6011 |
| 0.3249 | 19.0 | 38000 | 1.9811 | 0.5917 | 0.5966 |
| 0.2954 | 20.0 | 40000 | 2.1860 | 0.5858 | 0.5901 |
| 0.3064 | 21.0 | 42000 | 2.2548 | 0.5858 | 0.5904 |
| 0.2844 | 22.0 | 44000 | 2.3557 | 0.6 | 0.6045 |
| 0.2471 | 23.0 | 46000 | 2.5137 | 0.6017 | 0.6033 |
| 0.2432 | 24.0 | 48000 | 2.6458 | 0.5992 | 0.6035 |
| 0.2247 | 25.0 | 50000 | 2.8667 | 0.5983 | 0.6026 |
| 0.213 | 26.0 | 52000 | 2.8895 | 0.6042 | 0.6088 |
| 0.1792 | 27.0 | 54000 | 3.0338 | 0.6008 | 0.6062 |
| 0.1723 | 28.0 | 56000 | 3.1234 | 0.5975 | 0.6008 |
| 0.1562 | 29.0 | 58000 | 3.2822 | 0.5942 | 0.5992 |
| 0.1437 | 30.0 | 60000 | 3.3156 | 0.6067 | 0.6096 |
| 0.151 | 31.0 | 62000 | 3.3923 | 0.6075 | 0.6098 |
| 0.1446 | 32.0 | 64000 | 3.4562 | 0.6058 | 0.6084 |
| 0.109 | 33.0 | 66000 | 3.6100 | 0.6017 | 0.6038 |
| 0.1138 | 34.0 | 68000 | 3.6468 | 0.6083 | 0.6078 |
| 0.1068 | 35.0 | 70000 | 3.6758 | 0.5967 | 0.6004 |
| 0.1133 | 36.0 | 72000 | 3.7340 | 0.6033 | 0.6025 |
| 0.0865 | 37.0 | 74000 | 3.7666 | 0.6083 | 0.6079 |
| 0.0936 | 38.0 | 76000 | 3.8650 | 0.6058 | 0.6073 |
| 0.0957 | 39.0 | 78000 | 3.9209 | 0.605 | 0.6025 |
| 0.1027 | 40.0 | 80000 | 3.9211 | 0.6058 | 0.6054 |
| 0.0828 | 41.0 | 82000 | 4.0124 | 0.6058 | 0.6009 |
| 0.0742 | 42.0 | 84000 | 4.0024 | 0.5983 | 0.5955 |
### Framework versions
- Transformers 4.37.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.1
|
Franz2001/openai-whisper-large-v3-talc_4epochs | Franz2001 | 2024-06-05T23:21:38Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-06-05T23:21:33Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
MarPla/SocialSciencePegasusLargeModel | MarPla | 2024-06-05T23:18:51Z | 106 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"pegasus",
"text2text-generation",
"generated_from_trainer",
"base_model:google/pegasus-large",
"base_model:finetune:google/pegasus-large",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2024-06-05T20:57:05Z | ---
base_model: google/pegasus-large
tags:
- generated_from_trainer
metrics:
- rouge
- bleu
model-index:
- name: SocialSciencePegasusLargeModel
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# SocialSciencePegasusLargeModel
This model is a fine-tuned version of [google/pegasus-large](https://huggingface.co/google/pegasus-large) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 5.7391
- Rouge1: 43.2515
- Rouge2: 13.5819
- Rougel: 29.2476
- Rougelsum: 39.2268
- Bertscore Precision: 76.5154
- Bertscore Recall: 81.3593
- Bertscore F1: 78.854
- Bleu: 0.1036
- Gen Len: 191.3589
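A minimal usage sketch (assuming the standard `transformers` summarization pipeline, which the card itself does not document):

```python
from transformers import pipeline

# Hedged sketch: summarize a social-science abstract with this checkpoint
summarizer = pipeline(
    "summarization",
    model="MarPla/SocialSciencePegasusLargeModel",
)
text = "Long social-science abstract or article text goes here ..."
print(summarizer(text, max_length=128, min_length=32)[0]["summary_text"])
```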
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Bertscore Precision | Bertscore Recall | Bertscore F1 | Bleu | Gen Len |
|:-------------:|:------:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------------------:|:----------------:|:------------:|:------:|:--------:|
| 6.1745 | 0.3943 | 300 | 5.9613 | 40.1903 | 12.4753 | 28.1708 | 36.7059 | 75.8626 | 80.8932 | 78.2884 | 0.0959 | 191.3589 |
| 5.8826 | 0.7885 | 600 | 5.7391 | 43.2515 | 13.5819 | 29.2476 | 39.2268 | 76.5154 | 81.3593 | 78.854 | 0.1036 | 191.3589 |
### Framework versions
- Transformers 4.41.2
- Pytorch 2.1.2
- Datasets 2.2.1
- Tokenizers 0.19.1
|
Skorcht/yandere-ai-her | Skorcht | 2024-06-05T23:18:46Z | 16 | 0 | transformers | [
"transformers",
"pytorch",
"gguf",
"mistral",
"unsloth",
"trl",
"sft",
"arxiv:1910.09700",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | null | 2024-06-05T22:50:42Z | ---
library_name: transformers
tags:
- unsloth
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
fine-tuned/deepspeed-from-new-new-docker | fine-tuned | 2024-06-05T23:17:10Z | 6 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"bert",
"feature-extraction",
"sentence-similarity",
"mteb",
"Query",
"Document",
"Argument",
"Counter",
"Boost",
"fr",
"en",
"dataset:fine-tuned/deepspeed-from-new-new-docker",
"dataset:allenai/c4",
"license:apache-2.0",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | feature-extraction | 2024-06-05T23:16:36Z | ---
license: apache-2.0
datasets:
- fine-tuned/deepspeed-from-new-new-docker
- allenai/c4
language:
- fr
- en
pipeline_tag: feature-extraction
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- mteb
- Query
- Document
- Argument
- Counter
- Boost
---
This model is a fine-tuned version of [**BAAI/bge-large-en-v1.5**](https://huggingface.co/BAAI/bge-large-en-v1.5) designed for the following use case:
information retrieval system
## How to Use
This sentence-embedding model can be easily integrated into your NLP pipeline for tasks such as semantic search, sentence similarity, clustering, and retrieval. Here's a simple example to get you started:
```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.util import cos_sim

# Load the fine-tuned embedding model from the Hub
model = SentenceTransformer(
    'fine-tuned/deepspeed-from-new-new-docker',
    trust_remote_code=True
)

# Encode two texts and compare them with cosine similarity
embeddings = model.encode([
    'first text to embed',
    'second text to embed'
])
print(cos_sim(embeddings[0], embeddings[1]))
```
|
RichardErkhov/NeverSleep_-_Noromaid-20b-v0.1.1-gguf | RichardErkhov | 2024-06-05T23:16:53Z | 17 | 1 | null | [
"gguf",
"endpoints_compatible",
"region:us"
] | null | 2024-06-05T16:12:22Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Noromaid-20b-v0.1.1 - GGUF
- Model creator: https://huggingface.co/NeverSleep/
- Original model: https://huggingface.co/NeverSleep/Noromaid-20b-v0.1.1/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [Noromaid-20b-v0.1.1.Q2_K.gguf](https://huggingface.co/RichardErkhov/NeverSleep_-_Noromaid-20b-v0.1.1-gguf/blob/main/Noromaid-20b-v0.1.1.Q2_K.gguf) | Q2_K | 6.91GB |
| [Noromaid-20b-v0.1.1.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/NeverSleep_-_Noromaid-20b-v0.1.1-gguf/blob/main/Noromaid-20b-v0.1.1.IQ3_XS.gguf) | IQ3_XS | 7.63GB |
| [Noromaid-20b-v0.1.1.IQ3_S.gguf](https://huggingface.co/RichardErkhov/NeverSleep_-_Noromaid-20b-v0.1.1-gguf/blob/main/Noromaid-20b-v0.1.1.IQ3_S.gguf) | IQ3_S | 8.06GB |
| [Noromaid-20b-v0.1.1.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/NeverSleep_-_Noromaid-20b-v0.1.1-gguf/blob/main/Noromaid-20b-v0.1.1.Q3_K_S.gguf) | Q3_K_S | 8.06GB |
| [Noromaid-20b-v0.1.1.IQ3_M.gguf](https://huggingface.co/RichardErkhov/NeverSleep_-_Noromaid-20b-v0.1.1-gguf/blob/main/Noromaid-20b-v0.1.1.IQ3_M.gguf) | IQ3_M | 8.53GB |
| [Noromaid-20b-v0.1.1.Q3_K.gguf](https://huggingface.co/RichardErkhov/NeverSleep_-_Noromaid-20b-v0.1.1-gguf/blob/main/Noromaid-20b-v0.1.1.Q3_K.gguf) | Q3_K | 9.04GB |
| [Noromaid-20b-v0.1.1.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/NeverSleep_-_Noromaid-20b-v0.1.1-gguf/blob/main/Noromaid-20b-v0.1.1.Q3_K_M.gguf) | Q3_K_M | 9.04GB |
| [Noromaid-20b-v0.1.1.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/NeverSleep_-_Noromaid-20b-v0.1.1-gguf/blob/main/Noromaid-20b-v0.1.1.Q3_K_L.gguf) | Q3_K_L | 9.9GB |
| [Noromaid-20b-v0.1.1.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/NeverSleep_-_Noromaid-20b-v0.1.1-gguf/blob/main/Noromaid-20b-v0.1.1.IQ4_XS.gguf) | IQ4_XS | 10.01GB |
| [Noromaid-20b-v0.1.1.Q4_0.gguf](https://huggingface.co/RichardErkhov/NeverSleep_-_Noromaid-20b-v0.1.1-gguf/blob/main/Noromaid-20b-v0.1.1.Q4_0.gguf) | Q4_0 | 10.52GB |
| [Noromaid-20b-v0.1.1.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/NeverSleep_-_Noromaid-20b-v0.1.1-gguf/blob/main/Noromaid-20b-v0.1.1.IQ4_NL.gguf) | IQ4_NL | 2.7GB |
| [Noromaid-20b-v0.1.1.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/NeverSleep_-_Noromaid-20b-v0.1.1-gguf/blob/main/Noromaid-20b-v0.1.1.Q4_K_S.gguf) | Q4_K_S | 1.74GB |
| [Noromaid-20b-v0.1.1.Q4_K.gguf](https://huggingface.co/RichardErkhov/NeverSleep_-_Noromaid-20b-v0.1.1-gguf/blob/main/Noromaid-20b-v0.1.1.Q4_K.gguf) | Q4_K | 3.33GB |
| [Noromaid-20b-v0.1.1.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/NeverSleep_-_Noromaid-20b-v0.1.1-gguf/blob/main/Noromaid-20b-v0.1.1.Q4_K_M.gguf) | Q4_K_M | 2.91GB |
| [Noromaid-20b-v0.1.1.Q4_1.gguf](https://huggingface.co/RichardErkhov/NeverSleep_-_Noromaid-20b-v0.1.1-gguf/blob/main/Noromaid-20b-v0.1.1.Q4_1.gguf) | Q4_1 | 11.67GB |
| [Noromaid-20b-v0.1.1.Q5_0.gguf](https://huggingface.co/RichardErkhov/NeverSleep_-_Noromaid-20b-v0.1.1-gguf/blob/main/Noromaid-20b-v0.1.1.Q5_0.gguf) | Q5_0 | 12.01GB |
| [Noromaid-20b-v0.1.1.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/NeverSleep_-_Noromaid-20b-v0.1.1-gguf/blob/main/Noromaid-20b-v0.1.1.Q5_K_S.gguf) | Q5_K_S | 5.59GB |
| [Noromaid-20b-v0.1.1.Q5_K.gguf](https://huggingface.co/RichardErkhov/NeverSleep_-_Noromaid-20b-v0.1.1-gguf/blob/main/Noromaid-20b-v0.1.1.Q5_K.gguf) | Q5_K | 3.14GB |
| [Noromaid-20b-v0.1.1.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/NeverSleep_-_Noromaid-20b-v0.1.1-gguf/blob/main/Noromaid-20b-v0.1.1.Q5_K_M.gguf) | Q5_K_M | 13.18GB |
| [Noromaid-20b-v0.1.1.Q5_1.gguf](https://huggingface.co/RichardErkhov/NeverSleep_-_Noromaid-20b-v0.1.1-gguf/blob/main/Noromaid-20b-v0.1.1.Q5_1.gguf) | Q5_1 | 13.98GB |
| [Noromaid-20b-v0.1.1.Q6_K.gguf](https://huggingface.co/RichardErkhov/NeverSleep_-_Noromaid-20b-v0.1.1-gguf/blob/main/Noromaid-20b-v0.1.1.Q6_K.gguf) | Q6_K | 15.28GB |
| [Noromaid-20b-v0.1.1.Q8_0.gguf](https://huggingface.co/RichardErkhov/NeverSleep_-_Noromaid-20b-v0.1.1-gguf/blob/main/Noromaid-20b-v0.1.1.Q8_0.gguf) | Q8_0 | 19.79GB |
Original model description:
---
license: cc-by-nc-4.0
---

---
# Disclaimer:
## This is a ***TEST*** version, don't expect everything to work!!!
You may use our custom **prompting format** (scroll down to download them!), or simple Alpaca. **(Choose which fits best for you!)**
---
# This model is a collab between [IkariDev](https://huggingface.co/IkariDev) and [Undi](https://huggingface.co/Undi95)!
Tired of the same merges every time? Here it is, the Noromaid-20b-v0.1.1 model. Suitable for RP, ERP and general stuff.
[Recommended settings - No settings yet (Please suggest some over in the Community tab!)]
<!-- description start -->
## Description
<!-- [Recommended settings - contributed by localfultonextractor](https://files.catbox.moe/ue0tja.json) -->
This repo contains fp16 files of Noromaid-20b-v0.1.1.
[FP16 - by IkariDev and Undi](https://huggingface.co/NeverSleep/Noromaid-20b-v0.1.1)
<!-- [GGUF - By TheBloke](https://huggingface.co/TheBloke/Athena-v4-GGUF)-->
<!-- [GPTQ - By TheBloke](https://huggingface.co/TheBloke/Athena-v4-GPTQ)-->
<!-- [exl2[8bpw-8h] - by AzureBlack](https://huggingface.co/AzureBlack/Echidna-13b-v0.3-8bpw-8h-exl2)-->
<!-- [AWQ - By TheBloke](https://huggingface.co/TheBloke/Athena-v4-AWQ)-->
<!-- [fp16 - by IkariDev+Undi95](https://huggingface.co/IkariDev/Athena-v4)-->
[GGUF - by IkariDev and Undi](https://huggingface.co/NeverSleep/Noromaid-20b-v0.1.1-GGUF)
<!-- [OLD(GGUF - by IkariDev+Undi95)](https://huggingface.co/IkariDev/Athena-v4-GGUF)-->
## Ratings:
Note: We have permission from all users to upload their ratings; we DON'T screenshot random reviews without asking if we can put them here!
No ratings yet!
If you want your rating to be here, send us a message over on DC and we'll put up a screenshot of it here. Our DC names are "ikaridev" and "undi".
<!-- description end -->
<!-- prompt-template start -->
## Prompt template: Custom format, or Alpaca
### Custom format:
UPDATED!! SillyTavern config files: [Context](https://files.catbox.moe/ifmhai.json), [Instruct](https://files.catbox.moe/ttw1l9.json).
OLD SillyTavern config files: [Context](https://files.catbox.moe/x85uy1.json), [Instruct](https://files.catbox.moe/ttw1l9.json).
### Alpaca:
```
Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
{prompt}
### Response:
```
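As a hedged sketch only (the llama-cpp-python runtime is our assumption; any GGUF front end works), the Alpaca template above can be used like this:

```python
# Minimal sketch, assuming llama-cpp-python and a locally downloaded GGUF file
from llama_cpp import Llama

llm = Llama(model_path="Noromaid-20b-v0.1.1.Q4_K_M.gguf", n_ctx=4096)

prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nIntroduce yourself in one sentence.\n\n"
    "### Response:\n"
)
out = llm(prompt, max_tokens=128, stop=["### Instruction:"])
print(out["choices"][0]["text"])
```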
## Training data used:
- [no_robots dataset](https://huggingface.co/Undi95/Llama2-13B-no_robots-alpaca-lora) lets the model show more human behavior and enhances the output.
- [Aesir Private RP dataset] New data from a never-before-used dataset: fresh data, no LimaRP spam, this is 100% new. Thanks to the [MinervaAI Team](https://huggingface.co/MinervaAI) and, in particular, [Gryphe](https://huggingface.co/Gryphe) for letting us use it!
## Others
Undi: If you want to support me, you can [here](https://ko-fi.com/undiai).
IkariDev: Visit my [retro/neocities style website](https://ikaridevgit.github.io/) please kek
|
NaniDAO/ie-llama-3-8b-bnb-4bit-float-16 | NaniDAO | 2024-06-05T23:15:52Z | 10 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"en",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"base_model:finetune:unsloth/llama-3-8b-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-06-05T23:10:47Z | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
base_model: unsloth/llama-3-8b-bnb-4bit
---
# Uploaded model
- **Developed by:** NaniDAO
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
inafi/finetuned-llama-synthetic-problems-merged | inafi | 2024-06-05T23:07:21Z | 8 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"en",
"base_model:unsloth/llama-3-8b",
"base_model:finetune:unsloth/llama-3-8b",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-06-05T02:43:59Z | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
base_model: unsloth/llama-3-8b
---
# Uploaded model
- **Developed by:** inafi
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
BuroIdentidadDigital/Ine_Frontal_v4 | BuroIdentidadDigital | 2024-06-05T23:02:53Z | 56 | 0 | transformers | [
"transformers",
"safetensors",
"vision-encoder-decoder",
"image-text-to-text",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | image-text-to-text | 2024-06-05T22:13:41Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
tyzhu/lmind_nq_train6000_eval6489_v1_recite_qa_v3_Qwen_Qwen1.5-4B_5e-5_lora2 | tyzhu | 2024-06-05T23:02:43Z | 0 | 0 | peft | [
"peft",
"safetensors",
"generated_from_trainer",
"dataset:tyzhu/lmind_nq_train6000_eval6489_v1_recite_qa_v3",
"base_model:Qwen/Qwen1.5-4B",
"base_model:adapter:Qwen/Qwen1.5-4B",
"license:other",
"model-index",
"region:us"
] | null | 2024-06-05T16:13:47Z | ---
license: other
base_model: Qwen/Qwen1.5-4B
tags:
- generated_from_trainer
datasets:
- tyzhu/lmind_nq_train6000_eval6489_v1_recite_qa_v3
metrics:
- accuracy
model-index:
- name: lmind_nq_train6000_eval6489_v1_recite_qa_v3_Qwen_Qwen1.5-4B_5e-5_lora2
results:
- task:
name: Causal Language Modeling
type: text-generation
dataset:
name: tyzhu/lmind_nq_train6000_eval6489_v1_recite_qa_v3
type: tyzhu/lmind_nq_train6000_eval6489_v1_recite_qa_v3
metrics:
- name: Accuracy
type: accuracy
value: 0.7753632286995515
library_name: peft
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# lmind_nq_train6000_eval6489_v1_recite_qa_v3_Qwen_Qwen1.5-4B_5e-5_lora2
This model is a fine-tuned version of [Qwen/Qwen1.5-4B](https://huggingface.co/Qwen/Qwen1.5-4B) on the tyzhu/lmind_nq_train6000_eval6489_v1_recite_qa_v3 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5804
- Accuracy: 0.7754
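This repository holds a PEFT LoRA adapter rather than full model weights; a minimal loading sketch (assuming the standard PEFT workflow, not verified against this exact checkpoint):

```python
# Hedged sketch: attach the LoRA adapter from this repo to the base model
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("Qwen/Qwen1.5-4B")
model = PeftModel.from_pretrained(
    base,
    "tyzhu/lmind_nq_train6000_eval6489_v1_recite_qa_v3_Qwen_Qwen1.5-4B_5e-5_lora2",
)
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen1.5-4B")
```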
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 2
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- total_eval_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.05
- num_epochs: 20.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 1.8478 | 1.0 | 529 | 1.6699 | 0.6080 |
| 1.7862 | 2.0 | 1058 | 1.6003 | 0.6164 |
| 1.6531 | 3.0 | 1587 | 1.5363 | 0.6251 |
| 1.5515 | 4.0 | 2116 | 1.4608 | 0.6343 |
| 1.4038 | 5.0 | 2645 | 1.3876 | 0.6456 |
| 1.2751 | 6.0 | 3174 | 1.3186 | 0.6553 |
| 1.1475 | 7.0 | 3703 | 1.2514 | 0.6637 |
| 1.0282        | 8.0   | 4232  | 1.1740          | 0.6760   |
| 0.9067 | 9.0 | 4761 | 1.1004 | 0.6870 |
| 0.8202 | 10.0 | 5290 | 1.0408 | 0.6964 |
| 0.7007 | 11.0 | 5819 | 0.9592 | 0.7084 |
| 0.6259 | 12.0 | 6348 | 0.8998 | 0.7191 |
| 0.553 | 13.0 | 6877 | 0.8332 | 0.7295 |
| 0.4948 | 14.0 | 7406 | 0.7799 | 0.7387 |
| 0.4221 | 15.0 | 7935 | 0.7330 | 0.7466 |
| 0.3911 | 16.0 | 8464 | 0.6805 | 0.7551 |
| 0.3377 | 17.0 | 8993 | 0.6475 | 0.7620 |
| 0.3179 | 18.0 | 9522 | 0.6195 | 0.7680 |
| 0.288 | 19.0 | 10051 | 0.5962 | 0.7723 |
| 0.2605 | 20.0 | 10580 | 0.5804 | 0.7754 |
### Framework versions
- PEFT 0.5.0
- Transformers 4.40.2
- Pytorch 2.3.0
- Datasets 2.19.1
- Tokenizers 0.19.1
|
xinchen9/SD_Offense | xinchen9 | 2024-06-05T23:00:54Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2024-06-05T23:00:54Z | ---
title: SD Offense
emoji: 📈
colorFrom: yellow
colorTo: indigo
sdk: gradio
sdk_version: 4.33.0
app_file: app.py
pinned: false
license: apache-2.0
---
Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
|
iskhare/model_out | iskhare | 2024-06-05T22:59:00Z | 1 | 0 | diffusers | [
"diffusers",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"controlnet",
"diffusers-training",
"base_model:stabilityai/stable-diffusion-2-1-base",
"base_model:adapter:stabilityai/stable-diffusion-2-1-base",
"license:creativeml-openrail-m",
"region:us"
] | text-to-image | 2024-06-05T22:49:19Z | ---
license: creativeml-openrail-m
library_name: diffusers
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- controlnet
- diffusers-training
base_model: stabilityai/stable-diffusion-2-1-base
inference: true
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# controlnet-iskhare/model_out
These are controlnet weights trained on stabilityai/stable-diffusion-2-1-base with new type of conditioning.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
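Until the snippet above is filled in by the authors, here is a hedged sketch (the conditioning type for these weights is undocumented, so the conditioning image below is a placeholder assumption):

```python
# Hedged sketch only: conditioning type, image, and prompt are assumptions
import torch
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline
from diffusers.utils import load_image

controlnet = ControlNetModel.from_pretrained(
    "iskhare/model_out", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1-base",
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")

conditioning = load_image("conditioning.png")  # placeholder conditioning image
image = pipe("a photo of a modern living room", image=conditioning).images[0]
image.save("output.png")
```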
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model] |
Peppizza/NLP_A2_alt | Peppizza | 2024-06-05T22:55:35Z | 106 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"deberta-v2",
"multiple-choice",
"generated_from_trainer",
"base_model:OpenAssistant/reward-model-deberta-v3-large-v2",
"base_model:finetune:OpenAssistant/reward-model-deberta-v3-large-v2",
"license:mit",
"endpoints_compatible",
"region:us"
] | multiple-choice | 2024-06-05T22:54:41Z | ---
license: mit
base_model: OpenAssistant/reward-model-deberta-v3-large-v2
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: NLP_A2_alt
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# NLP_A2_alt
This model is a fine-tuned version of [OpenAssistant/reward-model-deberta-v3-large-v2](https://huggingface.co/OpenAssistant/reward-model-deberta-v3-large-v2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3513
- Accuracy: 0.8834
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 6e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 50
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:------:|:----:|:---------------:|:--------:|
| No log | 0.9988 | 419 | 0.4063 | 0.8767 |
| 0.0925 | 1.9976 | 838 | 0.3513 | 0.8834 |
### Framework versions
- Transformers 4.41.2
- Pytorch 2.1.2
- Datasets 2.18.0
- Tokenizers 0.19.1
|
tiiuae/visper | tiiuae | 2024-06-05T22:37:35Z | 0 | 9 | null | [
"en",
"es",
"fr",
"ar",
"zh",
"dataset:tiiuae/visper",
"license:cc-by-nc-2.0",
"region:us"
] | null | 2024-05-26T12:22:21Z | ---
datasets:
- tiiuae/visper
language:
- en
- es
- fr
- ar
- zh
inference: false
license: cc-by-nc-2.0
metrics:
- wer
---
# ViSpeR: Multilingual Audio-Visual Speech Recognition
ViSpeR is a model for visual and audio-visual speech recognition (VSR/AVSR), trained on 5,500 hours of labelled video data.
# Training details:
We use our proposed dataset to train an encoder-decoder model in a fully supervised manner under a multilingual setting. The encoder has 12 layers and the decoder has 6 layers; the hidden size, MLP size and number of attention heads are set to 768, 3072 and 12, respectively. A unigram tokenizer is learned over all languages combined, with a vocabulary size of 21k.
The models are trained for 150 epochs on 64 Nvidia A100 GPUs (40GB) using the AdamW optimizer with a maximum learning rate of 1e-3 and a weight decay of 0.1. A cosine scheduler with a 5-epoch warm-up is used. The maximum batch size per GPU is set to 1800 video frames.
# Performance:
We provide the results of the model on our proposed benchmarks in this table:
| Language | VSR (WER/CER) | AVSR (WER/CER) |
|----------|---------------|----------------|
| French | 29.8 | 5.7 |
| Spanish | 39.4 | 4.4 |
| Arabic | 47.8 | 8.4 |
| Chinese | 51.3 (CER) | 15.4 (CER) |
| English | 49.1 | 8.1 |
# Broader impact:
We hope that ViSpeR will open the door to new research questions and opportunities, and it should only be used for this purpose. There are also potential dual-use concerns that come with releasing ViSpeR (dataset and models), trained on a substantial corpus of multilingual video data: while the technology behind ViSpeR offers significant advances in multimodal speech recognition, it should only be used for research purposes.
## ViSpeR paper coming soon
## Check our VSR related works
```bibtex
@inproceedings{djilali2023lip2vec,
title={Lip2Vec: Efficient and Robust Visual Speech Recognition via Latent-to-Latent Visual to Audio Representation Mapping},
author={Djilali, Yasser Abdelaziz Dahou and Narayan, Sanath and Boussaid, Haithem and Almazrouei, Ebtessam and Debbah, Merouane},
booktitle={Proceedings of the IEEE/CVF International Conference on Computer Vision},
pages={13790--13801},
year={2023}
}
@inproceedings{djilali2024vsr,
title={Do VSR Models Generalize Beyond LRS3?},
author={Djilali, Yasser Abdelaziz Dahou and Narayan, Sanath and LeBihan, Eustache and Boussaid, Haithem and Almazrouei, Ebtesam and Debbah, Merouane},
booktitle={Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision},
pages={6635--6644},
year={2024}
}
``` |
camilomj/youngblooddrums | camilomj | 2024-06-05T22:34:28Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2024-06-05T22:33:32Z | ---
license: apache-2.0
---
|
aerdna/quantize_v0.2 | aerdna | 2024-06-05T22:32:13Z | 81 | 0 | transformers | [
"transformers",
"safetensors",
"gemma",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | 2024-06-05T22:28:07Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
AprendeIngenia/vehicle_plate_detection | AprendeIngenia | 2024-06-05T22:18:04Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2024-06-05T22:03:59Z | ---
license: apache-2.0
---
|
bartowski/Codestral-22B-v0.1-abliterated-v3-GGUF | bartowski | 2024-06-05T22:10:08Z | 2,220 | 10 | transformers | [
"transformers",
"gguf",
"code",
"text-generation",
"license:other",
"endpoints_compatible",
"region:us",
"imatrix"
] | text-generation | 2024-06-05T19:09:00Z | ---
library_name: transformers
license: other
license_name: mnpl
license_link: https://mistral.ai/licences/MNPL-0.1.md
tags:
- code
language:
- code
quantized_by: bartowski
pipeline_tag: text-generation
---
## Llamacpp imatrix Quantizations of Codestral-22B-v0.1-abliterated-v3
Using <a href="https://github.com/ggerganov/llama.cpp/">llama.cpp</a> release <a href="https://github.com/ggerganov/llama.cpp/releases/tag/b3086">b3086</a> for quantization.
Original model: https://huggingface.co/failspy/Codestral-22B-v0.1-abliterated-v3
All quants made using imatrix option with dataset from [here](https://gist.github.com/bartowski1182/eb213dccb3571f863da82e99418f81e8)
## Prompt format
No chat template is specified, so the default is used. This may be incorrect; check the original model card for details.
```
<s>[INST] <<SYS>>
{system_prompt}
<</SYS>>
{prompt}[/INST] </s>
```
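As an illustration (a hedged sketch; the helper function below is ours, and the default template itself may be wrong for this model, as noted above), the placeholders can be filled in with plain string formatting:

```python
# Hypothetical helper that fills in the default template shown above
def build_prompt(system_prompt: str, prompt: str) -> str:
    return (
        f"<s>[INST] <<SYS>>\n{system_prompt}\n<</SYS>>\n\n"
        f"{prompt}[/INST] </s>"
    )

print(build_prompt(
    "You are a helpful coding assistant.",
    "Write a Python function that reverses a string.",
))
```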
## Download a file (not the whole branch) from below:
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [Codestral-22B-v0.1-abliterated-v3-Q8_0.gguf](https://huggingface.co/bartowski/Codestral-22B-v0.1-abliterated-v3-GGUF/blob/main/Codestral-22B-v0.1-abliterated-v3-Q8_0.gguf) | Q8_0 | 23.64GB | Extremely high quality, generally unneeded but max available quant. |
| [Codestral-22B-v0.1-abliterated-v3-Q6_K.gguf](https://huggingface.co/bartowski/Codestral-22B-v0.1-abliterated-v3-GGUF/blob/main/Codestral-22B-v0.1-abliterated-v3-Q6_K.gguf) | Q6_K | 18.25GB | Very high quality, near perfect, *recommended*. |
| [Codestral-22B-v0.1-abliterated-v3-Q5_K_M.gguf](https://huggingface.co/bartowski/Codestral-22B-v0.1-abliterated-v3-GGUF/blob/main/Codestral-22B-v0.1-abliterated-v3-Q5_K_M.gguf) | Q5_K_M | 15.72GB | High quality, *recommended*. |
| [Codestral-22B-v0.1-abliterated-v3-Q5_K_S.gguf](https://huggingface.co/bartowski/Codestral-22B-v0.1-abliterated-v3-GGUF/blob/main/Codestral-22B-v0.1-abliterated-v3-Q5_K_S.gguf) | Q5_K_S | 15.32GB | High quality, *recommended*. |
| [Codestral-22B-v0.1-abliterated-v3-Q4_K_M.gguf](https://huggingface.co/bartowski/Codestral-22B-v0.1-abliterated-v3-GGUF/blob/main/Codestral-22B-v0.1-abliterated-v3-Q4_K_M.gguf) | Q4_K_M | 13.34GB | Good quality, uses about 4.83 bits per weight, *recommended*. |
| [Codestral-22B-v0.1-abliterated-v3-Q4_K_S.gguf](https://huggingface.co/bartowski/Codestral-22B-v0.1-abliterated-v3-GGUF/blob/main/Codestral-22B-v0.1-abliterated-v3-Q4_K_S.gguf) | Q4_K_S | 12.66GB | Slightly lower quality with more space savings, *recommended*. |
| [Codestral-22B-v0.1-abliterated-v3-IQ4_XS.gguf](https://huggingface.co/bartowski/Codestral-22B-v0.1-abliterated-v3-GGUF/blob/main/Codestral-22B-v0.1-abliterated-v3-IQ4_XS.gguf) | IQ4_XS | 11.93GB | Decent quality, smaller than Q4_K_S with similar performance, *recommended*. |
| [Codestral-22B-v0.1-abliterated-v3-Q3_K_L.gguf](https://huggingface.co/bartowski/Codestral-22B-v0.1-abliterated-v3-GGUF/blob/main/Codestral-22B-v0.1-abliterated-v3-Q3_K_L.gguf) | Q3_K_L | 11.73GB | Lower quality but usable, good for low RAM availability. |
| [Codestral-22B-v0.1-abliterated-v3-Q3_K_M.gguf](https://huggingface.co/bartowski/Codestral-22B-v0.1-abliterated-v3-GGUF/blob/main/Codestral-22B-v0.1-abliterated-v3-Q3_K_M.gguf) | Q3_K_M | 10.75GB | Even lower quality. |
| [Codestral-22B-v0.1-abliterated-v3-IQ3_M.gguf](https://huggingface.co/bartowski/Codestral-22B-v0.1-abliterated-v3-GGUF/blob/main/Codestral-22B-v0.1-abliterated-v3-IQ3_M.gguf) | IQ3_M | 10.06GB | Medium-low quality, new method with decent performance comparable to Q3_K_M. |
| [Codestral-22B-v0.1-abliterated-v3-Q3_K_S.gguf](https://huggingface.co/bartowski/Codestral-22B-v0.1-abliterated-v3-GGUF/blob/main/Codestral-22B-v0.1-abliterated-v3-Q3_K_S.gguf) | Q3_K_S | 9.64GB | Low quality, not recommended. |
| [Codestral-22B-v0.1-abliterated-v3-IQ3_XS.gguf](https://huggingface.co/bartowski/Codestral-22B-v0.1-abliterated-v3-GGUF/blob/main/Codestral-22B-v0.1-abliterated-v3-IQ3_XS.gguf) | IQ3_XS | 9.17GB | Lower quality, new method with decent performance, slightly better than Q3_K_S. |
| [Codestral-22B-v0.1-abliterated-v3-IQ3_XXS.gguf](https://huggingface.co/bartowski/Codestral-22B-v0.1-abliterated-v3-GGUF/blob/main/Codestral-22B-v0.1-abliterated-v3-IQ3_XXS.gguf) | IQ3_XXS | 8.59GB | Lower quality, new method with decent performance, comparable to Q3 quants. |
| [Codestral-22B-v0.1-abliterated-v3-Q2_K.gguf](https://huggingface.co/bartowski/Codestral-22B-v0.1-abliterated-v3-GGUF/blob/main/Codestral-22B-v0.1-abliterated-v3-Q2_K.gguf) | Q2_K | 8.27GB | Very low quality but surprisingly usable. |
| [Codestral-22B-v0.1-abliterated-v3-IQ2_M.gguf](https://huggingface.co/bartowski/Codestral-22B-v0.1-abliterated-v3-GGUF/blob/main/Codestral-22B-v0.1-abliterated-v3-IQ2_M.gguf) | IQ2_M | 7.61GB | Very low quality, uses SOTA techniques to also be surprisingly usable. |
| [Codestral-22B-v0.1-abliterated-v3-IQ2_S.gguf](https://huggingface.co/bartowski/Codestral-22B-v0.1-abliterated-v3-GGUF/blob/main/Codestral-22B-v0.1-abliterated-v3-IQ2_S.gguf) | IQ2_S | 7.03GB | Very low quality, uses SOTA techniques to be usable. |
| [Codestral-22B-v0.1-abliterated-v3-IQ2_XS.gguf](https://huggingface.co/bartowski/Codestral-22B-v0.1-abliterated-v3-GGUF/blob/main/Codestral-22B-v0.1-abliterated-v3-IQ2_XS.gguf) | IQ2_XS | 6.64GB | Very low quality, uses SOTA techniques to be usable. |
## Downloading using huggingface-cli
First, make sure you have huggingface-cli installed:
```
pip install -U "huggingface_hub[cli]"
```
Then, you can target the specific file you want:
```
huggingface-cli download bartowski/Codestral-22B-v0.1-abliterated-v3-GGUF --include "Codestral-22B-v0.1-abliterated-v3-Q4_K_M.gguf" --local-dir ./
```
If the model is bigger than 50GB, it will have been split into multiple files. In order to download them all to a local folder, run:
```
huggingface-cli download bartowski/Codestral-22B-v0.1-abliterated-v3-GGUF --include "Codestral-22B-v0.1-abliterated-v3-Q8_0.gguf/*" --local-dir Codestral-22B-v0.1-abliterated-v3-Q8_0
```
You can either specify a new local-dir (Codestral-22B-v0.1-abliterated-v3-Q8_0) or download them all in place (./)
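If you'd rather script the download, the same `huggingface_hub` package exposes a Python API; a minimal sketch using `hf_hub_download`:

```python
from huggingface_hub import hf_hub_download

# Download a single quant file from this repo to the current directory
path = hf_hub_download(
    repo_id="bartowski/Codestral-22B-v0.1-abliterated-v3-GGUF",
    filename="Codestral-22B-v0.1-abliterated-v3-Q4_K_M.gguf",
    local_dir="./",
)
print(path)  # local path of the downloaded GGUF file
```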
## Which file should I choose?
A great write up with charts showing various performances is provided by Artefact2 [here](https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9)
The first thing to figure out is how big a model you can run. To do this, you'll need to figure out how much RAM and/or VRAM you have.
If you want your model running as FAST as possible, you'll want to fit the whole thing on your GPU's VRAM. Aim for a quant with a file size 1-2GB smaller than your GPU's total VRAM.
If you want the absolute maximum quality, add both your system RAM and your GPU's VRAM together, then similarly grab a quant with a file size 1-2GB smaller than that total.
Next, you'll need to decide if you want to use an 'I-quant' or a 'K-quant'.
If you don't want to think too much, grab one of the K-quants. These are in format 'QX_K_X', like Q5_K_M.
If you want to get more into the weeds, you can check out this extremely useful feature chart:
[llama.cpp feature matrix](https://github.com/ggerganov/llama.cpp/wiki/Feature-matrix)
But basically, if you're aiming for below Q4, and you're running cuBLAS (Nvidia) or rocBLAS (AMD), you should look towards the I-quants. These are in format IQX_X, like IQ3_M. These are newer and offer better performance for their size.
These I-quants can also be used on CPU and Apple Metal, but will be slower than their K-quant equivalent, so speed vs performance is a tradeoff you'll have to decide.
The I-quants are *not* compatible with Vulkan, which also supports AMD, so if you have an AMD card double check whether you're using the rocBLAS build or the Vulkan build. At the time of writing, LM Studio has a preview with ROCm support, and other inference engines have specific builds for ROCm.
Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski
|
ksw1/llama-3-8b-sleeper-agent | ksw1 | 2024-06-05T22:09:19Z | 8 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"base_model:unsloth/llama-3-8b-Instruct-bnb-4bit",
"base_model:finetune:unsloth/llama-3-8b-Instruct-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-06-05T22:03:39Z | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
base_model: unsloth/llama-3-8b-Instruct-bnb-4bit
---
# Uploaded model
- **Developed by:** ksw1
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-Instruct-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
maxorax/maxorax-lora | maxorax | 2024-06-05T21:57:55Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-06-05T21:40:06Z | ---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
aerdna/quantize_v0.1 | aerdna | 2024-06-05T21:53:25Z | 79 | 0 | transformers | [
"transformers",
"safetensors",
"gemma",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"8-bit",
"bitsandbytes",
"region:us"
] | text-generation | 2024-06-05T21:51:54Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
antonryoung02/deberta-v3-small-essay-grading-classification | antonryoung02 | 2024-06-05T21:52:23Z | 109 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"deberta-v2",
"text-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-06-05T00:20:36Z | ---
tags:
- generated_from_trainer
model-index:
- name: deberta-v3-small-essay-grading-classification
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# deberta-v3-small-essay-grading-classification
This model was trained from scratch on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8790
- Qwk: 0.7595
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
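For reference, a sketch of how these settings map onto `TrainingArguments` follows; the base checkpoint, label count, and datasets are assumptions, not taken from this card.
```python
# Sketch only: mirrors the hyperparameters listed above. The base checkpoint
# and num_labels are assumptions; the card does not document them.
from transformers import (AutoModelForSequenceClassification, Trainer,
                          TrainingArguments)

model = AutoModelForSequenceClassification.from_pretrained(
    "microsoft/deberta-v3-small", num_labels=6)  # label count assumed

args = TrainingArguments(
    output_dir="deberta-v3-small-essay-grading-classification",
    learning_rate=2e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=3,
)
# trainer = Trainer(model=model, args=args,
#                   train_dataset=..., eval_dataset=...)  # datasets unknown
```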
### Training results
| Training Loss | Epoch | Step | Validation Loss | Qwk |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.0 | 433 | 0.9495 | 0.7132 |
| 1.0266 | 2.0 | 866 | 0.8787 | 0.7561 |
| 1.0266 | 3.0 | 1299 | 0.8790 | 0.7595 |
### Framework versions
- Transformers 4.39.3
- Pytorch 2.1.2
- Datasets 2.18.0
- Tokenizers 0.15.2
|
fofr/consistent-character-weights | fofr | 2024-06-05T21:51:09Z | 0 | 4 | null | [
"onnx",
"region:us"
] | null | 2024-06-05T21:21:21Z | Weights used in the consistent-character ComfyUI workflow:
- https://github.com/fofr/cog-consistent-character
- https://replicate.com/fofr/consistent-character |
tyzhu/lmind_hotpot_train8000_eval7405_v1_doc_qa_Qwen_Qwen1.5-4B_3e-4_lora2 | tyzhu | 2024-06-05T21:47:06Z | 4 | 0 | peft | [
"peft",
"safetensors",
"generated_from_trainer",
"dataset:tyzhu/lmind_hotpot_train8000_eval7405_v1_doc_qa",
"base_model:Qwen/Qwen1.5-4B",
"base_model:adapter:Qwen/Qwen1.5-4B",
"license:other",
"model-index",
"region:us"
] | null | 2024-06-05T08:30:56Z | ---
license: other
base_model: Qwen/Qwen1.5-4B
tags:
- generated_from_trainer
datasets:
- tyzhu/lmind_hotpot_train8000_eval7405_v1_doc_qa
metrics:
- accuracy
model-index:
- name: lmind_hotpot_train8000_eval7405_v1_doc_qa_Qwen_Qwen1.5-4B_3e-4_lora2
results:
- task:
name: Causal Language Modeling
type: text-generation
dataset:
name: tyzhu/lmind_hotpot_train8000_eval7405_v1_doc_qa
type: tyzhu/lmind_hotpot_train8000_eval7405_v1_doc_qa
metrics:
- name: Accuracy
type: accuracy
value: 0.5165079365079365
library_name: peft
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# lmind_hotpot_train8000_eval7405_v1_doc_qa_Qwen_Qwen1.5-4B_3e-4_lora2
This model is a fine-tuned version of [Qwen/Qwen1.5-4B](https://huggingface.co/Qwen/Qwen1.5-4B) on the tyzhu/lmind_hotpot_train8000_eval7405_v1_doc_qa dataset.
It achieves the following results on the evaluation set:
- Loss: 3.1631
- Accuracy: 0.5165
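A minimal sketch for loading this LoRA adapter onto its base model is shown below; the prompt format and generation settings are assumptions, not documented by the card.
```python
# Sketch: attach this PEFT/LoRA adapter to the Qwen1.5-4B base model.
# Prompt format and generation settings are assumptions.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained("Qwen/Qwen1.5-4B", device_map="auto")
model = PeftModel.from_pretrained(
    base,
    "tyzhu/lmind_hotpot_train8000_eval7405_v1_doc_qa_Qwen_Qwen1.5-4B_3e-4_lora2",
)
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen1.5-4B")

prompt = ("Question: Which magazine was started first, "
          "Arthur's Magazine or First for Women?\nAnswer:")
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=32)[0]))
```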
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 1
- eval_batch_size: 2
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- total_eval_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.05
- num_epochs: 20.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-------:|:-----:|:---------------:|:--------:|
| 1.7567 | 0.9998 | 1089 | 2.2626 | 0.5190 |
| 1.5617 | 1.9995 | 2178 | 2.2436 | 0.5246 |
| 1.343 | 2.9993 | 3267 | 2.3385 | 0.5237 |
| 1.1682 | 4.0 | 4357 | 2.4995 | 0.5215 |
| 1.0141 | 4.9998 | 5446 | 2.6397 | 0.5182 |
| 0.9023 | 5.9995 | 6535 | 2.7929 | 0.5170 |
| 0.8008 | 6.9993 | 7624 | 2.8233 | 0.5162 |
| 0.7377 | 8.0 | 8714 | 2.8833 | 0.5180 |
| 0.6732 | 8.9998 | 9803 | 2.9550 | 0.5165 |
| 0.6225 | 9.9995 | 10892 | 2.9767 | 0.5165 |
| 0.5858 | 10.9993 | 11981 | 3.0117 | 0.5165 |
| 0.5618 | 12.0 | 13071 | 3.0317 | 0.5170 |
| 0.5464 | 12.9998 | 14160 | 3.0686 | 0.5167 |
| 0.5243 | 13.9995 | 15249 | 3.0829 | 0.5149 |
| 0.5066 | 14.9993 | 16338 | 3.0958 | 0.5127 |
| 0.4947 | 16.0 | 17428 | 3.0921 | 0.5153 |
| 0.4841 | 16.9998 | 18517 | 3.1170 | 0.5162 |
| 0.4727 | 17.9995 | 19606 | 3.1375 | 0.5172 |
| 0.4634 | 18.9993 | 20695 | 3.1323 | 0.5150 |
| 0.4468 | 19.9954 | 21780 | 3.1631 | 0.5165 |
### Framework versions
- PEFT 0.5.0
- Transformers 4.40.2
- Pytorch 2.3.0
- Datasets 2.19.1
- Tokenizers 0.19.1
|
tyzhu/lmind_hotpot_train8000_eval7405_v1_recite_qa_Qwen_Qwen1.5-4B_3e-4_lora2 | tyzhu | 2024-06-05T21:45:56Z | 5 | 0 | peft | [
"peft",
"safetensors",
"generated_from_trainer",
"dataset:tyzhu/lmind_hotpot_train8000_eval7405_v1_recite_qa",
"base_model:Qwen/Qwen1.5-4B",
"base_model:adapter:Qwen/Qwen1.5-4B",
"license:other",
"model-index",
"region:us"
] | null | 2024-06-05T08:30:45Z | ---
license: other
base_model: Qwen/Qwen1.5-4B
tags:
- generated_from_trainer
datasets:
- tyzhu/lmind_hotpot_train8000_eval7405_v1_recite_qa
metrics:
- accuracy
model-index:
- name: lmind_hotpot_train8000_eval7405_v1_recite_qa_Qwen_Qwen1.5-4B_3e-4_lora2
results:
- task:
name: Causal Language Modeling
type: text-generation
dataset:
name: tyzhu/lmind_hotpot_train8000_eval7405_v1_recite_qa
type: tyzhu/lmind_hotpot_train8000_eval7405_v1_recite_qa
metrics:
- name: Accuracy
type: accuracy
value: 0.7763580786026201
library_name: peft
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# lmind_hotpot_train8000_eval7405_v1_recite_qa_Qwen_Qwen1.5-4B_3e-4_lora2
This model is a fine-tuned version of [Qwen/Qwen1.5-4B](https://huggingface.co/Qwen/Qwen1.5-4B) on the tyzhu/lmind_hotpot_train8000_eval7405_v1_recite_qa dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4756
- Accuracy: 0.7764
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 1
- eval_batch_size: 2
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- total_eval_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.05
- num_epochs: 20.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-------:|:-----:|:---------------:|:--------:|
| 1.5356 | 0.9998 | 1089 | 1.3711 | 0.6864 |
| 1.3102 | 1.9995 | 2178 | 1.1753 | 0.7020 |
| 1.0549 | 2.9993 | 3267 | 1.0095 | 0.7164 |
| 0.8461 | 4.0 | 4357 | 0.8722 | 0.7297 |
| 0.701 | 4.9998 | 5446 | 0.7641 | 0.7406 |
| 0.5977 | 5.9995 | 6535 | 0.6797 | 0.7490 |
| 0.5238 | 6.9993 | 7624 | 0.6209 | 0.7559 |
| 0.4742 | 8.0 | 8714 | 0.5837 | 0.7600 |
| 0.438 | 8.9998 | 9803 | 0.5532 | 0.7638 |
| 0.402 | 9.9995 | 10892 | 0.5331 | 0.7664 |
| 0.383 | 10.9993 | 11981 | 0.5156 | 0.7685 |
| 0.3627 | 12.0 | 13071 | 0.5070 | 0.7702 |
| 0.3521 | 12.9998 | 14160 | 0.4984 | 0.7714 |
| 0.344 | 13.9995 | 15249 | 0.4925 | 0.7722 |
| 0.3341 | 14.9993 | 16338 | 0.4847 | 0.7736 |
| 0.3275 | 16.0 | 17428 | 0.4808 | 0.7748 |
| 0.3223 | 16.9998 | 18517 | 0.4776 | 0.7751 |
| 0.3155 | 17.9995 | 19606 | 0.4804 | 0.7758 |
| 0.3033 | 18.9993 | 20695 | 0.4787 | 0.7761 |
| 0.2989 | 19.9954 | 21780 | 0.4756 | 0.7764 |
### Framework versions
- PEFT 0.5.0
- Transformers 4.40.2
- Pytorch 2.3.0
- Datasets 2.19.1
- Tokenizers 0.19.1
|
aerdna/testv0.1 | aerdna | 2024-06-05T21:45:40Z | 10 | 0 | transformers | [
"transformers",
"safetensors",
"gemma",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-06-05T21:37:22Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
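Until the authors complete this section, a minimal sketch is given below; it assumes the checkpoint works with the standard text-generation pipeline (the repository tags list `gemma` and `text-generation`).
```python
# Sketch: usage is unspecified in the card; this assumes a standard
# text-generation checkpoint, per the repo's "gemma" and
# "text-generation" tags.
from transformers import pipeline

generator = pipeline("text-generation", model="aerdna/testv0.1")
print(generator("Hello, my name is", max_new_tokens=20)[0]["generated_text"])
```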
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
stablediffusionapi/hardcore-hentai | stablediffusionapi | 2024-06-05T21:44:33Z | 31 | 5 | diffusers | [
"diffusers",
"modelslab.com",
"stable-diffusion-api",
"text-to-image",
"ultra-realistic",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2024-04-21T04:01:40Z | ---
license: creativeml-openrail-m
tags:
- modelslab.com
- stable-diffusion-api
- text-to-image
- ultra-realistic
pinned: true
---
# Hardcore Hentai API Inference

## Get API Key
Get an API key from [ModelsLab API](http://modelslab.com); no payment needed.
Replace the key in the code below and change **model_id** to "hardcore-hentai".
Coding in PHP/Node/Java etc.? Have a look at the docs for more code examples: [View docs](https://modelslab.com/docs)
Try model for free: [Generate Images](https://modelslab.com/models/hardcore-hentai)
Model link: [View model](https://modelslab.com/models/hardcore-hentai)
View all models: [View Models](https://modelslab.com/models)
```python
import requests
import json

url = "https://modelslab.com/api/v6/images/text2img"

payload = json.dumps({
    "key": "your_api_key",
    "model_id": "hardcore-hentai",
    "prompt": "ultra realistic close up portrait ((beautiful pale cyberpunk female with heavy black eyeliner)), blue eyes, shaved side haircut, hyper detail, cinematic lighting, magic neon, dark red city, Canon EOS R3, nikon, f/1.4, ISO 200, 1/160s, 8K, RAW, unedited, symmetrical balance, in-frame, 8K",
    "negative_prompt": "painting, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, deformed, ugly, blurry, bad anatomy, bad proportions, extra limbs, cloned face, skinny, glitchy, double torso, extra arms, extra hands, mangled fingers, missing lips, ugly face, distorted face, extra legs, anime",
    "width": "512",
    "height": "512",
    "samples": "1",
    "num_inference_steps": "30",
    "safety_checker": "no",
    "enhance_prompt": "yes",
    "seed": None,
    "guidance_scale": 7.5,
    "multi_lingual": "no",
    "panorama": "no",
    "self_attention": "no",
    "upscale": "no",
    "embeddings": "embeddings_model_id",
    "lora": "lora_model_id",
    "webhook": None,
    "track_id": None
})

headers = {
    'Content-Type': 'application/json'
}

response = requests.request("POST", url, headers=headers, data=payload)
print(response.text)
```
> Use this coupon code to get 25% off **DMGG0RBN** |
ClaudioItaly/EvolutionDark | ClaudioItaly | 2024-06-05T21:38:56Z | 5 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"base_model:ClaudioItaly/TopEvolutionWiz",
"base_model:merge:ClaudioItaly/TopEvolutionWiz",
"base_model:TeeZee/DarkSapling-7B-v1.0",
"base_model:merge:TeeZee/DarkSapling-7B-v1.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-06-05T16:51:33Z | ---
base_model:
- TeeZee/DarkSapling-7B-v1.0
- ClaudioItaly/TopEvolutionWiz
library_name: transformers
tags:
- mergekit
- merge
---
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
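For intuition, a toy sketch of the interpolation itself follows; this is not mergekit's actual implementation, just the standard SLERP formula applied to two weight tensors.
```python
# Toy sketch of SLERP between two weight tensors; not mergekit's code.
import torch

def slerp(t: float, v0: torch.Tensor, v1: torch.Tensor,
          eps: float = 1e-8) -> torch.Tensor:
    a, b = v0.flatten().float(), v1.flatten().float()
    cos_omega = torch.clamp(torch.dot(a, b) / (a.norm() * b.norm() + eps),
                            -1.0, 1.0)
    omega = torch.acos(cos_omega)
    if omega.abs() < eps:  # nearly parallel: plain lerp is fine
        return (1 - t) * v0 + t * v1
    s = torch.sin(omega)
    out = (torch.sin((1 - t) * omega) / s) * a + (torch.sin(t * omega) / s) * b
    return out.reshape(v0.shape).to(v0.dtype)
```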
### Models Merged
The following models were included in the merge:
* [TeeZee/DarkSapling-7B-v1.0](https://huggingface.co/TeeZee/DarkSapling-7B-v1.0)
* [ClaudioItaly/TopEvolutionWiz](https://huggingface.co/ClaudioItaly/TopEvolutionWiz)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: ClaudioItaly/TopEvolutionWiz
- model: TeeZee/DarkSapling-7B-v1.0
merge_method: slerp
base_model: ClaudioItaly/TopEvolutionWiz
dtype: bfloat16
parameters:
t: [0, 0.5, 1, 0.5, 0] # V-shaped curve: TopEvolutionWiz for input & output, DarkSapling in the middle layers
```
|
lmstudio-community/Codestral-22B-v0.1-GGUF | lmstudio-community | 2024-06-05T21:35:08Z | 26,873 | 19 | null | [
"gguf",
"code",
"text-generation",
"base_model:mistralai/Codestral-22B-v0.1",
"base_model:quantized:mistralai/Codestral-22B-v0.1",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-29T19:01:50Z | ---
license_link: https://mistral.ai/licences/MNPL-0.1.md
tags:
- code
language:
- code
license: apache-2.0
quantized_by: bartowski
pipeline_tag: text-generation
lm_studio:
param_count: 22b
use_case: coding
release_date: 29-05-2024
model_creator: mistralai
prompt_template: Mistral Instruct
system_prompt: none
base_model: mistral
original_repo: mistralai/Codestral-22B-v0.1
base_model: mistralai/Codestral-22B-v0.1
---
## 💫 Community Model> Codestral 22B v0.1 by Mistral AI
*👾 [LM Studio](https://lmstudio.ai) Community models highlights program. Highlighting new & noteworthy models by the community. Join the conversation on [Discord](https://discord.gg/aPQfnNkxGC)*.
**Model creator:** [Mistral AI](https://huggingface.co/mistralai)<br>
**Original model**: [Codestral-22B-v0.1](https://huggingface.co/mistralai/Codestral-22B-v0.1)<br>
**GGUF quantization:** provided by [bartowski](https://huggingface.co/bartowski) based on `llama.cpp` release [b3024](https://github.com/ggerganov/llama.cpp/releases/tag/b3024)<br>
## Model Summary:
Codestral is a brand-new coding model released by the Mistral team. This 22B model is the first of its size and the first specialized model the team has ever released.<br>
Supporting both instruction prompting and the popular Fill in the Middle (FIM) tokens for predictions, this model should be great for all your coding tasks.
## Prompt template:
Choose the `Mistral Instruct` preset in your LM Studio.
Under the hood, the model will see a prompt that's formatted like so:
```
<s>[INST] {prompt} [/INST]</s>
```
This model also supports the following FIM tokens:
`[PREFIX]`, `[SUFFIX]`
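For example, a FIM prompt can be assembled from these tokens as in the sketch below; the suffix-before-prefix ordering is an assumption based on Mistral's reference tooling, so verify it against the tokenizer config before relying on it.
```python
# Sketch: building a fill-in-the-middle prompt for Codestral. The
# suffix-first ordering is an assumption based on Mistral's reference
# tooling; verify against the tokenizer config.
prefix = "def fibonacci(n):\n    a, b = 0, 1\n    "
suffix = "\n    return a"
fim_prompt = f"<s>[SUFFIX]{suffix}[PREFIX]{prefix}"
# The model is expected to generate the code between prefix and suffix.
```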
## Technical Details
Codestral 22B v0.1 is trained on a dataset of 80+ programming languages, including Python, Java, C++, JavaScript, and Bash.
It supports both instruction querying and Fill in the Middle (FIM) querying.
More details and benchmark information can be found in their blog post: https://mistral.ai/news/codestral/
## Special thanks
🙏 Special thanks to [Georgi Gerganov](https://github.com/ggerganov) and the whole team working on [llama.cpp](https://github.com/ggerganov/llama.cpp/)
🙏 Special thanks to [Kalomaze](https://github.com/kalomaze) and [Dampf](https://github.com/Dampfinchen) for their work on the dataset (linked [here](https://gist.github.com/bartowski1182/eb213dccb3571f863da82e99418f81e8)) that was used for calculating the imatrix for all sizes.
## Disclaimers
LM Studio is not the creator, originator, or owner of any Model featured in the Community Model Program. Each Community Model is created and provided by third parties. LM Studio does not endorse, support, represent or guarantee the completeness, truthfulness, accuracy, or reliability of any Community Model. You understand that Community Models can produce content that might be offensive, harmful, inaccurate or otherwise inappropriate, or deceptive. Each Community Model is the sole responsibility of the person or entity who originated such Model. LM Studio may not monitor or control the Community Models and cannot, and does not, take responsibility for any such Model. LM Studio disclaims all warranties or guarantees about the accuracy, reliability or benefits of the Community Models. LM Studio further disclaims any warranty that the Community Model will meet your requirements, be secure, uninterrupted or available at any time or location, or error-free, viruses-free, or that any errors will be corrected, or otherwise. You will be solely responsible for any damage resulting from your use of or access to the Community Models, your downloading of any Community Model, or use of any other Community Model provided by or through LM Studio. |
Luispark2025/Luis_Park | Luispark2025 | 2024-06-05T21:32:14Z | 1 | 0 | peft | [
"peft",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"dataset:generator",
"base_model:mistralai/Mistral-7B-v0.1",
"base_model:adapter:mistralai/Mistral-7B-v0.1",
"license:apache-2.0",
"region:us"
] | null | 2024-06-05T20:06:44Z | ---
license: apache-2.0
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: mistralai/Mistral-7B-v0.1
datasets:
- generator
model-index:
- name: Luis_Park
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Luis_Park
This model is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1463
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 15
- total_train_batch_size: 15
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- num_epochs: 2
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.2094 | 0.8955 | 4 | 1.1494 |
| 1.0733 | 1.7910 | 8 | 1.1463 |
### Framework versions
- PEFT 0.11.1
- Transformers 4.41.2
- Pytorch 2.3.1
- Datasets 2.19.2
- Tokenizers 0.19.1 |
Ashmal/MBZUAI-ORYX-new | Ashmal | 2024-06-05T21:31:25Z | 2,904 | 0 | transformers | [
"transformers",
"safetensors",
"cohere",
"text-generation",
"conversational",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-06-04T08:57:58Z | ---
library_name: transformers
license: apache-2.0
---
This is an Arabic test model built at MBZUAI. More details of the project will be announced later, along with the release. This model card exists only to test the model's capabilities on Arabic benchmarks. |
bartowski/L3-TheSpice-8b-v0.8.3-GGUF | bartowski | 2024-06-05T21:24:34Z | 254 | 9 | null | [
"gguf",
"text-generation",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-04-25T15:46:16Z | ---
license: cc-by-nc-4.0
quantized_by: bartowski
pipeline_tag: text-generation
---
## Llamacpp imatrix Quantizations of L3-TheSpice-8b-v0.8.3
Using <a href="https://github.com/ggerganov/llama.cpp/">llama.cpp</a> release <a href="https://github.com/ggerganov/llama.cpp/releases/tag/b2717">b2717</a> for quantization.
Original model: https://huggingface.co/cgato/L3-TheSpice-8b-v0.8.3
All quants made using imatrix option with dataset provided by Kalomaze [here](https://github.com/ggerganov/llama.cpp/discussions/5263#discussioncomment-8395384)
## Prompt format
```
{System Prompt}
Username: {Input}
BotName: {Response}
Username: {Input}
BotName: {Response}
```
## Download a file (not the whole branch) from below:
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [L3-TheSpice-8b-v0.8.3-Q8_0.gguf](https://huggingface.co/bartowski/L3-TheSpice-8b-v0.8.3-GGUF/blob/main/L3-TheSpice-8b-v0.8.3-Q8_0.gguf) | Q8_0 | 8.54GB | Extremely high quality, generally unneeded but max available quant. |
| [L3-TheSpice-8b-v0.8.3-Q6_K.gguf](https://huggingface.co/bartowski/L3-TheSpice-8b-v0.8.3-GGUF/blob/main/L3-TheSpice-8b-v0.8.3-Q6_K.gguf) | Q6_K | 6.59GB | Very high quality, near perfect, *recommended*. |
| [L3-TheSpice-8b-v0.8.3-Q5_K_M.gguf](https://huggingface.co/bartowski/L3-TheSpice-8b-v0.8.3-GGUF/blob/main/L3-TheSpice-8b-v0.8.3-Q5_K_M.gguf) | Q5_K_M | 5.73GB | High quality, *recommended*. |
| [L3-TheSpice-8b-v0.8.3-Q5_K_S.gguf](https://huggingface.co/bartowski/L3-TheSpice-8b-v0.8.3-GGUF/blob/main/L3-TheSpice-8b-v0.8.3-Q5_K_S.gguf) | Q5_K_S | 5.59GB | High quality, *recommended*. |
| [L3-TheSpice-8b-v0.8.3-Q4_K_M.gguf](https://huggingface.co/bartowski/L3-TheSpice-8b-v0.8.3-GGUF/blob/main/L3-TheSpice-8b-v0.8.3-Q4_K_M.gguf) | Q4_K_M | 4.92GB | Good quality, uses about 4.83 bits per weight, *recommended*. |
| [L3-TheSpice-8b-v0.8.3-Q4_K_S.gguf](https://huggingface.co/bartowski/L3-TheSpice-8b-v0.8.3-GGUF/blob/main/L3-TheSpice-8b-v0.8.3-Q4_K_S.gguf) | Q4_K_S | 4.69GB | Slightly lower quality with more space savings, *recommended*. |
| [L3-TheSpice-8b-v0.8.3-IQ4_NL.gguf](https://huggingface.co/bartowski/L3-TheSpice-8b-v0.8.3-GGUF/blob/main/L3-TheSpice-8b-v0.8.3-IQ4_NL.gguf) | IQ4_NL | 4.67GB | Decent quality, slightly smaller than Q4_K_S with similar performance, *recommended*. |
| [L3-TheSpice-8b-v0.8.3-IQ4_XS.gguf](https://huggingface.co/bartowski/L3-TheSpice-8b-v0.8.3-GGUF/blob/main/L3-TheSpice-8b-v0.8.3-IQ4_XS.gguf) | IQ4_XS | 4.44GB | Decent quality, smaller than Q4_K_S with similar performance, *recommended*. |
| [L3-TheSpice-8b-v0.8.3-Q3_K_L.gguf](https://huggingface.co/bartowski/L3-TheSpice-8b-v0.8.3-GGUF/blob/main/L3-TheSpice-8b-v0.8.3-Q3_K_L.gguf) | Q3_K_L | 4.32GB | Lower quality but usable, good for low RAM availability. |
| [L3-TheSpice-8b-v0.8.3-Q3_K_M.gguf](https://huggingface.co/bartowski/L3-TheSpice-8b-v0.8.3-GGUF/blob/main/L3-TheSpice-8b-v0.8.3-Q3_K_M.gguf) | Q3_K_M | 4.01GB | Even lower quality. |
| [L3-TheSpice-8b-v0.8.3-IQ3_M.gguf](https://huggingface.co/bartowski/L3-TheSpice-8b-v0.8.3-GGUF/blob/main/L3-TheSpice-8b-v0.8.3-IQ3_M.gguf) | IQ3_M | 3.78GB | Medium-low quality, new method with decent performance comparable to Q3_K_M. |
| [L3-TheSpice-8b-v0.8.3-IQ3_S.gguf](https://huggingface.co/bartowski/L3-TheSpice-8b-v0.8.3-GGUF/blob/main/L3-TheSpice-8b-v0.8.3-IQ3_S.gguf) | IQ3_S | 3.68GB | Lower quality, new method with decent performance, recommended over Q3_K_S quant, same size with better performance. |
| [L3-TheSpice-8b-v0.8.3-Q3_K_S.gguf](https://huggingface.co/bartowski/L3-TheSpice-8b-v0.8.3-GGUF/blob/main/L3-TheSpice-8b-v0.8.3-Q3_K_S.gguf) | Q3_K_S | 3.66GB | Low quality, not recommended. |
| [L3-TheSpice-8b-v0.8.3-IQ3_XS.gguf](https://huggingface.co/bartowski/L3-TheSpice-8b-v0.8.3-GGUF/blob/main/L3-TheSpice-8b-v0.8.3-IQ3_XS.gguf) | IQ3_XS | 3.51GB | Lower quality, new method with decent performance, slightly better than Q3_K_S. |
| [L3-TheSpice-8b-v0.8.3-IQ3_XXS.gguf](https://huggingface.co/bartowski/L3-TheSpice-8b-v0.8.3-GGUF/blob/main/L3-TheSpice-8b-v0.8.3-IQ3_XXS.gguf) | IQ3_XXS | 3.27GB | Lower quality, new method with decent performance, comparable to Q3 quants. |
| [L3-TheSpice-8b-v0.8.3-Q2_K.gguf](https://huggingface.co/bartowski/L3-TheSpice-8b-v0.8.3-GGUF/blob/main/L3-TheSpice-8b-v0.8.3-Q2_K.gguf) | Q2_K | 3.17GB | Very low quality but surprisingly usable. |
| [L3-TheSpice-8b-v0.8.3-IQ2_M.gguf](https://huggingface.co/bartowski/L3-TheSpice-8b-v0.8.3-GGUF/blob/main/L3-TheSpice-8b-v0.8.3-IQ2_M.gguf) | IQ2_M | 2.94GB | Very low quality, uses SOTA techniques to also be surprisingly usable. |
| [L3-TheSpice-8b-v0.8.3-IQ2_S.gguf](https://huggingface.co/bartowski/L3-TheSpice-8b-v0.8.3-GGUF/blob/main/L3-TheSpice-8b-v0.8.3-IQ2_S.gguf) | IQ2_S | 2.75GB | Very low quality, uses SOTA techniques to be usable. |
| [L3-TheSpice-8b-v0.8.3-IQ2_XS.gguf](https://huggingface.co/bartowski/L3-TheSpice-8b-v0.8.3-GGUF/blob/main/L3-TheSpice-8b-v0.8.3-IQ2_XS.gguf) | IQ2_XS | 2.60GB | Very low quality, uses SOTA techniques to be usable. |
| [L3-TheSpice-8b-v0.8.3-IQ2_XXS.gguf](https://huggingface.co/bartowski/L3-TheSpice-8b-v0.8.3-GGUF/blob/main/L3-TheSpice-8b-v0.8.3-IQ2_XXS.gguf) | IQ2_XXS | 2.39GB | Lower quality, uses SOTA techniques to be usable. |
| [L3-TheSpice-8b-v0.8.3-IQ1_M.gguf](https://huggingface.co/bartowski/L3-TheSpice-8b-v0.8.3-GGUF/blob/main/L3-TheSpice-8b-v0.8.3-IQ1_M.gguf) | IQ1_M | 2.16GB | Extremely low quality, *not* recommended. |
| [L3-TheSpice-8b-v0.8.3-IQ1_S.gguf](https://huggingface.co/bartowski/L3-TheSpice-8b-v0.8.3-GGUF/blob/main/L3-TheSpice-8b-v0.8.3-IQ1_S.gguf) | IQ1_S | 2.01GB | Extremely low quality, *not* recommended. |
## Which file should I choose?
A great write-up with charts showing various performance comparisons is provided by Artefact2 [here](https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9)
The first thing to figure out is how big a model you can run. To do this, you'll need to figure out how much RAM and/or VRAM you have.
If you want your model running as FAST as possible, you'll want to fit the whole thing on your GPU's VRAM. Aim for a quant with a file size 1-2GB smaller than your GPU's total VRAM.
If you want the absolute maximum quality, add both your system RAM and your GPU's VRAM together, then similarly grab a quant with a file size 1-2GB Smaller than that total.
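As a rough illustration of that sizing rule, the sketch below picks the largest quant that fits a memory budget (file sizes taken from the table above; the 1-2GB headroom is fixed at 2GB here).
```python
# Rough sketch of the sizing rule above: choose the largest quant whose
# file fits in available memory minus ~2 GB of headroom.
QUANT_SIZES_GB = {"Q8_0": 8.54, "Q6_K": 6.59, "Q5_K_M": 5.73, "Q4_K_M": 4.92,
                  "Q3_K_M": 4.01, "IQ3_M": 3.78, "Q2_K": 3.17, "IQ2_M": 2.94}

def pick_quant(memory_gb: float, headroom_gb: float = 2.0):
    budget = memory_gb - headroom_gb
    fitting = {name: size for name, size in QUANT_SIZES_GB.items()
               if size <= budget}
    return max(fitting, key=fitting.get) if fitting else None

print(pick_quant(8.0))  # an 8 GB GPU -> "Q5_K_M"
```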
Next, you'll need to decide if you want to use an 'I-quant' or a 'K-quant'.
If you don't want to think too much, grab one of the K-quants. These are in format 'QX_K_X', like Q5_K_M.
If you want to get more into the weeds, you can check out this extremely useful feature chart:
[llama.cpp feature matrix](https://github.com/ggerganov/llama.cpp/wiki/Feature-matrix)
But basically, if you're aiming for below Q4, and you're running cuBLAS (Nvidia) or rocBLAS (AMD), you should look towards the I-quants. These are in format IQX_X, like IQ3_M. These are newer and offer better performance for their size.
These I-quants can also be used on CPU and Apple Metal, but will be slower than their K-quant equivalent, so speed vs performance is a tradeoff you'll have to decide.
The I-quants are *not* compatible with Vulkan (another backend option for AMD cards), so if you have an AMD card double-check whether you're using the rocBLAS build or the Vulkan build. At the time of writing, LM Studio has a preview with ROCm support, and other inference engines have specific builds for ROCm.
Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski
|
mradermacher/Llama-3-8B-Instruct-MoE-4-GGUF | mradermacher | 2024-06-05T21:24:19Z | 12 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:VictorDCh/Llama-3-8B-Instruct-MoE-4",
"base_model:quantized:VictorDCh/Llama-3-8B-Instruct-MoE-4",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-06-05T18:32:46Z | ---
base_model: VictorDCh/Llama-3-8B-Instruct-MoE-4
language:
- en
library_name: transformers
quantized_by: mradermacher
tags: []
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/VictorDCh/Llama-3-8B-Instruct-MoE-4
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
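For the split Q8_0 file below, concatenation is a plain byte-level join; a minimal sketch follows (filenames assumed from this repository's part-naming scheme).
```python
# Sketch: join split GGUF parts byte-for-byte, streaming to keep memory low.
# Filenames are assumed from this repository's part naming scheme.
import shutil
from pathlib import Path

parts = sorted(Path(".").glob("Llama-3-8B-Instruct-MoE-4.Q8_0.gguf.part*"))
with open("Llama-3-8B-Instruct-MoE-4.Q8_0.gguf", "wb") as out:
    for part in parts:
        with open(part, "rb") as src:
            shutil.copyfileobj(src, out)
```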
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Instruct-MoE-4-GGUF/resolve/main/Llama-3-8B-Instruct-MoE-4.Q2_K.gguf) | Q2_K | 17.9 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Instruct-MoE-4-GGUF/resolve/main/Llama-3-8B-Instruct-MoE-4.IQ3_XS.gguf) | IQ3_XS | 20.0 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Instruct-MoE-4-GGUF/resolve/main/Llama-3-8B-Instruct-MoE-4.IQ3_S.gguf) | IQ3_S | 21.0 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Instruct-MoE-4-GGUF/resolve/main/Llama-3-8B-Instruct-MoE-4.Q3_K_S.gguf) | Q3_K_S | 21.0 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Instruct-MoE-4-GGUF/resolve/main/Llama-3-8B-Instruct-MoE-4.IQ3_M.gguf) | IQ3_M | 22.0 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Instruct-MoE-4-GGUF/resolve/main/Llama-3-8B-Instruct-MoE-4.Q3_K_M.gguf) | Q3_K_M | 23.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Instruct-MoE-4-GGUF/resolve/main/Llama-3-8B-Instruct-MoE-4.Q3_K_L.gguf) | Q3_K_L | 24.8 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Instruct-MoE-4-GGUF/resolve/main/Llama-3-8B-Instruct-MoE-4.IQ4_XS.gguf) | IQ4_XS | 26.0 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Instruct-MoE-4-GGUF/resolve/main/Llama-3-8B-Instruct-MoE-4.Q4_K_S.gguf) | Q4_K_S | 27.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Instruct-MoE-4-GGUF/resolve/main/Llama-3-8B-Instruct-MoE-4.Q4_K_M.gguf) | Q4_K_M | 29.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Instruct-MoE-4-GGUF/resolve/main/Llama-3-8B-Instruct-MoE-4.Q5_K_S.gguf) | Q5_K_S | 32.9 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Instruct-MoE-4-GGUF/resolve/main/Llama-3-8B-Instruct-MoE-4.Q5_K_M.gguf) | Q5_K_M | 33.9 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Instruct-MoE-4-GGUF/resolve/main/Llama-3-8B-Instruct-MoE-4.Q6_K.gguf) | Q6_K | 39.1 | very good quality |
| [PART 1](https://huggingface.co/mradermacher/Llama-3-8B-Instruct-MoE-4-GGUF/resolve/main/Llama-3-8B-Instruct-MoE-4.Q8_0.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Llama-3-8B-Instruct-MoE-4-GGUF/resolve/main/Llama-3-8B-Instruct-MoE-4.Q8_0.gguf.part2of2) | Q8_0 | 50.6 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
yzhuang/Qwen1.5-7B-Chat-v0.1_fictional_Chinese_v1 | yzhuang | 2024-06-05T21:20:11Z | 156 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"qwen2",
"text-generation",
"trl",
"sft",
"generated_from_trainer",
"conversational",
"dataset:generator",
"base_model:Qwen/Qwen1.5-7B-Chat",
"base_model:finetune:Qwen/Qwen1.5-7B-Chat",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-06-05T16:05:22Z | ---
license: other
base_model: Qwen/Qwen1.5-7B-Chat
tags:
- trl
- sft
- generated_from_trainer
datasets:
- generator
model-index:
- name: Qwen1.5-7B-Chat-v0.1_fictional_Chinese_v1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Qwen1.5-7B-Chat-v0.1_fictional_Chinese_v1
This model is a fine-tuned version of [Qwen/Qwen1.5-7B-Chat](https://huggingface.co/Qwen/Qwen1.5-7B-Chat) on the generator dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 36
### Training results
### Framework versions
- Transformers 4.39.3
- Pytorch 2.2.2
- Datasets 2.18.0
- Tokenizers 0.15.2
|
Shansqw/Shans | Shansqw | 2024-06-05T21:18:59Z | 0 | 0 | fairseq | [
"fairseq",
"code",
"translation",
"aa",
"dataset:Replete-AI/code_bagel",
"license:apache-2.0",
"region:us"
] | translation | 2024-06-05T21:17:14Z | ---
license: apache-2.0
datasets:
- Replete-AI/code_bagel
language:
- aa
metrics:
- character
library_name: fairseq
pipeline_tag: translation
tags:
- code
--- |