modelId (string) | author (string) | last_modified (timestamp[us, tz=UTC]) | downloads (int64) | likes (int64) | library_name (string) | tags (sequence) | pipeline_tag (string) | createdAt (timestamp[us, tz=UTC]) | card (string)
---|---|---|---|---|---|---|---|---|---|
damgomz/ft_bs16_lr7_base_x2 | damgomz | 2024-05-17T18:11:56Z | 113 | 0 | transformers | [
"transformers",
"safetensors",
"albert",
"text-classification",
"fill-mask",
"en",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2024-05-16T15:20:38Z | ---
language: en
tags:
- fill-mask
kwargs:
timestamp: '2024-05-17T20:11:35'
project_name: ft_bs16_lr7_base_x2_emissions_tracker
run_id: 6bae0093-717f-46c3-bcdd-8f19fcccb3a8
duration: 36881.51029586792
emissions: 0.0226794872442836
emissions_rate: 6.149283763692448e-07
cpu_power: 42.5
gpu_power: 0.0
ram_power: 4.500000000000001
cpu_energy: 0.4354060074516468
gpu_energy: 0
ram_energy: 0.0461015453451873
energy_consumed: 0.4815075527968334
country_name: Switzerland
country_iso_code: CHE
region: .nan
cloud_provider: .nan
cloud_region: .nan
os: Linux-5.14.0-70.30.1.el9_0.x86_64-x86_64-with-glibc2.34
python_version: 3.10.4
codecarbon_version: 2.3.4
cpu_count: 2
cpu_model: Intel(R) Xeon(R) Platinum 8360Y CPU @ 2.40GHz
gpu_count: .nan
gpu_model: .nan
longitude: .nan
latitude: .nan
ram_total_size: 12
tracking_mode: machine
on_cloud: N
pue: 1.0
---
## Environmental Impact (CODE CARBON DEFAULT)
| Metric | Value |
|--------------------------|---------------------------------|
| Duration (in seconds) | 36881.51029586792 |
| Emissions (Co2eq in kg) | 0.0226794872442836 |
| CPU power (W) | 42.5 |
| GPU power (W) | [No GPU] |
| RAM power (W) | 4.500000000000001 |
| CPU energy (kWh) | 0.4354060074516468 |
| GPU energy (kWh) | [No GPU] |
| RAM energy (kWh) | 0.0461015453451873 |
| Consumed energy (kWh) | 0.4815075527968334 |
| Country name | Switzerland |
| Cloud provider | nan |
| Cloud region | nan |
| CPU count | 2 |
| CPU model | Intel(R) Xeon(R) Platinum 8360Y CPU @ 2.40GHz |
| GPU count | nan |
| GPU model | nan |
## Environmental Impact (for one core)
| Metric | Value |
|--------------------------|---------------------------------|
| CPU energy (kWh) | 0.07099690731954574 |
| Emissions (Co2eq in kg) | 0.014445258199214933 |
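The figures above were collected with CodeCarbon (see the metadata block). As a hedged sketch, not the original training script, metrics like these are typically produced by wrapping the training run in an `EmissionsTracker`:
```python
# Illustrative sketch only: how CodeCarbon-style metrics such as those above are
# typically collected. The project name is reused from the metadata block;
# fine_tune() is a placeholder for the actual ALBERT fine-tuning loop.
from codecarbon import EmissionsTracker

def fine_tune():
    pass  # placeholder for the real training loop

tracker = EmissionsTracker(project_name="ft_bs16_lr7_base_x2_emissions_tracker")
tracker.start()
try:
    fine_tune()
finally:
    emissions_kg = tracker.stop()  # emissions in kg CO2eq, as reported above

print(f"Estimated emissions: {emissions_kg} kg CO2eq")
```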
## Note
17 May 2024
## My Config
| Config | Value |
|--------------------------|-----------------|
| checkpoint | albert-base-v2 |
| model_name | ft_bs16_lr7_base_x2 |
| sequence_length | 400 |
| num_epoch | 15 |
| learning_rate | 5e-07 |
| batch_size | 16 |
| weight_decay | 0.0 |
| warm_up_prop | 0.0 |
| drop_out_prob | 0.1 |
| packing_length | 100 |
| train_test_split | 0.2 |
| num_steps | 81450 |
## Training and Testing steps
| Epoch | Train Loss | Test Loss | Accuracy | Recall |
|---|---|---|---|---|
| 0 | 0.588079 | 0.517896 | 0.740059 | 0.863497 |
| 1 | 0.477829 | 0.460693 | 0.781296 | 0.848160 |
| 2 | 0.429019 | 0.428960 | 0.805596 | 0.881902 |
| 3 | 0.391332 | 0.404802 | 0.807806 | 0.832822 |
| 4 | 0.368315 | 0.398500 | 0.819588 | 0.863497 |
| 5 | 0.350588 | 0.389129 | 0.821060 | 0.863497 |
| 6 | 0.335994 | 0.382235 | 0.822533 | 0.874233 |
| 7 | 0.324425 | 0.373543 | 0.834315 | 0.838957 |
| 8 | 0.310990 | 0.373090 | 0.831370 | 0.854294 |
| 9 | 0.300017 | 0.368493 | 0.834315 | 0.849693 |
| 10 | 0.286613 | 0.377919 | 0.832842 | 0.872699 |
| 11 | 0.275215 | 0.370514 | 0.836524 | 0.831288 |
| 12 | 0.260308 | 0.383199 | 0.834315 | 0.872699 |
| 13 | 0.249657 | 0.378506 | 0.837997 | 0.842025 |
| 14 | 0.234344 | 0.385054 | 0.834315 | 0.835890 |
|
apwic/sentiment-lora-r4a0d0.05-0 | apwic | 2024-05-17T18:10:15Z | 0 | 0 | null | [
"tensorboard",
"generated_from_trainer",
"id",
"base_model:indolem/indobert-base-uncased",
"base_model:finetune:indolem/indobert-base-uncased",
"license:mit",
"region:us"
] | null | 2024-05-17T17:36:35Z | ---
language:
- id
license: mit
base_model: indolem/indobert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
- precision
- recall
- f1
model-index:
- name: sentiment-lora-r4a0d0.05-0
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sentiment-lora-r4a0d0.05-0
This model is a fine-tuned version of [indolem/indobert-base-uncased](https://huggingface.co/indolem/indobert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3486
- Accuracy: 0.8396
- Precision: 0.8055
- Recall: 0.8115
- F1: 0.8084
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 30
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20.0
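For reference, a hedged sketch of how these settings map onto 🤗 `TrainingArguments`; the `output_dir` value is illustrative and not taken from the original training script:
```python
# Illustrative mapping of the hyperparameters listed above onto TrainingArguments.
# Adam betas/epsilon and the linear scheduler match the library defaults used here.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="sentiment-lora-r4a0d0.05-0",  # assumed output directory
    learning_rate=5e-5,
    per_device_train_batch_size=30,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=20.0,
)
```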
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|
| 0.5619 | 1.0 | 122 | 0.5127 | 0.7168 | 0.6536 | 0.6446 | 0.6484 |
| 0.5059 | 2.0 | 244 | 0.4967 | 0.7343 | 0.6956 | 0.7220 | 0.7022 |
| 0.4822 | 3.0 | 366 | 0.4506 | 0.7469 | 0.7006 | 0.7159 | 0.7065 |
| 0.4402 | 4.0 | 488 | 0.3984 | 0.8195 | 0.7876 | 0.7623 | 0.7728 |
| 0.4068 | 5.0 | 610 | 0.4136 | 0.7870 | 0.7473 | 0.7718 | 0.7561 |
| 0.3791 | 6.0 | 732 | 0.3771 | 0.8321 | 0.7972 | 0.7987 | 0.7979 |
| 0.3635 | 7.0 | 854 | 0.3916 | 0.8195 | 0.7822 | 0.8048 | 0.7912 |
| 0.3433 | 8.0 | 976 | 0.3799 | 0.8296 | 0.7934 | 0.8019 | 0.7974 |
| 0.3379 | 9.0 | 1098 | 0.3714 | 0.8271 | 0.7903 | 0.8026 | 0.7959 |
| 0.3296 | 10.0 | 1220 | 0.3635 | 0.8371 | 0.8032 | 0.8047 | 0.8040 |
| 0.3105 | 11.0 | 1342 | 0.3652 | 0.8296 | 0.7933 | 0.8044 | 0.7984 |
| 0.3024 | 12.0 | 1464 | 0.3702 | 0.8346 | 0.7988 | 0.8180 | 0.8069 |
| 0.309 | 13.0 | 1586 | 0.3512 | 0.8371 | 0.8032 | 0.8047 | 0.8040 |
| 0.3021 | 14.0 | 1708 | 0.3505 | 0.8396 | 0.8060 | 0.8090 | 0.8075 |
| 0.2903 | 15.0 | 1830 | 0.3553 | 0.8421 | 0.8077 | 0.8208 | 0.8136 |
| 0.2834 | 16.0 | 1952 | 0.3530 | 0.8396 | 0.8046 | 0.8215 | 0.8119 |
| 0.2811 | 17.0 | 2074 | 0.3471 | 0.8446 | 0.8120 | 0.8151 | 0.8135 |
| 0.288 | 18.0 | 2196 | 0.3505 | 0.8446 | 0.8107 | 0.8226 | 0.8161 |
| 0.277 | 19.0 | 2318 | 0.3479 | 0.8396 | 0.8055 | 0.8115 | 0.8084 |
| 0.2775 | 20.0 | 2440 | 0.3486 | 0.8396 | 0.8055 | 0.8115 | 0.8084 |
### Framework versions
- Transformers 4.39.3
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.15.2
|
hideax/llama2_stage2_iter40000_chatbot_arena_orpo_3 | hideax | 2024-05-17T18:03:09Z | 5 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-17T17:54:40Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
RichardErkhov/stanford-oval_-_Llama-2-7b-WikiChat-fused-8bits | RichardErkhov | 2024-05-17T18:02:29Z | 77 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:2305.14292",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"8-bit",
"bitsandbytes",
"region:us"
] | text-generation | 2024-05-17T17:55:38Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Llama-2-7b-WikiChat-fused - bnb 8bits
- Model creator: https://huggingface.co/stanford-oval/
- Original model: https://huggingface.co/stanford-oval/Llama-2-7b-WikiChat-fused/
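As an illustrative sketch (not part of the original card), a pre-quantized bitsandbytes checkpoint like this one can normally be loaded directly with 🤗 Transformers, assuming `bitsandbytes` and `accelerate` are installed:
```python
# Hedged sketch: loading the 8-bit bitsandbytes checkpoint. The quantization
# config is stored with the checkpoint, so no extra quantization flags are needed.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "RichardErkhov/stanford-oval_-_Llama-2-7b-WikiChat-fused-8bits"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, device_map="auto")

inputs = tokenizer("Hello, ", return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```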
Original model description:
---
license: llama2
language:
- en
---
This model is a fine-tuned LLaMA-2 (7B) model. Please accept the [LLaMA-2 license agreement](https://ai.meta.com/resources/models-and-libraries/llama-downloads/) before downloading this model.
Refer to the following for more information:
GitHub repository: https://github.com/stanford-oval/WikiChat
Paper: https://aclanthology.org/2023.findings-emnlp.157/
<p align="center">
<img src="./images/wikipedia.png" width="100px" alt="Wikipedia" />
<h1 align="center">
<b>WikiChat</b>
<br>
<a href="https://arxiv.org/abs/2305.14292">
<img src="https://img.shields.io/badge/cs.CL-2305.14292-b31b1b" alt="arXiv">
</a>
<a href="https://github.com/stanford-oval/WikiChat/stargazers">
<img src="https://img.shields.io/github/stars/stanford-oval/WikiChat?style=social" alt="Github Stars">
</a>
</h1>
</p>
<p align="center">
Stopping the Hallucination of Large Language Model Chatbots by Few-Shot Grounding on Wikipedia
</p>
<p align="center">
Online demo:
<a href="https://wikichat.genie.stanford.edu" target="_blank">
https://wikichat.genie.stanford.edu
</a>
<br>
</p>
<p align="center">
<img src="./images/pipeline.svg" width="700px" alt="WikiChat Pipeline" />
</p>
|
RichardErkhov/lcw99_-_zephykor-ko-beta-7b-chang-4bits | RichardErkhov | 2024-05-17T18:01:39Z | 77 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | 2024-05-17T17:56:50Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
zephykor-ko-beta-7b-chang - bnb 4bits
- Model creator: https://huggingface.co/lcw99/
- Original model: https://huggingface.co/lcw99/zephykor-ko-beta-7b-chang/
Original model description:
---
language:
- ko
- en
---
* Under construction, be careful.
|
minindu-liya99/Reinforce-PixelCopter | minindu-liya99 | 2024-05-17T18:00:23Z | 0 | 0 | null | [
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] | reinforcement-learning | 2024-04-16T18:22:37Z | ---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-PixelCopter
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 33.60 +/- 20.80
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
PaulR79/phi2_finetuned_synthetic | PaulR79 | 2024-05-17T17:59:56Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-05-17T17:59:51Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
theglassofwater/mistral_pretraining_1 | theglassofwater | 2024-05-17T17:57:45Z | 209 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-17T17:57:34Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
emilykang/medmcqa_question_generation-physiology_lora | emilykang | 2024-05-17T17:47:44Z | 6 | 0 | peft | [
"peft",
"safetensors",
"llama",
"trl",
"sft",
"generated_from_trainer",
"dataset:generator",
"base_model:TinyLlama/TinyLlama-1.1B-Chat-v1.0",
"base_model:adapter:TinyLlama/TinyLlama-1.1B-Chat-v1.0",
"license:apache-2.0",
"region:us"
] | null | 2024-05-17T17:15:38Z | ---
license: apache-2.0
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: TinyLlama/TinyLlama-1.1B-Chat-v1.0
datasets:
- generator
model-index:
- name: medmcqa_question_generation-physiology_lora
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# medmcqa_question_generation-physiology_lora
This model is a fine-tuned version of [TinyLlama/TinyLlama-1.1B-Chat-v1.0](https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v1.0) on the generator dataset.
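Because this repository holds a PEFT (LoRA) adapter rather than a full checkpoint, a hedged loading sketch would attach it to its TinyLlama base model (assuming `peft` and `transformers` are installed):
```python
# Illustrative sketch: loading the LoRA adapter on top of its TinyLlama base model.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "TinyLlama/TinyLlama-1.1B-Chat-v1.0"
tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(base_id)
model = PeftModel.from_pretrained(base_model, "emilykang/medmcqa_question_generation-physiology_lora")
```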
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- num_epochs: 10
### Training results
### Framework versions
- PEFT 0.10.0
- Transformers 4.40.1
- Pytorch 2.2.1+cu118
- Datasets 2.19.0
- Tokenizers 0.19.1 |
MAli-Farooq/ChildDiffusion | MAli-Farooq | 2024-05-17T17:43:36Z | 0 | 0 | null | [
"license:mit",
"region:us"
] | null | 2024-05-17T17:25:24Z | ---
license: mit
---
ChildDiffusion model for rendering high-quality child facial data with smart transformations.
Model tuned and uploaded by Muhammad Ali Farooq, PhD |
giannisan/dolphin-einstein-llama3-dare-ties | giannisan | 2024-05-17T17:42:20Z | 4 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"conversational",
"arxiv:2311.03099",
"arxiv:2306.01708",
"base_model:Weyaxi/Einstein-v6.1-Llama3-8B",
"base_model:merge:Weyaxi/Einstein-v6.1-Llama3-8B",
"base_model:cognitivecomputations/dolphin-2.9-llama3-8b",
"base_model:merge:cognitivecomputations/dolphin-2.9-llama3-8b",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-17T17:23:17Z | ---
base_model:
- cognitivecomputations/dolphin-2.9-llama3-8b
- Weyaxi/Einstein-v6.1-Llama3-8B
library_name: transformers
tags:
- mergekit
- merge
---
# dolphin-einstein-llama3
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [DARE](https://arxiv.org/abs/2311.03099) [TIES](https://arxiv.org/abs/2306.01708) merge method using [cognitivecomputations/dolphin-2.9-llama3-8b](https://huggingface.co/cognitivecomputations/dolphin-2.9-llama3-8b) as a base.
### Models Merged
The following models were included in the merge:
* [Weyaxi/Einstein-v6.1-Llama3-8B](https://huggingface.co/Weyaxi/Einstein-v6.1-Llama3-8B)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: cognitivecomputations/dolphin-2.9-llama3-8b
- model: Weyaxi/Einstein-v6.1-Llama3-8B
parameters:
weight: 0.5
density: 0.8
merge_method: dare_ties
base_model: cognitivecomputations/dolphin-2.9-llama3-8b
parameters:
int8_mask: true
dtype: bfloat16
```
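For reference (not stated in the original card), a configuration of this shape is typically passed to mergekit's `mergekit-yaml` command, e.g. `mergekit-yaml config.yml ./merged-model`, and the resulting directory can then be loaded like any other 🤗 Transformers causal language model.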
|
Gajebald/my-autotrain-llm | Gajebald | 2024-05-17T17:38:51Z | 3 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"autotrain",
"text-generation-inference",
"peft",
"conversational",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-17T09:25:11Z | ---
tags:
- autotrain
- text-generation-inference
- text-generation
- peft
library_name: transformers
widget:
- messages:
- role: user
content: What is your favorite condiment?
license: other
pipeline_tag: text-generation
---
# Model Trained Using AutoTrain
This model was trained using AutoTrain. For more information, please visit [AutoTrain](https://hf.co/docs/autotrain).
# Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_path = "Gajebald/my-autotrain-llm"
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
model_path,
device_map="auto",
torch_dtype='auto'
).eval()
# Prompt content: "hi"
messages = [
{"role": "user", "content": "hi"}
]
input_ids = tokenizer.apply_chat_template(conversation=messages, tokenize=True, add_generation_prompt=True, return_tensors='pt')
output_ids = model.generate(input_ids.to('cuda'))
response = tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True)
# Model response: "Hello! How can I assist you today?"
print(response)
``` |
apwic/sentiment-lora-r2a2d0.15-0 | apwic | 2024-05-17T17:36:18Z | 0 | 0 | null | [
"tensorboard",
"generated_from_trainer",
"id",
"base_model:indolem/indobert-base-uncased",
"base_model:finetune:indolem/indobert-base-uncased",
"license:mit",
"region:us"
] | null | 2024-05-17T17:03:05Z | ---
language:
- id
license: mit
base_model: indolem/indobert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
- precision
- recall
- f1
model-index:
- name: sentiment-lora-r2a2d0.15-0
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sentiment-lora-r2a2d0.15-0
This model is a fine-tuned version of [indolem/indobert-base-uncased](https://huggingface.co/indolem/indobert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3672
- Accuracy: 0.8321
- Precision: 0.7961
- Recall: 0.8087
- F1: 0.8018
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 30
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|
| 0.563 | 1.0 | 122 | 0.5138 | 0.7243 | 0.6636 | 0.6549 | 0.6586 |
| 0.509 | 2.0 | 244 | 0.5057 | 0.7168 | 0.6763 | 0.6996 | 0.6820 |
| 0.4924 | 3.0 | 366 | 0.4708 | 0.7393 | 0.6877 | 0.6931 | 0.6901 |
| 0.468 | 4.0 | 488 | 0.4379 | 0.7845 | 0.7412 | 0.7200 | 0.7286 |
| 0.4495 | 5.0 | 610 | 0.4466 | 0.7594 | 0.7233 | 0.7548 | 0.7313 |
| 0.4334 | 6.0 | 732 | 0.4041 | 0.8271 | 0.7927 | 0.7851 | 0.7887 |
| 0.415 | 7.0 | 854 | 0.4057 | 0.7995 | 0.7590 | 0.7756 | 0.7660 |
| 0.3974 | 8.0 | 976 | 0.3852 | 0.8321 | 0.7982 | 0.7937 | 0.7959 |
| 0.3849 | 9.0 | 1098 | 0.3829 | 0.8246 | 0.7880 | 0.7909 | 0.7894 |
| 0.3771 | 10.0 | 1220 | 0.3786 | 0.8396 | 0.8065 | 0.8065 | 0.8065 |
| 0.3633 | 11.0 | 1342 | 0.3843 | 0.8296 | 0.7931 | 0.8069 | 0.7993 |
| 0.3591 | 12.0 | 1464 | 0.3833 | 0.8296 | 0.7931 | 0.8069 | 0.7993 |
| 0.354 | 13.0 | 1586 | 0.3705 | 0.8396 | 0.8065 | 0.8065 | 0.8065 |
| 0.3451 | 14.0 | 1708 | 0.3709 | 0.8371 | 0.8028 | 0.8072 | 0.8049 |
| 0.3403 | 15.0 | 1830 | 0.3733 | 0.8321 | 0.7960 | 0.8112 | 0.8027 |
| 0.3282 | 16.0 | 1952 | 0.3715 | 0.8346 | 0.7988 | 0.8155 | 0.8061 |
| 0.3286 | 17.0 | 2074 | 0.3664 | 0.8321 | 0.7965 | 0.8037 | 0.7999 |
| 0.3348 | 18.0 | 2196 | 0.3670 | 0.8271 | 0.7904 | 0.8001 | 0.7949 |
| 0.325 | 19.0 | 2318 | 0.3669 | 0.8321 | 0.7961 | 0.8087 | 0.8018 |
| 0.3266 | 20.0 | 2440 | 0.3672 | 0.8321 | 0.7961 | 0.8087 | 0.8018 |
### Framework versions
- Transformers 4.39.3
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.15.2
|
cgus/MiniChat-2-3B-iMat-GGUF | cgus | 2024-05-17T17:35:49Z | 26 | 0 | transformers | [
"transformers",
"gguf",
"en",
"zh",
"arxiv:2311.07052",
"arxiv:2310.05914",
"arxiv:2305.18290",
"base_model:GeneZC/MiniChat-2-3B",
"base_model:quantized:GeneZC/MiniChat-2-3B",
"license:apache-2.0",
"region:us",
"imatrix"
] | null | 2024-05-15T17:59:54Z | ---
license: apache-2.0
language:
- en
- zh
inference: false
library_name: transformers
base_model: GeneZC/MiniChat-2-3B
widget:
- text: "<s> [|User|] Hi 👋 </s>[|Assistant|]"
---
## MiniChat-2-3B-iMat-GGUF
Original model: [MiniChat-2-3B](https://huggingface.co/GeneZC/MiniChat-2-3B)
Model creator: [GeneZC](https://huggingface.co/GeneZC)
## Quantization notes
Quantized with llama.cpp b2885. All quants were made with an iMatrix file based on the default Exllamav2 dataset.
## How to run
GGUF quants are supported by a wide variety of software, such as llama.cpp, ollama, Text Generation WebUI, LM Studio, Jan AI, and many others.
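As an illustrative sketch (not part of the original card), one of these quants could also be run through the llama-cpp-python bindings; the GGUF file name below is an assumption and depends on which quant you download:
```python
# Hedged sketch using the llama-cpp-python bindings (pip install llama-cpp-python).
# The GGUF file name is an assumption -- substitute whichever quant you downloaded.
from llama_cpp import Llama

llm = Llama(model_path="MiniChat-2-3B.Q4_K_M.gguf")

# MiniChat-2-3B uses the [|User|]/[|Assistant|] prompt format shown in the widget above.
prompt = "<s> [|User|] Hi 👋 </s>[|Assistant|]"
output = llm(prompt, max_tokens=128, temperature=0.7)
print(output["choices"][0]["text"])
```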
# Original model card:
## MiniChat-2-3B
📑 [arXiv](https://arxiv.org/abs/2311.07052) | 👻 [GitHub](https://github.com/GeneZC/MiniMA) | 🤗 [HuggingFace-MiniMA](https://huggingface.co/GeneZC/MiniMA-3B) | 🤗 [HuggingFace-MiniChat](https://huggingface.co/GeneZC/MiniChat-3B) | 🤖 [ModelScope-MiniMA](https://modelscope.cn/models/GeneZC/MiniMA-3B) | 🤖 [ModelScope-MiniChat](https://modelscope.cn/models/GeneZC/MiniChat-3B) | 🤗 [HuggingFace-MiniChat-1.5](https://huggingface.co/GeneZC/MiniChat-1.5-3B) | 🤗 [HuggingFace-MiniMA-2](https://huggingface.co/GeneZC/MiniMA-2-3B) | 🤗 [HuggingFace-MiniChat-2](https://huggingface.co/GeneZC/MiniChat-2-3B)
🆕 **Updates from MiniChat-3B**:
- better base model MiniMA-2-3B;
- better data mixture;
- use of [NEFTune](https://arxiv.org/abs/2310.05914);
- use of [DPO](https://arxiv.org/abs/2305.18290).
❗ Must comply with LICENSE of LLaMA2 since it is derived from LLaMA2.
A language model continued from MiniMA-3B and finetuned on both instruction and preference data.
Surpassing Vicuna-7B and approximating LLaMA-2-Chat-7B on MT-Bench.
<img src="https://huggingface.co/GeneZC/MiniChat-2-3B/resolve/main/teaser_b.jpg" alt="teaser_b" width="687" />
**Standard Benchmarks**
|Method|TFLOPs|MMLU (5-shot)|CEval (5-shot)|DROP (3-shot)|HumanEval (0-shot)|BBH (3-shot)|GSM8K (8-shot)|
|--|--|--|--|--|--|--|--|
|Mamba-2.8B|4.6E9|25.58|24.74|15.72|7.32|29.37|3.49|
|ShearedLLaMA-2.7B|0.8E9|26.97|22.88|19.98|4.88|30.48|3.56|
|BTLM-3B|11.3E9|27.20|26.00|17.84|10.98|30.87|4.55|
|StableLM-3B|72.0E9|44.75|31.05|22.35|15.85|32.59|10.99|
|Qwen-1.8B|23.8E9|44.05|54.75|12.97|14.02|30.80|22.97|
|Phi-2-2.8B|159.9E9|56.74|34.03|30.74|46.95|44.13|55.42|
|LLaMA-2-7B|84.0E9|46.00|34.40|31.57|12.80|32.02|14.10|
||
|MiniMA-3B|4.0E9|28.51|28.23|22.50|10.98|31.61|8.11|
|MiniChat-3B|4.0E9|38.40|36.48|22.58|18.29|31.36|29.72|
|MiniMA-2-3B|13.4E9|40.14|44.65|23.10|14.63|31.43|8.87|
|MiniChat-2-3B|13.4E9|46.17|43.91|30.26|22.56|34.95|38.13|
**Instruction-following Benchmarks**
|Method|AlpacaEval|MT-Bench|MT-Bench-ZH|
|--|--|--|--|
|GPT-4|95.28|9.18|8.96|
|Zephyr-7B-Beta|90.60|7.34|6.27<sup>#</sup>|
|Vicuna-7B|76.84|6.17|5.22<sup>#</sup>|
|LLaMA-2-Chat-7B|71.37|6.27|5.43<sup>#</sup>|
|Qwen-Chat-7B|-|-|6.24|
|Phi-2-DPO|81.37|-|1.59<sup>#</sup><sup>$</sup>|
|StableLM-Zephyr-3B|76.00|6.64|4.31<sup>#</sup>|
|Rocket-3B|79.75|6.56|4.07<sup>#</sup>|
|Qwen-Chat-1.8B|-|-|5.65|
||
|MiniChat-3B|48.82|-|-|
|MiniChat-2-3B|77.30|6.23|6.04|
<sup>#</sup> specialized mainly for English.
<sup>$</sup> finetuned without multi-turn instruction data.
The following is an example code snippet to use MiniChat-2-3B:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from conversation import get_default_conv_template
# MiniChat
tokenizer = AutoTokenizer.from_pretrained("GeneZC/MiniChat-2-3B", use_fast=False)
# GPU.
model = AutoModelForCausalLM.from_pretrained("GeneZC/MiniChat-2-3B", use_cache=True, device_map="auto", torch_dtype=torch.float16).eval()
# CPU.
# model = AutoModelForCausalLM.from_pretrained("GeneZC/MiniChat-2-3B", use_cache=True, device_map="cpu", torch_dtype=torch.float16).eval()
conv = get_default_conv_template("minichat")
question = "Implement a program to find the common elements in two arrays without using any extra data structures."
conv.append_message(conv.roles[0], question)
conv.append_message(conv.roles[1], None)
prompt = conv.get_prompt()
input_ids = tokenizer([prompt]).input_ids
output_ids = model.generate(
torch.as_tensor(input_ids).cuda(),
do_sample=True,
temperature=0.7,
max_new_tokens=1024,
)
output_ids = output_ids[0][len(input_ids[0]):]
output = tokenizer.decode(output_ids, skip_special_tokens=True).strip()
# output: "def common_elements(arr1, arr2):\n if len(arr1) == 0:\n return []\n if len(arr2) == 0:\n return arr1\n\n common_elements = []\n for element in arr1:\n if element in arr2:\n common_elements.append(element)\n\n return common_elements"
# Multiturn conversation could be realized by continuously appending questions to `conv`.
```
## Bibtex
```bibtex
@article{zhang2023law,
title={Towards the Law of Capacity Gap in Distilling Language Models},
author={Zhang, Chen and Song, Dawei and Ye, Zheyu and Gao, Yan},
year={2023},
url={https://arxiv.org/abs/2311.07052}
}
``` |
nc33/llama3-8b-4bit_orpo_law | nc33 | 2024-05-17T17:32:28Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-05-16T09:29:40Z | ---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
wovik253/0001softrealistic_v187xxx | wovik253 | 2024-05-17T17:31:02Z | 0 | 0 | null | [
"realistic, nsfw, girl, portreit",
"text-to-image",
"arxiv:1910.09700",
"region:us"
] | text-to-image | 2024-05-17T16:36:28Z | ---
pipeline_tag: text-to-image
tags:
- realistic, nsfw, girl, portreit
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
presencesw/phobert-large-snli_entailment-triplet | presencesw | 2024-05-17T17:25:54Z | 181 | 0 | transformers | [
"transformers",
"safetensors",
"roberta",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-05-13T09:53:55Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Kha37lid/autotrain-5pwz2-t4v28 | Kha37lid | 2024-05-17T17:21:55Z | 13 | 0 | diffusers | [
"diffusers",
"autotrain",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"text-to-image",
"lora",
"template:sd-lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] | text-to-image | 2024-05-17T17:21:52Z |
---
tags:
- autotrain
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- text-to-image
- diffusers
- lora
- template:sd-lora
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: <khalid>
license: openrail++
---
# AutoTrain SDXL LoRA DreamBooth - Kha37lid/autotrain-5pwz2-t4v28
<Gallery />
## Model description
These are Kha37lid/autotrain-5pwz2-t4v28 LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0.
The weights were trained using [DreamBooth](https://dreambooth.github.io/).
LoRA for the text encoder was enabled: False.
Special VAE used for training: None.
## Trigger words
You should use <khalid> to trigger the image generation.
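As a rough usage sketch (not part of the original AutoTrain output), the weights can be loaded on top of the SDXL base model with diffusers; the prompt below is only an illustrative assumption:

```python
import torch
from diffusers import DiffusionPipeline

# Load the SDXL base model, then attach the DreamBooth LoRA weights from this repository
pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16,
).to("cuda")
pipe.load_lora_weights("Kha37lid/autotrain-5pwz2-t4v28")

# The trigger word <khalid> must appear in the prompt
image = pipe("a portrait photo of <khalid>, studio lighting", num_inference_steps=30).images[0]
image.save("khalid.png")
```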
## Download model
Weights for this model are available in Safetensors format.
[Download](Kha37lid/autotrain-5pwz2-t4v28/tree/main) them in the Files & versions tab.
|
DokHee/JSLLMV4 | DokHee | 2024-05-17T17:20:31Z | 77 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"trl",
"sft",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | 2024-05-17T17:04:23Z | ---
library_name: transformers
tags:
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
tomasonjo/text2cypher-demo-16bit | tomasonjo | 2024-05-17T17:18:08Z | 301 | 23 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"dataset:tomasonjo/text2cypher-gpt4o-clean",
"base_model:unsloth/llama-3-8b-Instruct",
"base_model:finetune:unsloth/llama-3-8b-Instruct",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-17T13:36:05Z | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
base_model: unsloth/llama-3-8b-Instruct
datasets:
- tomasonjo/text2cypher-gpt4o-clean
---
# Uploaded model
- **Developed by:** tomasonjo
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-Instruct
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
**For more information visit [this link](https://github.com/neo4j-labs/text2cypher/tree/main/finetuning/unsloth-llama3#using-chat-prompt-template)**
## Example usage:
Install dependencies. Check the [Unsloth documentation](https://github.com/unslothai/unsloth) for installation instructions specific to other environments.
````python
%%capture
# Installs Unsloth, Xformers (Flash Attention) and all other packages!
!pip install "unsloth[colab-new] @ git+https://github.com/unslothai/unsloth.git"
!pip install --no-deps "xformers<0.0.26" trl peft accelerate bitsandbytes
````
Then you can load the model and use it for inference:
```python
from unsloth import FastLanguageModel
from unsloth.chat_templates import get_chat_template

# Load the fine-tuned model and tokenizer (the original snippet assumed `model` and
# `tokenizer` were already defined; this loading step is added so the example runs)
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name = "tomasonjo/text2cypher-demo-16bit",
    max_seq_length = 2048,
    dtype = None,
    load_in_4bit = False,
)

tokenizer = get_chat_template(
    tokenizer,
    chat_template = "llama-3",
    map_eos_token = True,
)

FastLanguageModel.for_inference(model) # Enable native 2x faster inference
schema = """Node properties: - **Question** - `favorites`: INTEGER Example: "0" - `answered`: BOOLEAN - `text`: STRING Example: "### This is: Bug ### Specifications OS: Win10" - `link`: STRING Example: "https://stackoverflow.com/questions/62224586/playg" - `createdAt`: DATE_TIME Min: 2020-06-05T16:57:19Z, Max: 2020-06-05T21:49:16Z - `title`: STRING Example: "Playground is not loading with apollo-server-lambd" - `id`: INTEGER Min: 62220505, Max: 62224586 - `upVotes`: INTEGER Example: "0" - `score`: INTEGER Example: "-1" - `downVotes`: INTEGER Example: "1" - **Tag** - `name`: STRING Example: "aws-lambda" - **User** - `image`: STRING Example: "https://lh3.googleusercontent.com/-NcFYSuXU0nk/AAA" - `link`: STRING Example: "https://stackoverflow.com/users/10251021/alexandre" - `id`: INTEGER Min: 751, Max: 13681006 - `reputation`: INTEGER Min: 1, Max: 420137 - `display_name`: STRING Example: "Alexandre Le" Relationship properties: The relationships: (:Question)-[:TAGGED]->(:Tag) (:User)-[:ASKED]->(:Question)"""
question = "Identify the top 5 questions with the most downVotes."
messages = [
{"role": "system", "content": "Given an input question, convert it to a Cypher query. No pre-amble."},
{"role": "user", "content": f"""Based on the Neo4j graph schema below, write a Cypher query that would answer the user's question:
{schema}
Question: {question}
Cypher query:"""}
]
inputs = tokenizer.apply_chat_template(
messages,
tokenize = True,
add_generation_prompt = True, # Must add for generation
return_tensors = "pt",
).to("cuda")
outputs = model.generate(input_ids = inputs, max_new_tokens = 128, use_cache = True)
tokenizer.batch_decode(outputs)
``` |
emilykang/medmcqa_question_generation-pediatrics_lora | emilykang | 2024-05-17T17:15:33Z | 1 | 0 | peft | [
"peft",
"safetensors",
"llama",
"trl",
"sft",
"generated_from_trainer",
"dataset:generator",
"base_model:TinyLlama/TinyLlama-1.1B-Chat-v1.0",
"base_model:adapter:TinyLlama/TinyLlama-1.1B-Chat-v1.0",
"license:apache-2.0",
"region:us"
] | null | 2024-05-17T16:42:11Z | ---
license: apache-2.0
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: TinyLlama/TinyLlama-1.1B-Chat-v1.0
datasets:
- generator
model-index:
- name: medmcqa_question_generation-pediatrics_lora
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# medmcqa_question_generation-pediatrics_lora
This model is a fine-tuned version of [TinyLlama/TinyLlama-1.1B-Chat-v1.0](https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v1.0) on the generator dataset.
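As the card does not include usage code, here is a minimal sketch (an assumption, not part of the original card) of loading the LoRA adapter on top of the base model with PEFT; the prompt is purely illustrative:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "TinyLlama/TinyLlama-1.1B-Chat-v1.0"
tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.float16)

# Attach the LoRA adapter weights from this repository to the base model
model = PeftModel.from_pretrained(base_model, "emilykang/medmcqa_question_generation-pediatrics_lora")

prompt = "Generate a pediatrics multiple-choice question about neonatal jaundice."  # illustrative only
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```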
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- num_epochs: 10
### Training results
### Framework versions
- PEFT 0.10.0
- Transformers 4.40.1
- Pytorch 2.2.1+cu118
- Datasets 2.19.0
- Tokenizers 0.19.1 |
worldboss/meta-llama3-8b-alpaca-qlora-peft-axolotl-merged | worldboss | 2024-05-17T17:12:40Z | 4 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-17T17:02:36Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
taufeeq28/vehicles | taufeeq28 | 2024-05-17T17:12:34Z | 222 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"vit",
"image-classification",
"pytorch",
"huggingpics",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2024-05-17T17:12:28Z | ---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: vehicles
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.8358209133148193
---
# vehicles
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
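As a quick sketch (not generated by HuggingPics), the classifier can be used through the standard image-classification pipeline; the image path is a placeholder:

```python
from transformers import pipeline

# ViT image classifier fine-tuned on vehicle categories
classifier = pipeline("image-classification", model="taufeeq28/vehicles")

# Any local file or URL of a vehicle photo works here; the path is illustrative
print(classifier("path/to/vehicle.jpg"))
```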
## Example Images
#### bikes

#### cars

#### cycles
 |
Ellight/whisper-tiny-en | Ellight | 2024-05-17T17:10:49Z | 90 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:PolyAI/minds14",
"base_model:openai/whisper-tiny",
"base_model:finetune:openai/whisper-tiny",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2024-05-17T13:13:07Z | ---
license: apache-2.0
base_model: openai/whisper-tiny
tags:
- generated_from_trainer
datasets:
- PolyAI/minds14
metrics:
- wer
model-index:
- name: whisper-tiny-en
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: PolyAI/minds14
type: PolyAI/minds14
config: en-US
split: train[450:]
args: en-US
metrics:
- name: Wer
type: wer
value: 0.3140495867768595
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-tiny-en
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on the PolyAI/minds14 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5100
- Wer Ortho: 0.3233
- Wer: 0.3140
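A minimal inference sketch (not part of the auto-generated card), assuming a short English audio clip is available locally:

```python
from transformers import pipeline

# Speech recognition with the fine-tuned Whisper checkpoint
asr = pipeline("automatic-speech-recognition", model="Ellight/whisper-tiny-en")

# "sample.wav" is a placeholder path to a 16 kHz English audio file
print(asr("sample.wav")["text"])
```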
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant_with_warmup
- lr_scheduler_warmup_steps: 5
- training_steps: 100
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer Ortho | Wer |
|:-------------:|:------:|:----:|:---------------:|:---------:|:------:|
| 0.101 | 3.5714 | 100 | 0.5100 | 0.3233 | 0.3140 |
### Framework versions
- Transformers 4.40.2
- Pytorch 2.2.1+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
|
mdosama39/mt5-base-headline-base | mdosama39 | 2024-05-17T17:06:32Z | 8 | 0 | transformers | [
"transformers",
"safetensors",
"mt5",
"text2text-generation",
"generated_from_trainer",
"base_model:google/mt5-base",
"base_model:finetune:google/mt5-base",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2024-05-17T16:57:15Z | ---
license: apache-2.0
base_model: google/mt5-base
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: mt5-base-headline-base
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mt5-base-headline-base
This model is a fine-tuned version of [google/mt5-base](https://huggingface.co/google/mt5-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6856
- Rouge1: 0.0
- Rouge2: 0.0
- Rougel: 0.0
- Rougelsum: 0.0
- Gen Len: 16.0174
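For reference, a hedged usage sketch (the expected input format is not documented in this card, so the article text is a placeholder):

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "mdosama39/mt5-base-headline-base"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

article = "..."  # placeholder article text; the expected input format is not documented here
inputs = tokenizer(article, return_tensors="pt", truncation=True, max_length=512)
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))  # generated headline
```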
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| 2.5318 | 1.0 | 202 | 1.8787 | 0.0 | 0.0 | 0.0 | 0.0 | 16.4268 |
| 2.2047 | 2.0 | 404 | 1.7674 | 0.0 | 0.0 | 0.0 | 0.0 | 15.5285 |
| 2.1322 | 3.0 | 606 | 1.7092 | 0.0 | 0.0 | 0.0 | 0.0 | 15.866 |
| 1.7199 | 4.0 | 808 | 1.6856 | 0.0 | 0.0 | 0.0 | 0.0 | 16.0174 |
### Framework versions
- Transformers 4.40.2
- Pytorch 2.2.1+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
|
pt4c/opus-mt-fr-yat | pt4c | 2024-05-17T17:06:16Z | 111 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"marian",
"text2text-generation",
"generated_from_trainer",
"base_model:Helsinki-NLP/opus-mt-fr-en",
"base_model:finetune:Helsinki-NLP/opus-mt-fr-en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2024-05-17T16:34:05Z | ---
license: apache-2.0
tags:
- generated_from_trainer
base_model: Helsinki-NLP/opus-mt-fr-en
model-index:
- name: opus-mt-fr-yat
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# opus-mt-fr-yat
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-fr-en](https://huggingface.co/Helsinki-NLP/opus-mt-fr-en) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 6.7630
- Bert score: 0.6005
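A minimal translation sketch (not part of the auto-generated card; the French sentence is only an example):

```python
from transformers import MarianMTModel, MarianTokenizer

model_id = "pt4c/opus-mt-fr-yat"
tokenizer = MarianTokenizer.from_pretrained(model_id)
model = MarianMTModel.from_pretrained(model_id)

# Translate a French sentence with the fine-tuned Marian model
batch = tokenizer(["Bonjour, comment allez-vous ?"], return_tensors="pt", padding=True)
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```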
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 20
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bert score |
|:-------------:|:-----:|:----:|:---------------:|:----------:|
| No log | 1.0 | 62 | 7.7730 | 0.5980 |
| No log | 2.0 | 124 | 6.9707 | 0.5976 |
| No log | 3.0 | 186 | 6.7630 | 0.6005 |
### Framework versions
- Transformers 4.41.0
- Pytorch 2.2.1+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
|
bartowski/Llama-3-8B-Synthia-v3.5-GGUF | bartowski | 2024-05-17T17:04:28Z | 127 | 1 | null | [
"gguf",
"text-generation",
"license:llama3",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2024-05-17T16:47:37Z | ---
license: llama3
quantized_by: bartowski
pipeline_tag: text-generation
---
## Llamacpp imatrix Quantizations of Llama-3-8B-Synthia-v3.5
Using <a href="https://github.com/ggerganov/llama.cpp/">llama.cpp</a> release <a href="https://github.com/ggerganov/llama.cpp/releases/tag/b2901">b2901</a> for quantization.
Original model: https://huggingface.co/migtissera/Llama-3-8B-Synthia-v3.5
All quants made using imatrix option with dataset from [here](https://gist.github.com/bartowski1182/b6ac44691e994344625687afe3263b3a)
## Prompt format
```
<|begin_of_text|><|start_header_id|>system<|end_header_id|>
{system_prompt}<|eot_id|><|start_header_id|>user<|end_header_id|>
{prompt}<|eot_id|><|start_header_id|>assistant<|end_header_id|>
```
## Download a file (not the whole branch) from below:
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [Llama-3-8B-Synthia-v3.5-Q8_0.gguf](https://huggingface.co/bartowski/Llama-3-8B-Synthia-v3.5-GGUF/blob/main/Llama-3-8B-Synthia-v3.5-Q8_0.gguf) | Q8_0 | 8.54GB | Extremely high quality, generally unneeded but max available quant. |
| [Llama-3-8B-Synthia-v3.5-Q6_K.gguf](https://huggingface.co/bartowski/Llama-3-8B-Synthia-v3.5-GGUF/blob/main/Llama-3-8B-Synthia-v3.5-Q6_K.gguf) | Q6_K | 6.59GB | Very high quality, near perfect, *recommended*. |
| [Llama-3-8B-Synthia-v3.5-Q5_K_M.gguf](https://huggingface.co/bartowski/Llama-3-8B-Synthia-v3.5-GGUF/blob/main/Llama-3-8B-Synthia-v3.5-Q5_K_M.gguf) | Q5_K_M | 5.73GB | High quality, *recommended*. |
| [Llama-3-8B-Synthia-v3.5-Q5_K_S.gguf](https://huggingface.co/bartowski/Llama-3-8B-Synthia-v3.5-GGUF/blob/main/Llama-3-8B-Synthia-v3.5-Q5_K_S.gguf) | Q5_K_S | 5.59GB | High quality, *recommended*. |
| [Llama-3-8B-Synthia-v3.5-Q4_K_M.gguf](https://huggingface.co/bartowski/Llama-3-8B-Synthia-v3.5-GGUF/blob/main/Llama-3-8B-Synthia-v3.5-Q4_K_M.gguf) | Q4_K_M | 4.92GB | Good quality, uses about 4.83 bits per weight, *recommended*. |
| [Llama-3-8B-Synthia-v3.5-Q4_K_S.gguf](https://huggingface.co/bartowski/Llama-3-8B-Synthia-v3.5-GGUF/blob/main/Llama-3-8B-Synthia-v3.5-Q4_K_S.gguf) | Q4_K_S | 4.69GB | Slightly lower quality with more space savings, *recommended*. |
| [Llama-3-8B-Synthia-v3.5-IQ4_NL.gguf](https://huggingface.co/bartowski/Llama-3-8B-Synthia-v3.5-GGUF/blob/main/Llama-3-8B-Synthia-v3.5-IQ4_NL.gguf) | IQ4_NL | 4.67GB | Decent quality, slightly smaller than Q4_K_S with similar performance *recommended*. |
| [Llama-3-8B-Synthia-v3.5-IQ4_XS.gguf](https://huggingface.co/bartowski/Llama-3-8B-Synthia-v3.5-GGUF/blob/main/Llama-3-8B-Synthia-v3.5-IQ4_XS.gguf) | IQ4_XS | 4.44GB | Decent quality, smaller than Q4_K_S with similar performance, *recommended*. |
| [Llama-3-8B-Synthia-v3.5-Q3_K_L.gguf](https://huggingface.co/bartowski/Llama-3-8B-Synthia-v3.5-GGUF/blob/main/Llama-3-8B-Synthia-v3.5-Q3_K_L.gguf) | Q3_K_L | 4.32GB | Lower quality but usable, good for low RAM availability. |
| [Llama-3-8B-Synthia-v3.5-Q3_K_M.gguf](https://huggingface.co/bartowski/Llama-3-8B-Synthia-v3.5-GGUF/blob/main/Llama-3-8B-Synthia-v3.5-Q3_K_M.gguf) | Q3_K_M | 4.01GB | Even lower quality. |
| [Llama-3-8B-Synthia-v3.5-IQ3_M.gguf](https://huggingface.co/bartowski/Llama-3-8B-Synthia-v3.5-GGUF/blob/main/Llama-3-8B-Synthia-v3.5-IQ3_M.gguf) | IQ3_M | 3.78GB | Medium-low quality, new method with decent performance comparable to Q3_K_M. |
| [Llama-3-8B-Synthia-v3.5-IQ3_S.gguf](https://huggingface.co/bartowski/Llama-3-8B-Synthia-v3.5-GGUF/blob/main/Llama-3-8B-Synthia-v3.5-IQ3_S.gguf) | IQ3_S | 3.68GB | Lower quality, new method with decent performance, recommended over Q3_K_S quant, same size with better performance. |
| [Llama-3-8B-Synthia-v3.5-Q3_K_S.gguf](https://huggingface.co/bartowski/Llama-3-8B-Synthia-v3.5-GGUF/blob/main/Llama-3-8B-Synthia-v3.5-Q3_K_S.gguf) | Q3_K_S | 3.66GB | Low quality, not recommended. |
| [Llama-3-8B-Synthia-v3.5-IQ3_XS.gguf](https://huggingface.co/bartowski/Llama-3-8B-Synthia-v3.5-GGUF/blob/main/Llama-3-8B-Synthia-v3.5-IQ3_XS.gguf) | IQ3_XS | 3.51GB | Lower quality, new method with decent performance, slightly better than Q3_K_S. |
| [Llama-3-8B-Synthia-v3.5-IQ3_XXS.gguf](https://huggingface.co/bartowski/Llama-3-8B-Synthia-v3.5-GGUF/blob/main/Llama-3-8B-Synthia-v3.5-IQ3_XXS.gguf) | IQ3_XXS | 3.27GB | Lower quality, new method with decent performance, comparable to Q3 quants. |
| [Llama-3-8B-Synthia-v3.5-Q2_K.gguf](https://huggingface.co/bartowski/Llama-3-8B-Synthia-v3.5-GGUF/blob/main/Llama-3-8B-Synthia-v3.5-Q2_K.gguf) | Q2_K | 3.17GB | Very low quality but surprisingly usable. |
| [Llama-3-8B-Synthia-v3.5-IQ2_M.gguf](https://huggingface.co/bartowski/Llama-3-8B-Synthia-v3.5-GGUF/blob/main/Llama-3-8B-Synthia-v3.5-IQ2_M.gguf) | IQ2_M | 2.94GB | Very low quality, uses SOTA techniques to also be surprisingly usable. |
| [Llama-3-8B-Synthia-v3.5-IQ2_S.gguf](https://huggingface.co/bartowski/Llama-3-8B-Synthia-v3.5-GGUF/blob/main/Llama-3-8B-Synthia-v3.5-IQ2_S.gguf) | IQ2_S | 2.75GB | Very low quality, uses SOTA techniques to be usable. |
| [Llama-3-8B-Synthia-v3.5-IQ2_XS.gguf](https://huggingface.co/bartowski/Llama-3-8B-Synthia-v3.5-GGUF/blob/main/Llama-3-8B-Synthia-v3.5-IQ2_XS.gguf) | IQ2_XS | 2.60GB | Very low quality, uses SOTA techniques to be usable. |
| [Llama-3-8B-Synthia-v3.5-IQ2_XXS.gguf](https://huggingface.co/bartowski/Llama-3-8B-Synthia-v3.5-GGUF/blob/main/Llama-3-8B-Synthia-v3.5-IQ2_XXS.gguf) | IQ2_XXS | 2.39GB | Lower quality, uses SOTA techniques to be usable. |
| [Llama-3-8B-Synthia-v3.5-IQ1_M.gguf](https://huggingface.co/bartowski/Llama-3-8B-Synthia-v3.5-GGUF/blob/main/Llama-3-8B-Synthia-v3.5-IQ1_M.gguf) | IQ1_M | 2.16GB | Extremely low quality, *not* recommended. |
| [Llama-3-8B-Synthia-v3.5-IQ1_S.gguf](https://huggingface.co/bartowski/Llama-3-8B-Synthia-v3.5-GGUF/blob/main/Llama-3-8B-Synthia-v3.5-IQ1_S.gguf) | IQ1_S | 2.01GB | Extremely low quality, *not* recommended. |
## Downloading using huggingface-cli
First, make sure you have huggingface-cli installed:
```
pip install -U "huggingface_hub[cli]"
```
Then, you can target the specific file you want:
```
huggingface-cli download bartowski/Llama-3-8B-Synthia-v3.5-GGUF --include "Llama-3-8B-Synthia-v3.5-Q4_K_M.gguf" --local-dir ./ --local-dir-use-symlinks False
```
If the model is bigger than 50GB, it will have been split into multiple files. In order to download them all to a local folder, run:
```
huggingface-cli download bartowski/Llama-3-8B-Synthia-v3.5-GGUF --include "Llama-3-8B-Synthia-v3.5-Q8_0.gguf/*" --local-dir Llama-3-8B-Synthia-v3.5-Q8_0 --local-dir-use-symlinks False
```
You can either specify a new local-dir (Llama-3-8B-Synthia-v3.5-Q8_0) or download them all in place (./)
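Once a quant is downloaded, one way to run it (not covered in this card) is through the llama-cpp-python bindings rather than the llama.cpp CLI; the file name below is a placeholder for whichever quant you grabbed:

```python
from llama_cpp import Llama

# Load a downloaded GGUF quant; chat_format="llama-3" applies the prompt format shown above
llm = Llama(
    model_path="Llama-3-8B-Synthia-v3.5-Q4_K_M.gguf",
    n_ctx=8192,
    n_gpu_layers=-1,  # offload all layers to the GPU if one is available
    chat_format="llama-3",
)

out = llm.create_chat_completion(messages=[{"role": "user", "content": "Hello!"}])
print(out["choices"][0]["message"]["content"])
```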
## Which file should I choose?
A great write up with charts showing various performances is provided by Artefact2 [here](https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9)
The first thing to figure out is how big a model you can run. To do this, you'll need to figure out how much RAM and/or VRAM you have.
If you want your model running as FAST as possible, you'll want to fit the whole thing on your GPU's VRAM. Aim for a quant with a file size 1-2GB smaller than your GPU's total VRAM.
If you want the absolute maximum quality, add both your system RAM and your GPU's VRAM together, then similarly grab a quant with a file size 1-2GB smaller than that total.
Next, you'll need to decide if you want to use an 'I-quant' or a 'K-quant'.
If you don't want to think too much, grab one of the K-quants. These are in format 'QX_K_X', like Q5_K_M.
If you want to get more into the weeds, you can check out this extremely useful feature chart:
[llama.cpp feature matrix](https://github.com/ggerganov/llama.cpp/wiki/Feature-matrix)
But basically, if you're aiming for below Q4, and you're running cuBLAS (Nvidia) or rocBLAS (AMD), you should look towards the I-quants. These are in format IQX_X, like IQ3_M. These are newer and offer better performance for their size.
These I-quants can also be used on CPU and Apple Metal, but will be slower than their K-quant equivalent, so speed vs performance is a tradeoff you'll have to decide.
The I-quants are *not* compatible with Vulkan, which is also available on AMD, so if you have an AMD card double check if you're using the rocBLAS build or the Vulkan build. At the time of writing this, LM Studio has a preview with ROCm support, and other inference engines have specific builds for ROCm.
Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski
|
Fernando1305/llama-3-7b-chat-guanacoPrueba | Fernando1305 | 2024-05-17T16:59:32Z | 1 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:meta-llama/Meta-Llama-3-8B",
"base_model:adapter:meta-llama/Meta-Llama-3-8B",
"region:us"
] | null | 2024-05-17T16:17:10Z | ---
library_name: peft
base_model: meta-llama/Meta-Llama-3-8B
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.11.1 |
guillaumephd/t5-french-base | guillaumephd | 2024-05-17T16:54:32Z | 131 | 0 | transformers | [
"transformers",
"safetensors",
"t5",
"text2text-generation",
"fr",
"dataset:togethercomputer/RedPajama-Data-V2",
"license:cc-by-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2024-05-17T16:01:37Z | ---
license: cc-by-4.0
datasets:
- togethercomputer/RedPajama-Data-V2
language:
- fr
library_name: transformers
---
# T5-french-base Model
## Model Overview
The T5-French-Base model is a ~250M-parameter T5 model trained entirely from scratch, solely on French data from the RedPajama 2 dataset.
The model was trained for 85,000 steps and was only pre-trained, without any supervised training.
Therefore, it has to be fine-tuned before it is usable on a downstream task.
It is intended to serve as a foundation for further fine-tuning and as a starting point for downstream tasks in the French language.
Since the training compute budget was very limited, the model is mainly useful for research.
## Model Details
- Model Architecture: T5 Base, version 1.1 (GEGLU activation in feed-forward hidden layer, rather than ReLU)
- Training Dataset: RedPajama 2 dataset (French-only)
- Training Steps: 85,000 (from scratch)
- Tokenizer: T5 Tokenizer
## Intended Use
The T5-French-Base model is intended for research use only, serving as a pre-trained model for further fine-tuning on specific French language tasks.
It may be used as a starting point for fine-tuning on tasks such as:
- French text generation
- French question answering
- French language understanding
- French text summarization
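A minimal loading sketch (not part of the original card); note that the raw pre-trained checkpoint has to be fine-tuned before its generations become useful:

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "guillaumephd/t5-french-base"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# Sanity-check generation only: the base model was pre-trained with span corruption,
# so outputs are not expected to be coherent before task-specific fine-tuning
inputs = tokenizer("Paris est la capitale de la", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```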
## Limitations
The T5-French-Base model may not be suitable for user-facing or production applications.
It is mainly meant for researchers.
It was trained entirely from scratch.
The training budget was very limited (only 85k steps for a ~250M-parameter model, reaching a final loss of ~1.1).
The model is a base model that hasn't been fine-tuned yet. As such, it does NOT follow instructions.
Additionally, the model was trained solely on French data and won't work for tasks that require cross-lingual understanding or multilingual capabilities.
## Ethical Considerations
The T5-French-Base model was trained from scratch on publicly available data and does not contain any known biases or ethical concerns.
However, researchers should be aware of potential biases in the RedPajama 2 training data and should carefully evaluate the model's outputs for any unintended consequences.
## Citation
If you use the T5-French-Base model in your work, please cite the original Google T5 model, as well as the following:
```
@article{guillaumeT5french,
title={T5-French-Base model: A T5 model trained on french data only},
author={guillaumephd},
url={https://huggingface.co/guillaumephd/t5-french-base},
year={2024}
}
``` |
abbenedek/abbenedekwhisper-tiny.en-finetuning3-D3K | abbenedek | 2024-05-17T16:53:26Z | 124 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"base_model:openai/whisper-tiny.en",
"base_model:finetune:openai/whisper-tiny.en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2024-05-17T15:08:45Z | ---
license: apache-2.0
base_model: openai/whisper-tiny.en
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: abbenedekwhisper-tiny.en-finetuning3-D3K
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# abbenedekwhisper-tiny.en-finetuning3-D3K
This model is a fine-tuned version of [openai/whisper-tiny.en](https://huggingface.co/openai/whisper-tiny.en) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.2102
- Cer: 48.9705
- Wer: 91.3907
- Ser: 100.0
- Cer Clean: 6.0657
- Wer Clean: 12.9139
- Ser Clean: 13.1579
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-08
- train_batch_size: 16
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 10
- training_steps: 2000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Cer | Wer | Ser | Cer Clean | Wer Clean | Ser Clean |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:--------:|:-----:|:---------:|:---------:|:---------:|
| 6.2196 | 1.06 | 200 | 5.5899 | 52.5320 | 112.9139 | 100.0 | 7.3456 | 14.2384 | 14.9123 |
| 5.2943 | 2.13 | 400 | 4.9201 | 52.4763 | 110.2649 | 100.0 | 7.6238 | 14.9007 | 15.7895 |
| 4.5662 | 3.19 | 600 | 4.4164 | 51.1964 | 105.6291 | 100.0 | 7.6238 | 14.9007 | 15.7895 |
| 4.0943 | 4.26 | 800 | 4.0825 | 50.5843 | 103.3113 | 100.0 | 7.1786 | 14.5695 | 14.9123 |
| 3.6948 | 5.32 | 1000 | 3.7923 | 51.5303 | 101.9868 | 100.0 | 6.3439 | 12.9139 | 13.1579 |
| 3.3742 | 6.38 | 1200 | 3.5565 | 50.3617 | 98.3444 | 100.0 | 6.3439 | 13.5762 | 14.0351 |
| 3.1519 | 7.45 | 1400 | 3.3895 | 49.0262 | 93.7086 | 100.0 | 6.3439 | 13.5762 | 14.0351 |
| 2.9995 | 8.51 | 1600 | 3.2845 | 48.6366 | 92.7152 | 100.0 | 6.3439 | 13.5762 | 14.0351 |
| 2.9152 | 9.57 | 1800 | 3.2282 | 47.9688 | 91.7219 | 100.0 | 6.0657 | 12.9139 | 13.1579 |
| 2.884 | 10.64 | 2000 | 3.2102 | 48.9705 | 91.3907 | 100.0 | 6.0657 | 12.9139 | 13.1579 |
### Framework versions
- Transformers 4.39.3
- Pytorch 2.2.2+cu121
- Datasets 2.14.5
- Tokenizers 0.15.2
|
santoshtyss/lex-32k-1300 | santoshtyss | 2024-05-17T16:52:11Z | 8 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-17T16:35:07Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Xenova/tiny-random-GemmaForCausalLM | Xenova | 2024-05-17T16:49:41Z | 417 | 3 | transformers | [
"transformers",
"onnx",
"safetensors",
"gemma",
"text-generation",
"conversational",
"arxiv:1910.09700",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-02-22T00:35:17Z | ---
library_name: transformers
license: apache-2.0
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
damgomz/ft_bs32_lr6_base_x4 | damgomz | 2024-05-17T16:48:10Z | 108 | 0 | transformers | [
"transformers",
"safetensors",
"albert",
"text-classification",
"fill-mask",
"en",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2024-05-17T09:32:40Z | ---
language: en
tags:
- fill-mask
kwargs:
timestamp: '2024-05-17T18:48:05'
project_name: ft_bs32_lr6_base_x4_emissions_tracker
run_id: 9280d1e4-ef4e-4526-bbd2-72f003482752
duration: 31476.51491165161
emissions: 0.0193557967525063
emissions_rate: 6.149282030375424e-07
cpu_power: 42.5
gpu_power: 0.0
ram_power: 4.500000000000001
cpu_energy: 0.3715970706832078
gpu_energy: 0
ram_energy: 0.0393453032049536
energy_consumed: 0.4109423738881623
country_name: Switzerland
country_iso_code: CHE
region: .nan
cloud_provider: .nan
cloud_region: .nan
os: Linux-5.14.0-70.30.1.el9_0.x86_64-x86_64-with-glibc2.34
python_version: 3.10.4
codecarbon_version: 2.3.4
cpu_count: 2
cpu_model: Intel(R) Xeon(R) Platinum 8360Y CPU @ 2.40GHz
gpu_count: .nan
gpu_model: .nan
longitude: .nan
latitude: .nan
ram_total_size: 12
tracking_mode: machine
on_cloud: N
pue: 1.0
---
## Environmental Impact (CODE CARBON DEFAULT)
| Metric | Value |
|--------------------------|---------------------------------|
| Duration (in seconds) | 31476.51491165161 |
| Emissions (Co2eq in kg) | 0.0193557967525063 |
| CPU power (W) | 42.5 |
| GPU power (W) | [No GPU] |
| RAM power (W) | 4.500000000000001 |
| CPU energy (kWh) | 0.3715970706832078 |
| GPU energy (kWh) | [No GPU] |
| RAM energy (kWh) | 0.0393453032049536 |
| Consumed energy (kWh) | 0.4109423738881623 |
| Country name | Switzerland |
| Cloud provider | nan |
| Cloud region | nan |
| CPU count | 2 |
| CPU model | Intel(R) Xeon(R) Platinum 8360Y CPU @ 2.40GHz |
| GPU count | nan |
| GPU model | nan |
## Environmental Impact (for one core)
| Metric | Value |
|--------------------------|---------------------------------|
| CPU energy (kWh) | 0.060592291204929344 |
| Emissions (Co2eq in kg) | 0.012328301673730212 |
## Note
17 May 2024
## My Config
| Config | Value |
|--------------------------|-----------------|
| checkpoint | albert-base-v2 |
| model_name | ft_bs32_lr6_base_x4 |
| sequence_length | 400 |
| num_epoch | 12 |
| learning_rate | 5e-06 |
| batch_size | 32 |
| weight_decay | 0.0 |
| warm_up_prop | 0.0 |
| drop_out_prob | 0.1 |
| packing_length | 100 |
| train_test_split | 0.2 |
| num_steps | 65160 |
## Training and Testing steps
Epoch | Train Loss | Test Loss | Accuracy | Recall
---|---|---|---|---
| 0 | 0.464978 | 0.393267 | 0.820324 | 0.863497 |
| 1 | 0.357935 | 0.372957 | 0.837261 | 0.881902 |
| 2 | 0.307998 | 0.368084 | 0.840206 | 0.819018 |
| 3 | 0.270172 | 0.422406 | 0.818851 | 0.742331 |
| 4 | 0.219624 | 0.438727 | 0.824742 | 0.814417 |
| 5 | 0.160742 | 0.480765 | 0.811487 | 0.891104 |
| 6 | 0.098260 | 0.613929 | 0.811487 | 0.866564 |
| 7 | 0.068914 | 0.678225 | 0.814433 | 0.771472 |
| 8 | 0.034506 | 0.775600 | 0.809278 | 0.826687 |
| 9 | 0.030098 | 0.782375 | 0.811487 | 0.842025 |
| 10 | 0.035147 | 0.840999 | 0.804860 | 0.849693 |
| 11 | 0.019798 | 0.855098 | 0.821060 | 0.860429 |
|
damgomz/ft_bs16_lr6_base_x4 | damgomz | 2024-05-17T16:47:33Z | 108 | 0 | transformers | [
"transformers",
"safetensors",
"albert",
"text-classification",
"fill-mask",
"en",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2024-05-17T09:29:03Z | ---
language: en
tags:
- fill-mask
kwargs:
timestamp: '2024-05-17T18:47:30'
project_name: ft_bs16_lr6_base_x4_emissions_tracker
run_id: 80ff29c1-d4ea-4272-b363-794bf58f2de3
duration: 31525.332528352737
emissions: 0.0193858122759137
emissions_rate: 6.149280823121802e-07
cpu_power: 42.5
gpu_power: 0.0
ram_power: 4.500000000000001
cpu_energy: 0.3721733759154877
gpu_energy: 0
ram_energy: 0.0394062567019462
energy_consumed: 0.4115796326174337
country_name: Switzerland
country_iso_code: CHE
region: .nan
cloud_provider: .nan
cloud_region: .nan
os: Linux-5.14.0-70.30.1.el9_0.x86_64-x86_64-with-glibc2.34
python_version: 3.10.4
codecarbon_version: 2.3.4
cpu_count: 2
cpu_model: Intel(R) Xeon(R) Platinum 8360Y CPU @ 2.40GHz
gpu_count: .nan
gpu_model: .nan
longitude: .nan
latitude: .nan
ram_total_size: 12
tracking_mode: machine
on_cloud: N
pue: 1.0
---
## Environmental Impact (CODE CARBON DEFAULT)
| Metric | Value |
|--------------------------|---------------------------------|
| Duration (in seconds) | 31525.332528352737 |
| Emissions (Co2eq in kg) | 0.0193858122759137 |
| CPU power (W) | 42.5 |
| GPU power (W) | [No GPU] |
| RAM power (W) | 4.500000000000001 |
| CPU energy (kWh) | 0.3721733759154877 |
| GPU energy (kWh) | [No GPU] |
| RAM energy (kWh) | 0.0394062567019462 |
| Consumed energy (kWh) | 0.4115796326174337 |
| Country name | Switzerland |
| Cloud provider | nan |
| Cloud region | nan |
| CPU count | 2 |
| CPU model | Intel(R) Xeon(R) Platinum 8360Y CPU @ 2.40GHz |
| GPU count | nan |
| GPU model | nan |
## Environmental Impact (for one core)
| Metric | Value |
|--------------------------|---------------------------------|
| CPU energy (kWh) | 0.060686265117079016 |
| Emissions (Co2eq in kg) | 0.012347421906938156 |
## Note
17 May 2024
## My Config
| Config | Value |
|--------------------------|-----------------|
| checkpoint | albert-base-v2 |
| model_name | ft_bs16_lr6_base_x4 |
| sequence_length | 400 |
| num_epoch | 12 |
| learning_rate | 5e-06 |
| batch_size | 16 |
| weight_decay | 0.0 |
| warm_up_prop | 0.0 |
| drop_out_prob | 0.1 |
| packing_length | 100 |
| train_test_split | 0.2 |
| num_steps | 65160 |
## Training and Testing steps
Epoch | Train Loss | Test Loss | Accuracy | Recall
---|---|---|---|---
| 0 | 0.448223 | 0.404151 | 0.823270 | 0.918712 |
| 1 | 0.347356 | 0.380528 | 0.839470 | 0.904908 |
| 2 | 0.298852 | 0.393898 | 0.837261 | 0.829755 |
| 3 | 0.248956 | 0.408337 | 0.831370 | 0.800613 |
| 4 | 0.188713 | 0.523134 | 0.826951 | 0.811350 |
| 5 | 0.127459 | 0.518127 | 0.814433 | 0.855828 |
| 6 | 0.073867 | 0.667144 | 0.815169 | 0.874233 |
| 7 | 0.046921 | 0.809258 | 0.814433 | 0.812883 |
| 8 | 0.036878 | 0.876000 | 0.803387 | 0.838957 |
| 9 | 0.036106 | 0.637194 | 0.809278 | 0.762270 |
| 10 | 0.027892 | 0.864272 | 0.817378 | 0.785276 |
| 11 | 0.011581 | 0.962748 | 0.812960 | 0.832822 |
|
justin-shopcapsule/BLIP-dress | justin-shopcapsule | 2024-05-17T16:45:11Z | 64 | 0 | transformers | [
"transformers",
"safetensors",
"blip",
"image-text-to-text",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | image-text-to-text | 2024-05-17T16:41:29Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Bzbr/a2c-PandaReachDense-v3 | Bzbr | 2024-05-17T16:42:13Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"PandaReachDense-v3",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2024-05-17T16:37:55Z | ---
library_name: stable-baselines3
tags:
- PandaReachDense-v3
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v3
type: PandaReachDense-v3
metrics:
- type: mean_reward
value: -0.22 +/- 0.13
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v3**
This is a trained model of a **A2C** agent playing **PandaReachDense-v3**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code. A minimal loading sketch in the meantime (the checkpoint filename is an assumption):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

# Download the checkpoint from the Hub; "<algo>-<env>.zip" is the usual naming convention (assumed here)
checkpoint = load_from_hub("Bzbr/a2c-PandaReachDense-v3", "a2c-PandaReachDense-v3.zip")
model = A2C.load(checkpoint)
```
|
emilykang/medmcqa_question_generation-pathology_lora | emilykang | 2024-05-17T16:42:06Z | 0 | 0 | peft | [
"peft",
"safetensors",
"llama",
"trl",
"sft",
"generated_from_trainer",
"dataset:generator",
"base_model:TinyLlama/TinyLlama-1.1B-Chat-v1.0",
"base_model:adapter:TinyLlama/TinyLlama-1.1B-Chat-v1.0",
"license:apache-2.0",
"region:us"
] | null | 2024-05-17T15:51:41Z | ---
license: apache-2.0
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: TinyLlama/TinyLlama-1.1B-Chat-v1.0
datasets:
- generator
model-index:
- name: medmcqa_question_generation-pathology_lora
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# medmcqa_question_generation-pathology_lora
This model is a fine-tuned version of [TinyLlama/TinyLlama-1.1B-Chat-v1.0](https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v1.0) on the generator dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- num_epochs: 10
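For orientation only (this is not the original training script), the settings above map onto 🤗 `TrainingArguments` roughly as follows; `output_dir` is a placeholder:
```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="medmcqa_question_generation-pathology_lora",  # placeholder
    learning_rate=2e-4,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=4,
    lr_scheduler_type="cosine",
    num_train_epochs=10,
    seed=42,
)
```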
### Training results
### Framework versions
- PEFT 0.10.0
- Transformers 4.40.1
- Pytorch 2.2.1+cu118
- Datasets 2.19.0
- Tokenizers 0.19.1 |
XueyingJia/llama3_4_bit_mnli_0_shot_transformed_data_score_use_full_row_dataset | XueyingJia | 2024-05-17T16:39:58Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"base_model:finetune:unsloth/llama-3-8b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-05-17T16:39:54Z | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
base_model: unsloth/llama-3-8b-bnb-4bit
---
# Uploaded model
- **Developed by:** XueyingJia
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
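A minimal loading sketch with Unsloth's `FastLanguageModel` (the 2048-token context and 4-bit loading are assumptions chosen to match the bnb-4bit base model):
```python
from unsloth import FastLanguageModel

# max_seq_length and load_in_4bit are assumptions, not values stated in this card
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="XueyingJia/llama3_4_bit_mnli_0_shot_transformed_data_score_use_full_row_dataset",
    max_seq_length=2048,
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # switch to inference mode
```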
|
arianhosseini/patricia-walters-darkmagenta | arianhosseini | 2024-05-17T16:36:51Z | 48 | 0 | transformers | [
"transformers",
"safetensors",
"gpt_neox",
"generated_from_trainer",
"base_model:EleutherAI/pythia-2.8b",
"base_model:finetune:EleutherAI/pythia-2.8b",
"license:apache-2.0",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | null | 2024-05-17T13:46:33Z | ---
license: apache-2.0
base_model: EleutherAI/pythia-2.8b
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: patricia-walters-darkmagenta
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# patricia-walters-darkmagenta
This model is a fine-tuned version of [EleutherAI/pythia-2.8b](https://huggingface.co/EleutherAI/pythia-2.8b) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5059
- Accuracy: 0.7
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 4
- eval_batch_size: 16
- seed: 24
- distributed_type: multi-GPU
- num_devices: 2
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- total_eval_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant_with_warmup
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 2500
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:------:|:----:|:---------------:|:--------:|
| 0.4856 | 1.2490 | 400 | 0.3020 | 1.0 |
| 0.0737 | 2.4980 | 800 | 0.4773 | 0.7 |
| 0.0886 | 3.7471 | 1200 | 1.2119 | 0.9 |
| 0.1489 | 4.9961 | 1600 | 0.5459 | 0.8 |
| 0.0285 | 6.2451 | 2000 | 2.4004 | 0.7 |
| 0.0338 | 7.4941 | 2400 | 0.5059 | 0.7 |
### Framework versions
- Transformers 4.40.2
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
|
OwOpeepeepoopoo/DancingElaineL | OwOpeepeepoopoo | 2024-05-17T16:35:07Z | 5 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-17T16:32:27Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
XueyingJia/llama3_mnli_0_shot_transformed_data_score_use_full_row_dataset | XueyingJia | 2024-05-17T16:32:50Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:meta-llama/Meta-Llama-3-8B",
"base_model:finetune:meta-llama/Meta-Llama-3-8B",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-05-17T16:32:46Z | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
base_model: meta-llama/Meta-Llama-3-8B
---
# Uploaded model
- **Developed by:** XueyingJia
- **License:** apache-2.0
- **Finetuned from model :** meta-llama/Meta-Llama-3-8B
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Onlysmokehuazi/Huazi_Sentiment_Analysis_latest | Onlysmokehuazi | 2024-05-17T16:29:35Z | 109 | 0 | transformers | [
"transformers",
"safetensors",
"roberta",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-05-17T16:28:58Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
apwic/sentiment-lora-r2a2d0.05-0 | apwic | 2024-05-17T16:29:19Z | 0 | 0 | null | [
"tensorboard",
"generated_from_trainer",
"id",
"base_model:indolem/indobert-base-uncased",
"base_model:finetune:indolem/indobert-base-uncased",
"license:mit",
"region:us"
] | null | 2024-05-17T15:56:11Z | ---
language:
- id
license: mit
base_model: indolem/indobert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
- precision
- recall
- f1
model-index:
- name: sentiment-lora-r2a2d0.05-0
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sentiment-lora-r2a2d0.05-0
This model is a fine-tuned version of [indolem/indobert-base-uncased](https://huggingface.co/indolem/indobert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3642
- Accuracy: 0.8346
- Precision: 0.7993
- Recall: 0.8080
- F1: 0.8034
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 30
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|
| 0.5633 | 1.0 | 122 | 0.5100 | 0.7168 | 0.6536 | 0.6446 | 0.6484 |
| 0.5083 | 2.0 | 244 | 0.4999 | 0.7243 | 0.6825 | 0.7049 | 0.6887 |
| 0.4904 | 3.0 | 366 | 0.4595 | 0.7619 | 0.7120 | 0.7065 | 0.7091 |
| 0.4644 | 4.0 | 488 | 0.4287 | 0.7920 | 0.7520 | 0.7253 | 0.7358 |
| 0.4439 | 5.0 | 610 | 0.4399 | 0.7519 | 0.7127 | 0.7395 | 0.7203 |
| 0.4241 | 6.0 | 732 | 0.4027 | 0.8221 | 0.7860 | 0.7816 | 0.7837 |
| 0.4092 | 7.0 | 854 | 0.4019 | 0.8070 | 0.7674 | 0.7835 | 0.7743 |
| 0.3891 | 8.0 | 976 | 0.3805 | 0.8271 | 0.7912 | 0.7926 | 0.7919 |
| 0.3777 | 9.0 | 1098 | 0.3789 | 0.8271 | 0.7912 | 0.7926 | 0.7919 |
| 0.369 | 10.0 | 1220 | 0.3758 | 0.8396 | 0.8071 | 0.8040 | 0.8055 |
| 0.3531 | 11.0 | 1342 | 0.3805 | 0.8296 | 0.7933 | 0.8044 | 0.7984 |
| 0.3486 | 12.0 | 1464 | 0.3801 | 0.8321 | 0.7960 | 0.8112 | 0.8027 |
| 0.3472 | 13.0 | 1586 | 0.3675 | 0.8421 | 0.8098 | 0.8083 | 0.8091 |
| 0.3379 | 14.0 | 1708 | 0.3654 | 0.8371 | 0.8032 | 0.8047 | 0.8040 |
| 0.3353 | 15.0 | 1830 | 0.3703 | 0.8421 | 0.8080 | 0.8183 | 0.8127 |
| 0.3213 | 16.0 | 1952 | 0.3709 | 0.8371 | 0.8019 | 0.8147 | 0.8077 |
| 0.3214 | 17.0 | 2074 | 0.3641 | 0.8371 | 0.8024 | 0.8097 | 0.8059 |
| 0.3225 | 18.0 | 2196 | 0.3640 | 0.8371 | 0.8024 | 0.8097 | 0.8059 |
| 0.3159 | 19.0 | 2318 | 0.3649 | 0.8346 | 0.7993 | 0.8080 | 0.8034 |
| 0.3195 | 20.0 | 2440 | 0.3642 | 0.8346 | 0.7993 | 0.8080 | 0.8034 |
### Framework versions
- Transformers 4.39.3
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.15.2
|
omersezer/TE_Instruct_L3 | omersezer | 2024-05-17T16:25:53Z | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:meta-llama/Meta-Llama-3-8B",
"base_model:adapter:meta-llama/Meta-Llama-3-8B",
"region:us"
] | null | 2024-05-17T16:24:54Z | ---
library_name: peft
base_model: meta-llama/Meta-Llama-3-8B
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.11.1 |
buming/ppo-LunarLander-v2 | buming | 2024-05-17T16:25:45Z | 1 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2024-05-17T16:25:19Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 254.83 +/- 21.41
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code. A minimal loading sketch in the meantime (the checkpoint filename is an assumption):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Download the checkpoint from the Hub; "<algo>-<env>.zip" is the usual naming convention (assumed here)
checkpoint = load_from_hub("buming/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
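To sanity-check the reported mean reward, SB3's `evaluate_policy` helper can be used; a sketch assuming `gymnasium` with Box2D support is installed (the checkpoint filename is again an assumption):
```python
import gymnasium as gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

# Reload the policy (same assumptions as above) and evaluate it on a fresh environment
model = PPO.load(load_from_hub("buming/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip"))
env = gym.make("LunarLander-v2")  # newer gymnasium releases may register this as "LunarLander-v3"
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```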
|
Wellyowo/hubert-esc50-finetuned-v2 | Wellyowo | 2024-05-17T16:24:23Z | 4 | 0 | transformers | [
"transformers",
"safetensors",
"hubert",
"audio-classification",
"esc50",
"generated_from_trainer",
"base_model:facebook/hubert-base-ls960",
"base_model:finetune:facebook/hubert-base-ls960",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | audio-classification | 2024-05-17T13:22:26Z | ---
license: apache-2.0
base_model: facebook/hubert-base-ls960
tags:
- audio-classification
- hubert
- esc50
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: hubert-esc50-finetuned-v2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hubert-esc50-finetuned-v2
This model is a fine-tuned version of [facebook/hubert-base-ls960](https://huggingface.co/facebook/hubert-base-ls960) on the ESC-50 dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9551
- Accuracy: 0.85
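A minimal inference sketch using the 🤗 `pipeline` API (the audio path is a placeholder; 16 kHz mono input is assumed, as is standard for HuBERT):
```python
from transformers import pipeline

# Audio classification with the fine-tuned checkpoint; expects a 16 kHz mono clip
classifier = pipeline("audio-classification", model="Wellyowo/hubert-esc50-finetuned-v2")
predictions = classifier("dog_bark.wav")  # placeholder path
print(predictions)
```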
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 3.5337 | 1.0 | 200 | 3.4929 | 0.0775 |
| 3.1679 | 2.0 | 400 | 3.1355 | 0.1675 |
| 2.8042 | 3.0 | 600 | 2.8673 | 0.2075 |
| 2.5055 | 4.0 | 800 | 2.6202 | 0.2125 |
| 2.0268 | 5.0 | 1000 | 2.3768 | 0.3375 |
| 2.1337 | 6.0 | 1200 | 2.0171 | 0.4225 |
| 1.6061 | 7.0 | 1400 | 1.7294 | 0.5075 |
| 1.5169 | 8.0 | 1600 | 1.8017 | 0.5025 |
| 1.0634 | 9.0 | 1800 | 1.5051 | 0.5475 |
| 0.9651 | 10.0 | 2000 | 1.3431 | 0.635 |
| 0.8616 | 11.0 | 2200 | 1.3417 | 0.6375 |
| 0.6799 | 12.0 | 2400 | 1.2891 | 0.63 |
| 0.445 | 13.0 | 2600 | 1.2285 | 0.6575 |
| 0.2984 | 14.0 | 2800 | 1.2008 | 0.7125 |
| 0.5947 | 15.0 | 3000 | 1.3225 | 0.71 |
| 0.4194 | 16.0 | 3200 | 1.1032 | 0.775 |
| 0.3128 | 17.0 | 3400 | 1.8309 | 0.6625 |
| 0.237 | 18.0 | 3600 | 1.3349 | 0.7325 |
| 0.1701 | 19.0 | 3800 | 1.4491 | 0.7275 |
| 0.2618 | 20.0 | 4000 | 1.4919 | 0.7525 |
| 0.1336 | 21.0 | 4200 | 1.6088 | 0.7325 |
| 0.113 | 22.0 | 4400 | 1.3687 | 0.7725 |
| 0.0757 | 23.0 | 4600 | 1.4691 | 0.7875 |
| 0.0558 | 24.0 | 4800 | 1.8059 | 0.7525 |
| 0.1442 | 25.0 | 5000 | 1.7809 | 0.7475 |
| 0.1023 | 26.0 | 5200 | 1.8423 | 0.7875 |
| 0.0075 | 27.0 | 5400 | 1.7945 | 0.79 |
| 0.0054 | 28.0 | 5600 | 1.8221 | 0.7825 |
| 0.0584 | 29.0 | 5800 | 1.7593 | 0.785 |
| 0.07 | 30.0 | 6000 | 1.8601 | 0.7925 |
| 0.0827 | 31.0 | 6200 | 1.8467 | 0.7875 |
| 0.1128 | 32.0 | 6400 | 2.1020 | 0.765 |
| 0.2679 | 33.0 | 6600 | 2.0718 | 0.775 |
| 0.0647 | 34.0 | 6800 | 1.9542 | 0.7875 |
| 0.0376 | 35.0 | 7000 | 2.1877 | 0.7675 |
| 0.0019 | 36.0 | 7200 | 2.4088 | 0.745 |
| 0.1009 | 37.0 | 7400 | 2.2295 | 0.765 |
| 0.0039 | 38.0 | 7600 | 2.0022 | 0.7825 |
| 0.0006 | 39.0 | 7800 | 2.0640 | 0.795 |
| 0.0512 | 40.0 | 8000 | 2.3373 | 0.78 |
| 0.0282 | 41.0 | 8200 | 1.9908 | 0.795 |
| 0.0113 | 42.0 | 8400 | 2.3893 | 0.775 |
| 0.035 | 43.0 | 8600 | 2.3017 | 0.7775 |
| 0.006 | 44.0 | 8800 | 2.1261 | 0.7825 |
| 0.0556 | 45.0 | 9000 | 2.3122 | 0.775 |
| 0.0003 | 46.0 | 9200 | 2.1505 | 0.79 |
| 0.0115 | 47.0 | 9400 | 2.0387 | 0.805 |
| 0.0001 | 48.0 | 9600 | 2.1915 | 0.8 |
| 0.2299 | 49.0 | 9800 | 2.6715 | 0.76 |
| 0.0017 | 50.0 | 10000 | 2.7250 | 0.755 |
| 0.2944 | 51.0 | 10200 | 2.5766 | 0.79 |
| 0.1269 | 52.0 | 10400 | 2.3590 | 0.785 |
| 0.0941 | 53.0 | 10600 | 2.9789 | 0.755 |
| 0.0477 | 54.0 | 10800 | 2.7512 | 0.75 |
| 0.2068 | 55.0 | 11000 | 2.5162 | 0.7725 |
| 0.0004 | 56.0 | 11200 | 2.4355 | 0.7525 |
| 0.0657 | 57.0 | 11400 | 2.5043 | 0.7775 |
| 0.0002 | 58.0 | 11600 | 2.4236 | 0.785 |
| 0.0133 | 59.0 | 11800 | 2.4225 | 0.78 |
| 0.0 | 60.0 | 12000 | 2.3476 | 0.79 |
| 0.0159 | 61.0 | 12200 | 2.3234 | 0.7975 |
| 0.0002 | 62.0 | 12400 | 2.3763 | 0.78 |
| 0.0626 | 63.0 | 12600 | 2.0386 | 0.835 |
| 0.0112 | 64.0 | 12800 | 2.3345 | 0.81 |
| 0.0004 | 65.0 | 13000 | 2.3710 | 0.8075 |
| 0.0714 | 66.0 | 13200 | 2.0527 | 0.82 |
| 0.0008 | 67.0 | 13400 | 2.2063 | 0.8175 |
| 0.0001 | 68.0 | 13600 | 2.5772 | 0.795 |
| 0.0001 | 69.0 | 13800 | 2.4176 | 0.7975 |
| 0.0001 | 70.0 | 14000 | 2.1132 | 0.8125 |
| 0.0017 | 71.0 | 14200 | 2.2163 | 0.8125 |
| 0.2347 | 72.0 | 14400 | 2.0444 | 0.8275 |
| 0.0 | 73.0 | 14600 | 2.3745 | 0.8275 |
| 0.0001 | 74.0 | 14800 | 2.0128 | 0.8325 |
| 0.0037 | 75.0 | 15000 | 2.0867 | 0.8375 |
| 0.0 | 76.0 | 15200 | 2.2285 | 0.825 |
| 0.0001 | 77.0 | 15400 | 2.0214 | 0.8425 |
| 0.0001 | 78.0 | 15600 | 2.4193 | 0.82 |
| 0.0002 | 79.0 | 15800 | 2.4296 | 0.815 |
| 0.1198 | 80.0 | 16000 | 2.3698 | 0.8175 |
| 0.0001 | 81.0 | 16200 | 2.3521 | 0.82 |
| 0.0 | 82.0 | 16400 | 2.1241 | 0.8325 |
| 0.0001 | 83.0 | 16600 | 2.1642 | 0.8275 |
| 0.0005 | 84.0 | 16800 | 2.0545 | 0.835 |
| 0.0 | 85.0 | 17000 | 2.0386 | 0.8475 |
| 0.0003 | 86.0 | 17200 | 2.1348 | 0.83 |
| 0.0004 | 87.0 | 17400 | 2.2024 | 0.83 |
| 0.0 | 88.0 | 17600 | 2.1521 | 0.835 |
| 0.0001 | 89.0 | 17800 | 2.2244 | 0.83 |
| 0.0 | 90.0 | 18000 | 2.1535 | 0.8325 |
| 0.0 | 91.0 | 18200 | 2.2048 | 0.835 |
| 0.1711 | 92.0 | 18400 | 2.1023 | 0.83 |
| 0.0 | 93.0 | 18600 | 2.0534 | 0.845 |
| 0.0 | 94.0 | 18800 | 2.0220 | 0.845 |
| 0.0 | 95.0 | 19000 | 2.0061 | 0.845 |
| 0.0001 | 96.0 | 19200 | 1.9270 | 0.8475 |
| 0.0001 | 97.0 | 19400 | 1.9710 | 0.84 |
| 0.0001 | 98.0 | 19600 | 1.9561 | 0.845 |
| 0.0 | 99.0 | 19800 | 1.9567 | 0.845 |
| 0.0 | 100.0 | 20000 | 1.9551 | 0.85 |
### Framework versions
- Transformers 4.37.2
- Pytorch 2.0.1+cu117
- Datasets 2.16.1
- Tokenizers 0.15.1
|
mohit15/med-llava-v1.5-13b-lora | mohit15 | 2024-05-17T16:22:14Z | 4 | 0 | transformers | [
"transformers",
"safetensors",
"llava_llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-17T16:14:33Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
anonymous1266/MS_Models | anonymous1266 | 2024-05-17T16:19:54Z | 0 | 0 | null | [
"region:us"
] | null | 2024-04-05T20:40:36Z | These models are used as supplementary material for a paper in review. See the code base for more information. |
hjskhan/gemma-2b-fine-tuned-math | hjskhan | 2024-05-17T16:19:21Z | 155 | 0 | transformers | [
"transformers",
"safetensors",
"gemma",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-17T16:14:23Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
pigas/Phi-2-GPTQ-2bits-g128 | pigas | 2024-05-17T16:18:09Z | 76 | 0 | transformers | [
"transformers",
"safetensors",
"phi",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"2-bit",
"gptq",
"region:us"
] | text-generation | 2024-05-17T16:13:50Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
lock-rr/Bahasa-4b-chat-Q4_K_M-GGUF | lock-rr | 2024-05-17T16:18:00Z | 0 | 0 | null | [
"gguf",
"llama-cpp",
"gguf-my-repo",
"id",
"license:other",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-05-17T16:17:50Z | ---
language:
- id
license: other
tags:
- llama-cpp
- gguf-my-repo
license_name: tongyi-qianwen
---
# lock-rr/Bahasa-4b-chat-Q4_K_M-GGUF
This model was converted to GGUF format from [`Bahasalab/Bahasa-4b-chat`](https://huggingface.co/Bahasalab/Bahasa-4b-chat) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Bahasalab/Bahasa-4b-chat) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew.
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
CLI:
```bash
llama-cli --hf-repo lock-rr/Bahasa-4b-chat-Q4_K_M-GGUF --model bahasa-4b-chat.Q4_K_M.gguf -p "The meaning to life and the universe is"
```
Server:
```bash
llama-server --hf-repo lock-rr/Bahasa-4b-chat-Q4_K_M-GGUF --model bahasa-4b-chat.Q4_K_M.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
```
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m bahasa-4b-chat.Q4_K_M.gguf -n 128
```
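For programmatic use, here is a minimal sketch with the `llama-cpp-python` bindings (an assumption — the card itself only documents the llama.cpp CLI and server; the filename matches the Q4_K_M file from this repo):
```python
from llama_cpp import Llama

# Load the quantized checkpoint; n_ctx sets the context window.
llm = Llama(model_path="bahasa-4b-chat.Q4_K_M.gguf", n_ctx=2048)

# Chat-style generation; the chat template stored in the GGUF is used when available.
response = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Halo, apa kabar?"}],
    max_tokens=128,
)
print(response["choices"][0]["message"]["content"])
```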
|
mohit15/med-llava-recall-v1.5-13b-lora | mohit15 | 2024-05-17T16:17:38Z | 4 | 0 | transformers | [
"transformers",
"safetensors",
"llava_llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-17T16:09:38Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
mika5883/finetune_rugec | mika5883 | 2024-05-17T16:15:35Z | 106 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:mika5883/pretrain_rugec",
"base_model:finetune:mika5883/pretrain_rugec",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2024-05-17T16:09:20Z | ---
base_model: mika5883/pretrain_rugec
tags:
- generated_from_trainer
metrics:
- bleu
model-index:
- name: finetune_rugec
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetune_rugec
This model is a fine-tuned version of [mika5883/pretrain_rugec](https://huggingface.co/mika5883/pretrain_rugec) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2114
- Bleu: 60.3251
- Gen Len: 16.2364
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3.83229e-05
- train_batch_size: 128
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|
| No log | 1.0 | 20 | 0.2264 | 59.675 | 16.2312 |
| No log | 2.0 | 40 | 0.2114 | 60.3251 | 16.2364 |
### Framework versions
- Transformers 4.39.3
- Pytorch 2.1.2
- Datasets 2.18.0
- Tokenizers 0.15.2
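Pending more details from the authors, here is a minimal inference sketch (assumptions: the checkpoint is a standard seq2seq T5 loadable with `AutoModelForSeq2SeqLM`, and it takes the raw sentence as input without a task prefix):
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("mika5883/finetune_rugec")
model = AutoModelForSeq2SeqLM.from_pretrained("mika5883/finetune_rugec")

# Correct a (possibly ungrammatical) Russian sentence.
text = "он пошел в магазин вчера вечером"
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```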
|
Cidewalk/autotrain-trained-slady-m38s3 | Cidewalk | 2024-05-17T16:15:15Z | 2 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"llama",
"text-generation",
"autotrain",
"text-generation-inference",
"peft",
"conversational",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-17T15:04:18Z | ---
tags:
- autotrain
- text-generation-inference
- text-generation
- peft
library_name: transformers
widget:
- messages:
- role: user
content: What is your favorite condiment?
license: other
---
# Model Trained Using AutoTrain
This model was trained using AutoTrain. For more information, please visit [AutoTrain](https://hf.co/docs/autotrain).
# Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_path = "PATH_TO_THIS_REPO"
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
model_path,
device_map="auto",
torch_dtype='auto'
).eval()
# Prompt content: "hi"
messages = [
{"role": "user", "content": "hi"}
]
input_ids = tokenizer.apply_chat_template(conversation=messages, tokenize=True, add_generation_prompt=True, return_tensors='pt')
output_ids = model.generate(input_ids.to('cuda'))
response = tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True)
# Model response: "Hello! How can I assist you today?"
print(response)
``` |
XueyingJia/llama3_4_bit_mnli_openai_3_shots_generated_data_openai | XueyingJia | 2024-05-17T16:13:59Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"base_model:finetune:unsloth/llama-3-8b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-05-17T01:53:43Z | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
base_model: unsloth/llama-3-8b-bnb-4bit
---
# Uploaded model
- **Developed by:** XueyingJia
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
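A minimal loading sketch is shown below (assumptions: the repo loads with Unsloth's `FastLanguageModel`, and the MNLI-style prompt format is only a guess — adjust it to match how the model was trained):
```python
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="XueyingJia/llama3_4_bit_mnli_openai_3_shots_generated_data_openai",
    max_seq_length=2048,
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # enable fast inference kernels

prompt = "Premise: A man is playing a guitar.\nHypothesis: A person is making music.\nLabel:"
inputs = tokenizer(prompt, return_tensors="pt").to("cuda")
outputs = model.generate(**inputs, max_new_tokens=8)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```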
|
FusionQuill/Llama-3-8B-Instruct-Onnx | FusionQuill | 2024-05-17T16:13:57Z | 0 | 0 | null | [
"onnx",
"license:llama3",
"region:us"
] | null | 2024-05-16T15:18:59Z | ---
license: llama3
---
ONNX 4-bit version of meta-llama/Meta-Llama-3-8B used by FusionQuill.AI |
Snoopy47/CustomModel_yelp | Snoopy47 | 2024-05-17T16:11:28Z | 119 | 0 | transformers | [
"transformers",
"safetensors",
"distilbert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-05-17T16:10:46Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
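In the meantime, a minimal sketch (an assumption — the tags suggest a DistilBERT text-classification head fine-tuned on Yelp reviews, but the label mapping and preprocessing are not documented):
```python
from transformers import pipeline

# Standard text-classification pipeline; labels come from the checkpoint's config.
classifier = pipeline("text-classification", model="Snoopy47/CustomModel_yelp")
print(classifier("The food was amazing and the service was fast."))
```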
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
leoben/lora_model | leoben | 2024-05-17T16:05:32Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"mistral",
"trl",
"en",
"base_model:unsloth/mistral-7b-bnb-4bit",
"base_model:finetune:unsloth/mistral-7b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-05-17T16:05:08Z | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
base_model: unsloth/mistral-7b-bnb-4bit
---
# Uploaded model
- **Developed by:** leoben
- **License:** apache-2.0
- **Finetuned from model :** unsloth/mistral-7b-bnb-4bit
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
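If the repository holds standard PEFT LoRA adapters (an assumption — the artifact layout is not documented here), they could be attached to the 4-bit base model roughly like this:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Load the 4-bit base model referenced in the card, then apply the adapter weights.
base = AutoModelForCausalLM.from_pretrained("unsloth/mistral-7b-bnb-4bit", device_map="auto")
tokenizer = AutoTokenizer.from_pretrained("unsloth/mistral-7b-bnb-4bit")
model = PeftModel.from_pretrained(base, "leoben/lora_model")

inputs = tokenizer("Hello, how are you?", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=32)[0], skip_special_tokens=True))
```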
|
Ai-Marshal/Mistral_Sentiment_Classification_2024-05-17 | Ai-Marshal | 2024-05-17T16:02:33Z | 0 | 0 | peft | [
"peft",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"dataset:generator",
"base_model:mistralai/Mistral-7B-Instruct-v0.2",
"base_model:adapter:mistralai/Mistral-7B-Instruct-v0.2",
"license:apache-2.0",
"region:us"
] | null | 2024-05-17T15:30:21Z | ---
license: apache-2.0
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: mistralai/Mistral-7B-Instruct-v0.2
datasets:
- generator
model-index:
- name: Mistral_Sentiment_Classification_2024-05-17
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Mistral_Sentiment_Classification_2024-05-17
This model is a fine-tuned version of [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3003
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2.5e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 0.03
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.4299 | 0.4587 | 100 | 0.3342 |
| 0.3351 | 0.9174 | 200 | 0.3189 |
| 0.3218 | 1.3761 | 300 | 0.3112 |
| 0.3164 | 1.8349 | 400 | 0.3067 |
| 0.3021 | 2.2936 | 500 | 0.3035 |
| 0.2892 | 2.7523 | 600 | 0.3006 |
| 0.2825 | 3.2110 | 700 | 0.3010 |
| 0.2719 | 3.6697 | 800 | 0.2994 |
| 0.2807 | 4.1284 | 900 | 0.3002 |
| 0.2622 | 4.5872 | 1000 | 0.3003 |
### Framework versions
- PEFT 0.11.0
- Transformers 4.40.2
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1 |
bhoopendrakumar/passport_10_images | bhoopendrakumar | 2024-05-17T15:57:12Z | 48 | 0 | transformers | [
"transformers",
"safetensors",
"vision-encoder-decoder",
"image-text-to-text",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | image-text-to-text | 2024-05-17T15:52:13Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
apwic/sentiment-lora-r2a1d0.15-0 | apwic | 2024-05-17T15:55:53Z | 0 | 0 | null | [
"tensorboard",
"generated_from_trainer",
"id",
"base_model:indolem/indobert-base-uncased",
"base_model:finetune:indolem/indobert-base-uncased",
"license:mit",
"region:us"
] | null | 2024-05-17T15:22:42Z | ---
language:
- id
license: mit
base_model: indolem/indobert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
- precision
- recall
- f1
model-index:
- name: sentiment-lora-r2a1d0.15-0
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sentiment-lora-r2a1d0.15-0
This model is a fine-tuned version of [indolem/indobert-base-uncased](https://huggingface.co/indolem/indobert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3672
- Accuracy: 0.8321
- Precision: 0.7961
- Recall: 0.8087
- F1: 0.8018
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 30
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|
| 0.563 | 1.0 | 122 | 0.5138 | 0.7243 | 0.6636 | 0.6549 | 0.6586 |
| 0.509 | 2.0 | 244 | 0.5057 | 0.7168 | 0.6763 | 0.6996 | 0.6820 |
| 0.4924 | 3.0 | 366 | 0.4708 | 0.7393 | 0.6877 | 0.6931 | 0.6901 |
| 0.468 | 4.0 | 488 | 0.4379 | 0.7845 | 0.7412 | 0.7200 | 0.7286 |
| 0.4495 | 5.0 | 610 | 0.4466 | 0.7594 | 0.7233 | 0.7548 | 0.7313 |
| 0.4334 | 6.0 | 732 | 0.4041 | 0.8271 | 0.7927 | 0.7851 | 0.7887 |
| 0.415 | 7.0 | 854 | 0.4057 | 0.7995 | 0.7590 | 0.7756 | 0.7660 |
| 0.3974 | 8.0 | 976 | 0.3852 | 0.8321 | 0.7982 | 0.7937 | 0.7959 |
| 0.3849 | 9.0 | 1098 | 0.3829 | 0.8246 | 0.7880 | 0.7909 | 0.7894 |
| 0.3771 | 10.0 | 1220 | 0.3786 | 0.8396 | 0.8065 | 0.8065 | 0.8065 |
| 0.3633 | 11.0 | 1342 | 0.3843 | 0.8296 | 0.7931 | 0.8069 | 0.7993 |
| 0.3591 | 12.0 | 1464 | 0.3833 | 0.8296 | 0.7931 | 0.8069 | 0.7993 |
| 0.354 | 13.0 | 1586 | 0.3705 | 0.8396 | 0.8065 | 0.8065 | 0.8065 |
| 0.3451 | 14.0 | 1708 | 0.3709 | 0.8371 | 0.8028 | 0.8072 | 0.8049 |
| 0.3403 | 15.0 | 1830 | 0.3733 | 0.8321 | 0.7960 | 0.8112 | 0.8027 |
| 0.3282 | 16.0 | 1952 | 0.3715 | 0.8346 | 0.7988 | 0.8155 | 0.8061 |
| 0.3286 | 17.0 | 2074 | 0.3664 | 0.8321 | 0.7965 | 0.8037 | 0.7999 |
| 0.3348 | 18.0 | 2196 | 0.3670 | 0.8271 | 0.7904 | 0.8001 | 0.7949 |
| 0.325 | 19.0 | 2318 | 0.3669 | 0.8321 | 0.7961 | 0.8087 | 0.8018 |
| 0.3266 | 20.0 | 2440 | 0.3672 | 0.8321 | 0.7961 | 0.8087 | 0.8018 |
### Framework versions
- Transformers 4.39.3
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.15.2
|
bartowski/kunoichi-lemon-royale-v2-32K-7B-GGUF | bartowski | 2024-05-17T15:53:39Z | 136 | 2 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"text-generation",
"base_model:grimjim/Mistral-7B-Instruct-demi-merge-v0.2-7B",
"base_model:merge:grimjim/Mistral-7B-Instruct-demi-merge-v0.2-7B",
"base_model:grimjim/kunoichi-lemon-royale-7B",
"base_model:merge:grimjim/kunoichi-lemon-royale-7B",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2024-05-17T15:35:50Z | ---
base_model:
- grimjim/Mistral-7B-Instruct-demi-merge-v0.2-7B
- grimjim/kunoichi-lemon-royale-7B
library_name: transformers
tags:
- mergekit
- merge
license: cc-by-nc-4.0
pipeline_tag: text-generation
quantized_by: bartowski
---
## Llamacpp imatrix Quantizations of kunoichi-lemon-royale-v2-32K-7B
Using <a href="https://github.com/ggerganov/llama.cpp/">llama.cpp</a> release <a href="https://github.com/ggerganov/llama.cpp/releases/tag/b2901">b2901</a> for quantization.
Original model: https://huggingface.co/grimjim/kunoichi-lemon-royale-v2-32K-7B
All quants made using imatrix option with dataset from [here](https://gist.github.com/bartowski1182/b6ac44691e994344625687afe3263b3a)
## Prompt format
```
<s> [INST] {prompt} [/INST]</s>
```
Note that this model does not support a System prompt.
## Download a file (not the whole branch) from below:
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [kunoichi-lemon-royale-v2-32K-7B-Q8_0.gguf](https://huggingface.co/bartowski/kunoichi-lemon-royale-v2-32K-7B-GGUF/blob/main/kunoichi-lemon-royale-v2-32K-7B-Q8_0.gguf) | Q8_0 | 7.69GB | Extremely high quality, generally unneeded but max available quant. |
| [kunoichi-lemon-royale-v2-32K-7B-Q6_K.gguf](https://huggingface.co/bartowski/kunoichi-lemon-royale-v2-32K-7B-GGUF/blob/main/kunoichi-lemon-royale-v2-32K-7B-Q6_K.gguf) | Q6_K | 5.94GB | Very high quality, near perfect, *recommended*. |
| [kunoichi-lemon-royale-v2-32K-7B-Q5_K_M.gguf](https://huggingface.co/bartowski/kunoichi-lemon-royale-v2-32K-7B-GGUF/blob/main/kunoichi-lemon-royale-v2-32K-7B-Q5_K_M.gguf) | Q5_K_M | 5.13GB | High quality, *recommended*. |
| [kunoichi-lemon-royale-v2-32K-7B-Q5_K_S.gguf](https://huggingface.co/bartowski/kunoichi-lemon-royale-v2-32K-7B-GGUF/blob/main/kunoichi-lemon-royale-v2-32K-7B-Q5_K_S.gguf) | Q5_K_S | 4.99GB | High quality, *recommended*. |
| [kunoichi-lemon-royale-v2-32K-7B-Q4_K_M.gguf](https://huggingface.co/bartowski/kunoichi-lemon-royale-v2-32K-7B-GGUF/blob/main/kunoichi-lemon-royale-v2-32K-7B-Q4_K_M.gguf) | Q4_K_M | 4.36GB | Good quality, uses about 4.83 bits per weight, *recommended*. |
| [kunoichi-lemon-royale-v2-32K-7B-Q4_K_S.gguf](https://huggingface.co/bartowski/kunoichi-lemon-royale-v2-32K-7B-GGUF/blob/main/kunoichi-lemon-royale-v2-32K-7B-Q4_K_S.gguf) | Q4_K_S | 4.14GB | Slightly lower quality with more space savings, *recommended*. |
| [kunoichi-lemon-royale-v2-32K-7B-IQ4_NL.gguf](https://huggingface.co/bartowski/kunoichi-lemon-royale-v2-32K-7B-GGUF/blob/main/kunoichi-lemon-royale-v2-32K-7B-IQ4_NL.gguf) | IQ4_NL | 4.12GB | Decent quality, slightly smaller than Q4_K_S with similar performance *recommended*. |
| [kunoichi-lemon-royale-v2-32K-7B-IQ4_XS.gguf](https://huggingface.co/bartowski/kunoichi-lemon-royale-v2-32K-7B-GGUF/blob/main/kunoichi-lemon-royale-v2-32K-7B-IQ4_XS.gguf) | IQ4_XS | 3.90GB | Decent quality, smaller than Q4_K_S with similar performance, *recommended*. |
| [kunoichi-lemon-royale-v2-32K-7B-Q3_K_L.gguf](https://huggingface.co/bartowski/kunoichi-lemon-royale-v2-32K-7B-GGUF/blob/main/kunoichi-lemon-royale-v2-32K-7B-Q3_K_L.gguf) | Q3_K_L | 3.82GB | Lower quality but usable, good for low RAM availability. |
| [kunoichi-lemon-royale-v2-32K-7B-Q3_K_M.gguf](https://huggingface.co/bartowski/kunoichi-lemon-royale-v2-32K-7B-GGUF/blob/main/kunoichi-lemon-royale-v2-32K-7B-Q3_K_M.gguf) | Q3_K_M | 3.51GB | Even lower quality. |
| [kunoichi-lemon-royale-v2-32K-7B-IQ3_M.gguf](https://huggingface.co/bartowski/kunoichi-lemon-royale-v2-32K-7B-GGUF/blob/main/kunoichi-lemon-royale-v2-32K-7B-IQ3_M.gguf) | IQ3_M | 3.28GB | Medium-low quality, new method with decent performance comparable to Q3_K_M. |
| [kunoichi-lemon-royale-v2-32K-7B-IQ3_S.gguf](https://huggingface.co/bartowski/kunoichi-lemon-royale-v2-32K-7B-GGUF/blob/main/kunoichi-lemon-royale-v2-32K-7B-IQ3_S.gguf) | IQ3_S | 3.18GB | Lower quality, new method with decent performance, recommended over Q3_K_S quant, same size with better performance. |
| [kunoichi-lemon-royale-v2-32K-7B-Q3_K_S.gguf](https://huggingface.co/bartowski/kunoichi-lemon-royale-v2-32K-7B-GGUF/blob/main/kunoichi-lemon-royale-v2-32K-7B-Q3_K_S.gguf) | Q3_K_S | 3.16GB | Low quality, not recommended. |
| [kunoichi-lemon-royale-v2-32K-7B-IQ3_XS.gguf](https://huggingface.co/bartowski/kunoichi-lemon-royale-v2-32K-7B-GGUF/blob/main/kunoichi-lemon-royale-v2-32K-7B-IQ3_XS.gguf) | IQ3_XS | 3.01GB | Lower quality, new method with decent performance, slightly better than Q3_K_S. |
| [kunoichi-lemon-royale-v2-32K-7B-IQ3_XXS.gguf](https://huggingface.co/bartowski/kunoichi-lemon-royale-v2-32K-7B-GGUF/blob/main/kunoichi-lemon-royale-v2-32K-7B-IQ3_XXS.gguf) | IQ3_XXS | 2.82GB | Lower quality, new method with decent performance, comparable to Q3 quants. |
| [kunoichi-lemon-royale-v2-32K-7B-Q2_K.gguf](https://huggingface.co/bartowski/kunoichi-lemon-royale-v2-32K-7B-GGUF/blob/main/kunoichi-lemon-royale-v2-32K-7B-Q2_K.gguf) | Q2_K | 2.71GB | Very low quality but surprisingly usable. |
| [kunoichi-lemon-royale-v2-32K-7B-IQ2_M.gguf](https://huggingface.co/bartowski/kunoichi-lemon-royale-v2-32K-7B-GGUF/blob/main/kunoichi-lemon-royale-v2-32K-7B-IQ2_M.gguf) | IQ2_M | 2.50GB | Very low quality, uses SOTA techniques to also be surprisingly usable. |
| [kunoichi-lemon-royale-v2-32K-7B-IQ2_S.gguf](https://huggingface.co/bartowski/kunoichi-lemon-royale-v2-32K-7B-GGUF/blob/main/kunoichi-lemon-royale-v2-32K-7B-IQ2_S.gguf) | IQ2_S | 2.31GB | Very low quality, uses SOTA techniques to be usable. |
| [kunoichi-lemon-royale-v2-32K-7B-IQ2_XS.gguf](https://huggingface.co/bartowski/kunoichi-lemon-royale-v2-32K-7B-GGUF/blob/main/kunoichi-lemon-royale-v2-32K-7B-IQ2_XS.gguf) | IQ2_XS | 2.19GB | Very low quality, uses SOTA techniques to be usable. |
| [kunoichi-lemon-royale-v2-32K-7B-IQ2_XXS.gguf](https://huggingface.co/bartowski/kunoichi-lemon-royale-v2-32K-7B-GGUF/blob/main/kunoichi-lemon-royale-v2-32K-7B-IQ2_XXS.gguf) | IQ2_XXS | 1.99GB | Lower quality, uses SOTA techniques to be usable. |
| [kunoichi-lemon-royale-v2-32K-7B-IQ1_M.gguf](https://huggingface.co/bartowski/kunoichi-lemon-royale-v2-32K-7B-GGUF/blob/main/kunoichi-lemon-royale-v2-32K-7B-IQ1_M.gguf) | IQ1_M | 1.75GB | Extremely low quality, *not* recommended. |
| [kunoichi-lemon-royale-v2-32K-7B-IQ1_S.gguf](https://huggingface.co/bartowski/kunoichi-lemon-royale-v2-32K-7B-GGUF/blob/main/kunoichi-lemon-royale-v2-32K-7B-IQ1_S.gguf) | IQ1_S | 1.61GB | Extremely low quality, *not* recommended. |
## Downloading using huggingface-cli
First, make sure you have huggingface-cli installed:
```
pip install -U "huggingface_hub[cli]"
```
Then, you can target the specific file you want:
```
huggingface-cli download bartowski/kunoichi-lemon-royale-v2-32K-7B-GGUF --include "kunoichi-lemon-royale-v2-32K-7B-Q4_K_M.gguf" --local-dir ./ --local-dir-use-symlinks False
```
If the model is bigger than 50GB, it will have been split into multiple files. In order to download them all to a local folder, run:
```
huggingface-cli download bartowski/kunoichi-lemon-royale-v2-32K-7B-GGUF --include "kunoichi-lemon-royale-v2-32K-7B-Q8_0.gguf/*" --local-dir kunoichi-lemon-royale-v2-32K-7B-Q8_0 --local-dir-use-symlinks False
```
You can either specify a new local-dir (kunoichi-lemon-royale-v2-32K-7B-Q8_0) or download them all in place (./)
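The same download can also be scripted from Python with `huggingface_hub` (a convenience sketch, not part of the original instructions):
```python
from huggingface_hub import hf_hub_download

# Download a single quant file into the current directory.
path = hf_hub_download(
    repo_id="bartowski/kunoichi-lemon-royale-v2-32K-7B-GGUF",
    filename="kunoichi-lemon-royale-v2-32K-7B-Q4_K_M.gguf",
    local_dir=".",
)
print(path)
```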
## Which file should I choose?
A great write up with charts showing various performances is provided by Artefact2 [here](https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9)
The first thing to figure out is how big a model you can run. To do this, you'll need to figure out how much RAM and/or VRAM you have.
If you want your model running as FAST as possible, you'll want to fit the whole thing on your GPU's VRAM. Aim for a quant with a file size 1-2GB smaller than your GPU's total VRAM.
If you want the absolute maximum quality, add both your system RAM and your GPU's VRAM together, then similarly grab a quant with a file size 1-2GB smaller than that total.
Next, you'll need to decide if you want to use an 'I-quant' or a 'K-quant'.
If you don't want to think too much, grab one of the K-quants. These are in format 'QX_K_X', like Q5_K_M.
If you want to get more into the weeds, you can check out this extremely useful feature chart:
[llama.cpp feature matrix](https://github.com/ggerganov/llama.cpp/wiki/Feature-matrix)
But basically, if you're aiming for below Q4, and you're running cuBLAS (Nvidia) or rocBLAS (AMD), you should look towards the I-quants. These are in format IQX_X, like IQ3_M. These are newer and offer better performance for their size.
These I-quants can also be used on CPU and Apple Metal, but will be slower than their K-quant equivalent, so speed vs performance is a tradeoff you'll have to decide.
The I-quants are *not* compatible with Vulkan, which is another backend AMD cards can use, so if you have an AMD card double check whether you're using the rocBLAS build or the Vulkan build. At the time of writing this, LM Studio has a preview with ROCm support, and other inference engines have specific builds for ROCm.
Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski
|
emilykang/medmcqa_question_generation-pharmacology_lora | emilykang | 2024-05-17T15:51:35Z | 7 | 0 | peft | [
"peft",
"safetensors",
"llama",
"trl",
"sft",
"generated_from_trainer",
"dataset:generator",
"base_model:TinyLlama/TinyLlama-1.1B-Chat-v1.0",
"base_model:adapter:TinyLlama/TinyLlama-1.1B-Chat-v1.0",
"license:apache-2.0",
"region:us"
] | null | 2024-05-17T15:11:20Z | ---
license: apache-2.0
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: TinyLlama/TinyLlama-1.1B-Chat-v1.0
datasets:
- generator
model-index:
- name: medmcqa_question_generation-pharmacology_lora
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# medmcqa_question_generation-pharmacology_lora
This model is a fine-tuned version of [TinyLlama/TinyLlama-1.1B-Chat-v1.0](https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v1.0) on the generator dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- num_epochs: 10
### Training results
### Framework versions
- PEFT 0.10.0
- Transformers 4.40.1
- Pytorch 2.2.1+cu118
- Datasets 2.19.0
- Tokenizers 0.19.1 |
Aadithyak/WHISPERtestmodel | Aadithyak | 2024-05-17T15:49:43Z | 116 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2024-05-14T13:06:39Z | ---
license: apache-2.0
---
|
RichardErkhov/walebadr_-_Mistral-7B-v0.1-DPO-gguf | RichardErkhov | 2024-05-17T15:48:13Z | 14 | 0 | null | [
"gguf",
"endpoints_compatible",
"region:us"
] | null | 2024-05-17T14:27:34Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Mistral-7B-v0.1-DPO - GGUF
- Model creator: https://huggingface.co/walebadr/
- Original model: https://huggingface.co/walebadr/Mistral-7B-v0.1-DPO/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [Mistral-7B-v0.1-DPO.Q2_K.gguf](https://huggingface.co/RichardErkhov/walebadr_-_Mistral-7B-v0.1-DPO-gguf/blob/main/Mistral-7B-v0.1-DPO.Q2_K.gguf) | Q2_K | 2.53GB |
| [Mistral-7B-v0.1-DPO.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/walebadr_-_Mistral-7B-v0.1-DPO-gguf/blob/main/Mistral-7B-v0.1-DPO.IQ3_XS.gguf) | IQ3_XS | 2.81GB |
| [Mistral-7B-v0.1-DPO.IQ3_S.gguf](https://huggingface.co/RichardErkhov/walebadr_-_Mistral-7B-v0.1-DPO-gguf/blob/main/Mistral-7B-v0.1-DPO.IQ3_S.gguf) | IQ3_S | 2.96GB |
| [Mistral-7B-v0.1-DPO.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/walebadr_-_Mistral-7B-v0.1-DPO-gguf/blob/main/Mistral-7B-v0.1-DPO.Q3_K_S.gguf) | Q3_K_S | 2.95GB |
| [Mistral-7B-v0.1-DPO.IQ3_M.gguf](https://huggingface.co/RichardErkhov/walebadr_-_Mistral-7B-v0.1-DPO-gguf/blob/main/Mistral-7B-v0.1-DPO.IQ3_M.gguf) | IQ3_M | 3.06GB |
| [Mistral-7B-v0.1-DPO.Q3_K.gguf](https://huggingface.co/RichardErkhov/walebadr_-_Mistral-7B-v0.1-DPO-gguf/blob/main/Mistral-7B-v0.1-DPO.Q3_K.gguf) | Q3_K | 3.28GB |
| [Mistral-7B-v0.1-DPO.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/walebadr_-_Mistral-7B-v0.1-DPO-gguf/blob/main/Mistral-7B-v0.1-DPO.Q3_K_M.gguf) | Q3_K_M | 3.28GB |
| [Mistral-7B-v0.1-DPO.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/walebadr_-_Mistral-7B-v0.1-DPO-gguf/blob/main/Mistral-7B-v0.1-DPO.Q3_K_L.gguf) | Q3_K_L | 3.56GB |
| [Mistral-7B-v0.1-DPO.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/walebadr_-_Mistral-7B-v0.1-DPO-gguf/blob/main/Mistral-7B-v0.1-DPO.IQ4_XS.gguf) | IQ4_XS | 3.67GB |
| [Mistral-7B-v0.1-DPO.Q4_0.gguf](https://huggingface.co/RichardErkhov/walebadr_-_Mistral-7B-v0.1-DPO-gguf/blob/main/Mistral-7B-v0.1-DPO.Q4_0.gguf) | Q4_0 | 3.83GB |
| [Mistral-7B-v0.1-DPO.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/walebadr_-_Mistral-7B-v0.1-DPO-gguf/blob/main/Mistral-7B-v0.1-DPO.IQ4_NL.gguf) | IQ4_NL | 3.87GB |
| [Mistral-7B-v0.1-DPO.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/walebadr_-_Mistral-7B-v0.1-DPO-gguf/blob/main/Mistral-7B-v0.1-DPO.Q4_K_S.gguf) | Q4_K_S | 3.86GB |
| [Mistral-7B-v0.1-DPO.Q4_K.gguf](https://huggingface.co/RichardErkhov/walebadr_-_Mistral-7B-v0.1-DPO-gguf/blob/main/Mistral-7B-v0.1-DPO.Q4_K.gguf) | Q4_K | 4.07GB |
| [Mistral-7B-v0.1-DPO.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/walebadr_-_Mistral-7B-v0.1-DPO-gguf/blob/main/Mistral-7B-v0.1-DPO.Q4_K_M.gguf) | Q4_K_M | 4.07GB |
| [Mistral-7B-v0.1-DPO.Q4_1.gguf](https://huggingface.co/RichardErkhov/walebadr_-_Mistral-7B-v0.1-DPO-gguf/blob/main/Mistral-7B-v0.1-DPO.Q4_1.gguf) | Q4_1 | 4.24GB |
| [Mistral-7B-v0.1-DPO.Q5_0.gguf](https://huggingface.co/RichardErkhov/walebadr_-_Mistral-7B-v0.1-DPO-gguf/blob/main/Mistral-7B-v0.1-DPO.Q5_0.gguf) | Q5_0 | 4.65GB |
| [Mistral-7B-v0.1-DPO.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/walebadr_-_Mistral-7B-v0.1-DPO-gguf/blob/main/Mistral-7B-v0.1-DPO.Q5_K_S.gguf) | Q5_K_S | 4.65GB |
| [Mistral-7B-v0.1-DPO.Q5_K.gguf](https://huggingface.co/RichardErkhov/walebadr_-_Mistral-7B-v0.1-DPO-gguf/blob/main/Mistral-7B-v0.1-DPO.Q5_K.gguf) | Q5_K | 4.78GB |
| [Mistral-7B-v0.1-DPO.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/walebadr_-_Mistral-7B-v0.1-DPO-gguf/blob/main/Mistral-7B-v0.1-DPO.Q5_K_M.gguf) | Q5_K_M | 4.78GB |
| [Mistral-7B-v0.1-DPO.Q5_1.gguf](https://huggingface.co/RichardErkhov/walebadr_-_Mistral-7B-v0.1-DPO-gguf/blob/main/Mistral-7B-v0.1-DPO.Q5_1.gguf) | Q5_1 | 5.07GB |
| [Mistral-7B-v0.1-DPO.Q6_K.gguf](https://huggingface.co/RichardErkhov/walebadr_-_Mistral-7B-v0.1-DPO-gguf/blob/main/Mistral-7B-v0.1-DPO.Q6_K.gguf) | Q6_K | 5.53GB |
| [Mistral-7B-v0.1-DPO.Q8_0.gguf](https://huggingface.co/RichardErkhov/walebadr_-_Mistral-7B-v0.1-DPO-gguf/blob/main/Mistral-7B-v0.1-DPO.Q8_0.gguf) | Q8_0 | 7.17GB |
Original model description:
---
license: apache-2.0
---
Mistral-7b-v0.1-DPO is a fine-tuned adapter of the original Mistral-7b model. In this adapter, I fine-tune the LM head in addition to the regular modules that are normally fine-tuned. Below is the list of the fine-tuned modules:
'k_proj', 'gate_proj', 'v_proj', 'up_proj', 'q_proj', 'o_proj', 'down_proj', 'lm_head'
|
jinwoo1126/llama-3-8b-open-korean-it | jinwoo1126 | 2024-05-17T15:47:20Z | 4 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-17T15:43:01Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
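Until official instructions are added, a minimal sketch (an assumption — the tags suggest a Llama-3 8B chat model that follows the standard chat template):
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "jinwoo1126/llama-3-8b-open-korean-it"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

messages = [{"role": "user", "content": "안녕하세요, 간단히 자기소개를 해주세요."}]
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
outputs = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```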
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
damgomz/ThunBERT_bs8_lr5 | damgomz | 2024-05-17T15:46:48Z | 108 | 0 | transformers | [
"transformers",
"safetensors",
"albert",
"pretraining",
"fill-mask",
"en",
"endpoints_compatible",
"region:us"
] | fill-mask | 2024-05-15T09:46:09Z | ---
language: en
tags:
- fill-mask
kwargs:
timestamp: '2024-05-17T17:46:44'
project_name: ThunBERT_bs8_lr5_emissions_tracker
run_id: 525cc3ea-c30d-41f0-83c7-26fb501d8395
duration: 199505.3919699192
emissions: 0.2088190820973
emissions_rate: 1.0466839018004345e-06
cpu_power: 42.5
gpu_power: 0.0
ram_power: 37.5
cpu_energy: 2.3552678664051694
gpu_energy: 0
ram_energy: 2.078164164704575
energy_consumed: 4.433432031109744
country_name: Switzerland
country_iso_code: CHE
region: .nan
cloud_provider: .nan
cloud_region: .nan
os: Linux-5.14.0-70.30.1.el9_0.x86_64-x86_64-with-glibc2.34
python_version: 3.10.4
codecarbon_version: 2.3.4
cpu_count: 4
cpu_model: Intel(R) Xeon(R) Platinum 8360Y CPU @ 2.40GHz
gpu_count: .nan
gpu_model: .nan
longitude: .nan
latitude: .nan
ram_total_size: 100
tracking_mode: machine
on_cloud: N
pue: 1.0
---
## Environmental Impact (CODE CARBON DEFAULT)
| Metric | Value |
|--------------------------|---------------------------------|
| Duration (in seconds) | 199505.3919699192 |
| Emissions (Co2eq in kg) | 0.2088190820973 |
| CPU power (W) | 42.5 |
| GPU power (W) | [No GPU] |
| RAM power (W) | 37.5 |
| CPU energy (kWh) | 2.3552678664051694 |
| GPU energy (kWh) | [No GPU] |
| RAM energy (kWh) | 2.078164164704575 |
| Consumed energy (kWh) | 4.433432031109744 |
| Country name | Switzerland |
| Cloud provider | nan |
| Cloud region | nan |
| CPU count | 4 |
| CPU model | Intel(R) Xeon(R) Platinum 8360Y CPU @ 2.40GHz |
| GPU count | nan |
| GPU model | nan |
## Environmental Impact (for one core)
| Metric | Value |
|--------------------------|---------------------------------|
| CPU energy (kWh) | 0.38404787954209446 |
| Emissions (Co2eq in kg) | 0.07813961185488502 |
## Note
15 May 2024
## My Config
| Config | Value |
|--------------------------|-----------------|
| checkpoint | albert-base-v2 |
| model_name | ThunBERT_bs8_lr5 |
| sequence_length | 400 |
| num_epoch | 6 |
| learning_rate | 5e-05 |
| batch_size | 8 |
| weight_decay | 0.0 |
| warm_up_prop | 0.0 |
| drop_out_prob | 0.1 |
| packing_length | 100 |
| train_test_split | 0.2 |
| num_steps | 82627 |
## Training and Testing steps
Epoch | Train Loss | Test Loss
---|---|---
| 0.0 | 7.067743 | 3.953574 |
| 0.5 | 3.522399 | 3.376064 |
| 1.0 | 3.303520 | 3.226876 |
| 1.5 | 3.154914 | 3.167486 |
| 2.0 | 3.058335 | 3.051049 |
| 2.5 | 2.983440 | 2.994546 |
| 3.0 | 2.966602 | 2.926526 |
| 3.5 | 2.846851 | 2.879127 |
| 4.0 | 2.785210 | 2.832286 |
| 4.5 | 2.718725 | 2.795912 |
| 5.0 | 2.670722 | 2.733300 |
| 5.5 | 2.628934 | 2.693741 |
| 6.0 | 2.589258 | 2.672380 |
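
A minimal masked-language-modeling sketch (an assumption — the card only reports pretraining losses, but the checkpoint is tagged for fill-mask):
```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="damgomz/ThunBERT_bs8_lr5")
print(fill_mask("The treatment was [MASK] effective."))
```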
|
Resi/finetune-donut-doctype-v2 | Resi | 2024-05-17T15:38:27Z | 49 | 0 | transformers | [
"transformers",
"safetensors",
"vision-encoder-decoder",
"image-text-to-text",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | image-text-to-text | 2024-05-17T15:37:48Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
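Until official instructions are added, here is a rough sketch (several assumptions — the repo name suggests a Donut-style vision-encoder-decoder fine-tuned for document-type prediction, and the task start token below is a guess):
```python
from PIL import Image
from transformers import DonutProcessor, VisionEncoderDecoderModel

processor = DonutProcessor.from_pretrained("Resi/finetune-donut-doctype-v2")
model = VisionEncoderDecoderModel.from_pretrained("Resi/finetune-donut-doctype-v2")

image = Image.open("document.png").convert("RGB")
pixel_values = processor(image, return_tensors="pt").pixel_values

# The task prompt token is hypothetical; use the one the model was fine-tuned with.
task_prompt = "<s_doctype>"
decoder_input_ids = processor.tokenizer(task_prompt, add_special_tokens=False, return_tensors="pt").input_ids

outputs = model.generate(pixel_values, decoder_input_ids=decoder_input_ids, max_length=64)
print(processor.batch_decode(outputs, skip_special_tokens=True)[0])
```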
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Rhma/LlamaDialo10 | Rhma | 2024-05-17T15:34:07Z | 4 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-17T15:30:40Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
CourtneyCC/Haxiro | CourtneyCC | 2024-05-17T15:33:43Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2024-05-17T15:33:43Z | ---
license: apache-2.0
---
|
vuminhtue/Bert_NER_CoNLL2003 | vuminhtue | 2024-05-17T15:29:22Z | 107 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"token-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2024-05-17T15:29:04Z | ---
license: apache-2.0
base_model: google-bert/bert-base-uncased
tags:
- generated_from_trainer
model-index:
- name: Bert_NER_CoNLL2003
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/tuevu_smu/huggingface/runs/6j8wt2rd)
# Bert_NER_CoNLL2003
This model is a fine-tuned version of [google-bert/bert-base-uncased](https://huggingface.co/google-bert/bert-base-uncased) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
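Pending more details from the author, a minimal inference sketch with the 🤗 `pipeline` API should work, assuming the tokenizer was uploaded alongside the fine-tuned weights:

```python
from transformers import pipeline

# aggregation_strategy="simple" merges sub-word pieces into whole entity spans
ner = pipeline(
    "token-classification",
    model="vuminhtue/Bert_NER_CoNLL2003",
    aggregation_strategy="simple",
)

print(ner("Hugging Face is based in New York City."))
```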
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.41.0.dev0
- Pytorch 1.13.1
- Datasets 2.18.0
- Tokenizers 0.19.1
|
damgomz/ft_bs64_lr7_base_x4 | damgomz | 2024-05-17T15:28:44Z | 113 | 0 | transformers | [
"transformers",
"safetensors",
"albert",
"text-classification",
"fill-mask",
"en",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2024-05-17T09:22:30Z | ---
language: en
tags:
- fill-mask
kwargs:
timestamp: '2024-05-17T17:28:41'
project_name: ft_bs64_lr7_base_x4_emissions_tracker
run_id: e2122246-4fcb-4eba-8efa-d204cb43712a
duration: 26475.748703956604
emissions: 0.0173198704923115
emissions_rate: 6.541786857843705e-07
cpu_power: 42.5
gpu_power: 0.0
ram_power: 7.5
cpu_energy: 0.3125603431133757
gpu_energy: 0
ram_energy: 0.0551573378766576
energy_consumed: 0.3677176809900338
country_name: Switzerland
country_iso_code: CHE
region: .nan
cloud_provider: .nan
cloud_region: .nan
os: Linux-5.14.0-70.30.1.el9_0.x86_64-x86_64-with-glibc2.34
python_version: 3.10.4
codecarbon_version: 2.3.4
cpu_count: 3
cpu_model: Intel(R) Xeon(R) Platinum 8360Y CPU @ 2.40GHz
gpu_count: .nan
gpu_model: .nan
longitude: .nan
latitude: .nan
ram_total_size: 20
tracking_mode: machine
on_cloud: N
pue: 1.0
---
## Environmental Impact (CODE CARBON DEFAULT)
| Metric | Value |
|--------------------------|---------------------------------|
| Duration (in seconds) | 26475.748703956604 |
| Emissions (Co2eq in kg) | 0.0173198704923115 |
| CPU power (W) | 42.5 |
| GPU power (W) | [No GPU] |
| RAM power (W) | 7.5 |
| CPU energy (kWh) | 0.3125603431133757 |
| GPU energy (kWh) | [No GPU] |
| RAM energy (kWh) | 0.0551573378766576 |
| Consumed energy (kWh) | 0.3677176809900338 |
| Country name | Switzerland |
| Cloud provider | nan |
| Cloud region | nan |
| CPU count | 3 |
| CPU model | Intel(R) Xeon(R) Platinum 8360Y CPU @ 2.40GHz |
| GPU count | nan |
| GPU model | nan |
## Environmental Impact (for one core)
| Metric | Value |
|--------------------------|---------------------------------|
| CPU energy (kWh) | 0.05096581625511646 |
| Emissions (Co2eq in kg) | 0.010369668242383001 |
## Note
17 May 2024
## My Config
| Config | Value |
|--------------------------|-----------------|
| checkpoint | albert-base-v2 |
| model_name | ft_bs64_lr7_base_x4 |
| sequence_length | 400 |
| num_epoch | 15 |
| learning_rate | 5e-07 |
| batch_size | 64 |
| weight_decay | 0.0 |
| warm_up_prop | 0.0 |
| drop_out_prob | 0.1 |
| packing_length | 100 |
| train_test_split | 0.2 |
| num_steps | 81450 |
## Training and Testing steps
Epoch | Train Loss | Test Loss | Accuracy | Recall
---|---|---|---|---
| 0 | 0.640842 | 0.587673 | 0.723122 | 0.808282 |
| 1 | 0.545254 | 0.528299 | 0.735641 | 0.806748 |
| 2 | 0.498372 | 0.501702 | 0.748895 | 0.838957 |
| 3 | 0.471714 | 0.477527 | 0.768778 | 0.829755 |
| 4 | 0.449965 | 0.455789 | 0.776878 | 0.846626 |
| 5 | 0.423347 | 0.442834 | 0.784978 | 0.884969 |
| 6 | 0.402362 | 0.414682 | 0.806333 | 0.832822 |
| 7 | 0.380589 | 0.400692 | 0.811487 | 0.855828 |
| 8 | 0.367983 | 0.393791 | 0.814433 | 0.852761 |
| 9 | 0.355760 | 0.386102 | 0.822533 | 0.825153 |
| 10 | 0.348000 | 0.381949 | 0.824006 | 0.865031 |
| 11 | 0.343452 | 0.382962 | 0.824006 | 0.875767 |
| 12 | 0.334328 | 0.381772 | 0.824006 | 0.878834 |
| 13 | 0.328141 | 0.387798 | 0.823270 | 0.892638 |
| 14 | 0.323139 | 0.377890 | 0.824742 | 0.878834 |
|
lakshankarunathilake/biomegatron-ner_model | lakshankarunathilake | 2024-05-17T15:27:46Z | 121 | 0 | transformers | [
"transformers",
"safetensors",
"megatron-bert",
"token-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2024-05-17T15:19:01Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Yntec/ModernDisney | Yntec | 2024-05-17T15:22:55Z | 218 | 0 | diffusers | [
"diffusers",
"safetensors",
"3D Animation",
"Anime",
"Art",
"XpucT",
"nitrosocke",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"en",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2024-05-17T12:16:42Z | ---
license: creativeml-openrail-m
language:
- en
library_name: diffusers
pipeline_tag: text-to-image
tags:
- 3D Animation
- Anime
- Art
- XpucT
- nitrosocke
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
inference: true
---
Use "Modern Disney" in your prompts if you want the effect.
# Modern Disney
Mo-Di-Diffusion mixed with Deliberate to create a model that falls back to Deliberate when you don't use this token. The VAE version has the kl-f8-anime2 VAE baked in. Since I released another model that mixes Mo-Di-Diffusion, I feel I need to justify this one; check this comparison:

(Click for larger)
Neither produced a Pikachu, but the point is you don't need to have "person human" as a negative prompt anymore!
Samples and prompts:

(Click for larger)
Top left: cute modern disney pikachu sitting
Top right: Cartoon Pretty CUTE Girl, sitting on Overwatch, DETAILED CHIBI EYES, soaking in the rain, gorgeous detailed hair, Ponytail, Magazine ad, iconic, 1940, sharp focus, aerial photography, trending on artstation, peter lloyd. Illustration By ROSSDRAWS and Dave Rapoza and artgerm and leyendecker and Clay
Bottom left: modern disney loli girl
Bottom right: disney movie modern man and little daughter ponytail, Santa claus. cute faces
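For 🧨 diffusers users, a minimal sketch along these lines should work (the sampler settings are illustrative, not a recommendation from the author):

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "Yntec/ModernDisney",
    torch_dtype=torch.float16,
).to("cuda")

# "Modern Disney" in the prompt triggers the effect
prompt = "cute modern disney pikachu sitting"
image = pipe(prompt, num_inference_steps=25).images[0]
image.save("modern_disney.png")
```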
Original pages:
https://huggingface.co/nitrosocke/mo-di-diffusion
https://huggingface.co/XpucT/Deliberate
# Recipe
- SuperMerger Weight sum Use MBW 1,0,0,0,0,0,0,1,1,1,1,1,1,0,1,1,1,1,1,1,0,0,0,0,0,0
Model A:
Deliberate
Model B:
Mo-Di-Diffusion
Output Model:
Modern Disney
Bake kl-f8-anime2.ckpt VAE:
Modern Disney VAE |
apwic/sentiment-lora-r2a1d0.1-0 | apwic | 2024-05-17T15:22:24Z | 0 | 0 | null | [
"tensorboard",
"generated_from_trainer",
"id",
"base_model:indolem/indobert-base-uncased",
"base_model:finetune:indolem/indobert-base-uncased",
"license:mit",
"region:us"
] | null | 2024-05-17T14:49:10Z | ---
language:
- id
license: mit
base_model: indolem/indobert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
- precision
- recall
- f1
model-index:
- name: sentiment-lora-r2a1d0.1-0
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sentiment-lora-r2a1d0.1-0
This model is a fine-tuned version of [indolem/indobert-base-uncased](https://huggingface.co/indolem/indobert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3608
- Accuracy: 0.8471
- Precision: 0.8138
- Recall: 0.8243
- F1: 0.8187
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 30
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|
| 0.5634 | 1.0 | 122 | 0.5108 | 0.7193 | 0.6572 | 0.6489 | 0.6524 |
| 0.5081 | 2.0 | 244 | 0.5049 | 0.7218 | 0.6829 | 0.7082 | 0.6888 |
| 0.4924 | 3.0 | 366 | 0.4667 | 0.7494 | 0.6977 | 0.6977 | 0.6977 |
| 0.4698 | 4.0 | 488 | 0.4392 | 0.7794 | 0.7349 | 0.7114 | 0.7207 |
| 0.4519 | 5.0 | 610 | 0.4548 | 0.7469 | 0.7169 | 0.7534 | 0.7226 |
| 0.4356 | 6.0 | 732 | 0.4111 | 0.8145 | 0.7770 | 0.7713 | 0.7740 |
| 0.421 | 7.0 | 854 | 0.4101 | 0.7945 | 0.7538 | 0.7721 | 0.7612 |
| 0.4039 | 8.0 | 976 | 0.3829 | 0.8296 | 0.7949 | 0.7919 | 0.7934 |
| 0.3887 | 9.0 | 1098 | 0.3800 | 0.8321 | 0.7972 | 0.7987 | 0.7979 |
| 0.3797 | 10.0 | 1220 | 0.3768 | 0.8371 | 0.8044 | 0.7997 | 0.8020 |
| 0.368 | 11.0 | 1342 | 0.3842 | 0.8221 | 0.7846 | 0.8016 | 0.7918 |
| 0.3598 | 12.0 | 1464 | 0.3778 | 0.8271 | 0.7902 | 0.8051 | 0.7968 |
| 0.3548 | 13.0 | 1586 | 0.3624 | 0.8471 | 0.8167 | 0.8118 | 0.8142 |
| 0.3469 | 14.0 | 1708 | 0.3637 | 0.8446 | 0.8120 | 0.8151 | 0.8135 |
| 0.3431 | 15.0 | 1830 | 0.3685 | 0.8396 | 0.8049 | 0.8165 | 0.8102 |
| 0.3275 | 16.0 | 1952 | 0.3664 | 0.8371 | 0.8017 | 0.8172 | 0.8086 |
| 0.3288 | 17.0 | 2074 | 0.3590 | 0.8396 | 0.8055 | 0.8115 | 0.8084 |
| 0.3335 | 18.0 | 2196 | 0.3607 | 0.8471 | 0.8138 | 0.8243 | 0.8187 |
| 0.3239 | 19.0 | 2318 | 0.3613 | 0.8446 | 0.8107 | 0.8226 | 0.8161 |
| 0.327 | 20.0 | 2440 | 0.3608 | 0.8471 | 0.8138 | 0.8243 | 0.8187 |
### Framework versions
- Transformers 4.39.3
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.15.2
|
mharb/q-Taxi-v3 | mharb | 2024-05-17T15:20:02Z | 0 | 0 | null | [
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2024-05-17T15:20:00Z | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.54 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
import gymnasium as gym  # or `import gym` on older course notebooks

# load_from_hub is the helper defined in the Deep RL Course notebook
model = load_from_hub(repo_id="mharb/q-Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
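To watch the agent act greedily with respect to the learned Q-table, a minimal rollout sketch could look like this (assuming the Gymnasium step API and that the pickled dict stores the table under a `qtable` key, as in the course helper):

```python
import numpy as np

state, info = env.reset()
done, total_reward = False, 0

while not done:
    action = int(np.argmax(model["qtable"][state]))  # greedy action from the Q-table
    state, reward, terminated, truncated, info = env.step(action)
    total_reward += reward
    done = terminated or truncated

print(f"Episode return: {total_reward}")
```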
|
damgomz/ft_bs64_lr7_base_x2 | damgomz | 2024-05-17T15:18:51Z | 108 | 0 | transformers | [
"transformers",
"safetensors",
"albert",
"text-classification",
"fill-mask",
"en",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2024-05-16T15:09:06Z | ---
language: en
tags:
- fill-mask
kwargs:
timestamp: '2024-05-17T17:18:48'
project_name: ft_bs64_lr7_base_x2_emissions_tracker
run_id: 0f492e82-6d6f-4e1a-913d-bcd043cc24c3
duration: 26423.72340154648
emissions: 0.0172858379899915
emissions_rate: 6.541787365583701e-07
cpu_power: 42.5
gpu_power: 0.0
ram_power: 7.5
cpu_energy: 0.3119461647588344
gpu_energy: 0
ram_energy: 0.0550489731361468
energy_consumed: 0.3669951378949813
country_name: Switzerland
country_iso_code: CHE
region: .nan
cloud_provider: .nan
cloud_region: .nan
os: Linux-5.14.0-70.30.1.el9_0.x86_64-x86_64-with-glibc2.34
python_version: 3.10.4
codecarbon_version: 2.3.4
cpu_count: 3
cpu_model: Intel(R) Xeon(R) Platinum 8360Y CPU @ 2.40GHz
gpu_count: .nan
gpu_model: .nan
longitude: .nan
latitude: .nan
ram_total_size: 20
tracking_mode: machine
on_cloud: N
pue: 1.0
---
## Environmental Impact (CODE CARBON DEFAULT)
| Metric | Value |
|--------------------------|---------------------------------|
| Duration (in seconds) | 26423.72340154648 |
| Emissions (Co2eq in kg) | 0.0172858379899915 |
| CPU power (W) | 42.5 |
| GPU power (W) | [No GPU] |
| RAM power (W) | 7.5 |
| CPU energy (kWh) | 0.3119461647588344 |
| GPU energy (kWh) | [No GPU] |
| RAM energy (kWh) | 0.0550489731361468 |
| Consumed energy (kWh) | 0.3669951378949813 |
| Country name | Switzerland |
| Cloud provider | nan |
| Cloud region | nan |
| CPU count | 3 |
| CPU model | Intel(R) Xeon(R) Platinum 8360Y CPU @ 2.40GHz |
| GPU count | nan |
| GPU model | nan |
## Environmental Impact (for one core)
| Metric | Value |
|--------------------------|---------------------------------|
| CPU energy (kWh) | 0.050865667547976966 |
| Emissions (Co2eq in kg) | 0.010349291665605703 |
## Note
17 May 2024
## My Config
| Config | Value |
|--------------------------|-----------------|
| checkpoint | albert-base-v2 |
| model_name | ft_bs64_lr7_base_x2 |
| sequence_length | 400 |
| num_epoch | 15 |
| learning_rate | 5e-07 |
| batch_size | 64 |
| weight_decay | 0.0 |
| warm_up_prop | 0.0 |
| drop_out_prob | 0.1 |
| packing_length | 100 |
| train_test_split | 0.2 |
| num_steps | 81450 |
## Training and Testing steps
Epoch | Train Loss | Test Loss | Accuracy | Recall
---|---|---|---|---
| 0 | 0.664386 | 0.632545 | 0.740795 | 0.837423 |
| 1 | 0.589979 | 0.557067 | 0.743741 | 0.838957 |
| 2 | 0.512979 | 0.506961 | 0.767305 | 0.808282 |
| 3 | 0.471688 | 0.479002 | 0.779823 | 0.869632 |
| 4 | 0.439011 | 0.454375 | 0.788660 | 0.803681 |
| 5 | 0.413775 | 0.434841 | 0.802651 | 0.848160 |
| 6 | 0.392609 | 0.420262 | 0.807806 | 0.842025 |
| 7 | 0.380271 | 0.409428 | 0.809278 | 0.803681 |
| 8 | 0.365458 | 0.399789 | 0.825479 | 0.861963 |
| 9 | 0.353928 | 0.391207 | 0.829161 | 0.858896 |
| 10 | 0.342954 | 0.388762 | 0.827688 | 0.863497 |
| 11 | 0.335871 | 0.389029 | 0.827688 | 0.880368 |
| 12 | 0.328735 | 0.381536 | 0.827688 | 0.863497 |
| 13 | 0.320389 | 0.374983 | 0.827688 | 0.837423 |
| 14 | 0.314211 | 0.374905 | 0.826951 | 0.826687 |
|
emilykang/medmcqa_question_generation-medicine_lora | emilykang | 2024-05-17T15:11:15Z | 0 | 0 | peft | [
"peft",
"safetensors",
"llama",
"trl",
"sft",
"generated_from_trainer",
"dataset:generator",
"base_model:TinyLlama/TinyLlama-1.1B-Chat-v1.0",
"base_model:adapter:TinyLlama/TinyLlama-1.1B-Chat-v1.0",
"license:apache-2.0",
"region:us"
] | null | 2024-05-17T14:09:00Z | ---
license: apache-2.0
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: TinyLlama/TinyLlama-1.1B-Chat-v1.0
datasets:
- generator
model-index:
- name: medmcqa_question_generation-medicine_lora
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# medmcqa_question_generation-medicine_lora
This model is a fine-tuned version of [TinyLlama/TinyLlama-1.1B-Chat-v1.0](https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v1.0) on the generator dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- num_epochs: 10
### Training results
### Framework versions
- PEFT 0.10.0
- Transformers 4.40.1
- Pytorch 2.2.1+cu118
- Datasets 2.19.0
- Tokenizers 0.19.1 |
common-canvas/CommonCanvas-XL-C | common-canvas | 2024-05-17T15:11:00Z | 32 | 33 | diffusers | [
"diffusers",
"onnx",
"safetensors",
"common-canvas",
"stable-diffusion",
"sdxl",
"en",
"dataset:common-canvas/commoncatalog-cc-by-sa",
"dataset:common-canvas/commoncatalog-cc-by",
"arxiv:2310.16825",
"license:cc-by-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] | text-to-image | 2024-04-19T09:59:48Z | ---
license: cc-by-sa-4.0
tags:
- common-canvas
- stable-diffusion
- sdxl
datasets:
- common-canvas/commoncatalog-cc-by-sa
- common-canvas/commoncatalog-cc-by
language:
- en
---
# CommonCanvas-XL-C
## Summary
CommonCanvas is a family of latent diffusion models capable of generating images from a given text prompt. The architecture is based on Stable Diffusion XL. Different CommonCanvas models are trained exclusively on subsets of the CommonCatalog dataset (see the Data Card), a large dataset of Creative Commons licensed images with synthetic captions produced using a pre-trained BLIP-2 captioning model.
**Input:** CommonCatalog Text Captions
**Output:** CommonCatalog Images
**Architecture:** Stable Diffusion XL
**Version Number:** 0.1
The goal of this project is to produce a model that is competitive with Stable Diffusion XL, but to do so using an easily accessible dataset of known provenance. Doing so makes replicating the model significantly easier and provides proper attribution to all the Creative Commons work used to train the model. The exact training recipe of the model can be found in the paper: https://arxiv.org/abs/2310.16825
## Performance Limitations
CommonCanvas under-performs in several categories, including faces, general photography, and paintings (see paper, Figure 8). These datasets all originated from the Conceptual Captions dataset, which relies on web-scraped data. These web-sourced captions, while abundant, may not always align with human-generated language nuances. Transitioning to synthetic captions introduces certain performance challenges; however, the drop in performance is not as dramatic as one might assume.
## Training Dataset Limitations
The model is trained on 10-year-old YFCC data and may not have modern concepts or recent events in its training corpus. Performance will be worse on certain proper nouns or specific celebrities, but this is a feature, not a bug. The model may not generate known artwork, individual celebrities, or specific locations due to the autogenerated nature of the caption data.
Note: The non-commercial variants of this model are explicitly not intended to be used.
* It is trained on data derived from the Flickr100M dataset. The information is dated and known to have a bias towards internet-connected Western countries. Some areas, such as the Global South, lack representation.
## Associated Risks
* Text in images produced by the model will likely be difficult to read.
* The model struggles with more complex tasks that require compositional understanding
* It may not accurately generate faces or representations of specific people.
* The model primarily learned from English descriptions and may not perform as effectively in other languages.
* The autoencoder aspect of the model introduces some information loss.
* It may be possible to guide the model to generate objectionable content, i.e. nudity or other NSFW material.
## Intended Uses
* Using the model for generative AI research
* Safe deployment of models which have the potential to generate harmful content.
* Probing and understanding the limitations and biases of generative models.
* Generation of artworks and use in design and other artistic processes.
* Applications in educational or creative tools.
* Research on generative models.
## Usage
We recommend using the MosaicML Diffusion Repo to finetune / train the model: https://github.com/mosaicml/diffusion.
Example finetuning code coming soon.
### Spaces demo
Try the model demo on [Hugging Face Spaces](https://huggingface.co/spaces/common-canvas/CommonCanvas)
### Inference with 🧨 diffusers
```py
import torch
from diffusers import StableDiffusionXLPipeline

device = "cuda"  # or "cpu" if no GPU is available

pipe = StableDiffusionXLPipeline.from_pretrained(
    "common-canvas/CommonCanvas-XL-C",
    custom_pipeline="multimodalart/sdxl_perturbed_attention_guidance",  # read more at https://huggingface.co/multimodalart/sdxl_perturbed_attention_guidance
    torch_dtype=torch.float16,
).to(device)
prompt = "a cat sitting in a car seat"
image = pipe(prompt, num_inference_steps=25).images[0]
```
### Inference with ComfyUI / AUTOMATIC1111
[Download safetensors ⬇️](https://huggingface.co/common-canvas/CommonCanvas-XLC/resolve/main/commoncanvas_xl_c.safetensors?download=true)
## Evaluation/Validation
We validated the model against Stability AI’s SD2 model and compared the two in a human user study.
## Acknowledgements
We thank @multimodalart, @Wauplin, and @lhoestq at Hugging Face for helping us host the dataset and model weights.
## Citation
```
@article{gokaslan2023commoncanvas,
title={CommonCanvas: An Open Diffusion Model Trained with Creative-Commons Images},
author={Gokaslan, Aaron and Cooper, A Feder and Collins, Jasmine and Seguin, Landan and Jacobson, Austin and Patel, Mihir and Frankle, Jonathan and Stephenson, Cory and Kuleshov, Volodymyr},
journal={arXiv preprint arXiv:2310.16825},
year={2023}
}
``` |
Plmanwaring/ADR_Detector_Toxigen | Plmanwaring | 2024-05-17T15:09:29Z | 108 | 0 | transformers | [
"transformers",
"safetensors",
"roberta",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-05-17T15:08:53Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
matthieuzone/TETE_DE_MOINES2 | matthieuzone | 2024-05-17T15:06:20Z | 2 | 0 | diffusers | [
"diffusers",
"text-to-image",
"diffusers-training",
"lora",
"template:sd-lora",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] | text-to-image | 2024-05-17T15:02:42Z | ---
license: openrail++
library_name: diffusers
tags:
- text-to-image
- text-to-image
- diffusers-training
- diffusers
- lora
- template:sd-lora
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: a photo of TÊTE DE MOINES cheese
widget: []
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# SDXL LoRA DreamBooth - matthieuzone/TETE_DE_MOINES2
<Gallery />
## Model description
These are matthieuzone/TETE_DE_MOINES2 LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0.
The weights were trained using [DreamBooth](https://dreambooth.github.io/).
LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
## Trigger words
You should use a photo of TÊTE DE MOINES cheese to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](matthieuzone/TETE_DE_MOINES2/tree/main) them in the Files & versions tab.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
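Until the official snippet is added, a minimal sketch like the following should work, assuming standard 🧨 diffusers LoRA loading on top of the SDXL base (with the fp16-fix VAE mentioned above):

```python
import torch
from diffusers import AutoencoderKL, DiffusionPipeline

vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16)
pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    vae=vae,
    torch_dtype=torch.float16,
).to("cuda")

# load the DreamBooth LoRA weights from this repository
pipe.load_lora_weights("matthieuzone/TETE_DE_MOINES2")

prompt = "a photo of TÊTE DE MOINES cheese"
image = pipe(prompt, num_inference_steps=30).images[0]
image.save("tete_de_moines.png")
```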
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model] |
hjskhan/gemma-2b-fine-tuned-docbot | hjskhan | 2024-05-17T15:04:26Z | 154 | 0 | transformers | [
"transformers",
"safetensors",
"gemma",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-17T15:01:14Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
minindu-liya99/Reinforce-CartPole-v1 | minindu-liya99 | 2024-05-17T15:02:22Z | 0 | 0 | null | [
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] | reinforcement-learning | 2024-04-16T14:55:50Z | ---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-CartPole-v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1**.
To learn to use this model and train yours, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
akbargherbal/test_proof_of_concept_01 | akbargherbal | 2024-05-17T15:02:02Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"gemma",
"trl",
"en",
"base_model:unsloth/gemma-7b-it-bnb-4bit",
"base_model:finetune:unsloth/gemma-7b-it-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-05-17T15:01:22Z | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- gemma
- trl
base_model: unsloth/gemma-7b-it-bnb-4bit
---
# Uploaded model
- **Developed by:** akbargherbal
- **License:** apache-2.0
- **Finetuned from model :** unsloth/gemma-7b-it-bnb-4bit
This gemma model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
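A quick way to load it for inference is via Unsloth's `FastLanguageModel` (a sketch; the sequence length and 4-bit flag below are assumptions, not values from the author):

```python
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="akbargherbal/test_proof_of_concept_01",
    max_seq_length=2048,  # assumption; adjust to your use case
    load_in_4bit=True,    # assumption; matches the 4-bit base model
)
FastLanguageModel.for_inference(model)  # enable Unsloth's faster inference path

inputs = tokenizer("Hello, how are you?", return_tensors="pt").to("cuda")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```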
|
Shadow09/myphi2-customdata-tiny-chatbot | Shadow09 | 2024-05-17T14:58:19Z | 0 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:microsoft/Phi-3-mini-4k-instruct",
"base_model:adapter:microsoft/Phi-3-mini-4k-instruct",
"license:mit",
"region:us"
] | null | 2024-05-17T14:57:41Z | ---
license: mit
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: microsoft/Phi-3-mini-4k-instruct
model-index:
- name: myphi2-customdata-tiny-chatbot
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# myphi2-customdata-tiny-chatbot
This model is a fine-tuned version of [microsoft/Phi-3-mini-4k-instruct](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- PEFT 0.9.0
- Transformers 4.37.2
- Pytorch 2.3.0+cu118
- Datasets 2.18.0
- Tokenizers 0.15.0 |
shushan-li/GLM6B | shushan-li | 2024-05-17T14:55:25Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2024-05-17T14:55:25Z | ---
license: apache-2.0
---
|
sam-2577/zephyr-support-chatbot | sam-2577 | 2024-05-17T14:53:08Z | 0 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:TheBloke/zephyr-7B-alpha-GPTQ",
"base_model:adapter:TheBloke/zephyr-7B-alpha-GPTQ",
"license:mit",
"region:us"
] | null | 2024-05-17T14:17:42Z | ---
license: mit
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: TheBloke/zephyr-7B-alpha-GPTQ
model-index:
- name: zephyr-support-chatbot
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# zephyr-support-chatbot
This model is a fine-tuned version of [TheBloke/zephyr-7B-alpha-GPTQ](https://huggingface.co/TheBloke/zephyr-7B-alpha-GPTQ) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
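Pending more details, one way to load the adapter is with PEFT's `AutoPeftModelForCausalLM` (a sketch; the GPTQ base model requires `optimum` and `auto-gptq` to be installed, and the example prompt is hypothetical):

```python
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

model = AutoPeftModelForCausalLM.from_pretrained(
    "sam-2577/zephyr-support-chatbot",
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("TheBloke/zephyr-7B-alpha-GPTQ")

inputs = tokenizer("How do I reset my password?", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```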
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- training_steps: 250
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- PEFT 0.11.1
- Transformers 4.41.0.dev0
- Pytorch 2.2.1+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1 |
caesium94/models_colorist-v1-3e-5 | caesium94 | 2024-05-17T14:52:22Z | 152 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-17T14:50:52Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
matthieuzone/SAINT-NECTAIRE2 | matthieuzone | 2024-05-17T14:50:42Z | 2 | 0 | diffusers | [
"diffusers",
"text-to-image",
"diffusers-training",
"lora",
"template:sd-lora",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] | text-to-image | 2024-05-17T14:47:02Z | ---
license: openrail++
library_name: diffusers
tags:
- text-to-image
- diffusers-training
- diffusers
- lora
- template:sd-lora
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: a photo of SAINT-NECTAIRE cheese
widget: []
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# SDXL LoRA DreamBooth - matthieuzone/SAINT-NECTAIRE2
<Gallery />
## Model description
These are matthieuzone/SAINT-NECTAIRE2 LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0.
The weights were trained using [DreamBooth](https://dreambooth.github.io/).
LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
## Trigger words
You should use `a photo of SAINT-NECTAIRE cheese` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](https://huggingface.co/matthieuzone/SAINT-NECTAIRE2/tree/main) them in the Files & versions tab.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
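A minimal sketch of how these LoRA weights could be loaded with `diffusers`; the dtype, step count, and guidance scale below are assumptions, not settings from the training run.

```python
import torch
from diffusers import DiffusionPipeline

# Load the SDXL base model the LoRA was trained against (fp16 assumed to save memory)
pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")

# Attach the DreamBooth LoRA adaptation weights from this repository
pipe.load_lora_weights("matthieuzone/SAINT-NECTAIRE2")

# Use the trigger phrase from the "Trigger words" section
image = pipe(
    "a photo of SAINT-NECTAIRE cheese",
    num_inference_steps=30,
    guidance_scale=7.5,
).images[0]
image.save("saint-nectaire.png")
```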
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model] |
FARIQ22/Lunar-landerr-v2 | FARIQ22 | 2024-05-17T14:50:29Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2024-05-17T13:58:53Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 274.80 +/- 17.87
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# The checkpoint filename below is an assumption; check the repo's Files tab for the actual name
checkpoint = load_from_hub(repo_id="FARIQ22/Lunar-landerr-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
caesium94/models-colorist-3e-5 | caesium94 | 2024-05-17T14:49:39Z | 0 | 0 | peft | [
"peft",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:TinyLlama/TinyLlama-1.1B-Chat-v1.0",
"base_model:adapter:TinyLlama/TinyLlama-1.1B-Chat-v1.0",
"license:apache-2.0",
"region:us"
] | null | 2024-05-17T13:42:53Z | ---
license: apache-2.0
base_model: TinyLlama/TinyLlama-1.1B-Chat-v1.0
tags:
- trl
- sft
- generated_from_trainer
model-index:
- name: models-colorist-3e-5
results: []
library_name: peft
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# models-colorist-3e-5
This model is a fine-tuned version of [TinyLlama/TinyLlama-1.1B-Chat-v1.0](https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v1.0) on an unspecified dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
The following `bitsandbytes` quantization config was used during training (an equivalent `BitsAndBytesConfig` sketch follows the list):
- quant_method: bitsandbytes
- _load_in_8bit: False
- _load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: float16
- bnb_4bit_quant_storage: uint8
- load_in_4bit: True
- load_in_8bit: False
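A hedged reconstruction of that config as a `transformers` `BitsAndBytesConfig`; it mirrors the values listed above but is not the exact object used by the training run.

```python
import torch
from transformers import BitsAndBytesConfig

# Values copied from the quantization config listed above
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    load_in_8bit=False,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.float16,
    llm_int8_threshold=6.0,
    llm_int8_has_fp16_weight=False,
    llm_int8_enable_fp32_cpu_offload=False,
)
```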
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- PEFT 0.4.0
- Transformers 4.40.2
- Pytorch 2.1.0
- Datasets 2.19.1
- Tokenizers 0.19.1
|
Shadow09/myphi2-tiny-chatbot | Shadow09 | 2024-05-17T14:49:13Z | 1 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:microsoft/Phi-3-mini-4k-instruct",
"base_model:adapter:microsoft/Phi-3-mini-4k-instruct",
"license:mit",
"region:us"
] | null | 2024-05-17T14:44:13Z | ---
license: mit
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: microsoft/Phi-3-mini-4k-instruct
model-index:
- name: myphi2-tiny-chatbot
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# myphi2-tiny-chatbot
This model is a fine-tuned version of [microsoft/Phi-3-mini-4k-instruct](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct) on an unspecified dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- PEFT 0.9.0
- Transformers 4.37.2
- Pytorch 2.3.0+cu118
- Datasets 2.18.0
- Tokenizers 0.15.0 |
brugmark/all-MiniLM-L6-v2-personal-project-default-2024-05-17 | brugmark | 2024-05-17T14:47:50Z | 127 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"fill-mask",
"generated_from_trainer",
"base_model:sentence-transformers/all-MiniLM-L6-v2",
"base_model:finetune:sentence-transformers/all-MiniLM-L6-v2",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2024-05-17T12:00:02Z | ---
license: apache-2.0
base_model: sentence-transformers/all-MiniLM-L6-v2
tags:
- generated_from_trainer
model-index:
- name: all-MiniLM-L6-v2-personal-project-default-2024-05-17
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# all-MiniLM-L6-v2-personal-project-default-2024-05-17
This model is a fine-tuned version of [sentence-transformers/all-MiniLM-L6-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2) on an unspecified dataset.
It achieves the following results on the evaluation set:
- eval_loss: 10.8319
- eval_runtime: 1.8704
- eval_samples_per_second: 6.416
- eval_steps_per_second: 1.604
- step: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Framework versions
- Transformers 4.40.2
- Pytorch 2.2.1+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
|
matthieuzone/SAINT-_FELICIEN2 | matthieuzone | 2024-05-17T14:46:46Z | 5 | 0 | diffusers | [
"diffusers",
"text-to-image",
"diffusers-training",
"lora",
"template:sd-lora",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] | text-to-image | 2024-05-17T14:43:03Z | ---
license: openrail++
library_name: diffusers
tags:
- text-to-image
- diffusers-training
- diffusers
- lora
- template:sd-lora
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: a photo of SAINT- FÉLICIEN cheese
widget: []
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# SDXL LoRA DreamBooth - matthieuzone/SAINT-_FELICIEN2
<Gallery />
## Model description
These are matthieuzone/SAINT-_FELICIEN2 LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0.
The weights were trained using [DreamBooth](https://dreambooth.github.io/).
LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
## Trigger words
You should use `a photo of SAINT- FÉLICIEN cheese` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](https://huggingface.co/matthieuzone/SAINT-_FELICIEN2/tree/main) them in the Files & versions tab.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
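A minimal sketch of loading the base model together with the `madebyollin/sdxl-vae-fp16-fix` VAE mentioned above and attaching these LoRA weights; the dtype and inference settings are assumptions, not values from the training run.

```python
import torch
from diffusers import AutoencoderKL, DiffusionPipeline

# The card notes this fp16-fix VAE was used during training
vae = AutoencoderKL.from_pretrained(
    "madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16
)

pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", vae=vae, torch_dtype=torch.float16
).to("cuda")

# Attach the DreamBooth LoRA adaptation weights from this repository
pipe.load_lora_weights("matthieuzone/SAINT-_FELICIEN2")

# Use the trigger phrase from the "Trigger words" section
image = pipe("a photo of SAINT- FÉLICIEN cheese", num_inference_steps=30).images[0]
image.save("saint-felicien.png")
```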
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model] |
52100176-NguyenTrongDat/nlp-vietnamese | 52100176-NguyenTrongDat | 2024-05-17T14:45:21Z | 7 | 0 | transformers | [
"transformers",
"safetensors",
"mbart",
"text2text-generation",
"vietnamese-model",
"generated_from_trainer",
"base_model:vinai/bartpho-syllable",
"base_model:finetune:vinai/bartpho-syllable",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2024-05-16T14:49:52Z | ---
base_model: vinai/bartpho-syllable
tags:
- vietnamese-model
- generated_from_trainer
metrics:
- sacrebleu
model-index:
- name: nlp-vietnamese
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# nlp-vietnamese
This model is a fine-tuned version of [vinai/bartpho-syllable](https://huggingface.co/vinai/bartpho-syllable) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0590
- Sacrebleu: 21.1408
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 12
- eval_batch_size: 12
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 48
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Sacrebleu |
|:-------------:|:-----:|:----:|:---------------:|:---------:|
| No log | 1.0 | 166 | 0.2705 | 11.9103 |
| No log | 2.0 | 332 | 0.0998 | 18.3922 |
| No log | 3.0 | 498 | 0.0668 | 20.3883 |
| No log | 4.0 | 664 | 0.0611 | 20.8298 |
| No log | 5.0 | 830 | 0.0590 | 21.1408 |
### Framework versions
- Transformers 4.40.2
- Pytorch 2.2.1+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
|
M4-ai/Orca-2.0-Tau-1.8B | M4-ai | 2024-05-17T14:41:42Z | 526 | 9 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"en",
"dataset:Open-Orca/SlimOrca",
"dataset:m-a-p/Code-Feedback",
"dataset:MaziyarPanahi/WizardLM_evol_instruct_V2_196k",
"dataset:camel-ai/math",
"dataset:camel-ai/physics",
"dataset:camel-ai/biology",
"dataset:camel-ai/chemistry",
"dataset:LDJnr/Capybara",
"dataset:jondurbin/airoboros-3.2",
"dataset:microsoft/orca-math-word-problems-200k",
"license:other",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-11T08:54:34Z | ---
language:
- en
license: other
library_name: transformers
datasets:
- Open-Orca/SlimOrca
- m-a-p/Code-Feedback
- MaziyarPanahi/WizardLM_evol_instruct_V2_196k
- camel-ai/math
- camel-ai/physics
- camel-ai/biology
- camel-ai/chemistry
- LDJnr/Capybara
- jondurbin/airoboros-3.2
- microsoft/orca-math-word-problems-200k
inference:
parameters:
do_sample: true
temperature: 0.8
top_p: 0.95
top_k: 40
max_new_tokens: 250
repetition_penalty: 1.1
model-index:
- name: Orca-2.0-Tau-1.8B
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 37.12
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=M4-ai/Orca-2.0-Tau-1.8B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 61.13
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=M4-ai/Orca-2.0-Tau-1.8B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 45.27
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=M4-ai/Orca-2.0-Tau-1.8B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 39.1
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=M4-ai/Orca-2.0-Tau-1.8B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 59.59
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=M4-ai/Orca-2.0-Tau-1.8B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 28.96
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=M4-ai/Orca-2.0-Tau-1.8B
name: Open LLM Leaderboard
---
# Orca-2.0-Tau-1.8B
<!-- Provide a quick summary of what the model is/does. -->
We fine-tuned tau-1.8B on a high-quality mix for general-purpose assistants. A DPO version of this will be released soon. We use the ChatML prompt format.
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This model has capabilities in math, coding, writing, and more. We fine-tuned it using a high-quality mix for general-purpose assistants.
- **Developed by:** M4-ai
- **Language(s) (NLP):** English and possibly Chinese
- **License:** tongyi-qianwen license
- **Finetuned from model:** [tau-1.8B](https://huggingface.co/M4-ai/tau-1.8B)
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
General-purpose assistant, question answering, chain-of-thought reasoning, etc.
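A minimal generation sketch using the ChatML format mentioned above, with sampling values taken from the inference parameters in the card metadata; it assumes the tokenizer ships a ChatML chat template (otherwise format prompts manually with `<|im_start|>` / `<|im_end|>` markers) and is not the authors' reference code.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("M4-ai/Orca-2.0-Tau-1.8B")
model = AutoModelForCausalLM.from_pretrained(
    "M4-ai/Orca-2.0-Tau-1.8B", torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [{"role": "user", "content": "Explain the ChatML format in one sentence."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# Sampling values mirror the inference parameters declared in the card metadata
output = model.generate(
    input_ids,
    do_sample=True,
    temperature=0.8,
    top_p=0.95,
    top_k=40,
    max_new_tokens=250,
    repetition_penalty=1.1,
)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```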
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## Evaluation
Coming soon
## Training Details
### Training Data
- Open-Orca/SlimOrca
- m-a-p/Code-Feedback
- MaziyarPanahi/WizardLM_evol_instruct_V2_196k
- camel-ai/math
- camel-ai/physics
- camel-ai/biology
- camel-ai/chemistry
- LDJnr/Capybara
- jondurbin/airoboros-3.2
- microsoft/orca-math-word-problems-200k
## Evaluations
| Tasks |Version|Filter|n-shot| Metric |Value | |Stderr|
|---------------------------------|-------|------|-----:|--------|-----:|---|-----:|
|agieval_nous |N/A |none | 0|acc |0.2537|± |0.0086|
| | |none | 0|acc_norm|0.2474|± |0.0085|
| - agieval_aqua_rat | 1|none | 0|acc |0.2283|± |0.0264|
| | |none | 0|acc_norm|0.2441|± |0.0270|
| - agieval_logiqa_en | 1|none | 0|acc |0.2750|± |0.0175|
| | |none | 0|acc_norm|0.3164|± |0.0182|
| - agieval_lsat_ar | 1|none | 0|acc |0.2087|± |0.0269|
| | |none | 0|acc_norm|0.1739|± |0.0250|
| - agieval_lsat_lr | 1|none | 0|acc |0.1843|± |0.0172|
| | |none | 0|acc_norm|0.2353|± |0.0188|
| - agieval_lsat_rc | 1|none | 0|acc |0.2602|± |0.0268|
| | |none | 0|acc_norm|0.1784|± |0.0234|
| - agieval_sat_en | 1|none | 0|acc |0.3544|± |0.0334|
| | |none | 0|acc_norm|0.2961|± |0.0319|
| - agieval_sat_en_without_passage| 1|none | 0|acc |0.3107|± |0.0323|
| | |none | 0|acc_norm|0.2282|± |0.0293|
| - agieval_sat_math | 1|none | 0|acc |0.2727|± |0.0301|
| | |none | 0|acc_norm|0.2091|± |0.0275|
|truthfulqa_mc2 | 2|none | 0|acc |0.3923|± |0.0139|
#### Training Hyperparameters
- **Training regime:** bf16 non-mixed precision
## Technical Specifications
#### Hardware
We used 8 Kaggle TPUs, and we trained at a global batch size of 128 and sequence length of 2048.
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_M4-ai__Orca-2.0-Tau-1.8B)
| Metric |Value|
|---------------------------------|----:|
|Avg. |45.20|
|AI2 Reasoning Challenge (25-Shot)|37.12|
|HellaSwag (10-Shot) |61.13|
|MMLU (5-Shot) |45.27|
|TruthfulQA (0-shot) |39.10|
|Winogrande (5-shot) |59.59|
|GSM8k (5-shot) |28.96|
|
teasan/Aurorique | teasan | 2024-05-17T14:40:09Z | 0 | 1 | diffusers | [
"diffusers",
"anime",
"art",
"stable-diffusion",
"ja",
"license:creativeml-openrail-m",
"region:us"
] | null | 2024-05-17T12:35:52Z | ---
license: creativeml-openrail-m
language:
- ja
tags:
- anime
- art
- stable-diffusion
library_name: diffusers
---

# About Aurorique
## Overview
A younger-sibling model to Agelesnate, inheriting its characteristic rough, line-focused touch.
Since it was originally planned to become Agelesnate V4, its output lands somewhere between high-contrast anime style and illustration.
Characters tend toward slender, almond-eyed beauties.
## CHANGE LOG
- Added AuroriqueV1
## How to use
After cloning or downloading the model, place it in the following directory.
```
webui\models\Stable-diffusion\
```
## Recommended settings (author's settings)
<details>
<summary>AuroriqueV1</summary>
<div>
- Steps: 50
- Sampler: DPM++ 2M Karras
- CFG scale: 10
- Denoising strength: 0.55
- Clip skip: 2
- Hires upscale: 2
- Hires steps: 10
- Hires upscaler: R-ESRGAN 4x+ or R-ESRGAN 4x+ Anime6B
- VAE: mse840000_klf8anime_klf8anime2
</div>
</details>
## Recommended negative prompt
<details>
<summary>AuroriqueV1</summary>
<div>
```
aid210, [(FastNegativeV2:1.35)::0.6], (negative_hand-neg:1.5), [:(badhandv4:1.25):0.55], [:(bad-hands-5:1):0.6], (worst quality, bad quality:1.4), (extra fingers, deformed hands, polydactyl:1.5), (bad hands, bad fingers, bad arm, missing finger, Incomplete hand:1.5), monochrome, text, nsfw, (blush:1.2), (embarrassed:1.2)
```
</div>
</details>
## Sample outputs
<summary>AuroriqueV1</summary>

```
beautiful person, long hair, blond hair, saintly woman,
sacred garment, seraph, seraph six wing,
cathedral, kaleidoscope,
light effects, divine effects, feather effects,
```

```
(close view:0.8),
beautiful person, solo, long braided hair, rose gold hair, gold eye,
shining sky, vast world, gazing, awe-inspiring expression, distant horizon, clouds, high hill, natural beauty, inspiration, night sky, Shining Stars,
```

```
beautiful person, solo, long hair with curls at the ends, mint green hair, red eye,
(smile:0.6),
(all over flower garden:1.4), (Flower Effects:1.2), (Floral Background:1.2), (Background filled with flowers:1.4), (Flashy background:1.1),
```
# Disclaimer
- Responsibility for images created with this model rests with each individual user; the model creator accepts no liability for any problems or disputes arising from generated images.
- This model is not intended for adult content. The model creator accepts no liability for problems arising from the generation of adult-oriented content.
- If license issues arise, this model may be removed without notice. Please be aware of this.
- Use for criminal purposes or for specialized applications such as medical use is prohibited. The model creator accepts no liability for negligence resulting from failure to comply with the license.
---
# About the Stable Diffusion license
- This model is open access and available to everyone, with rights and usage further defined by the CreativeML OpenRAIL-M license.
- The CreativeML OpenRAIL license specifies the following:
  1. You may not use this model to deliberately create or share illegal or harmful outputs or content.
  2. The author claims no rights over the outputs you generate. You are free to use them, but you must comply with the provisions of the license. Use them at your own risk.
  3. You may redistribute the weights and use the model commercially or as a service. If you do, note that you must include the same usage restrictions as those in the license and share a copy of the CreativeML OpenRAIL-M with all of your users (please read the license fully and carefully).
- (Full license text: [https://huggingface.co/spaces/CompVis/stable-diffusion-license](https://huggingface.co/spaces/CompVis/stable-diffusion-license))
---
# About the author
X (Twitter): <a href="https://x.com/wims_Tea" target="_blank">https://x.com/wims_Tea</a>
--- |
Khallef/my_awesome_mind_model | Khallef | 2024-05-17T14:31:23Z | 162 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"wav2vec2",
"audio-classification",
"generated_from_trainer",
"dataset:minds14",
"base_model:facebook/wav2vec2-base",
"base_model:finetune:facebook/wav2vec2-base",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | audio-classification | 2024-05-17T07:40:22Z | ---
license: apache-2.0
base_model: facebook/wav2vec2-base
tags:
- generated_from_trainer
datasets:
- minds14
metrics:
- accuracy
model-index:
- name: my_awesome_mind_model
results:
- task:
name: Audio Classification
type: audio-classification
dataset:
name: minds14
type: minds14
config: en-US
split: train
args: en-US
metrics:
- name: Accuracy
type: accuracy
value: 0.061946902654867256
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_mind_model
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the minds14 dataset.
It achieves the following results on the evaluation set:
- Loss: 2.6459
- Accuracy: 0.0619
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:------:|:----:|:---------------:|:--------:|
| No log | 0.8 | 3 | 2.6352 | 0.0708 |
| No log | 1.8667 | 7 | 2.6428 | 0.0885 |
| 2.6331 | 2.9333 | 11 | 2.6425 | 0.0796 |
| 2.6331 | 4.0 | 15 | 2.6437 | 0.0531 |
| 2.6331 | 4.8 | 18 | 2.6432 | 0.0619 |
| 2.6238 | 5.8667 | 22 | 2.6453 | 0.0619 |
| 2.6238 | 6.9333 | 26 | 2.6460 | 0.0619 |
| 2.6214 | 8.0 | 30 | 2.6459 | 0.0619 |
### Framework versions
- Transformers 4.40.2
- Pytorch 2.3.0
- Datasets 2.19.1
- Tokenizers 0.19.1
|
tomaszki/llama-23-a | tomaszki | 2024-05-17T14:29:34Z | 4 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-17T14:25:47Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
richie-ghost/unsloth-Yi-1-5-9B-Chat-quantized_merge_4Bit | richie-ghost | 2024-05-17T14:28:44Z | 4 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"base_model:01-ai/Yi-1.5-9B-Chat",
"base_model:finetune:01-ai/Yi-1.5-9B-Chat",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-17T14:16:46Z | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
base_model: 01-ai/Yi-1.5-9B-Chat
---
# Uploaded model
- **Developed by:** richie-ghost
- **License:** apache-2.0
- **Finetuned from model :** 01-ai/Yi-1.5-9B-Chat
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|