modelId (stringlengths 5-139) | author (stringlengths 2-42) | last_modified (timestamp[us, tz=UTC], 2020-02-15 11:33:14 to 2025-06-02 12:28:20) | downloads (int64, 0-223M) | likes (int64, 0-11.7k) | library_name (stringclasses, 462 values) | tags (sequencelengths 1-4.05k) | pipeline_tag (stringclasses, 54 values) | createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 to 2025-06-02 12:26:48) | card (stringlengths 11-1.01M) |
---|---|---|---|---|---|---|---|---|---|
erixhug/swin-base-patch4-window7-224-finetuned-lora-scenes | erixhug | 2023-12-04T03:50:09Z | 1 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"arxiv:1910.09700",
"base_model:microsoft/swin-base-patch4-window7-224",
"base_model:adapter:microsoft/swin-base-patch4-window7-224",
"region:us"
] | null | 2023-12-04T03:13:52Z | ---
library_name: peft
base_model: microsoft/swin-base-patch4-window7-224
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
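While the card is still incomplete, a minimal sketch for loading this LoRA adapter on top of the Swin backbone could look like the following; the number of scene classes and the input image are placeholders, since they are not documented here.

```python
# Hypothetical usage sketch. NUM_SCENE_CLASSES and the input image are placeholders:
# the actual label set of the fine-tuned classifier is not documented in this card.
from transformers import AutoImageProcessor, AutoModelForImageClassification
from peft import PeftModel
from PIL import Image

BASE = "microsoft/swin-base-patch4-window7-224"
ADAPTER = "erixhug/swin-base-patch4-window7-224-finetuned-lora-scenes"
NUM_SCENE_CLASSES = 10  # placeholder; must match the number of labels used during fine-tuning

processor = AutoImageProcessor.from_pretrained(BASE)
model = AutoModelForImageClassification.from_pretrained(
    BASE, num_labels=NUM_SCENE_CLASSES, ignore_mismatched_sizes=True
)
model = PeftModel.from_pretrained(model, ADAPTER)

inputs = processor(images=Image.open("scene.jpg"), return_tensors="pt")
print(model(**inputs).logits.argmax(-1))
```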
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
### Framework versions
- PEFT 0.6.2
|
deepghs/anime_style_ages | deepghs | 2023-12-04T03:49:57Z | 0 | 4 | null | [
"onnx",
"art",
"image-classification",
"dataset:deepghs/anime_style_ages",
"license:openrail",
"region:us"
] | image-classification | 2023-12-02T22:33:38Z | ---
license: openrail
metrics:
- accuracy
pipeline_tag: image-classification
tags:
- art
datasets:
- deepghs/anime_style_ages
---
| Name | FLOPS | Params | Accuracy | AUC | Confusion | Labels |
|:-------------------:|:-------:|:--------:|:----------:|:------:|:-------------------------------------------------------------------------------------------------------------:|:--------------------------------------------------------------:|
| caformer_s36_v0 | 22.10G | 37.22M | 71.03% | 0.9271 | [confusion](https://huggingface.co/deepghs/anime_style_ages/blob/main/caformer_s36_v0/plot_confusion.png) | `1970s-`, `1980s`, `1990s`, `2000s`, `2010s`, `2015s`, `2020s` |
| mobilenetv3_v0_dist | 0.63G | 4.18M | 65.74% | 0.9053 | [confusion](https://huggingface.co/deepghs/anime_style_ages/blob/main/mobilenetv3_v0_dist/plot_confusion.png) | `1970s-`, `1980s`, `1990s`, `2000s`, `2010s`, `2015s`, `2020s` |
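A rough `onnxruntime` inference sketch is given below; the ONNX file path, input size, and normalization are assumptions rather than values taken from the repository, so check the repo files for the exact preprocessing.

```python
# Rough inference sketch. The ONNX file path, input size, and scaling are assumptions.
import numpy as np
import onnxruntime as ort
from huggingface_hub import hf_hub_download
from PIL import Image

LABELS = ["1970s-", "1980s", "1990s", "2000s", "2010s", "2015s", "2020s"]

model_path = hf_hub_download("deepghs/anime_style_ages", "caformer_s36_v0/model.onnx")  # assumed filename
session = ort.InferenceSession(model_path)

image = Image.open("frame.png").convert("RGB").resize((384, 384))  # assumed input resolution
x = (np.asarray(image, dtype=np.float32) / 255.0).transpose(2, 0, 1)[None]

scores = session.run(None, {session.get_inputs()[0].name: x})[0]
print(LABELS[int(scores[0].argmax())])
```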
|
sglasher/van-gogh-stable-diffusion | sglasher | 2023-12-04T03:47:21Z | 12 | 1 | diffusers | [
"diffusers",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2023-12-04T03:12:35Z | ---
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
inference: true
--- |
austin/medication-single-t5 | austin | 2023-12-04T03:44:38Z | 6 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:google/t5-efficient-small",
"base_model:finetune:google/t5-efficient-small",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2023-12-04T02:50:14Z | ---
license: apache-2.0
base_model: google/t5-efficient-small
tags:
- generated_from_trainer
model-index:
- name: medication-single-t5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# medication-single-t5
This model is a fine-tuned version of [google/t5-efficient-small](https://huggingface.co/google/t5-efficient-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0134
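No usage example is provided; a minimal sketch would be the following, where the input sentence and expected output format are illustrative assumptions.

```python
# Minimal sketch -- the expected input/output format of this checkpoint is not documented here.
from transformers import pipeline

extractor = pipeline("text2text-generation", model="austin/medication-single-t5")
print(extractor("Patient takes lisinopril 10 mg daily.")[0]["generated_text"])  # illustrative input
```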
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.004
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.5257 | 0.08 | 100 | 0.2084 |
| 0.1412 | 0.16 | 200 | 0.0880 |
| 0.0902 | 0.23 | 300 | 0.0543 |
| 0.0791 | 0.31 | 400 | 0.0456 |
| 0.072 | 0.39 | 500 | 0.0392 |
| 0.0567 | 0.47 | 600 | 0.0349 |
| 0.0507 | 0.55 | 700 | 0.0312 |
| 0.0493 | 0.63 | 800 | 0.0285 |
| 0.041 | 0.7 | 900 | 0.0246 |
| 0.0423 | 0.78 | 1000 | 0.0255 |
| 0.0382 | 0.86 | 1100 | 0.0247 |
| 0.0375 | 0.94 | 1200 | 0.0217 |
| 0.0298 | 1.02 | 1300 | 0.0211 |
| 0.0327 | 1.09 | 1400 | 0.0198 |
| 0.0272 | 1.17 | 1500 | 0.0195 |
| 0.0301 | 1.25 | 1600 | 0.0183 |
| 0.0259 | 1.33 | 1700 | 0.0179 |
| 0.0273 | 1.41 | 1800 | 0.0164 |
| 0.0244 | 1.49 | 1900 | 0.0163 |
| 0.0222 | 1.56 | 2000 | 0.0161 |
| 0.0214 | 1.64 | 2100 | 0.0158 |
| 0.0199 | 1.72 | 2200 | 0.0146 |
| 0.0202 | 1.8 | 2300 | 0.0141 |
| 0.0214 | 1.88 | 2400 | 0.0135 |
| 0.018 | 1.95 | 2500 | 0.0134 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.0.1+cu117
- Datasets 2.14.7
- Tokenizers 0.14.1
|
Asheron/SoccerTwosWSL1 | Asheron | 2023-12-04T03:43:38Z | 0 | 0 | ml-agents | [
"ml-agents",
"SoccerTwos",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SoccerTwos",
"region:us"
] | reinforcement-learning | 2023-12-04T03:43:38Z | ---
library_name: ml-agents
tags:
- SoccerTwos
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SoccerTwos
---
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn how to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: Asheron/SoccerTwosWSL1
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
ThuyNT03/KLTN_COQE_viT5_OSAPL_v2 | ThuyNT03 | 2023-12-04T03:42:12Z | 4 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:VietAI/vit5-large",
"base_model:finetune:VietAI/vit5-large",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2023-12-03T22:24:38Z | ---
license: mit
base_model: VietAI/vit5-large
tags:
- generated_from_trainer
model-index:
- name: KLTN_COQE_viT5_OSAPL_v2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# KLTN_COQE_viT5_OSAPL_v2
This model is a fine-tuned version of [VietAI/vit5-large](https://huggingface.co/VietAI/vit5-large) on an unspecified dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.35.0
- Pytorch 2.0.0
- Datasets 2.1.0
- Tokenizers 0.14.1
|
Puluming/AISquare-Instruct-llama2-koen-13b-v0.9.15 | Puluming | 2023-12-04T03:23:23Z | 2,250 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-12-04T03:13:20Z | ---
license: cc-by-nc-sa-4.0
---
|
FounderOfHuggingface/fresh_gpt2_lora_r16_dbpedia_14_t300_e5_non_member_shadow19 | FounderOfHuggingface | 2023-12-04T03:20:28Z | 0 | 0 | peft | [
"peft",
"arxiv:1910.09700",
"base_model:openai-community/gpt2",
"base_model:adapter:openai-community/gpt2",
"region:us"
] | null | 2023-12-04T03:20:26Z | ---
library_name: peft
base_model: gpt2
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
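While the card is still incomplete, the adapter can presumably be attached to the `gpt2` base model with PEFT along these lines (the prompt is illustrative):

```python
# Sketch: attach this LoRA adapter to the gpt2 base model and generate text.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model = PeftModel.from_pretrained(
    model, "FounderOfHuggingface/fresh_gpt2_lora_r16_dbpedia_14_t300_e5_non_member_shadow19"
)

inputs = tokenizer("DBpedia 14 is a dataset of", return_tensors="pt")  # illustrative prompt
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=30)[0]))
```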
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
### Framework versions
- PEFT 0.6.2
|
stillercity/ppo-LunarLander-v2 | stillercity | 2023-12-04T03:19:27Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-12-04T03:19:04Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 252.61 +/- 27.64
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumption based on the usual `huggingface_sb3` naming):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Download the checkpoint from the Hub (filename assumed) and load it as a PPO policy.
checkpoint = load_from_hub(repo_id="stillercity/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
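To sanity-check the downloaded policy, an evaluation sketch (assuming `gymnasium` with the Box2D extra is installed):

```python
import gymnasium as gym
from stable_baselines3.common.evaluation import evaluate_policy

env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```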
|
vkorotchenko/llama-2-7b-fine-tuned-for-cdt-extraction-1 | vkorotchenko | 2023-12-04T03:11:00Z | 1 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:TinyPixel/Llama-2-7B-bf16-sharded",
"base_model:adapter:TinyPixel/Llama-2-7B-bf16-sharded",
"region:us"
] | null | 2023-12-04T03:10:55Z | ---
library_name: peft
base_model: TinyPixel/Llama-2-7B-bf16-sharded
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: bfloat16
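Expressed as code, that configuration corresponds roughly to the sketch below (a reconstruction, not the authors' training script):

```python
# Rough code equivalent of the quantization config listed above (reconstruction only).
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import PeftModel

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.bfloat16,
)
base = AutoModelForCausalLM.from_pretrained(
    "TinyPixel/Llama-2-7B-bf16-sharded", quantization_config=bnb_config, device_map="auto"
)
model = PeftModel.from_pretrained(base, "vkorotchenko/llama-2-7b-fine-tuned-for-cdt-extraction-1")
```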
### Framework versions
- PEFT 0.6.3.dev0 |
FounderOfHuggingface/fresh_gpt2_lora_r16_dbpedia_14_t300_e5_non_member_shadow16 | FounderOfHuggingface | 2023-12-04T03:03:23Z | 0 | 0 | peft | [
"peft",
"arxiv:1910.09700",
"base_model:openai-community/gpt2",
"base_model:adapter:openai-community/gpt2",
"region:us"
] | null | 2023-12-04T03:03:20Z | ---
library_name: peft
base_model: gpt2
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
### Framework versions
- PEFT 0.6.2
|
annabellehuether/topic-bert-base-uncased-supreme-court-32batch_3epoch_5e5lr_01wd | annabellehuether | 2023-12-04T02:58:10Z | 5 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-12-04T02:20:35Z | ---
license: apache-2.0
base_model: bert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: topic-bert-base-uncased-supreme-court-32batch_3epoch_5e5lr_01wd
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# topic-bert-base-uncased-supreme-court-32batch_3epoch_5e5lr_01wd
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8509
- Accuracy: 0.7458
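For inference, a minimal sketch (the mapping from label ids to topic names is not documented in this card):

```python
# Minimal sketch -- the label id to Supreme Court topic mapping is not documented here.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="annabellehuether/topic-bert-base-uncased-supreme-court-32batch_3epoch_5e5lr_01wd",
)
print(classifier("The petitioner challenges the statute under the First Amendment."))  # illustrative input
```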
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 7
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.2074 | 1.0 | 660 | 0.8971 | 0.7203 |
| 0.7281 | 2.0 | 1320 | 0.8299 | 0.7406 |
| 0.5553 | 3.0 | 1980 | 0.8509 | 0.7458 |
### Framework versions
- Transformers 4.35.1
- Pytorch 2.1.0+cu121
- Datasets 2.14.6
- Tokenizers 0.14.1
|
annabellehuether/topic-bert-base-uncased-supreme-court-32batch_5epoch_2e5lr_1wd | annabellehuether | 2023-12-04T02:57:20Z | 5 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-12-04T01:54:26Z | ---
license: apache-2.0
base_model: bert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: topic-bert-base-uncased-supreme-court-32batch_5epoch_2e5lr_1wd
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# topic-bert-base-uncased-supreme-court-32batch_5epoch_2e5lr_1wd
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9095
- Accuracy: 0.7392
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 7
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.3067 | 1.0 | 660 | 0.9220 | 0.7103 |
| 0.8105 | 2.0 | 1320 | 0.8366 | 0.7384 |
| 0.6656 | 3.0 | 1980 | 0.8202 | 0.7425 |
| 0.4105 | 4.0 | 2640 | 0.8823 | 0.7384 |
| 0.3359 | 5.0 | 3300 | 0.9095 | 0.7392 |
### Framework versions
- Transformers 4.35.1
- Pytorch 2.1.0+cu121
- Datasets 2.14.6
- Tokenizers 0.14.1
|
gianyrox/Test1DreamBoothWithMorePicsSteps200 | gianyrox | 2023-12-04T02:52:06Z | 0 | 1 | diffusers | [
"diffusers",
"tensorboard",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"dreambooth",
"base_model:CompVis/stable-diffusion-v1-4",
"base_model:finetune:CompVis/stable-diffusion-v1-4",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2023-12-04T02:42:40Z |
---
license: creativeml-openrail-m
base_model: CompVis/stable-diffusion-v1-4
instance_prompt: a photo of a Dr Seuss picture
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- dreambooth
inference: true
---
# DreamBooth - gianyrox/Test1DreamBoothWithMorePicsSteps200
This is a DreamBooth model derived from CompVis/stable-diffusion-v1-4. The weights were trained on "a photo of a Dr Seuss picture" using [DreamBooth](https://dreambooth.github.io/).
You can find some example images below.
DreamBooth for the text encoder was enabled: False.
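Generation should follow the usual `diffusers` DreamBooth pattern, for example (the prompt extension is illustrative):

```python
# Sketch: load the fine-tuned pipeline and generate with the instance prompt.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "gianyrox/Test1DreamBoothWithMorePicsSteps200", torch_dtype=torch.float16
).to("cuda")
image = pipe("a photo of a Dr Seuss picture of a city in the clouds").images[0]  # illustrative prompt
image.save("dr_seuss_city.png")
```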
|
FounderOfHuggingface/fresh_gpt2_lora_r16_dbpedia_14_t300_e5_non_member_shadow15 | FounderOfHuggingface | 2023-12-04T02:51:47Z | 0 | 0 | peft | [
"peft",
"arxiv:1910.09700",
"base_model:openai-community/gpt2",
"base_model:adapter:openai-community/gpt2",
"region:us"
] | null | 2023-12-04T02:51:44Z | ---
library_name: peft
base_model: gpt2
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
### Framework versions
- PEFT 0.6.2
|
FounderOfHuggingface/fresh_gpt2_lora_r16_dbpedia_14_t300_e5_non_member_shadow14 | FounderOfHuggingface | 2023-12-04T02:40:11Z | 0 | 0 | peft | [
"peft",
"arxiv:1910.09700",
"base_model:openai-community/gpt2",
"base_model:adapter:openai-community/gpt2",
"region:us"
] | null | 2023-12-04T02:40:09Z | ---
library_name: peft
base_model: gpt2
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
### Framework versions
- PEFT 0.6.2
|
hkivancoral/smids_1x_deit_small_rms_00001_fold5 | hkivancoral | 2023-12-04T02:29:18Z | 21 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:facebook/deit-small-patch16-224",
"base_model:finetune:facebook/deit-small-patch16-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2023-12-04T01:58:01Z | ---
license: apache-2.0
base_model: facebook/deit-small-patch16-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: smids_1x_deit_small_rms_00001_fold5
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: test
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.88
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# smids_1x_deit_small_rms_00001_fold5
This model is a fine-tuned version of [facebook/deit-small-patch16-224](https://huggingface.co/facebook/deit-small-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9281
- Accuracy: 0.88
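For inference, a minimal sketch (class names come from the unpublished `imagefolder` dataset used for fine-tuning, and the image path is a placeholder):

```python
# Minimal sketch -- class names come from the (unpublished) imagefolder dataset used for fine-tuning.
from transformers import pipeline

classifier = pipeline("image-classification", model="hkivancoral/smids_1x_deit_small_rms_00001_fold5")
print(classifier("slide_patch.png"))  # placeholder image path
```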
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.3852 | 1.0 | 75 | 0.3081 | 0.87 |
| 0.2965 | 2.0 | 150 | 0.3016 | 0.8733 |
| 0.1467 | 3.0 | 225 | 0.3200 | 0.8783 |
| 0.1384 | 4.0 | 300 | 0.3262 | 0.8833 |
| 0.0702 | 5.0 | 375 | 0.3415 | 0.8817 |
| 0.0486 | 6.0 | 450 | 0.4818 | 0.8817 |
| 0.0342 | 7.0 | 525 | 0.4838 | 0.8817 |
| 0.0455 | 8.0 | 600 | 0.6047 | 0.8717 |
| 0.0096 | 9.0 | 675 | 0.5775 | 0.8817 |
| 0.028 | 10.0 | 750 | 0.6719 | 0.875 |
| 0.0419 | 11.0 | 825 | 0.6284 | 0.8833 |
| 0.0004 | 12.0 | 900 | 0.6384 | 0.8817 |
| 0.0259 | 13.0 | 975 | 0.6301 | 0.875 |
| 0.03 | 14.0 | 1050 | 0.6619 | 0.8733 |
| 0.0082 | 15.0 | 1125 | 0.8292 | 0.8667 |
| 0.0001 | 16.0 | 1200 | 0.7120 | 0.88 |
| 0.005 | 17.0 | 1275 | 0.7140 | 0.8867 |
| 0.028 | 18.0 | 1350 | 0.8747 | 0.865 |
| 0.0095 | 19.0 | 1425 | 0.8049 | 0.8767 |
| 0.0001 | 20.0 | 1500 | 0.7748 | 0.8767 |
| 0.0085 | 21.0 | 1575 | 0.7202 | 0.885 |
| 0.0152 | 22.0 | 1650 | 0.8388 | 0.875 |
| 0.0057 | 23.0 | 1725 | 0.8400 | 0.8733 |
| 0.0001 | 24.0 | 1800 | 0.8934 | 0.8717 |
| 0.0082 | 25.0 | 1875 | 0.8430 | 0.8783 |
| 0.0001 | 26.0 | 1950 | 0.8852 | 0.8783 |
| 0.008 | 27.0 | 2025 | 0.8664 | 0.8767 |
| 0.0113 | 28.0 | 2100 | 0.8872 | 0.88 |
| 0.0078 | 29.0 | 2175 | 0.8576 | 0.8817 |
| 0.0049 | 30.0 | 2250 | 0.8872 | 0.88 |
| 0.0 | 31.0 | 2325 | 0.9217 | 0.8733 |
| 0.0 | 32.0 | 2400 | 0.8681 | 0.8833 |
| 0.0081 | 33.0 | 2475 | 0.9201 | 0.8783 |
| 0.0 | 34.0 | 2550 | 0.9023 | 0.8767 |
| 0.0058 | 35.0 | 2625 | 0.9043 | 0.8767 |
| 0.0 | 36.0 | 2700 | 0.9027 | 0.88 |
| 0.0029 | 37.0 | 2775 | 0.9082 | 0.88 |
| 0.0 | 38.0 | 2850 | 0.9260 | 0.8767 |
| 0.0 | 39.0 | 2925 | 0.9311 | 0.8783 |
| 0.0 | 40.0 | 3000 | 0.9195 | 0.8767 |
| 0.0028 | 41.0 | 3075 | 0.9229 | 0.8767 |
| 0.0 | 42.0 | 3150 | 0.9218 | 0.8783 |
| 0.0075 | 43.0 | 3225 | 0.9281 | 0.8767 |
| 0.0 | 44.0 | 3300 | 0.9291 | 0.8767 |
| 0.0025 | 45.0 | 3375 | 0.9268 | 0.8783 |
| 0.0 | 46.0 | 3450 | 0.9285 | 0.88 |
| 0.0049 | 47.0 | 3525 | 0.9282 | 0.88 |
| 0.0048 | 48.0 | 3600 | 0.9283 | 0.88 |
| 0.0 | 49.0 | 3675 | 0.9284 | 0.88 |
| 0.0043 | 50.0 | 3750 | 0.9281 | 0.88 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
|
annabellehuether/topic-legal-bert-base-uncased-supreme-court-32batch_3epoch_2e5lr_1wd | annabellehuether | 2023-12-04T02:26:32Z | 5 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:nlpaueb/legal-bert-base-uncased",
"base_model:finetune:nlpaueb/legal-bert-base-uncased",
"license:cc-by-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-12-04T01:48:22Z | ---
license: cc-by-sa-4.0
base_model: nlpaueb/legal-bert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: topic-legal-bert-base-uncased-supreme-court-32batch_3epoch_2e5lr_1wd
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# topic-legal-bert-base-uncased-supreme-court-32batch_3epoch_2e5lr_1wd
This model is a fine-tuned version of [nlpaueb/legal-bert-base-uncased](https://huggingface.co/nlpaueb/legal-bert-base-uncased) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7220
- Accuracy: 0.7792
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 7
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.1483 | 1.0 | 660 | 0.7968 | 0.7555 |
| 0.7022 | 2.0 | 1320 | 0.7341 | 0.7770 |
| 0.5851 | 3.0 | 1980 | 0.7220 | 0.7792 |
### Framework versions
- Transformers 4.35.1
- Pytorch 2.1.0+cu121
- Datasets 2.14.6
- Tokenizers 0.14.1
|
athirdpath/BigMistral-11b-GLUED | athirdpath | 2023-12-04T02:17:21Z | 7 | 0 | transformers | [
"transformers",
"pytorch",
"mistral",
"text-generation",
"en",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-12-04T01:25:31Z | ---
license: cc-by-nc-4.0
language:
- en
pipeline_tag: text-generation
---
<p align="center"><font size="7"> <b>Okay, here we fuckin' go.</b> </font></p>
<p align="center"><font size="5"> <b>Time to fire up the ol' dare_ties pod.</b></font></p>
<p align="center"><img src="https://iili.io/JzixYiP.png"/>
<p align="center"><font size="6"><b><a href="https://iili.io/Jzix7WB.png">NSFW - Erotic(?) Writing Example - NSFW</font></a></b></p>
<p align="center"><font size="3"> <b>(That's not what it's finetuned for, okay? He's a grower.)</b></font></p>
### Dataset
The 11b glue consists of:
- The entirety of HF No Robots.
- The entirety of TinyPixel/orca-mini
- Enough of the GPT-4 generated Alpaca dataset (randomly chosen) to make it a roughly even three-way split.
A JSONL file of the dataset is available as a repo. |
austin/medication-lists | austin | 2023-12-04T02:13:05Z | 5 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2023-11-10T04:00:09Z | ---
tags:
- generated_from_trainer
model-index:
- name: medication-lists
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# medication-lists
This model was trained from scratch on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0228
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.004
- train_batch_size: 3
- eval_batch_size: 3
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.2309 | 0.15 | 400 | 0.1886 |
| 0.151 | 0.3 | 800 | 0.1260 |
| 0.1061 | 0.45 | 1200 | 0.0852 |
| 0.0773 | 0.6 | 1600 | 0.0610 |
| 0.0693 | 0.75 | 2000 | 0.0498 |
| 0.0505 | 0.9 | 2400 | 0.0428 |
| 0.0428 | 1.05 | 2800 | 0.0387 |
| 0.0343 | 1.2 | 3200 | 0.0324 |
| 0.0289 | 1.35 | 3600 | 0.0299 |
| 0.0281 | 1.5 | 4000 | 0.0265 |
| 0.0251 | 1.65 | 4400 | 0.0250 |
| 0.0208 | 1.8 | 4800 | 0.0236 |
| 0.021 | 1.95 | 5200 | 0.0228 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.0.1+cu117
- Datasets 2.14.7
- Tokenizers 0.14.1
|
sametayhan/ppo-SnowballTarget | sametayhan | 2023-12-04T02:12:57Z | 3 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"SnowballTarget",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
] | reinforcement-learning | 2023-11-26T22:18:15Z | ---
library_name: ml-agents
tags:
- SnowballTarget
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn how to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
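If you want to pull this checkpoint locally first (for resuming training or local inspection), the ML-Agents Hugging Face integration provides a download command; this is a sketch and the local directory is a placeholder:

```bash
# Assumes an ML-Agents install with the Hugging Face integration enabled.
mlagents-load-from-hf --repo-id="sametayhan/ppo-SnowballTarget" --local-dir="./downloads/SnowballTarget"
```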
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: sametayhan/ppo-SnowballTarget
3. Select your *.nn or *.onnx file
4. Click on Watch the agent play 👀
|
hkivancoral/smids_1x_deit_small_rms_00001_fold4 | hkivancoral | 2023-12-04T01:55:30Z | 7 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:facebook/deit-small-patch16-224",
"base_model:finetune:facebook/deit-small-patch16-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2023-12-04T01:24:21Z | ---
license: apache-2.0
base_model: facebook/deit-small-patch16-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: smids_1x_deit_small_rms_00001_fold4
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: test
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.86
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# smids_1x_deit_small_rms_00001_fold4
This model is a fine-tuned version of [facebook/deit-small-patch16-224](https://huggingface.co/facebook/deit-small-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2283
- Accuracy: 0.86
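As a quick, hedged usage sketch (the repo id is this card's; the image path is a placeholder for your own data), inference can go through the image-classification pipeline:

```python
from transformers import pipeline

classifier = pipeline(
    "image-classification",
    model="hkivancoral/smids_1x_deit_small_rms_00001_fold4",
)
# Replace with a real image path or a PIL.Image from your own dataset.
print(classifier("example_image.png"))
```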
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.3693 | 1.0 | 75 | 0.4169 | 0.8367 |
| 0.25 | 2.0 | 150 | 0.3480 | 0.86 |
| 0.1826 | 3.0 | 225 | 0.3907 | 0.8517 |
| 0.103 | 4.0 | 300 | 0.4268 | 0.8533 |
| 0.0588 | 5.0 | 375 | 0.4745 | 0.8517 |
| 0.0211 | 6.0 | 450 | 0.5873 | 0.86 |
| 0.0762 | 7.0 | 525 | 0.6785 | 0.8567 |
| 0.0033 | 8.0 | 600 | 0.6768 | 0.8533 |
| 0.0377 | 9.0 | 675 | 0.7784 | 0.855 |
| 0.0107 | 10.0 | 750 | 0.8289 | 0.8467 |
| 0.0009 | 11.0 | 825 | 0.8979 | 0.845 |
| 0.0002 | 12.0 | 900 | 0.8647 | 0.8617 |
| 0.0003 | 13.0 | 975 | 0.8591 | 0.8583 |
| 0.0077 | 14.0 | 1050 | 0.9903 | 0.8483 |
| 0.0002 | 15.0 | 1125 | 0.9262 | 0.86 |
| 0.0075 | 16.0 | 1200 | 1.1297 | 0.8283 |
| 0.0005 | 17.0 | 1275 | 0.9421 | 0.86 |
| 0.0146 | 18.0 | 1350 | 0.8922 | 0.86 |
| 0.0001 | 19.0 | 1425 | 0.9244 | 0.8683 |
| 0.0001 | 20.0 | 1500 | 0.9926 | 0.8683 |
| 0.003 | 21.0 | 1575 | 0.9538 | 0.8633 |
| 0.0001 | 22.0 | 1650 | 0.9796 | 0.8633 |
| 0.0 | 23.0 | 1725 | 0.9957 | 0.865 |
| 0.0079 | 24.0 | 1800 | 0.9969 | 0.8667 |
| 0.0074 | 25.0 | 1875 | 1.0816 | 0.86 |
| 0.0 | 26.0 | 1950 | 1.1025 | 0.8617 |
| 0.0 | 27.0 | 2025 | 1.1525 | 0.8467 |
| 0.0057 | 28.0 | 2100 | 1.1210 | 0.855 |
| 0.0181 | 29.0 | 2175 | 1.1276 | 0.86 |
| 0.0 | 30.0 | 2250 | 1.1208 | 0.8617 |
| 0.0 | 31.0 | 2325 | 1.1193 | 0.865 |
| 0.0 | 32.0 | 2400 | 1.1408 | 0.8617 |
| 0.0 | 33.0 | 2475 | 1.1431 | 0.8633 |
| 0.0 | 34.0 | 2550 | 1.1491 | 0.86 |
| 0.0 | 35.0 | 2625 | 1.1589 | 0.8617 |
| 0.0 | 36.0 | 2700 | 1.1620 | 0.8617 |
| 0.0031 | 37.0 | 2775 | 1.1838 | 0.8633 |
| 0.0 | 38.0 | 2850 | 1.1840 | 0.8633 |
| 0.0 | 39.0 | 2925 | 1.1861 | 0.8617 |
| 0.0 | 40.0 | 3000 | 1.2058 | 0.8633 |
| 0.0028 | 41.0 | 3075 | 1.1981 | 0.865 |
| 0.0 | 42.0 | 3150 | 1.2026 | 0.8617 |
| 0.0 | 43.0 | 3225 | 1.2159 | 0.86 |
| 0.0 | 44.0 | 3300 | 1.2159 | 0.86 |
| 0.0 | 45.0 | 3375 | 1.2189 | 0.86 |
| 0.0 | 46.0 | 3450 | 1.2225 | 0.86 |
| 0.0 | 47.0 | 3525 | 1.2244 | 0.86 |
| 0.0 | 48.0 | 3600 | 1.2263 | 0.86 |
| 0.0 | 49.0 | 3675 | 1.2278 | 0.86 |
| 0.0 | 50.0 | 3750 | 1.2283 | 0.86 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
|
DownwardSpiral33/hands_palms_classifier | DownwardSpiral33 | 2023-12-04T01:54:39Z | 5 | 0 | transformers | [
"transformers",
"tf",
"vit",
"image-classification",
"generated_from_keras_callback",
"base_model:google/vit-base-patch16-224-in21k",
"base_model:finetune:google/vit-base-patch16-224-in21k",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2023-12-03T14:58:25Z | ---
license: apache-2.0
base_model: google/vit-base-patch16-224-in21k
tags:
- generated_from_keras_callback
model-index:
- name: DownwardSpiral33/hands_palms_classifier
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# DownwardSpiral33/hands_palms_classifier
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.4367
- Validation Loss: 0.7459
- Train Accuracy: 0.5806
- Epoch: 38
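Because this checkpoint was trained with Keras/TensorFlow, a minimal TF inference sketch might look like the following; the processor is loaded from the base model on the assumption that no preprocessor config ships with this repo, and the image path is a placeholder:

```python
import tensorflow as tf
from PIL import Image
from transformers import AutoImageProcessor, TFViTForImageClassification

# Processor from the base checkpoint (assumption), classifier weights from this repo.
processor = AutoImageProcessor.from_pretrained("google/vit-base-patch16-224-in21k")
model = TFViTForImageClassification.from_pretrained("DownwardSpiral33/hands_palms_classifier")

image = Image.open("hand.jpg")  # placeholder path
inputs = processor(images=image, return_tensors="tf")
logits = model(**inputs).logits
predicted = int(tf.argmax(logits, axis=-1)[0])
print(model.config.id2label.get(predicted, predicted))
```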
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 1e-05, 'decay_steps': 17400, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Accuracy | Epoch |
|:----------:|:---------------:|:--------------:|:-----:|
| 0.6873 | 0.6761 | 0.6129 | 0 |
| 0.6720 | 0.6625 | 0.6452 | 1 |
| 0.6638 | 0.6577 | 0.6452 | 2 |
| 0.6634 | 0.6547 | 0.6774 | 3 |
| 0.6547 | 0.6507 | 0.6774 | 4 |
| 0.6556 | 0.6423 | 0.6774 | 5 |
| 0.6433 | 0.6346 | 0.6774 | 6 |
| 0.6394 | 0.6293 | 0.7097 | 7 |
| 0.6344 | 0.6239 | 0.7419 | 8 |
| 0.6205 | 0.6206 | 0.7742 | 9 |
| 0.6047 | 0.6115 | 0.7097 | 10 |
| 0.6163 | 0.5970 | 0.7419 | 11 |
| 0.6022 | 0.6069 | 0.7097 | 12 |
| 0.5958 | 0.6009 | 0.7419 | 13 |
| 0.5789 | 0.5971 | 0.6774 | 14 |
| 0.5758 | 0.5962 | 0.6774 | 15 |
| 0.5662 | 0.5976 | 0.6774 | 16 |
| 0.5579 | 0.5926 | 0.6774 | 17 |
| 0.5577 | 0.5811 | 0.6452 | 18 |
| 0.5474 | 0.5880 | 0.6452 | 19 |
| 0.5249 | 0.5921 | 0.6774 | 20 |
| 0.5412 | 0.6075 | 0.6774 | 21 |
| 0.5154 | 0.6266 | 0.7097 | 22 |
| 0.5199 | 0.6063 | 0.6129 | 23 |
| 0.5150 | 0.6054 | 0.5806 | 24 |
| 0.5199 | 0.6107 | 0.6774 | 25 |
| 0.4823 | 0.5959 | 0.6129 | 26 |
| 0.4800 | 0.6581 | 0.6452 | 27 |
| 0.4732 | 0.6620 | 0.6129 | 28 |
| 0.4766 | 0.6284 | 0.6129 | 29 |
| 0.4889 | 0.6978 | 0.5806 | 30 |
| 0.4530 | 0.6636 | 0.5806 | 31 |
| 0.4320 | 0.6348 | 0.6129 | 32 |
| 0.4704 | 0.6326 | 0.6774 | 33 |
| 0.4487 | 0.6937 | 0.6774 | 34 |
| 0.4382 | 0.6423 | 0.5806 | 35 |
| 0.4035 | 0.6926 | 0.5806 | 36 |
| 0.4330 | 0.7225 | 0.5484 | 37 |
| 0.4367 | 0.7459 | 0.5806 | 38 |
### Framework versions
- Transformers 4.35.2
- TensorFlow 2.14.0
- Datasets 2.15.0
- Tokenizers 0.15.0
|
annabellehuether/topic-legal-bert-base-uncased-supreme-court-16batch_3epoch_2e5lr_01wd | annabellehuether | 2023-12-04T01:52:04Z | 6 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:nlpaueb/legal-bert-base-uncased",
"base_model:finetune:nlpaueb/legal-bert-base-uncased",
"license:cc-by-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-12-04T01:12:28Z | ---
license: cc-by-sa-4.0
base_model: nlpaueb/legal-bert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: topic-legal-bert-base-uncased-supreme-court-16batch_3epoch_2e5lr_01wd
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# topic-legal-bert-base-uncased-supreme-court-16batch_3epoch_2e5lr_01wd
This model is a fine-tuned version of [nlpaueb/legal-bert-base-uncased](https://huggingface.co/nlpaueb/legal-bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7456
- Accuracy: 0.7784
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 7
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.8618 | 1.0 | 1319 | 0.7770 | 0.7625 |
| 0.5796 | 2.0 | 2638 | 0.7247 | 0.7821 |
| 0.4043 | 3.0 | 3957 | 0.7456 | 0.7784 |
### Framework versions
- Transformers 4.35.1
- Pytorch 2.1.0+cu121
- Datasets 2.14.6
- Tokenizers 0.14.1
|
platzi/platzi-distilroberta-base-mrpc-glue-keith-alec | platzi | 2023-12-04T01:45:36Z | 1 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-12-04T01:40:18Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
- f1
model-index:
- name: platzi-distilroberta-base-mrpc-glue-keith-alec
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
config: mrpc
split: validation
args: mrpc
metrics:
- name: Accuracy
type: accuracy
value: 0.8259803921568627
- name: F1
type: f1
value: 0.8672897196261682
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# platzi-distilroberta-base-mrpc-glue-keith-alec
This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5994
- Accuracy: 0.8260
- F1: 0.8673
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.529 | 1.09 | 500 | 0.5558 | 0.8039 | 0.8561 |
| 0.3585 | 2.18 | 1000 | 0.5994 | 0.8260 | 0.8673 |
### Framework versions
- Transformers 4.29.0
- Pytorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.13.3
|
VitaliiVrublevskyi/bert-large-cased-finetuned-mrpc | VitaliiVrublevskyi | 2023-12-04T01:42:00Z | 10 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-12-03T16:02:52Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
- f1
model-index:
- name: bert-large-cased-finetuned-mrpc
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
config: mrpc
split: validation
args: mrpc
metrics:
- name: Accuracy
type: accuracy
value: 0.8774509803921569
- name: F1
type: f1
value: 0.9134948096885814
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-large-cased-finetuned-mrpc
This model is a fine-tuned version of [bert-large-cased](https://huggingface.co/bert-large-cased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4358
- Accuracy: 0.8775
- F1: 0.9135
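MRPC is a sentence-pair (paraphrase) task, so a hedged usage sketch passes two sentences to the tokenizer; the example pair below is invented for illustration:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("VitaliiVrublevskyi/bert-large-cased-finetuned-mrpc")
model = AutoModelForSequenceClassification.from_pretrained("VitaliiVrublevskyi/bert-large-cased-finetuned-mrpc")

s1 = "The company reported strong quarterly earnings."
s2 = "Quarterly earnings at the company were strong."
inputs = tokenizer(s1, s2, return_tensors="pt")
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)
print(probs)  # index 1 is conventionally the "equivalent" class for MRPC fine-tunes
```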
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 26
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 115 | 0.4797 | 0.7966 | 0.8614 |
| No log | 2.0 | 230 | 0.4097 | 0.8358 | 0.8822 |
| No log | 3.0 | 345 | 0.3815 | 0.8529 | 0.8976 |
| No log | 4.0 | 460 | 0.3961 | 0.8652 | 0.9050 |
| 0.3944 | 5.0 | 575 | 0.4358 | 0.8775 | 0.9135 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.13.3
|
PhaniRajT/mistral-finetuned-samsum | PhaniRajT | 2023-12-04T01:36:08Z | 0 | 0 | null | [
"tensorboard",
"safetensors",
"generated_from_trainer",
"base_model:TheBloke/Mistral-7B-Instruct-v0.1-GPTQ",
"base_model:finetune:TheBloke/Mistral-7B-Instruct-v0.1-GPTQ",
"license:apache-2.0",
"region:us"
] | null | 2023-12-04T00:52:31Z | ---
license: apache-2.0
base_model: TheBloke/Mistral-7B-Instruct-v0.1-GPTQ
tags:
- generated_from_trainer
model-index:
- name: mistral-finetuned-samsum
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mistral-finetuned-samsum
This model is a fine-tuned version of [TheBloke/Mistral-7B-Instruct-v0.1-GPTQ](https://huggingface.co/TheBloke/Mistral-7B-Instruct-v0.1-GPTQ) on the None dataset.
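If this repo holds a PEFT adapter (typical for fine-tunes run on a GPTQ base), a hedged loading sketch would look like the following; a GPTQ-capable install (e.g. `auto-gptq`/`optimum`) is assumed, and the prompt is a placeholder:

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the quantized base, then attach the adapter from this repo (assumption).
base = AutoModelForCausalLM.from_pretrained(
    "TheBloke/Mistral-7B-Instruct-v0.1-GPTQ", device_map="auto"
)
model = PeftModel.from_pretrained(base, "PhaniRajT/mistral-finetuned-samsum")
tokenizer = AutoTokenizer.from_pretrained("TheBloke/Mistral-7B-Instruct-v0.1-GPTQ")

prompt = "[INST] Summarize the following dialogue: ... [/INST]"  # placeholder
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```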
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- training_steps: 250
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.36.0.dev0
- Pytorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
|
migtissera/Tess-7B-v1.4 | migtissera | 2023-12-04T01:34:29Z | 1,618 | 6 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-12-04T01:21:27Z | ---
license: apache-2.0
---
# Tess

Tess, short for Tesoro (Treasure in Italian), is a general-purpose Large Language Model series. Tess-XS-v1.4 was trained on the Mistral-7B base.
# Prompt Format:
```
SYSTEM: <ANY SYSTEM CONTEXT>
USER:
ASSISTANT:
```
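A minimal generation sketch using this prompt format (the system message and question are placeholders, and `device_map="auto"` assumes `accelerate` is installed):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("migtissera/Tess-7B-v1.4")
model = AutoModelForCausalLM.from_pretrained("migtissera/Tess-7B-v1.4", device_map="auto")

prompt = "SYSTEM: You are a helpful assistant.\nUSER: What is the capital of Italy?\nASSISTANT:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
 |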
annabellehuether/partisan-legal-bert-base-uncased-supreme-court-32batch_3epoch_5e5lr_01wd | annabellehuether | 2023-12-04T01:33:30Z | 6 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:nlpaueb/legal-bert-base-uncased",
"base_model:finetune:nlpaueb/legal-bert-base-uncased",
"license:cc-by-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-12-04T00:55:26Z | ---
license: cc-by-sa-4.0
base_model: nlpaueb/legal-bert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: partisan-legal-bert-base-uncased-supreme-court-32batch_3epoch_5e5lr_01wd
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# partisan-legal-bert-base-uncased-supreme-court-32batch_3epoch_5e5lr_01wd
This model is a fine-tuned version of [nlpaueb/legal-bert-base-uncased](https://huggingface.co/nlpaueb/legal-bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6765
- Accuracy: 0.6485
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 7
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6388 | 1.0 | 660 | 0.5573 | 0.6578 |
| 0.5927 | 2.0 | 1320 | 0.5635 | 0.6578 |
| 0.5289 | 3.0 | 1980 | 0.6765 | 0.6485 |
### Framework versions
- Transformers 4.35.1
- Pytorch 2.1.0+cu121
- Datasets 2.14.6
- Tokenizers 0.14.1
|
Reglacia/Miyuki | Reglacia | 2023-12-04T01:30:48Z | 0 | 0 | diffusers | [
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"template:sd-lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:artistic-2.0",
"region:us"
] | text-to-image | 2023-12-04T01:23:38Z | ---
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
- template:sd-lora
widget:
- text: '-'
output:
url: images/IMG_1343.jpeg
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: null
license: artistic-2.0
---
# Miyuki Izayoi
<Gallery />
## Model description
This is Miyuki Izayoi. She is a blader and a singer. She is a Beyblade OC for MFB.
## Download model
[Download](/Reglacia/Miyuki/tree/main) them in the Files & versions tab.
|
FounderOfHuggingface/fresh_gpt2_lora_r16_dbpedia_14_t300_e5_non_member_shadow8 | FounderOfHuggingface | 2023-12-04T01:30:34Z | 0 | 0 | peft | [
"peft",
"arxiv:1910.09700",
"base_model:openai-community/gpt2",
"base_model:adapter:openai-community/gpt2",
"region:us"
] | null | 2023-12-04T01:30:32Z | ---
library_name: peft
base_model: gpt2
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
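Pending the author's own snippet, here is a minimal sketch of loading this LoRA adapter on top of the `gpt2` base; the prompt is a placeholder, and what the adapter was actually tuned to produce is not documented in this card:

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained("gpt2")
model = PeftModel.from_pretrained(
    base, "FounderOfHuggingface/fresh_gpt2_lora_r16_dbpedia_14_t300_e5_non_member_shadow8"
)
tokenizer = AutoTokenizer.from_pretrained("gpt2")

inputs = tokenizer("Example prompt:", return_tensors="pt")  # placeholder prompt
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```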
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
### Framework versions
- PEFT 0.6.2
|
ThuyNT03/KLTN_COQE_viT5_POSAL | ThuyNT03 | 2023-12-04T01:27:24Z | 5 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:ThuyNT03/KLTN_COQE_viT5_POSAL",
"base_model:finetune:ThuyNT03/KLTN_COQE_viT5_POSAL",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2023-12-03T10:56:43Z | ---
license: mit
base_model: ThuyNT03/KLTN_COQE_viT5_POSAL
tags:
- generated_from_trainer
model-index:
- name: KLTN_COQE_viT5_POSAL
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# KLTN_COQE_viT5_POSAL
This model is a fine-tuned version of [ThuyNT03/KLTN_COQE_viT5_POSAL](https://huggingface.co/ThuyNT03/KLTN_COQE_viT5_POSAL) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.35.0
- Pytorch 2.0.0
- Datasets 2.1.0
- Tokenizers 0.14.1
|
hkivancoral/smids_1x_deit_small_rms_00001_fold3 | hkivancoral | 2023-12-04T01:21:56Z | 8 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:facebook/deit-small-patch16-224",
"base_model:finetune:facebook/deit-small-patch16-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2023-12-04T00:50:37Z | ---
license: apache-2.0
base_model: facebook/deit-small-patch16-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: smids_1x_deit_small_rms_00001_fold3
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: test
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.905
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# smids_1x_deit_small_rms_00001_fold3
This model is a fine-tuned version of [facebook/deit-small-patch16-224](https://huggingface.co/facebook/deit-small-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7182
- Accuracy: 0.905
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.3259 | 1.0 | 75 | 0.3001 | 0.89 |
| 0.2426 | 2.0 | 150 | 0.3217 | 0.8717 |
| 0.1676 | 3.0 | 225 | 0.2596 | 0.9083 |
| 0.1287 | 4.0 | 300 | 0.2827 | 0.895 |
| 0.0316 | 5.0 | 375 | 0.3452 | 0.885 |
| 0.0237 | 6.0 | 450 | 0.3793 | 0.9017 |
| 0.0244 | 7.0 | 525 | 0.4128 | 0.8967 |
| 0.0233 | 8.0 | 600 | 0.4590 | 0.8883 |
| 0.0286 | 9.0 | 675 | 0.4790 | 0.8983 |
| 0.0295 | 10.0 | 750 | 0.4835 | 0.8917 |
| 0.0562 | 11.0 | 825 | 0.4705 | 0.9067 |
| 0.0087 | 12.0 | 900 | 0.5035 | 0.9033 |
| 0.0083 | 13.0 | 975 | 0.5418 | 0.9017 |
| 0.0001 | 14.0 | 1050 | 0.5563 | 0.9 |
| 0.0012 | 15.0 | 1125 | 0.5874 | 0.8983 |
| 0.0001 | 16.0 | 1200 | 0.5698 | 0.8967 |
| 0.0001 | 17.0 | 1275 | 0.5930 | 0.9033 |
| 0.0062 | 18.0 | 1350 | 0.5972 | 0.9017 |
| 0.0048 | 19.0 | 1425 | 0.5918 | 0.9033 |
| 0.0089 | 20.0 | 1500 | 0.6518 | 0.9017 |
| 0.0001 | 21.0 | 1575 | 0.7835 | 0.885 |
| 0.0001 | 22.0 | 1650 | 0.6700 | 0.9 |
| 0.0031 | 23.0 | 1725 | 0.6679 | 0.8983 |
| 0.0 | 24.0 | 1800 | 0.6364 | 0.9033 |
| 0.0001 | 25.0 | 1875 | 0.6464 | 0.8983 |
| 0.003 | 26.0 | 1950 | 0.6535 | 0.8967 |
| 0.0 | 27.0 | 2025 | 0.6525 | 0.8983 |
| 0.0 | 28.0 | 2100 | 0.6526 | 0.8983 |
| 0.0 | 29.0 | 2175 | 0.6663 | 0.895 |
| 0.0 | 30.0 | 2250 | 0.6645 | 0.8983 |
| 0.0 | 31.0 | 2325 | 0.6717 | 0.9 |
| 0.0 | 32.0 | 2400 | 0.6659 | 0.8983 |
| 0.0 | 33.0 | 2475 | 0.6774 | 0.9017 |
| 0.0051 | 34.0 | 2550 | 0.6726 | 0.905 |
| 0.0059 | 35.0 | 2625 | 0.7209 | 0.8933 |
| 0.0031 | 36.0 | 2700 | 0.6818 | 0.9067 |
| 0.0022 | 37.0 | 2775 | 0.6938 | 0.8967 |
| 0.0 | 38.0 | 2850 | 0.6968 | 0.8967 |
| 0.0 | 39.0 | 2925 | 0.7122 | 0.8983 |
| 0.0 | 40.0 | 3000 | 0.7008 | 0.8983 |
| 0.0 | 41.0 | 3075 | 0.7070 | 0.8983 |
| 0.0026 | 42.0 | 3150 | 0.7002 | 0.9 |
| 0.0025 | 43.0 | 3225 | 0.7107 | 0.9 |
| 0.0 | 44.0 | 3300 | 0.7106 | 0.9033 |
| 0.0025 | 45.0 | 3375 | 0.7116 | 0.905 |
| 0.0025 | 46.0 | 3450 | 0.7142 | 0.905 |
| 0.0047 | 47.0 | 3525 | 0.7163 | 0.9033 |
| 0.0 | 48.0 | 3600 | 0.7169 | 0.9033 |
| 0.0 | 49.0 | 3675 | 0.7178 | 0.9033 |
| 0.0045 | 50.0 | 3750 | 0.7182 | 0.905 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
|
FounderOfHuggingface/fresh_gpt2_lora_r16_dbpedia_14_t300_e5_non_member_shadow7 | FounderOfHuggingface | 2023-12-04T01:18:56Z | 0 | 0 | peft | [
"peft",
"arxiv:1910.09700",
"base_model:openai-community/gpt2",
"base_model:adapter:openai-community/gpt2",
"region:us"
] | null | 2023-12-04T01:18:53Z | ---
library_name: peft
base_model: gpt2
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
### Framework versions
- PEFT 0.6.2
|
iyoussef1079/bert-finetuned-ner | iyoussef1079 | 2023-12-04T01:12:01Z | 1 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"token-classification",
"generated_from_trainer",
"dataset:conll2003",
"base_model:google-bert/bert-base-cased",
"base_model:finetune:google-bert/bert-base-cased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2023-12-04T00:26:33Z | ---
license: apache-2.0
base_model: bert-base-cased
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
config: conll2003
split: validation
args: conll2003
metrics:
- name: Precision
type: precision
value: 0.9368787276341949
- name: Recall
type: recall
value: 0.9516997643890945
- name: F1
type: f1
value: 0.9442310903322757
- name: Accuracy
type: accuracy
value: 0.9870930711720728
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0277
- Precision: 0.9369
- Recall: 0.9517
- F1: 0.9442
- Accuracy: 0.9871
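A hedged usage sketch via the token-classification pipeline (the sentence is invented; entity groups follow the CoNLL-2003 label set the model was fine-tuned on):

```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="iyoussef1079/bert-finetuned-ner",
    aggregation_strategy="simple",
)
print(ner("Angela Merkel visited the Siemens plant in Munich."))
```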
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0336 | 1.0 | 1756 | 0.0350 | 0.9037 | 0.9334 | 0.9183 | 0.9811 |
| 0.0168 | 2.0 | 3512 | 0.0269 | 0.9305 | 0.9504 | 0.9403 | 0.9865 |
| 0.0095 | 3.0 | 5268 | 0.0277 | 0.9369 | 0.9517 | 0.9442 | 0.9871 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.1
- Datasets 2.15.0
- Tokenizers 0.13.2
|
LarryAIDraw/narmaya | LarryAIDraw | 2023-12-04T01:11:36Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-12-04T01:03:46Z | ---
license: creativeml-openrail-m
---
https://civitai.com/models/218371/narmaya-granblue-fantasy-or-goofy-ai |
LarryAIDraw/implacable | LarryAIDraw | 2023-12-04T01:11:16Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-12-04T01:02:43Z | ---
license: creativeml-openrail-m
---
https://civitai.com/models/218797/implacable-azur-lane |
LarryAIDraw/HanyaV4-10 | LarryAIDraw | 2023-12-04T01:11:07Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-12-04T01:02:23Z | ---
license: creativeml-openrail-m
---
https://civitai.com/models/218139/hanya-lora-honkai-star-rail |
LarryAIDraw/ServalLandauV2 | LarryAIDraw | 2023-12-04T01:10:46Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-12-04T01:01:31Z | ---
license: creativeml-openrail-m
---
https://civitai.com/models/157125/serval-landau-honkai-star-rail |
LarryAIDraw/suzukagozen-fate-richy-v1 | LarryAIDraw | 2023-12-04T01:10:37Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-12-04T01:01:08Z | ---
license: creativeml-openrail-m
---
https://civitai.com/models/220820/suzuka-gozentate-eboshijk-saber-fate-lora-or-6-outfits |
LarryAIDraw/ShizukaV2 | LarryAIDraw | 2023-12-04T01:10:27Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-12-04T01:00:45Z | ---
license: creativeml-openrail-m
---
https://civitai.com/models/75924/shizuka-masou-rance-series |
Kuwon/chkpt | Kuwon | 2023-12-04T01:05:02Z | 4 | 0 | transformers | [
"transformers",
"safetensors",
"electra",
"text-classification",
"generated_from_trainer",
"dataset:generator",
"base_model:monologg/koelectra-small-v3-discriminator",
"base_model:finetune:monologg/koelectra-small-v3-discriminator",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-12-01T04:04:08Z | ---
base_model: monologg/koelectra-small-v3-discriminator
tags:
- generated_from_trainer
datasets:
- generator
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: chkpt
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: generator
type: generator
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.8826086956521739
- name: F1
type: f1
value: 0.8275730495029622
- name: Precision
type: precision
value: 0.7789981096408317
- name: Recall
type: recall
value: 0.8826086956521739
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# chkpt
This model is a fine-tuned version of [monologg/koelectra-small-v3-discriminator](https://huggingface.co/monologg/koelectra-small-v3-discriminator) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2815
- Accuracy: 0.8826
- F1: 0.8276
- Precision: 0.7790
- Recall: 0.8826
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|
| No log | 1.0 | 29 | 1.2815 | 0.8826 | 0.8276 | 0.7790 | 0.8826 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.1
- Datasets 2.15.0
- Tokenizers 0.15.0
|
hydrochii/marian-finetuned-kde4-en-to-fr | hydrochii | 2023-12-04T01:00:24Z | 12 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"marian",
"text2text-generation",
"translation",
"generated_from_trainer",
"dataset:kde4",
"base_model:Helsinki-NLP/opus-mt-en-fr",
"base_model:finetune:Helsinki-NLP/opus-mt-en-fr",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | translation | 2023-12-03T22:32:07Z | ---
license: apache-2.0
base_model: Helsinki-NLP/opus-mt-en-fr
tags:
- translation
- generated_from_trainer
datasets:
- kde4
metrics:
- bleu
model-index:
- name: marian-finetuned-kde4-en-to-fr
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: kde4
type: kde4
config: en-fr
split: train
args: en-fr
metrics:
- name: Bleu
type: bleu
value: 52.91104527365588
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# marian-finetuned-kde4-en-to-fr
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-fr](https://huggingface.co/Helsinki-NLP/opus-mt-en-fr) on the kde4 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8559
- Bleu: 52.9110
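A short, hedged usage sketch through the translation pipeline (the input sentence is a placeholder):

```python
from transformers import pipeline

translator = pipeline(
    "translation",
    model="hydrochii/marian-finetuned-kde4-en-to-fr",
)
print(translator("Default to expanded threads"))
```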
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.35.0
- Pytorch 2.0.0
- Datasets 2.1.0
- Tokenizers 0.14.1
|
ij5/pixel | ij5 | 2023-12-04T00:57:04Z | 9 | 3 | diffusers | [
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"template:sd-lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"region:us"
] | text-to-image | 2023-12-04T00:56:46Z | ---
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
- template:sd-lora
widget:
- text: '-'
output:
url: images/girl.png
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: null
---
# pixel
<Gallery />
## Download model
Weights for this model are available in Safetensors format.
[Download](/ij5/pixel/tree/main) them in the Files & versions tab.
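A hedged diffusers sketch for trying the LoRA on top of the SDXL base; it assumes the LoRA safetensors sits at the repo root (pass `weight_name=...` if the file has a non-default name), and the prompt is a placeholder:

```python
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("ij5/pixel")

image = pipe("pixel art, a girl standing in a field").images[0]
image.save("pixel_girl.png")
```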
|
FounderOfHuggingface/fresh_gpt2_lora_r16_dbpedia_14_t300_e5_non_member_shadow5 | FounderOfHuggingface | 2023-12-04T00:55:42Z | 0 | 0 | peft | [
"peft",
"arxiv:1910.09700",
"base_model:openai-community/gpt2",
"base_model:adapter:openai-community/gpt2",
"region:us"
] | null | 2023-12-04T00:55:37Z | ---
library_name: peft
base_model: gpt2
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
### Framework versions
- PEFT 0.6.2
|
kvriza8/blip2-opt-2.7b-AF-captions | kvriza8 | 2023-12-04T00:48:19Z | 2 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:Salesforce/blip2-opt-2.7b",
"base_model:adapter:Salesforce/blip2-opt-2.7b",
"region:us"
] | null | 2023-12-04T00:48:13Z | ---
library_name: peft
base_model: Salesforce/blip2-opt-2.7b
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
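The settings above roughly correspond to the following hedged loading sketch (8-bit base plus this PEFT adapter); a GPU with a working `bitsandbytes` install is assumed, and the image path is a placeholder:

```python
from PIL import Image
from peft import PeftModel
from transformers import AutoProcessor, Blip2ForConditionalGeneration, BitsAndBytesConfig

# 8-bit config mirroring the values listed above.
quant_config = BitsAndBytesConfig(load_in_8bit=True, llm_int8_threshold=6.0)
base = Blip2ForConditionalGeneration.from_pretrained(
    "Salesforce/blip2-opt-2.7b", quantization_config=quant_config, device_map="auto"
)
model = PeftModel.from_pretrained(base, "kvriza8/blip2-opt-2.7b-AF-captions")
processor = AutoProcessor.from_pretrained("Salesforce/blip2-opt-2.7b")

image = Image.open("sample.png")  # placeholder path
inputs = processor(images=image, return_tensors="pt").to(model.device)
generated = model.generate(**inputs, max_new_tokens=32)
print(processor.batch_decode(generated, skip_special_tokens=True)[0])
```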
### Framework versions
- PEFT 0.6.3.dev0 |
VitaliiVrublevskyi/albert-xlarge-v1-finetuned-mrpc | VitaliiVrublevskyi | 2023-12-04T00:44:52Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"albert",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-12-03T20:52:21Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
- f1
model-index:
- name: albert-xlarge-v1-finetuned-mrpc
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
config: mrpc
split: validation
args: mrpc
metrics:
- name: Accuracy
type: accuracy
value: 0.8848039215686274
- name: F1
type: f1
value: 0.9176882661996497
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# albert-xlarge-v1-finetuned-mrpc
This model is a fine-tuned version of [albert-xlarge-v1](https://huggingface.co/albert-xlarge-v1) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5464
- Accuracy: 0.8848
- F1: 0.9177
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 15
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 230 | 0.4669 | 0.7770 | 0.8378 |
| No log | 2.0 | 460 | 0.3652 | 0.8578 | 0.9017 |
| 0.5294 | 3.0 | 690 | 0.3426 | 0.8775 | 0.9110 |
| 0.5294 | 4.0 | 920 | 0.3292 | 0.8799 | 0.9136 |
| 0.2589 | 5.0 | 1150 | 0.5464 | 0.8848 | 0.9177 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.13.3
|
Seongill/nq_mrc_cbr_checkpoints | Seongill | 2023-12-04T00:37:15Z | 0 | 0 | null | [
"safetensors",
"generated_from_trainer",
"base_model:meta-llama/Llama-2-7b-hf",
"base_model:finetune:meta-llama/Llama-2-7b-hf",
"region:us"
] | null | 2023-12-03T05:15:30Z | ---
base_model: meta-llama/Llama-2-7b-hf
tags:
- generated_from_trainer
model-index:
- name: nq_mrc_cbr_checkpoints
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# nq_mrc_cbr_checkpoints
This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.1+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0
|
platzi/platzi-vit-model-aleckeith | platzi | 2023-12-04T00:34:41Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:beans",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2023-12-03T22:22:42Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- beans
metrics:
- accuracy
model-index:
- name: platzi-vit-model-aleckeith
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: beans
type: beans
config: default
split: validation
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9774436090225563
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# platzi-vit-model-aleckeith
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the beans dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0621
- Accuracy: 0.9774
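A minimal usage sketch, assuming a local image of a bean leaf (the path below is a placeholder):
```python
from transformers import pipeline

classifier = pipeline("image-classification", model="platzi/platzi-vit-model-aleckeith")
# Replace with the path to an actual leaf image.
print(classifier("path/to/bean_leaf.jpg"))
```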
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.1238 | 3.85 | 500 | 0.0621 | 0.9774 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.13.3
|
JoseGarcia2002/submodel-3 | JoseGarcia2002 | 2023-12-04T00:33:52Z | 0 | 0 | diffusers | [
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2023-12-04T00:29:29Z | ---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### submodel_3 Dreambooth model trained by JoseGarcia2002 with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
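A minimal `diffusers` sketch for local use; the DreamBooth instance prompt is not documented in this card, so the prompt below is a placeholder, and a CUDA device is assumed.
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "JoseGarcia2002/submodel-3", torch_dtype=torch.float16
).to("cuda")

# The trained instance token is unknown -- replace with the actual concept token.
image = pipe("a photo of the trained concept").images[0]
image.save("sample.png")
```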
Sample pictures of this concept:
|
annabellehuether/partisan-legal-bert-base-uncased-supreme-court-32batch_5epoch_2e5lr_1wd | annabellehuether | 2023-12-04T00:28:48Z | 5 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:nlpaueb/legal-bert-base-uncased",
"base_model:finetune:nlpaueb/legal-bert-base-uncased",
"license:cc-by-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-12-03T23:25:49Z | ---
license: cc-by-sa-4.0
base_model: nlpaueb/legal-bert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: partisan-legal-bert-base-uncased-supreme-court-32batch_5epoch_2e5lr_1wd
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# partisan-legal-bert-base-uncased-supreme-court-32batch_5epoch_2e5lr_1wd
This model is a fine-tuned version of [nlpaueb/legal-bert-base-uncased](https://huggingface.co/nlpaueb/legal-bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7014
- Accuracy: 0.6763
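A minimal inference sketch; the card does not document the label mapping, so the output will likely use generic `LABEL_*` names, and the example sentence is illustrative.
```python
from transformers import pipeline

clf = pipeline(
    "text-classification",
    model="annabellehuether/partisan-legal-bert-base-uncased-supreme-court-32batch_5epoch_2e5lr_1wd",
)
print(clf("The statute must be construed narrowly to avoid constitutional doubt."))
```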
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 7
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6555 | 1.0 | 660 | 0.5453 | 0.6563 |
| 0.603 | 2.0 | 1320 | 0.5560 | 0.67 |
| 0.5715 | 3.0 | 1980 | 0.5691 | 0.6641 |
| 0.4327 | 4.0 | 2640 | 0.6462 | 0.6648 |
| 0.3684 | 5.0 | 3300 | 0.7014 | 0.6763 |
### Framework versions
- Transformers 4.35.1
- Pytorch 2.1.0+cu121
- Datasets 2.14.6
- Tokenizers 0.14.1
|
nrshoudi/hubert_base_arabic_mdd | nrshoudi | 2023-12-04T00:28:17Z | 5 | 0 | transformers | [
"transformers",
"safetensors",
"hubert",
"automatic-speech-recognition",
"generated_from_trainer",
"base_model:facebook/hubert-base-ls960",
"base_model:finetune:facebook/hubert-base-ls960",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2023-12-03T21:33:45Z | ---
license: apache-2.0
base_model: facebook/hubert-base-ls960
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: hubert_base_arabic_mdd
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hubert_base_arabic_mdd
This model is a fine-tuned version of [facebook/hubert-base-ls960](https://huggingface.co/facebook/hubert-base-ls960) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 3.2265
- Wer: 1.0
- Per: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer | Per |
|:-------------:|:-----:|:-----:|:---------------:|:---:|:---:|
| 6.526 | 1.0 | 1637 | 3.3650 | 1.0 | 1.0 |
| 3.2555 | 2.0 | 3274 | 3.2755 | 1.0 | 1.0 |
| 3.2548 | 3.0 | 4911 | 3.2238 | 1.0 | 1.0 |
| 3.2385 | 4.0 | 6548 | 3.2845 | 1.0 | 1.0 |
| 3.2358 | 5.0 | 8185 | 3.2271 | 1.0 | 1.0 |
| 3.237 | 6.0 | 9822 | 3.2473 | 1.0 | 1.0 |
| 3.2622 | 7.0 | 11459 | 3.2289 | 1.0 | 1.0 |
| 3.2614 | 8.0 | 13096 | 3.2283 | 1.0 | 1.0 |
| 3.224 | 9.0 | 14733 | 3.2249 | 1.0 | 1.0 |
| 3.2221 | 10.0 | 16370 | 3.2335 | 1.0 | 1.0 |
| 3.222 | 11.0 | 18007 | 3.2357 | 1.0 | 1.0 |
| 3.2218 | 12.0 | 19644 | 3.2491 | 1.0 | 1.0 |
| 3.2183 | 13.0 | 21281 | 3.2446 | 1.0 | 1.0 |
| 3.2181 | 14.0 | 22918 | 3.2416 | 1.0 | 1.0 |
| 3.2164 | 15.0 | 24555 | 3.2259 | 1.0 | 1.0 |
| 3.2148 | 16.0 | 26192 | 3.2249 | 1.0 | 1.0 |
| 3.2139 | 17.0 | 27829 | 3.2327 | 1.0 | 1.0 |
| 3.2133 | 18.0 | 29466 | 3.2251 | 1.0 | 1.0 |
| 3.2128 | 19.0 | 31103 | 3.2288 | 1.0 | 1.0 |
| 3.2113 | 20.0 | 32740 | 3.2265 | 1.0 | 1.0 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
|
JoseGarcia2002/submodel-1 | JoseGarcia2002 | 2023-12-04T00:27:58Z | 0 | 0 | diffusers | [
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2023-12-04T00:24:01Z | ---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### submodel_1_redo Dreambooth model trained by JoseGarcia2002 with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
Sample pictures of this concept:
|
ThuyNT03/KLTN_COQE_viT5_SOPAL | ThuyNT03 | 2023-12-04T00:25:23Z | 6 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:ThuyNT03/KLTN_COQE_viT5_SOPAL",
"base_model:finetune:ThuyNT03/KLTN_COQE_viT5_SOPAL",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2023-12-02T16:27:37Z | ---
license: mit
base_model: ThuyNT03/KLTN_COQE_viT5_SOPAL
tags:
- generated_from_trainer
model-index:
- name: KLTN_COQE_viT5_SOPAL
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# KLTN_COQE_viT5_SOPAL
This model is a fine-tuned version of [ThuyNT03/KLTN_COQE_viT5_SOPAL](https://huggingface.co/ThuyNT03/KLTN_COQE_viT5_SOPAL) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.35.0
- Pytorch 2.0.0
- Datasets 2.1.0
- Tokenizers 0.14.1
|
ThuyNT03/KLTN_COQE_viT5_PSAOL | ThuyNT03 | 2023-12-04T00:23:32Z | 5 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:ThuyNT03/KLTN_COQE_viT5_PSAOL",
"base_model:finetune:ThuyNT03/KLTN_COQE_viT5_PSAOL",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2023-12-03T09:10:34Z | ---
license: mit
base_model: ThuyNT03/KLTN_COQE_viT5_PSAOL
tags:
- generated_from_trainer
model-index:
- name: KLTN_COQE_viT5_PSAOL
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# KLTN_COQE_viT5_PSAOL
This model is a fine-tuned version of [ThuyNT03/KLTN_COQE_viT5_PSAOL](https://huggingface.co/ThuyNT03/KLTN_COQE_viT5_PSAOL) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.35.0
- Pytorch 2.0.0
- Datasets 2.1.0
- Tokenizers 0.14.1
|
annabellehuether/partisan-bert-base-uncased-supreme-court-32batch_5epoch_2e5lr_1wd | annabellehuether | 2023-12-04T00:23:05Z | 4 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-12-03T23:19:51Z | ---
license: apache-2.0
base_model: bert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: partisan-bert-base-uncased-supreme-court-32batch_5epoch_2e5lr_1wd
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# partisan-bert-base-uncased-supreme-court-32batch_5epoch_2e5lr_1wd
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8492
- Accuracy: 0.6396
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 7
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6391 | 1.0 | 660 | 0.5539 | 0.6330 |
| 0.6032 | 2.0 | 1320 | 0.5506 | 0.6507 |
| 0.5625 | 3.0 | 1980 | 0.6238 | 0.6489 |
| 0.4003 | 4.0 | 2640 | 0.7708 | 0.6363 |
| 0.3281 | 5.0 | 3300 | 0.8492 | 0.6396 |
### Framework versions
- Transformers 4.35.1
- Pytorch 2.1.0+cu121
- Datasets 2.14.6
- Tokenizers 0.14.1
|
FounderOfHuggingface/fresh_gpt2_lora_r16_dbpedia_14_t300_e5_non_member_shadow2 | FounderOfHuggingface | 2023-12-04T00:20:45Z | 0 | 0 | peft | [
"peft",
"arxiv:1910.09700",
"base_model:openai-community/gpt2",
"base_model:adapter:openai-community/gpt2",
"region:us"
] | null | 2023-12-04T00:20:43Z | ---
library_name: peft
base_model: gpt2
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
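In the absence of card-specific instructions, a generic PEFT loading sketch is given below; the base checkpoint `gpt2` comes from the card metadata, while the prompt and generation settings are illustrative assumptions.
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained("gpt2")
tokenizer = AutoTokenizer.from_pretrained("gpt2")

# Attach this repository's LoRA adapter to the gpt2 base model.
model = PeftModel.from_pretrained(
    base, "FounderOfHuggingface/fresh_gpt2_lora_r16_dbpedia_14_t300_e5_non_member_shadow2"
)

inputs = tokenizer("The Eiffel Tower is", return_tensors="pt")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=20)[0]))
```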
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
### Framework versions
- PEFT 0.6.2
|
hkivancoral/smids_1x_deit_small_rms_00001_fold1 | hkivancoral | 2023-12-04T00:13:59Z | 7 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:facebook/deit-small-patch16-224",
"base_model:finetune:facebook/deit-small-patch16-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2023-12-03T23:42:28Z | ---
license: apache-2.0
base_model: facebook/deit-small-patch16-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: smids_1x_deit_small_rms_00001_fold1
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: test
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.8848080133555927
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# smids_1x_deit_small_rms_00001_fold1
This model is a fine-tuned version of [facebook/deit-small-patch16-224](https://huggingface.co/facebook/deit-small-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7203
- Accuracy: 0.8848
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.4024 | 1.0 | 76 | 0.3457 | 0.8598 |
| 0.2939 | 2.0 | 152 | 0.3056 | 0.8765 |
| 0.1494 | 3.0 | 228 | 0.3010 | 0.8815 |
| 0.1219 | 4.0 | 304 | 0.3026 | 0.8848 |
| 0.0709 | 5.0 | 380 | 0.3230 | 0.8881 |
| 0.0265 | 6.0 | 456 | 0.3473 | 0.8915 |
| 0.0053 | 7.0 | 532 | 0.4250 | 0.8815 |
| 0.0086 | 8.0 | 608 | 0.4355 | 0.8848 |
| 0.0119 | 9.0 | 684 | 0.4635 | 0.8865 |
| 0.0011 | 10.0 | 760 | 0.4824 | 0.8932 |
| 0.0255 | 11.0 | 836 | 0.5139 | 0.8831 |
| 0.0006 | 12.0 | 912 | 0.5793 | 0.8815 |
| 0.0183 | 13.0 | 988 | 0.5403 | 0.8848 |
| 0.0037 | 14.0 | 1064 | 0.5951 | 0.8848 |
| 0.024 | 15.0 | 1140 | 0.5951 | 0.8815 |
| 0.0002 | 16.0 | 1216 | 0.6061 | 0.8798 |
| 0.0001 | 17.0 | 1292 | 0.5992 | 0.8948 |
| 0.0157 | 18.0 | 1368 | 0.6206 | 0.8848 |
| 0.0002 | 19.0 | 1444 | 0.6514 | 0.8881 |
| 0.0058 | 20.0 | 1520 | 0.6656 | 0.8798 |
| 0.0096 | 21.0 | 1596 | 0.6589 | 0.8915 |
| 0.0045 | 22.0 | 1672 | 0.6509 | 0.8848 |
| 0.0001 | 23.0 | 1748 | 0.6180 | 0.8881 |
| 0.0001 | 24.0 | 1824 | 0.6676 | 0.8765 |
| 0.0077 | 25.0 | 1900 | 0.6271 | 0.8831 |
| 0.0032 | 26.0 | 1976 | 0.7135 | 0.8848 |
| 0.0043 | 27.0 | 2052 | 0.7062 | 0.8765 |
| 0.0034 | 28.0 | 2128 | 0.7064 | 0.8781 |
| 0.0062 | 29.0 | 2204 | 0.6764 | 0.8781 |
| 0.0001 | 30.0 | 2280 | 0.6847 | 0.8831 |
| 0.006 | 31.0 | 2356 | 0.6868 | 0.8865 |
| 0.009 | 32.0 | 2432 | 0.7122 | 0.8881 |
| 0.0 | 33.0 | 2508 | 0.7011 | 0.8865 |
| 0.0 | 34.0 | 2584 | 0.7102 | 0.8881 |
| 0.0121 | 35.0 | 2660 | 0.7023 | 0.8881 |
| 0.0034 | 36.0 | 2736 | 0.7188 | 0.8765 |
| 0.0064 | 37.0 | 2812 | 0.7029 | 0.8848 |
| 0.0001 | 38.0 | 2888 | 0.7098 | 0.8798 |
| 0.0031 | 39.0 | 2964 | 0.7171 | 0.8815 |
| 0.0 | 40.0 | 3040 | 0.7137 | 0.8815 |
| 0.0029 | 41.0 | 3116 | 0.7143 | 0.8815 |
| 0.0 | 42.0 | 3192 | 0.7224 | 0.8815 |
| 0.0048 | 43.0 | 3268 | 0.7157 | 0.8831 |
| 0.0 | 44.0 | 3344 | 0.7190 | 0.8848 |
| 0.0 | 45.0 | 3420 | 0.7200 | 0.8848 |
| 0.0 | 46.0 | 3496 | 0.7204 | 0.8848 |
| 0.0 | 47.0 | 3572 | 0.7209 | 0.8848 |
| 0.0024 | 48.0 | 3648 | 0.7205 | 0.8848 |
| 0.0 | 49.0 | 3724 | 0.7204 | 0.8848 |
| 0.0 | 50.0 | 3800 | 0.7203 | 0.8848 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
|
adm3ws/ppo-LunarLander-v2 | adm3ws | 2023-12-04T00:09:44Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-12-04T00:09:22Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 259.26 +/- 16.60
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename below is an assumption -- check the repository's file list):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Filename is assumed; verify it against the files in this repository.
checkpoint = load_from_hub(repo_id="adm3ws/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
mkbackup/testing_model | mkbackup | 2023-12-04T00:09:01Z | 5 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"hf-asr-leaderboard",
"generated_from_trainer",
"bn",
"dataset:mozilla-foundation/common_voice_11_0",
"base_model:openai/whisper-small",
"base_model:finetune:openai/whisper-small",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2023-12-03T00:58:07Z | ---
language:
- bn
license: apache-2.0
base_model: openai/whisper-small
tags:
- hf-asr-leaderboard
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_11_0
model-index:
- name: Whisper Small Hi - BN
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Small Hi - BN
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Common Voice 11.0 dataset.
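A minimal transcription sketch; the audio path is a placeholder and the Bengali language hint is an assumption based on the card's `bn` language tag.
```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="mkbackup/testing_model")
# Replace with a real audio file (wav/mp3/flac).
print(asr("path/to/audio.wav", generate_kwargs={"language": "bengali", "task": "transcribe"}))
```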
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 250
- training_steps: 300
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.36.0.dev0
- Pytorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
|
SaiedAlshahrani/bloom_3B_8bit_qlora_flores_v2 | SaiedAlshahrani | 2023-12-04T00:08:53Z | 0 | 0 | null | [
"tensorboard",
"safetensors",
"generated_from_trainer",
"base_model:asas-ai/bloom_3B_8bit",
"base_model:finetune:asas-ai/bloom_3B_8bit",
"region:us"
] | null | 2023-12-03T23:12:39Z | ---
base_model: asas-ai/bloom_3B_8bit
tags:
- generated_from_trainer
model-index:
- name: bloom_3B_8bit_qlora_flores_v2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bloom_3B_8bit_qlora_flores_v2
This model is a fine-tuned version of [asas-ai/bloom_3B_8bit](https://huggingface.co/asas-ai/bloom_3B_8bit) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- training_steps: 2200
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.1+cu121
- Datasets 2.4.0
- Tokenizers 0.15.0
|
afrideva/Astridboros-3B-GGUF | afrideva | 2023-12-04T00:08:14Z | 38 | 3 | transformers | [
"transformers",
"gguf",
"gpt",
"llm",
"large language model",
"ggml",
"quantized",
"q2_k",
"q3_k_m",
"q4_k_m",
"q5_k_m",
"q6_k",
"q8_0",
"text-generation",
"en",
"base_model:Aryanne/Astridboros-3B",
"base_model:quantized:Aryanne/Astridboros-3B",
"license:cc-by-sa-4.0",
"region:us"
] | text-generation | 2023-12-03T23:57:52Z | ---
base_model: Aryanne/Astridboros-3B
inference: false
language:
- en
library_name: transformers
license: cc-by-sa-4.0
model_creator: Aryanne
model_name: Astridboros-3B
pipeline_tag: text-generation
quantized_by: afrideva
tags:
- gpt
- llm
- large language model
- gguf
- ggml
- quantized
- q2_k
- q3_k_m
- q4_k_m
- q5_k_m
- q6_k
- q8_0
---
# Aryanne/Astridboros-3B-GGUF
Quantized GGUF model files for [Astridboros-3B](https://huggingface.co/Aryanne/Astridboros-3B) from [Aryanne](https://huggingface.co/Aryanne)
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [astridboros-3b.fp16.gguf](https://huggingface.co/afrideva/Astridboros-3B-GGUF/resolve/main/astridboros-3b.fp16.gguf) | fp16 | 5.59 GB |
| [astridboros-3b.q2_k.gguf](https://huggingface.co/afrideva/Astridboros-3B-GGUF/resolve/main/astridboros-3b.q2_k.gguf) | q2_k | 1.20 GB |
| [astridboros-3b.q3_k_m.gguf](https://huggingface.co/afrideva/Astridboros-3B-GGUF/resolve/main/astridboros-3b.q3_k_m.gguf) | q3_k_m | 1.39 GB |
| [astridboros-3b.q4_k_m.gguf](https://huggingface.co/afrideva/Astridboros-3B-GGUF/resolve/main/astridboros-3b.q4_k_m.gguf) | q4_k_m | 1.71 GB |
| [astridboros-3b.q5_k_m.gguf](https://huggingface.co/afrideva/Astridboros-3B-GGUF/resolve/main/astridboros-3b.q5_k_m.gguf) | q5_k_m | 1.99 GB |
| [astridboros-3b.q6_k.gguf](https://huggingface.co/afrideva/Astridboros-3B-GGUF/resolve/main/astridboros-3b.q6_k.gguf) | q6_k | 2.30 GB |
| [astridboros-3b.q8_0.gguf](https://huggingface.co/afrideva/Astridboros-3B-GGUF/resolve/main/astridboros-3b.q8_0.gguf) | q8_0 | 2.97 GB |
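A minimal loading sketch with `llama-cpp-python`, assuming the q4_k_m file from the table above; the context size and prompt are illustrative.
```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Download one of the quantized files listed above, then load it locally.
path = hf_hub_download("afrideva/Astridboros-3B-GGUF", "astridboros-3b.q4_k_m.gguf")
llm = Llama(model_path=path, n_ctx=2048)

out = llm("Write one sentence about open-source language models.", max_tokens=64)
print(out["choices"][0]["text"])
```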
## Original Model Card:
This model is a merge/fusion of [PAIXAI/Astrid-3B](https://huggingface.co/PAIXAI/Astrid-3B) and [jondurbin/airoboros-3b-3p0](https://huggingface.co/jondurbin/airoboros-3b-3p0), with 16 layers of each glued together (see Astridboros.yml or the configuration below).
```yaml
slices:
- sources:
- model: PAIXAI/Astrid-3B
layer_range: [0, 16]
- sources:
- model: jondurbin/airoboros-3b-3p0
layer_range: [16, 32]
merge_method: passthrough
dtype: float16
``` |
anoram/rads-lit-llama | anoram | 2023-12-04T00:07:38Z | 4 | 0 | null | [
"gguf",
"endpoints_compatible",
"region:us"
] | null | 2023-12-04T00:04:10Z | A fine-tuned model for simplifying radiology reports, intended to be run locally with the rads-lit web tool. |
Tonio-V98T/ppo-LunarLander-v2 | Tonio-V98T | 2023-12-03T23:54:16Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-12-03T23:53:54Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 264.24 +/- 17.91
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename below is an assumption -- check the repository's file list):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Filename is assumed; verify it against the files in this repository.
checkpoint = load_from_hub(repo_id="Tonio-V98T/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
AIisnotapig/Taxi-v3 | AIisnotapig | 2023-12-03T23:52:17Z | 0 | 0 | null | [
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2023-12-03T23:52:15Z | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.50 +/- 2.75
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
model = load_from_hub(repo_id="AIisnotapig/Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
AIisnotapig/q-FrozenLake-v1-4x4-noSlippery | AIisnotapig | 2023-12-03T23:50:27Z | 0 | 0 | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2023-12-03T23:50:25Z | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="AIisnotapig/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
annabellehuether/partisan-bert-base-uncased-supreme-court-32batch_5epoch_2e5lr_01wd | annabellehuether | 2023-12-03T23:49:40Z | 5 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-12-03T22:46:30Z | ---
license: apache-2.0
base_model: bert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: partisan-bert-base-uncased-supreme-court-32batch_5epoch_2e5lr_01wd
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# partisan-bert-base-uncased-supreme-court-32batch_5epoch_2e5lr_01wd
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8345
- Accuracy: 0.6452
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 7
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.64 | 1.0 | 660 | 0.5560 | 0.6315 |
| 0.6054 | 2.0 | 1320 | 0.5527 | 0.6556 |
| 0.5649 | 3.0 | 1980 | 0.6155 | 0.6556 |
| 0.4036 | 4.0 | 2640 | 0.7546 | 0.6415 |
| 0.329 | 5.0 | 3300 | 0.8345 | 0.6452 |
### Framework versions
- Transformers 4.35.1
- Pytorch 2.1.0+cu121
- Datasets 2.14.6
- Tokenizers 0.14.1
|
annabellehuether/partisan-bert-base-uncased-supreme-court-32batch_3epoch_2e5lr_1wd | annabellehuether | 2023-12-03T23:44:32Z | 4 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-12-03T23:06:54Z | ---
license: apache-2.0
base_model: bert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: partisan-bert-base-uncased-supreme-court-32batch_3epoch_2e5lr_1wd
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# partisan-bert-base-uncased-supreme-court-32batch_3epoch_2e5lr_1wd
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6113
- Accuracy: 0.6626
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 7
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6394 | 1.0 | 660 | 0.5579 | 0.6304 |
| 0.6046 | 2.0 | 1320 | 0.5532 | 0.6574 |
| 0.5691 | 3.0 | 1980 | 0.6113 | 0.6626 |
### Framework versions
- Transformers 4.35.1
- Pytorch 2.1.0+cu121
- Datasets 2.14.6
- Tokenizers 0.14.1
|
annabellehuether/partisan-legal-bert-base-uncased-supreme-court-32batch_5epoch_2e5lr_01wd | annabellehuether | 2023-12-03T23:44:15Z | 6 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:nlpaueb/legal-bert-base-uncased",
"base_model:finetune:nlpaueb/legal-bert-base-uncased",
"license:cc-by-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-12-03T22:41:41Z | ---
license: cc-by-sa-4.0
base_model: nlpaueb/legal-bert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: partisan-legal-bert-base-uncased-supreme-court-32batch_5epoch_2e5lr_01wd
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# partisan-legal-bert-base-uncased-supreme-court-32batch_5epoch_2e5lr_01wd
This model is a fine-tuned version of [nlpaueb/legal-bert-base-uncased](https://huggingface.co/nlpaueb/legal-bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7406
- Accuracy: 0.6641
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 7
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6392 | 1.0 | 660 | 0.5436 | 0.6589 |
| 0.5941 | 2.0 | 1320 | 0.5680 | 0.6615 |
| 0.5499 | 3.0 | 1980 | 0.5949 | 0.66 |
| 0.3922 | 4.0 | 2640 | 0.6951 | 0.6622 |
| 0.3281 | 5.0 | 3300 | 0.7406 | 0.6641 |
### Framework versions
- Transformers 4.35.1
- Pytorch 2.1.0+cu121
- Datasets 2.14.6
- Tokenizers 0.14.1
|
maniack/my_awesome_opus_books_model | maniack | 2023-12-03T23:43:58Z | 6 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:opus_books",
"base_model:google-t5/t5-small",
"base_model:finetune:google-t5/t5-small",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2023-12-02T08:03:23Z | ---
license: apache-2.0
base_model: t5-small
tags:
- generated_from_trainer
datasets:
- opus_books
metrics:
- bleu
model-index:
- name: my_awesome_opus_books_model
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: opus_books
type: opus_books
config: en-fr
split: train
args: en-fr
metrics:
- name: Bleu
type: bleu
value: 5.6445
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_opus_books_model
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the opus_books dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6050
- Bleu: 5.6445
- Gen Len: 17.5895
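A minimal usage sketch; the `translate English to French:` prefix is assumed from the standard T5 recipe for the en-fr opus_books split, and the example sentence is illustrative.
```python
from transformers import pipeline

translator = pipeline("text2text-generation", model="maniack/my_awesome_opus_books_model")
# Drop the prefix if the model was trained without one.
print(translator("translate English to French: The book is on the table."))
```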
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:------:|:-------:|
| 1.8499 | 1.0 | 6355 | 1.6289 | 5.4681 | 17.597 |
| 1.8303 | 2.0 | 12710 | 1.6050 | 5.6445 | 17.5895 |
### Framework versions
- Transformers 4.35.0
- Pytorch 2.0.0
- Datasets 2.1.0
- Tokenizers 0.14.1
|
UDACA/gpt2-51M-1.31B-PubMedAbs | UDACA | 2023-12-03T23:35:27Z | 41 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-12-03T23:21:38Z | ---
{}
---
# Model Details
- **Architecture**: Basic/default GPT-2, decoder only
- **Num params**: ~50M
- **Num tokens seen**: ~1.31 B
- **Dataset**: PubMed *Abstracts* subset of The Pile |
UDACA/gpt2-51M-1.31B-USPTO | UDACA | 2023-12-03T23:35:12Z | 41 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-12-03T23:24:25Z | ---
{}
---
# Model Details
- **Architecture**: Basic/default GPT-2, decoder only
- **Num params**: ~50M
- **Num tokens seen**: ~1.31 B
- **Dataset**: USPTO subset of The Pile |
preranar/my_awesome_model | preranar | 2023-12-03T23:26:44Z | 9 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:imdb",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-12-03T22:24:59Z | ---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
model-index:
- name: my_awesome_model
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
config: plain_text
split: test
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.93136
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_model
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2293
- Accuracy: 0.9314
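A minimal sentiment-classification sketch; the reviews are illustrative, and the labels may be the generic `LABEL_0`/`LABEL_1` if `id2label` was not customised during fine-tuning.
```python
from transformers import pipeline

clf = pipeline("text-classification", model="preranar/my_awesome_model")
reviews = [
    "A beautifully shot film with a script that never quite lands.",
    "Two hours I will never get back.",
]
# truncation=True keeps long IMDB reviews within the model's maximum input length.
print(clf(reviews, truncation=True))
```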
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.2245 | 1.0 | 1563 | 0.2001 | 0.9226 |
| 0.1469 | 2.0 | 3126 | 0.2293 | 0.9314 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
|
healthcorum/v6uk-2tya-ixkr-0 | healthcorum | 2023-12-03T23:24:26Z | 6 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bart",
"text2text-generation",
"autotrain",
"dataset:healthcorum/autotrain-data-v6uk-2tya-ixkr",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2023-12-03T23:23:28Z |
---
tags:
- autotrain
- text2text-generation
widget:
- text: "I love AutoTrain"
datasets:
- healthcorum/autotrain-data-v6uk-2tya-ixkr
---
# Model Trained Using AutoTrain
- Problem type: Seq2Seq
## Validation Metrics
loss: 1.1659743785858154
rouge1: 14.9893
rouge2: 10.2707
rougeL: 14.4389
rougeLsum: 14.7875
gen_len: 20.0
runtime: 191.3514
samples_per_second: 10.452
steps_per_second: 0.653
epoch: 3.0
|
ThuyNT03/KLTN_COQE_viT5_SOAPL | ThuyNT03 | 2023-12-03T23:20:32Z | 5 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:ThuyNT03/KLTN_COQE_viT5_SOAPL",
"base_model:finetune:ThuyNT03/KLTN_COQE_viT5_SOAPL",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2023-12-02T15:41:19Z | ---
license: mit
base_model: ThuyNT03/KLTN_COQE_viT5_SOAPL
tags:
- generated_from_trainer
model-index:
- name: KLTN_COQE_viT5_SOAPL
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# KLTN_COQE_viT5_SOAPL
This model is a fine-tuned version of [ThuyNT03/KLTN_COQE_viT5_SOAPL](https://huggingface.co/ThuyNT03/KLTN_COQE_viT5_SOAPL) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.35.0
- Pytorch 2.0.0
- Datasets 2.1.0
- Tokenizers 0.14.1
|
ThuyNT03/KLTN_COQE_viT5_PSOAL | ThuyNT03 | 2023-12-03T23:19:04Z | 5 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:ThuyNT03/KLTN_COQE_viT5_PSOAL",
"base_model:finetune:ThuyNT03/KLTN_COQE_viT5_PSOAL",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2023-12-03T09:08:48Z | ---
license: mit
base_model: ThuyNT03/KLTN_COQE_viT5_PSOAL
tags:
- generated_from_trainer
model-index:
- name: KLTN_COQE_viT5_PSOAL
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# KLTN_COQE_viT5_PSOAL
This model is a fine-tuned version of [ThuyNT03/KLTN_COQE_viT5_PSOAL](https://huggingface.co/ThuyNT03/KLTN_COQE_viT5_PSOAL) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.35.0
- Pytorch 2.0.0
- Datasets 2.1.0
- Tokenizers 0.14.1
|
ThuyNT03/KLTN_COQE_viT5_OSAPL | ThuyNT03 | 2023-12-03T23:10:25Z | 5 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:ThuyNT03/KLTN_COQE_viT5_OSAPL",
"base_model:finetune:ThuyNT03/KLTN_COQE_viT5_OSAPL",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2023-12-02T20:20:23Z | ---
license: mit
base_model: ThuyNT03/KLTN_COQE_viT5_OSAPL
tags:
- generated_from_trainer
model-index:
- name: KLTN_COQE_viT5_OSAPL
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# KLTN_COQE_viT5_OSAPL
This model is a fine-tuned version of [ThuyNT03/KLTN_COQE_viT5_OSAPL](https://huggingface.co/ThuyNT03/KLTN_COQE_viT5_OSAPL) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.35.0
- Pytorch 2.0.0
- Datasets 2.1.0
- Tokenizers 0.14.1
|
FounderOfHuggingface/fresh_gpt2_lora_r16_dbpedia_14_t300_e5_member_shadow37 | FounderOfHuggingface | 2023-12-03T22:59:28Z | 0 | 0 | peft | [
"peft",
"arxiv:1910.09700",
"base_model:openai-community/gpt2",
"base_model:adapter:openai-community/gpt2",
"region:us"
] | null | 2023-12-03T22:59:25Z | ---
library_name: peft
base_model: gpt2
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
### Framework versions
- PEFT 0.6.2
|
VitaliiVrublevskyi/bert-large-uncased-finetuned-mrpc | VitaliiVrublevskyi | 2023-12-03T22:55:58Z | 15 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-12-03T13:07:05Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
- f1
model-index:
- name: bert-large-uncased-finetuned-mrpc
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
config: mrpc
split: validation
args: mrpc
metrics:
- name: Accuracy
type: accuracy
value: 0.8578431372549019
- name: F1
type: f1
value: 0.9006849315068494
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-large-uncased-finetuned-mrpc
This model is a fine-tuned version of [bert-large-uncased](https://huggingface.co/bert-large-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6016
- Accuracy: 0.8578
- F1: 0.9007
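Since no usage snippet is included above, here is a minimal, hedged sketch of how this MRPC paraphrase classifier could be called with the `transformers` pipeline (the model id is this repo; the example sentence pair is illustrative):
```python
from transformers import pipeline
# Sketch only: assumes the fine-tuned weights and tokenizer live in this repo.
classifier = pipeline(
    "text-classification",
    model="VitaliiVrublevskyi/bert-large-uncased-finetuned-mrpc",
)
# MRPC is a sentence-pair task, so both sentences are passed together.
result = classifier({"text": "The company posted record profits.",
                     "text_pair": "Profits at the company hit an all-time high."})
print(result)
```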
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 91
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 115 | 0.4435 | 0.8162 | 0.8777 |
| No log | 2.0 | 230 | 0.3542 | 0.8407 | 0.8870 |
| No log | 3.0 | 345 | 0.4246 | 0.8652 | 0.9063 |
| No log | 4.0 | 460 | 0.5290 | 0.8578 | 0.9010 |
| 0.2887 | 5.0 | 575 | 0.6016 | 0.8578 | 0.9007 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.13.3
|
FounderOfHuggingface/fresh_gpt2_lora_r16_dbpedia_14_t300_e5_member_shadow36 | FounderOfHuggingface | 2023-12-03T22:47:50Z | 0 | 0 | peft | [
"peft",
"arxiv:1910.09700",
"base_model:openai-community/gpt2",
"base_model:adapter:openai-community/gpt2",
"region:us"
] | null | 2023-12-03T22:47:46Z | ---
library_name: peft
base_model: gpt2
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
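In the absence of author-provided code, here is a minimal loading sketch based on this repo's metadata (a PEFT LoRA adapter with base model `gpt2`); everything beyond the two model ids is an assumption:
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer
# Sketch only: base model and adapter id come from this repo's metadata.
base = AutoModelForCausalLM.from_pretrained("gpt2")
model = PeftModel.from_pretrained(
    base, "FounderOfHuggingface/fresh_gpt2_lora_r16_dbpedia_14_t300_e5_member_shadow36"
)
tokenizer = AutoTokenizer.from_pretrained("gpt2")
inputs = tokenizer("The article is about", return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```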
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
### Framework versions
- PEFT 0.6.2
|
judy93536/distilroberta-rbm231k-ep20-op40-all-agree_2p2k | judy93536 | 2023-12-03T22:46:55Z | 4 | 0 | transformers | [
"transformers",
"safetensors",
"roberta",
"text-classification",
"generated_from_trainer",
"dataset:financial_phrasebank",
"base_model:judy93536/distilroberta-rbm231k-ep20-op40",
"base_model:finetune:judy93536/distilroberta-rbm231k-ep20-op40",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-12-03T22:17:21Z | ---
license: apache-2.0
base_model: judy93536/distilroberta-rbm231k-ep20-op40
tags:
- generated_from_trainer
datasets:
- financial_phrasebank
metrics:
- accuracy
model-index:
- name: distilroberta-rbm231k-ep20-op40-all-agree_2p2k
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: financial_phrasebank
type: financial_phrasebank
config: sentences_allagree
split: train
args: sentences_allagree
metrics:
- name: Accuracy
type: accuracy
value: 0.9602649006622517
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilroberta-rbm231k-ep20-op40-all-agree_2p2k
This model is a fine-tuned version of [judy93536/distilroberta-rbm231k-ep20-op40](https://huggingface.co/judy93536/distilroberta-rbm231k-ep20-op40) on the financial_phrasebank dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1320
- Accuracy: 0.9603
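No usage code is provided above; a minimal, hedged inference sketch for this financial sentiment classifier (model id from this repo, example sentence illustrative):
```python
from transformers import pipeline
# Sketch only: assumes the fine-tuned classifier and tokenizer are in this repo.
sentiment = pipeline(
    "text-classification",
    model="judy93536/distilroberta-rbm231k-ep20-op40-all-agree_2p2k",
)
print(sentiment("Operating profit rose clearly compared with the previous quarter."))
```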
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1.253335054745316e-06
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.4
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 114 | 1.0789 | 0.4327 |
| No log | 2.0 | 228 | 1.0442 | 0.6115 |
| No log | 3.0 | 342 | 0.9709 | 0.6137 |
| No log | 4.0 | 456 | 0.8693 | 0.6115 |
| 1.0223 | 5.0 | 570 | 0.8346 | 0.6115 |
| 1.0223 | 6.0 | 684 | 0.7876 | 0.6115 |
| 1.0223 | 7.0 | 798 | 0.7355 | 0.6203 |
| 1.0223 | 8.0 | 912 | 0.6974 | 0.6733 |
| 0.7904 | 9.0 | 1026 | 0.6535 | 0.7219 |
| 0.7904 | 10.0 | 1140 | 0.6045 | 0.7550 |
| 0.7904 | 11.0 | 1254 | 0.5653 | 0.7770 |
| 0.7904 | 12.0 | 1368 | 0.5122 | 0.7859 |
| 0.7904 | 13.0 | 1482 | 0.4652 | 0.7881 |
| 0.5806 | 14.0 | 1596 | 0.4319 | 0.7991 |
| 0.5806 | 15.0 | 1710 | 0.3951 | 0.8057 |
| 0.5806 | 16.0 | 1824 | 0.3557 | 0.8168 |
| 0.5806 | 17.0 | 1938 | 0.3174 | 0.8565 |
| 0.3751 | 18.0 | 2052 | 0.2652 | 0.9007 |
| 0.3751 | 19.0 | 2166 | 0.2188 | 0.9404 |
| 0.3751 | 20.0 | 2280 | 0.1797 | 0.9470 |
| 0.3751 | 21.0 | 2394 | 0.1822 | 0.9492 |
| 0.1873 | 22.0 | 2508 | 0.1523 | 0.9514 |
| 0.1873 | 23.0 | 2622 | 0.1425 | 0.9581 |
| 0.1873 | 24.0 | 2736 | 0.1394 | 0.9581 |
| 0.1873 | 25.0 | 2850 | 0.1396 | 0.9603 |
| 0.1873 | 26.0 | 2964 | 0.1345 | 0.9603 |
| 0.1072 | 27.0 | 3078 | 0.1334 | 0.9603 |
| 0.1072 | 28.0 | 3192 | 0.1322 | 0.9603 |
| 0.1072 | 29.0 | 3306 | 0.1316 | 0.9603 |
| 0.1072 | 30.0 | 3420 | 0.1320 | 0.9603 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
|
linqus/dqn-SpaceInvadersNoFrameskip-v4 | linqus | 2023-12-03T22:38:22Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-12-03T22:37:47Z | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 548.50 +/- 116.00
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga linqus -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga linqus -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga linqus
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
kadabengaran/distilbert-base-uncased-lora-text-classification | kadabengaran | 2023-12-03T22:36:54Z | 0 | 0 | null | [
"tensorboard",
"safetensors",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"region:us"
] | null | 2023-12-03T22:13:54Z | ---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-lora-text-classification
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-lora-text-classification
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2719
- Accuracy: {'accuracy': 0.9169444444444445}
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------------------------------:|
| 0.676 | 1.0 | 1050 | 0.5094 | {'accuracy': 0.8197222222222222} |
| 0.4394 | 2.0 | 2100 | 0.3866 | {'accuracy': 0.8675} |
| 0.3705 | 3.0 | 3150 | 0.3472 | {'accuracy': 0.8822222222222222} |
| 0.3458 | 4.0 | 4200 | 0.3141 | {'accuracy': 0.8908333333333334} |
| 0.3287 | 5.0 | 5250 | 0.3063 | {'accuracy': 0.8977777777777778} |
| 0.2942 | 6.0 | 6300 | 0.2930 | {'accuracy': 0.9033333333333333} |
| 0.2735 | 7.0 | 7350 | 0.2864 | {'accuracy': 0.9091666666666667} |
| 0.2856 | 8.0 | 8400 | 0.2797 | {'accuracy': 0.9122222222222223} |
| 0.2826 | 9.0 | 9450 | 0.2800 | {'accuracy': 0.9113888888888889} |
| 0.2728 | 10.0 | 10500 | 0.2731 | {'accuracy': 0.9147222222222222} |
| 0.2674 | 11.0 | 11550 | 0.2763 | {'accuracy': 0.9136111111111112} |
| 0.2454 | 12.0 | 12600 | 0.2742 | {'accuracy': 0.915} |
| 0.2661 | 13.0 | 13650 | 0.2716 | {'accuracy': 0.9177777777777778} |
| 0.2704 | 14.0 | 14700 | 0.2721 | {'accuracy': 0.9172222222222223} |
| 0.2735 | 15.0 | 15750 | 0.2719 | {'accuracy': 0.9169444444444445} |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
|
FounderOfHuggingface/fresh_gpt2_lora_r16_dbpedia_14_t300_e5_member_shadow35 | FounderOfHuggingface | 2023-12-03T22:36:13Z | 0 | 0 | peft | [
"peft",
"arxiv:1910.09700",
"base_model:openai-community/gpt2",
"base_model:adapter:openai-community/gpt2",
"region:us"
] | null | 2023-12-03T22:36:10Z | ---
library_name: peft
base_model: gpt2
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
### Framework versions
- PEFT 0.6.2
|
mjalg/mistral-medquad-finetune | mjalg | 2023-12-03T22:28:37Z | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:mistralai/Mistral-7B-v0.1",
"base_model:adapter:mistralai/Mistral-7B-v0.1",
"region:us"
] | null | 2023-12-03T22:28:14Z | ---
library_name: peft
base_model: mistralai/Mistral-7B-v0.1
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
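The settings listed above map directly onto a `BitsAndBytesConfig`; a hedged reconstruction of that training-time config in code (a sketch, not the original training script):
```python
import torch
from transformers import BitsAndBytesConfig
# Reconstruction of the quantization settings listed above.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)
```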
### Framework versions
- PEFT 0.6.3.dev0 |
FounderOfHuggingface/fresh_gpt2_lora_r16_dbpedia_14_t300_e5_member_shadow34 | FounderOfHuggingface | 2023-12-03T22:24:37Z | 1 | 0 | peft | [
"peft",
"arxiv:1910.09700",
"base_model:openai-community/gpt2",
"base_model:adapter:openai-community/gpt2",
"region:us"
] | null | 2023-12-03T22:24:33Z | ---
library_name: peft
base_model: gpt2
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
### Framework versions
- PEFT 0.6.2
|
afrideva/Astrohermes-3B-GGUF | afrideva | 2023-12-03T22:24:26Z | 10 | 1 | transformers | [
"transformers",
"gguf",
"gpt",
"llm",
"stablelm",
"ggml",
"quantized",
"q2_k",
"q3_k_m",
"q4_k_m",
"q5_k_m",
"q6_k",
"q8_0",
"text-generation",
"en",
"base_model:Aryanne/Astrohermes-3B",
"base_model:quantized:Aryanne/Astrohermes-3B",
"license:cc-by-sa-4.0",
"region:us"
] | text-generation | 2023-12-03T22:14:38Z | ---
base_model: Aryanne/Astrohermes-3B
inference: false
language:
- en
library_name: transformers
license: cc-by-sa-4.0
model_creator: Aryanne
model_name: Astrohermes-3B
pipeline_tag: text-generation
quantized_by: afrideva
tags:
- gpt
- llm
- stablelm
- gguf
- ggml
- quantized
- q2_k
- q3_k_m
- q4_k_m
- q5_k_m
- q6_k
- q8_0
---
# Aryanne/Astrohermes-3B-GGUF
Quantized GGUF model files for [Astrohermes-3B](https://huggingface.co/Aryanne/Astrohermes-3B) from [Aryanne](https://huggingface.co/Aryanne)
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [astrohermes-3b.fp16.gguf](https://huggingface.co/afrideva/Astrohermes-3B-GGUF/resolve/main/astrohermes-3b.fp16.gguf) | fp16 | 5.59 GB |
| [astrohermes-3b.q2_k.gguf](https://huggingface.co/afrideva/Astrohermes-3B-GGUF/resolve/main/astrohermes-3b.q2_k.gguf) | q2_k | 1.20 GB |
| [astrohermes-3b.q3_k_m.gguf](https://huggingface.co/afrideva/Astrohermes-3B-GGUF/resolve/main/astrohermes-3b.q3_k_m.gguf) | q3_k_m | 1.39 GB |
| [astrohermes-3b.q4_k_m.gguf](https://huggingface.co/afrideva/Astrohermes-3B-GGUF/resolve/main/astrohermes-3b.q4_k_m.gguf) | q4_k_m | 1.71 GB |
| [astrohermes-3b.q5_k_m.gguf](https://huggingface.co/afrideva/Astrohermes-3B-GGUF/resolve/main/astrohermes-3b.q5_k_m.gguf) | q5_k_m | 1.99 GB |
| [astrohermes-3b.q6_k.gguf](https://huggingface.co/afrideva/Astrohermes-3B-GGUF/resolve/main/astrohermes-3b.q6_k.gguf) | q6_k | 2.30 GB |
| [astrohermes-3b.q8_0.gguf](https://huggingface.co/afrideva/Astrohermes-3B-GGUF/resolve/main/astrohermes-3b.q8_0.gguf) | q8_0 | 2.97 GB |
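The files above can also be fetched programmatically. A minimal sketch with `huggingface_hub` (filename taken from the table; the runtime used to load the GGUF file is not specified here):
```python
from huggingface_hub import hf_hub_download
# Download one of the quantized files listed above (q4_k_m as an example).
path = hf_hub_download(
    repo_id="afrideva/Astrohermes-3B-GGUF",
    filename="astrohermes-3b.q4_k_m.gguf",
)
print(path)
```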
## Original Model Card:
This model is a mix of [PAIXAI/Astrid-3B](https://huggingface.co/PAIXAI/Astrid-3B) + [jondurbin/airoboros-3b-3p0](https://huggingface.co/jondurbin/airoboros-3b-3p0) + [cxllin/StableHermes-3b](https://huggingface.co/cxllin/StableHermes-3b), as shown in the YAML (see Astrohermes.yml or below).
[Aryanne/Astridboros-3B](https://huggingface.co/Aryanne/Astridboros-3B) = PAIXAI/Astrid-3B + jondurbin/airoboros-3b-3p0
```yaml
slices:
- sources:
- model: Aryanne/Astridboros-3B
layer_range: [0, 15]
- sources:
- model: cxllin/StableHermes-3b
layer_range: [15, 16]
- sources:
- model: Aryanne/Astridboros-3B
layer_range: [16, 17]
- sources:
- model: cxllin/StableHermes-3b
layer_range: [17, 18]
- sources:
- model: Aryanne/Astridboros-3B
layer_range: [18, 19]
- sources:
- model: cxllin/StableHermes-3b
layer_range: [19, 20]
- sources:
- model: Aryanne/Astridboros-3B
layer_range: [20, 21]
- sources:
- model: cxllin/StableHermes-3b
layer_range: [21, 22]
- sources:
- model: Aryanne/Astridboros-3B
layer_range: [22, 23]
- sources:
- model: cxllin/StableHermes-3b
layer_range: [23, 24]
- sources:
- model: Aryanne/Astridboros-3B
layer_range: [24, 32]
merge_method: passthrough
dtype: float16
``` |
GPT-JF/Model_1A_Clinton | GPT-JF | 2023-12-03T22:22:48Z | 11 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"gpt2",
"text-generation",
"generated_from_trainer",
"base_model:openai-community/gpt2-medium",
"base_model:finetune:openai-community/gpt2-medium",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-11-28T12:17:04Z | ---
license: mit
base_model: gpt2-medium
tags:
- generated_from_trainer
model-index:
- name: Model_1A_Clinton
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Model_1A_Clinton
This model is a fine-tuned version of [gpt2-medium](https://huggingface.co/gpt2-medium) on a large corpus of William J. Clinton's second-term discourse on terrorism.
## To Prompt the Model
Try entering single words or short phrases, such as "terrorism is" or "national security" or "our foreign policy should be",
in the dialogue box on the right-hand side of this page.
Then click on 'compute' and wait for the results. The model will take a few seconds to load on your first prompt.
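For readers who prefer to run the model locally instead of through the hosted widget, a minimal sketch with the `transformers` text-generation pipeline (the prompt and sampling settings are illustrative):
```python
from transformers import pipeline
generator = pipeline("text-generation", model="GPT-JF/Model_1A_Clinton")
out = generator("terrorism is", max_new_tokens=40, do_sample=True)
print(out[0]["generated_text"])
```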
## Intended uses & limitations
This model is intended as an experiment on the utility of LLMs for discourse analysis on a specific corpus of political rhetoric.
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
|
hkivancoral/smids_1x_beit_base_rms_0001_fold4 | hkivancoral | 2023-12-03T21:57:38Z | 7 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"beit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:microsoft/beit-base-patch16-224",
"base_model:finetune:microsoft/beit-base-patch16-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2023-12-02T19:31:22Z | ---
license: apache-2.0
base_model: microsoft/beit-base-patch16-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: smids_1x_beit_base_rms_0001_fold4
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: test
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.7333333333333333
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# smids_1x_beit_base_rms_0001_fold4
This model is a fine-tuned version of [microsoft/beit-base-patch16-224](https://huggingface.co/microsoft/beit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6670
- Accuracy: 0.7333
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.1121 | 1.0 | 75 | 1.0797 | 0.495 |
| 1.1167 | 2.0 | 150 | 1.0990 | 0.3383 |
| 1.1124 | 3.0 | 225 | 1.0945 | 0.3583 |
| 1.0914 | 4.0 | 300 | 1.0750 | 0.35 |
| 1.0647 | 5.0 | 375 | 0.8667 | 0.5733 |
| 0.9583 | 6.0 | 450 | 0.8905 | 0.51 |
| 0.8629 | 7.0 | 525 | 0.7806 | 0.5767 |
| 0.8438 | 8.0 | 600 | 0.7603 | 0.5833 |
| 0.812 | 9.0 | 675 | 0.7613 | 0.595 |
| 0.7427 | 10.0 | 750 | 0.8115 | 0.5917 |
| 0.8147 | 11.0 | 825 | 0.7428 | 0.63 |
| 0.7859 | 12.0 | 900 | 0.7365 | 0.635 |
| 0.8142 | 13.0 | 975 | 0.7468 | 0.6033 |
| 0.7961 | 14.0 | 1050 | 0.7567 | 0.5983 |
| 0.6725 | 15.0 | 1125 | 0.7876 | 0.6067 |
| 0.7608 | 16.0 | 1200 | 0.7339 | 0.635 |
| 0.7146 | 17.0 | 1275 | 0.7178 | 0.645 |
| 0.6646 | 18.0 | 1350 | 0.7089 | 0.67 |
| 0.7767 | 19.0 | 1425 | 0.7436 | 0.6433 |
| 0.7149 | 20.0 | 1500 | 0.7664 | 0.655 |
| 0.7622 | 21.0 | 1575 | 0.7227 | 0.6617 |
| 0.6643 | 22.0 | 1650 | 0.7547 | 0.64 |
| 0.7546 | 23.0 | 1725 | 0.7439 | 0.6483 |
| 0.727 | 24.0 | 1800 | 0.7101 | 0.6633 |
| 0.7334 | 25.0 | 1875 | 0.7022 | 0.6583 |
| 0.6824 | 26.0 | 1950 | 0.7040 | 0.6767 |
| 0.7383 | 27.0 | 2025 | 0.6953 | 0.6733 |
| 0.6459 | 28.0 | 2100 | 0.6860 | 0.6883 |
| 0.7094 | 29.0 | 2175 | 0.6882 | 0.695 |
| 0.7817 | 30.0 | 2250 | 0.6855 | 0.6883 |
| 0.6417 | 31.0 | 2325 | 0.6762 | 0.705 |
| 0.7236 | 32.0 | 2400 | 0.6870 | 0.6917 |
| 0.6676 | 33.0 | 2475 | 0.7290 | 0.685 |
| 0.5839 | 34.0 | 2550 | 0.6648 | 0.7117 |
| 0.6323 | 35.0 | 2625 | 0.6543 | 0.7017 |
| 0.6129 | 36.0 | 2700 | 0.6910 | 0.6883 |
| 0.5785 | 37.0 | 2775 | 0.6666 | 0.7217 |
| 0.6055 | 38.0 | 2850 | 0.6452 | 0.7233 |
| 0.5778 | 39.0 | 2925 | 0.6586 | 0.7217 |
| 0.5892 | 40.0 | 3000 | 0.6725 | 0.7233 |
| 0.6346 | 41.0 | 3075 | 0.6632 | 0.715 |
| 0.5806 | 42.0 | 3150 | 0.6697 | 0.7217 |
| 0.6328 | 43.0 | 3225 | 0.6659 | 0.7117 |
| 0.5711 | 44.0 | 3300 | 0.6651 | 0.71 |
| 0.5685 | 45.0 | 3375 | 0.6727 | 0.7283 |
| 0.4903 | 46.0 | 3450 | 0.6607 | 0.7383 |
| 0.5197 | 47.0 | 3525 | 0.6770 | 0.7283 |
| 0.5572 | 48.0 | 3600 | 0.6616 | 0.7183 |
| 0.5197 | 49.0 | 3675 | 0.6636 | 0.73 |
| 0.489 | 50.0 | 3750 | 0.6670 | 0.7333 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
|
QFun/checkpoint_Sign_256 | QFun | 2023-12-03T21:53:37Z | 1 | 0 | diffusers | [
"diffusers",
"safetensors",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"text-to-image",
"controlnet",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] | text-to-image | 2023-12-02T07:17:36Z |
---
license: openrail++
base_model: stabilityai/stable-diffusion-xl-base-1.0
tags:
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- text-to-image
- diffusers
- controlnet
inference: true
---
# controlnet-QFun/checkpoint_Sign_256
These are controlnet weights trained on stabilityai/stable-diffusion-xl-base-1.0 with a new type of conditioning.
You can find some example images below.
prompt:

prompt:

|
wei23/ppo-LunarLander-v2 | wei23 | 2023-12-03T21:51:43Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-12-03T21:51:24Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 261.91 +/- 18.27
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
# Filename is an assumption -- check this repo's file list for the actual checkpoint name.
checkpoint = load_from_hub("wei23/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
Acetyl/CartPole-v1 | Acetyl | 2023-12-03T21:51:00Z | 0 | 0 | null | [
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] | reinforcement-learning | 2023-12-03T21:50:56Z | ---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: CartPole-v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 204.90 +/- 94.51
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
FounderOfHuggingface/fresh_gpt2_lora_r16_dbpedia_14_t300_e5_member_shadow31 | FounderOfHuggingface | 2023-12-03T21:49:54Z | 0 | 0 | peft | [
"peft",
"arxiv:1910.09700",
"base_model:openai-community/gpt2",
"base_model:adapter:openai-community/gpt2",
"region:us"
] | null | 2023-12-03T21:49:51Z | ---
library_name: peft
base_model: gpt2
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
### Framework versions
- PEFT 0.6.2
|
Miotvinnik00/my_awesome_food_model | Miotvinnik00 | 2023-12-03T21:43:00Z | 7 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:food101",
"base_model:google/vit-base-patch16-224-in21k",
"base_model:finetune:google/vit-base-patch16-224-in21k",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2023-12-03T21:34:03Z | ---
license: apache-2.0
base_model: google/vit-base-patch16-224-in21k
tags:
- generated_from_trainer
datasets:
- food101
metrics:
- accuracy
model-index:
- name: my_awesome_food_model
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: food101
type: food101
config: default
split: train[:5000]
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.918
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_food_model
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the food101 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8575
- Accuracy: 0.918
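A minimal, hedged inference sketch for this food-image classifier (the image path is illustrative):
```python
from transformers import pipeline
classifier = pipeline("image-classification", model="Miotvinnik00/my_awesome_food_model")
print(classifier("my_dish.jpg"))  # path to a local food photo (illustrative)
```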
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.1974 | 0.99 | 62 | 1.1935 | 0.901 |
| 0.8604 | 2.0 | 125 | 0.9183 | 0.914 |
| 0.7686 | 2.98 | 186 | 0.8575 | 0.918 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
|
Pi3141/alpaca-7b-native-enhanced-GPTQ | Pi3141 | 2023-12-03T21:39:44Z | 0 | 2 | adapter-transformers | [
"adapter-transformers",
"pytorch",
"llama",
"text-generation",
"en",
"license:wtfpl",
"4-bit",
"gptq",
"region:us"
] | text-generation | 2023-12-03T19:20:09Z | ---
license: wtfpl
language:
- en
pipeline_tag: text-generation
tags:
- llama
library_name: adapter-transformers
---
Safetensor version: [pi3141/alpaca-7b-native-enhanced-GPTQ-safetensors](https://huggingface.co/Pi3141/alpaca-7b-native-enhanced-GPTQ-safetensors)
### About the GPTQ version
- Quantized to 4 bits (group size 128) using GPTQ-for-LLaMA.
- Intended for use with Oobabooga Text Generation WebUI.
### Loading model in Oobabooga WebUI
- Use the same parameters as the original model, which can be found in the original repo linked below.
- Use the `AutoGPTQ` loader.
### Information about original model
*Original repo: [8bit-coder/alpaca-7b-nativeEnhanced](https://huggingface.co/8bit-coder/alpaca-7b-nativeEnhanced)*
*Alternate: [pi3141/alpaca-7b-native-enhanced](https://huggingface.co/pi3141/alpaca-7b-native-enhanced)*
Below is information about the original model
---
<p align="center"><img src="https://cdn-uploads.huggingface.co/production/uploads/615a1b7a321f65c4da59c3d3/DFHgrYeqJNIchgLrgfZzl.png" height=256></p>
<h1 align="center">
Alpaca 7B Native Enhanced
</h1>
<p align="center">The Most Advanced Alpaca 7B Model</p>
## Model Facts
- Trained natively on 8x Nvidia A100 40GB GPUs; no LoRA used
- Trained on the largest & most accurate dataset yet
- Enhanced Programming Capabilities
- First Alpaca model to have conversational awareness
## Quick Start Guide
Step 1. Make sure git-lfs is installed and ready to use ([Guide](https://git-lfs.com/))
Step 2. Download and install [text-generation-webui](https://github.com/oobabooga/text-generation-webui) according to the repository's instructions
Step 3. Navigate over to one of its model folders and clone this repository:
git clone https://huggingface.co/8bit-coder/alpaca-7b-nativeEnhanced
Step 4. Launch the webui, replace "Your name" with "User" and replace the default instruction prompt with:
> You are an AI language model designed to assist the User by answering their questions, offering advice, and engaging in casual conversation in a friendly, helpful, and informative manner. You respond clearly, coherently, and you consider the conversation history.
>
> User: Hey, how's it going?
>
> Assistant: Hey there! I'm doing great, thank you. What can I help you with today? Let's have a fun chat!
Step 5. Change the settings to match this screenshot:

## Training
#### We used 8x Nvidia A100 40GB GPUs for training this model. Training took ~3 hours and the resulting loss was 0.4761 over 3 epochs. The command used for training is as follows:
> **torchrun --nproc_per_node=8 --master_port=3045 ./stanford_alpaca/train.py --model_name_or_path ./llama-7b-hf --data_path ./alpaca-7b-nativeEnhanced/training_files/alpaca-megaset-fixed.json --fp16 True --output_dir ./output_7b --num_train_epochs 3 --per_device_train_batch_size 2 --per_device_eval_batch_size 2 --gradient_accumulation_steps 16 --evaluation_strategy "no" --save_strategy "steps" --save_steps 200 --learning_rate 2e-5 --weight_decay 0. --warmup_ratio 0.03 --lr_scheduler_type "cosine" --logging_steps 1 --fsdp "full_shard auto_wrap" --fsdp_transformer_layer_cls_to_wrap 'LlamaDecoderLayer' --tf32 True**
There's a folder in this repository called training_files. **full-training-instructions.txt** is the full list of commands, from the start of training all the way to converting the model to 4-bit quantized ggml. **It is not recommended to quantize this model down to 4 bits. The instructions are included purely for informational purposes.**
In addition, the training instructions file is built specifically for rented cloud computing. This means that by following the commands in the file, anyone should be able to train a similar model.
### Common errors while training
- CUDA Out of Memory error
- This is because your GPUs do not have a minimum of 40GB of vram. The weakest GPU that we've been able to successfully train on has been Nvidia A100 40GB. Even with 8 of these, the vram usage was almost always right up at the limit. If you have 40GB GPUs and are still running into this error, try halving the **per_device_train_batch_size** and **per_device_eval_batch_size** and doubling the **gradient_accumulation_steps**. If you have more than 40GB of vram per GPU and wish to train faster, the opposite applies.
- LLaMATokenizer error
- This happens because you forgot to fix tokenizer_config.json in the llama-7b-hf directory. The fix is to rename **LLaMATokenizer** to **LlamaTokenizer** in that file.
- RuntimeError: CUDA error: invalid device ordinal
- This error occurs when your **nproc_per_node** is set to a number greater than how many GPUs you have installed in your system. You can check how many GPUs you have installed by running **nvidia-smi**.
- torchrun is not recognized
- This error occurs when your Python version is older than 3.10. Follow the instructions in the training instructions file to install Miniconda and set up Python 3.10. Circumventing this error by running `python -m torch.distributed.run` will **not work**; many of the dependencies require Python 3.10 and will fatally error out at the start of training.
- KeyError
- This happens when your JSON training data is broken in some way. Try running **dataset_validator.py** in the training_files folder to find the broken key; a sketch of what such a check does follows this list.
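For reference, here is a minimal sketch of the kind of check **dataset_validator.py** performs. This is an illustrative stand-in, not the repo's script: the required key set is assumed from the standard Alpaca record format, and the filename comes from the training command above.

```python
# Flag records whose keys deviate from the Alpaca format
# (e.g. the "instruction:" typo described in the Dataset section).
import json

REQUIRED_KEYS = {"instruction", "input", "output"}

with open("alpaca-megaset-fixed.json", encoding="utf-8") as f:
    data = json.load(f)

for i, record in enumerate(data):
    missing = REQUIRED_KEYS - record.keys()
    if missing:
        print(f"Record {i} is missing keys {sorted(missing)}: {record}")
```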
## π Notes
- The main version of this model is in the Hugging Face Transformers format. The other format (.pth) is provided **purely for experimental use with llama.cpp** and is not guaranteed to have conversational awareness.
- This model exhibits weird behavior when quantized to 4 bits. This might be due to the complexity of the model. We recommend 8 bits as the smallest quantization, but this is untested.
- This model is slightly **underfitted**. We observed that training with a smaller gradient accumulation size improved response quality.
- This model appears to have full conversational awareness. Provided you're running the model in the configuration detailed in the Quick Start Guide, you should be able to hold a very detailed conversation with the AI without issues. Its memory is limited to 2048 tokens; beyond that, it will forget details and need to be reminded. A sketch of history trimming follows this list.
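Since the 2048-token window is a hard limit, a front-end has to trim old turns itself. A minimal sketch, assuming the repo's tokenizer loads with standard Transformers; the helper below is hypothetical, not part of this repo:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("8bit-coder/alpaca-7b-nativeEnhanced")

def trim_history(turns, max_tokens=2048):
    """Keep the most recent turns that fit the model's context window."""
    kept, total = [], 0
    for turn in reversed(turns):
        n = len(tokenizer(turn)["input_ids"])
        if total + n > max_tokens:
            break
        kept.append(turn)
        total += n
    return list(reversed(kept))
```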
## π§ Dataset
The dataset used for training this model is made from [AlpacaDataCleaned](https://github.com/gururise/AlpacaDataCleaned) and [codealpaca](https://github.com/sahil280114/codealpaca). We combined these datasets for the following reasons:
1. Increased accuracy since the original stanford_alpaca dataset had many errors.
2. Better knowledge in programming
3. More training data
We had an issue with the latest AlpacaDataCleaned dataset where, around 90k lines in, one of the keys has a typo: "instruction:" instead of "instruction". We have fixed this error in the provided megaset, but if you plan on grabbing the data directly from AlpacaDataCleaned, make sure to fix the typo yourself; otherwise, the training script will fail with a KeyError. A hedged repair sketch follows.
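A one-off repair for that typo might look like this; the input filename is a placeholder for whatever AlpacaDataCleaned file you downloaded:

```python
import json

with open("alpaca_data_cleaned.json", encoding="utf-8") as f:  # placeholder filename
    data = json.load(f)

for record in data:
    if "instruction:" in record:  # the typo'd key described above
        record["instruction"] = record.pop("instruction:")

with open("alpaca_data_cleaned_fixed.json", "w", encoding="utf-8") as f:
    json.dump(data, f, ensure_ascii=False, indent=2)
```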
## π¨βπ» Credits
Credits go to [Meta](https://github.com/facebookresearch/llama) for creating the foundational LLaMA models and [Stanford](https://github.com/tatsu-lab/stanford_alpaca) for the instructions on how to train. For the dataset, credits go to [AlpacaDataCleaned](https://github.com/gururise/AlpacaDataCleaned) and [codealpaca](https://github.com/sahil280114/codealpaca). Credits also go to [chavinlo](https://huggingface.co/chavinlo/alpaca-native) for creating the original Alpaca 7B Native model, the inspiration behind this model.
Lastly, credits go to the homies that stayed up all night again and again: 8bit, Ο, chug, Taddy, yoyodapro, Symax, and most importantly: stablediffusion for the beautiful artwork
|
Pi3141/alpaca-7b-native-enhanced-GPTQ-safetensors | Pi3141 | 2023-12-03T21:39:32Z | 0 | 1 | adapter-transformers | [
"adapter-transformers",
"pytorch",
"llama",
"text-generation",
"en",
"license:wtfpl",
"4-bit",
"gptq",
"region:us"
] | text-generation | 2023-12-03T21:09:48Z | ---
license: wtfpl
language:
- en
pipeline_tag: text-generation
tags:
- llama
library_name: adapter-transformers
---
Non-safetensor version: [pi3141/alpaca-7b-native-enhanced-GPTQ](https://huggingface.co/Pi3141/alpaca-7b-native-enhanced-GPTQ)
### About the GPTQ version
- Quantized to 4 bits with group size 128 using GPTQ-for-LLaMA.
- Intended for use with Oobabooga Text Generation WebUI.
### Loading model in Oobabooga WebUI
- Use the same parameters as the original model; they can be found in the original repo linked below.
- Use the `ExLlamav2` loader.
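Outside the WebUI, a GPTQ checkpoint like this can usually be loaded programmatically as well. A sketch, assuming a recent `transformers` with the AutoGPTQ/Optimum integration installed (this workflow is not covered by the original card):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "Pi3141/alpaca-7b-native-enhanced-GPTQ-safetensors"
tokenizer = AutoTokenizer.from_pretrained(repo)
# device_map="auto" places the quantized weights on the available GPU(s).
model = AutoModelForCausalLM.from_pretrained(repo, device_map="auto")

prompt = "User: Hey, how's it going?\nAssistant:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```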
### Information about original model
*Original repo: [8bit-coder/alpaca-7b-nativeEnhanced](https://huggingface.co/8bit-coder/alpaca-7b-nativeEnhanced)*
*Alternate: [pi3141/alpaca-7b-native-enhanced](https://huggingface.co/pi3141/alpaca-7b-native-enhanced)*
Below is information about the original model
---
<p align="center"><img src="https://cdn-uploads.huggingface.co/production/uploads/615a1b7a321f65c4da59c3d3/DFHgrYeqJNIchgLrgfZzl.png" height=256></p>
<h1 align="center">
Alpaca 7B Native Enhanced
</h1>
<p align="center">The Most Advanced Alpaca 7B Model</p>
## π Model Facts
- Trained natively on 8x Nvidia A100 40GB GPUs; no LoRA used
- Trained on the largest & most accurate dataset yet
- Enhanced Programming Capabilities
- First Alpaca model to have conversational awareness
## π Quick Start Guide
Step 1. Make sure git-lfs is installed and ready to use ([Guide](https://git-lfs.com/))
Step 2. Download and install [text-generation-webui](https://github.com/oobabooga/text-generation-webui) according to the repository's instructions
Step 3. Navigate to one of its model folders and clone this repository:
`git clone https://huggingface.co/8bit-coder/alpaca-7b-nativeEnhanced`
Step 4. Launch the webui, replace "Your name" with "User" and replace the default instruction prompt with:
> You are an AI language model designed to assist the User by answering their questions, offering advice, and engaging in casual conversation in a friendly, helpful, and informative manner. You respond clearly, coherently, and you consider the conversation history.
>
> User: Hey, how's it going?
>
> Assistant: Hey there! I'm doing great, thank you. What can I help you with today? Let's have a fun chat!
Step 5. Change the settings to match this screenshot:

## π Training
#### We used 8x Nvidia A100 40GB GPUs to train this model. Training took ~3 hours and the final loss was 0.4761 over 3 epochs. The command used for training is as follows:
> **torchrun --nproc_per_node=8 --master_port=3045 ./stanford_alpaca/train.py --model_name_or_path ./llama-7b-hf --data_path ./alpaca-7b-nativeEnhanced/training_files/alpaca-megaset-fixed.json --fp16 True --output_dir ./output_7b --num_train_epochs 3 --per_device_train_batch_size 2 --per_device_eval_batch_size 2 --gradient_accumulation_steps 16 --evaluation_strategy "no" --save_strategy "steps" --save_steps 200 --learning_rate 2e-5 --weight_decay 0. --warmup_ratio 0.03 --lr_scheduler_type "cosine" --logging_steps 1 --fsdp "full_shard auto_wrap" --fsdp_transformer_layer_cls_to_wrap 'LlamaDecoderLayer' --tf32 True**
There's a folder in this repository called training_files. **full-training-instructions.txt** is the full list of commands, from the start of training all the way to converting the model to 4-bit quantized ggml. **It is not recommended to quantize this model down to 4 bits. The instructions are included purely for informational purposes.**
In addition, the training instructions file is built specifically for rented cloud computing. This means that by following the commands in the file, anyone should be able to train a similar model.
### Common errors while training
- CUDA Out of Memory error
- This happens because your GPUs do not have at least 40GB of VRAM. The weakest GPU we've successfully trained on is the Nvidia A100 40GB, and even with 8 of them, VRAM usage was almost always right at the limit. If you have 40GB GPUs and still hit this error, try halving **per_device_train_batch_size** and **per_device_eval_batch_size** and doubling **gradient_accumulation_steps**; the effective batch size stays the same while peak memory drops. If you have more than 40GB of VRAM per GPU and wish to train faster, the opposite applies.
- LLaMATokenizer error
- This happens because you forgot to fix tokenizer_config.json in the llama-7b-hf directory. The fix is to rename **LLaMATokenizer** to **LlamaTokenizer** in that file.
- RuntimeError: CUDA error: invalid device ordinal
- This error occurs when your **nproc_per_node** is set to a number greater than how many GPUs you have installed in your system. You can check how many GPUs you have installed by running **nvidia-smi**.
- torchrun is not recognized
- This error occurs when your Python version is older than 3.10. Follow the instructions in the training instructions file to install Miniconda and set up Python 3.10. Circumventing this error by running `python -m torch.distributed.run` will **not work**; many of the dependencies require Python 3.10 and will fatally error out at the start of training.
- KeyError
- This happens when your JSON training data is broken in some way. Try running **dataset_validator.py** in the training_files folder to find the broken key.
## π Notes
- The main version of this model is in the Hugging Face Transformers format. The other format (.pth) is provided **purely for experimental use with llama.cpp** and is not guaranteed to have conversational awareness.
- This model exhibits weird behavior when quantized to 4 bits. This might be due to the complexity of the model. We recommend 8 bits as the smallest quantization, but this is untested.
- This model is slightly **underfitted**. We observed that training with a smaller gradient accumulation size improved response quality.
- This model appears to have full conversational awareness. Provided you're running the model in the configuration detailed in the Quick Start Guide, you should be able to hold a very detailed conversation with the AI without issues. Its memory is limited to 2048 tokens; beyond that, it will forget details and need to be reminded.
## π§ Dataset
The dataset used for training this model is made from [AlpacaDataCleaned](https://github.com/gururise/AlpacaDataCleaned) and [codealpaca](https://github.com/sahil280114/codealpaca). We combined these datasets for the following reasons:
1. Increased accuracy since the original stanford_alpaca dataset had many errors.
2. Better knowledge in programming
3. More training data
We had an issue with the latest AlpacaDataCleaned dataset where, around 90k lines in, one of the keys has a typo: "instruction:" instead of "instruction". We have fixed this error in the provided megaset, but if you plan on grabbing the data directly from AlpacaDataCleaned, make sure to fix the typo yourself; otherwise, the training script will fail with a KeyError.
## π¨βπ» Credits
Credits go to [Meta](https://github.com/facebookresearch/llama) for creating the foundational LLaMA models and [Stanford](https://github.com/tatsu-lab/stanford_alpaca) for the instructions on how to train. For the dataset, credits go to [AlpacaDataCleaned](https://github.com/gururise/AlpacaDataCleaned) and [codealpaca](https://github.com/sahil280114/codealpaca). Credits also go to [chavinlo](https://huggingface.co/chavinlo/alpaca-native) for creating the original Alpaca 7B Native model, the inspiration behind this model.
Lastly, credits go to the homies that stayed up all night again and again: 8bit, Ο, chug, Taddy, yoyodapro, Symax, and most importantly: stablediffusion for the beautiful artwork
|
IlyaGusev/fred_t5_ru_turbo_alpaca | IlyaGusev | 2023-12-03T21:34:38Z | 151 | 19 | transformers | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"text-generation",
"ru",
"dataset:IlyaGusev/ru_turbo_alpaca",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2023-04-14T10:41:15Z | ---
language:
- ru
pipeline_tag: text-generation
inference: false
datasets:
- IlyaGusev/ru_turbo_alpaca
---
Colab: [link](https://colab.research.google.com/drive/1W6DsQPLinVnuJKqhVASYpuVwuHhhtGLc?usp=sharing)
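A minimal usage sketch; the prompt wording and generation settings below are assumptions, and the linked Colab is authoritative for the exact template:

```python
from transformers import AutoTokenizer, T5ForConditionalGeneration

repo = "IlyaGusev/fred_t5_ru_turbo_alpaca"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = T5ForConditionalGeneration.from_pretrained(repo)

# Russian instruction prompt ("Task: Write a short poem about spring.");
# the exact prompt format is shown in the Colab.
inputs = tokenizer("Задание: Напиши короткое стихотворение о весне.", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
|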
prushton/dreambooth-myra | prushton | 2023-12-03T21:28:41Z | 0 | 0 | diffusers | [
"diffusers",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"dreambooth",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:finetune:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2023-12-03T20:51:30Z |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
instance_prompt: a photo of myra
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- dreambooth
inference: true
---
# DreamBooth - prushton/dreambooth-myra
This is a dreambooth model derived from runwayml/stable-diffusion-v1-5. The weights were trained on a photo of myra using [DreamBooth](https://dreambooth.github.io/).
You can find some example images in the following.
DreamBooth for the text encoder was enabled: False.
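A minimal inference sketch with Diffusers, using the instance prompt above; the step count and guidance scale are common defaults, not tuned values from this repo:

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "prushton/dreambooth-myra", torch_dtype=torch.float16
).to("cuda")

image = pipe("a photo of myra", num_inference_steps=50, guidance_scale=7.5).images[0]
image.save("myra.png")
```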
|
akashmaggon/vit-base-crack-classification-aug-last | akashmaggon | 2023-12-03T21:25:55Z | 10 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"vit",
"image-classification",
"generated_from_trainer",
"base_model:google/vit-base-patch16-224-in21k",
"base_model:finetune:google/vit-base-patch16-224-in21k",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2023-12-03T21:06:17Z | ---
license: apache-2.0
base_model: google/vit-base-patch16-224-in21k
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: vit-base-crack-classification-aug-last
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-crack-classification-aug-last
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0124
- F1: 0.9943
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 6
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.4012 | 1.0 | 212 | 0.3809 | 0.8400 |
| 0.1153 | 2.0 | 424 | 0.1429 | 0.9465 |
| 0.0467 | 3.0 | 636 | 0.0742 | 0.9628 |
| 0.0097 | 4.0 | 848 | 0.0194 | 0.9907 |
| 0.0062 | 5.0 | 1060 | 0.0163 | 0.9943 |
| 0.0039 | 6.0 | 1272 | 0.0124 | 0.9943 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
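For inference, the fine-tuned checkpoint can be used with the standard image-classification pipeline. A sketch; the image path is a placeholder:

```python
from transformers import pipeline

classifier = pipeline(
    "image-classification",
    model="akashmaggon/vit-base-crack-classification-aug-last",
)
print(classifier("surface.jpg"))  # placeholder path to any RGB image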
|
stoves/Popova_Anastasia | stoves | 2023-12-03T21:22:45Z | 4 | 1 | diffusers | [
"diffusers",
"text-to-image",
"autotrain",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:finetune:stabilityai/stable-diffusion-xl-base-1.0",
"region:us"
] | text-to-image | 2023-11-10T13:21:11Z |
---
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: photo of gjdfophge person
tags:
- text-to-image
- diffusers
- autotrain
inference: true
---
# DreamBooth trained by AutoTrain
Text encoder was not trained.
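The card does not say whether this repo holds full pipeline weights or a LoRA; AutoTrain DreamBooth runs for SDXL commonly ship LoRA weights, which the sketch below assumes:

```python
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("stoves/Popova_Anastasia")  # assumes the LoRA layout

image = pipe("photo of gjdfophge person").images[0]
image.save("portrait.png")
```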
|