| modelId (string, 5-139 chars) | author (string, 2-42 chars) | last_modified (timestamp[us, tz=UTC], 2020-02-15 11:33:14 to 2025-08-02 06:27:52) | downloads (int64, 0 to 223M) | likes (int64, 0 to 11.7k) | library_name (548 classes) | tags (list, 1 to 4.05k items) | pipeline_tag (55 classes) | createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 to 2025-08-02 06:27:50) | card (string, 11 to 1.01M chars) |
|---|---|---|---|---|---|---|---|---|---|
Archit001a/distilroberta-base-finetuned-log
|
Archit001a
| 2023-12-04T08:20:42Z | 1 | 0 |
transformers
|
[
"transformers",
"tf",
"roberta",
"fill-mask",
"generated_from_keras_callback",
"base_model:distilbert/distilroberta-base",
"base_model:finetune:distilbert/distilroberta-base",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2023-12-04T08:16:35Z |
---
license: apache-2.0
base_model: distilroberta-base
tags:
- generated_from_keras_callback
model-index:
- name: Archit001a/distilroberta-base-finetuned-log
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Archit001a/distilroberta-base-finetuned-log
This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.5662
- Validation Loss: 0.4704
- Epoch: 2
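A minimal fill-mask usage sketch, assuming the TensorFlow weights indicated by the `tf` tag:
```python
from transformers import pipeline

# Assumption: this repository ships TensorFlow weights (see the "tf" tag), so framework="tf" is forced.
fill_mask = pipeline(
    "fill-mask",
    model="Archit001a/distilroberta-base-finetuned-log",
    framework="tf",
)
# RoBERTa-style models use "<mask>" as the mask token.
print(fill_mask("The server returned a <mask> error."))
```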
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 0.9561 | 0.6193 | 0 |
| 0.6436 | 0.5314 | 1 |
| 0.5662 | 0.4704 | 2 |
### Framework versions
- Transformers 4.35.2
- TensorFlow 2.14.0
- Datasets 2.15.0
- Tokenizers 0.15.0
|
hkivancoral/smids_1x_deit_small_rms_0001_fold3
|
hkivancoral
| 2023-12-04T08:08:39Z | 7 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:facebook/deit-small-patch16-224",
"base_model:finetune:facebook/deit-small-patch16-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-12-04T03:40:59Z |
---
license: apache-2.0
base_model: facebook/deit-small-patch16-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: smids_1x_deit_small_rms_0001_fold3
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: test
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.7016666666666667
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# smids_1x_deit_small_rms_0001_fold3
This model is a fine-tuned version of [facebook/deit-small-patch16-224](https://huggingface.co/facebook/deit-small-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1553
- Accuracy: 0.7017
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
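For reference, a sketch of how the hyperparameters above map onto `TrainingArguments` (illustrative only; `output_dir` is a placeholder):
```python
from transformers import TrainingArguments

# Illustrative mapping of the hyperparameters listed above.
training_args = TrainingArguments(
    output_dir="smids_1x_deit_small_rms_0001_fold3",
    learning_rate=1e-3,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=50,
)
```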
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.14 | 1.0 | 75 | 1.1120 | 0.335 |
| 1.2072 | 2.0 | 150 | 1.0986 | 0.3333 |
| 0.9539 | 3.0 | 225 | 0.9334 | 0.4917 |
| 0.9512 | 4.0 | 300 | 0.9203 | 0.4983 |
| 0.911 | 5.0 | 375 | 1.0159 | 0.445 |
| 0.9061 | 6.0 | 450 | 0.9432 | 0.5133 |
| 0.8557 | 7.0 | 525 | 0.9707 | 0.5517 |
| 0.796 | 8.0 | 600 | 0.8853 | 0.5633 |
| 0.837 | 9.0 | 675 | 0.8169 | 0.5667 |
| 0.8343 | 10.0 | 750 | 0.8015 | 0.5867 |
| 0.8478 | 11.0 | 825 | 0.8424 | 0.5533 |
| 0.7471 | 12.0 | 900 | 0.8480 | 0.5733 |
| 0.7041 | 13.0 | 975 | 0.8701 | 0.55 |
| 0.7689 | 14.0 | 1050 | 0.7602 | 0.625 |
| 0.6385 | 15.0 | 1125 | 0.8263 | 0.5933 |
| 0.7131 | 16.0 | 1200 | 0.7809 | 0.595 |
| 0.7152 | 17.0 | 1275 | 0.8940 | 0.565 |
| 0.7023 | 18.0 | 1350 | 0.7651 | 0.66 |
| 0.6514 | 19.0 | 1425 | 0.7331 | 0.6783 |
| 0.7116 | 20.0 | 1500 | 0.7305 | 0.6883 |
| 0.6713 | 21.0 | 1575 | 0.7155 | 0.6733 |
| 0.634 | 22.0 | 1650 | 0.7520 | 0.6883 |
| 0.664 | 23.0 | 1725 | 0.7448 | 0.6767 |
| 0.5579 | 24.0 | 1800 | 0.7383 | 0.6967 |
| 0.6505 | 25.0 | 1875 | 0.7438 | 0.69 |
| 0.6223 | 26.0 | 1950 | 0.7719 | 0.65 |
| 0.5322 | 27.0 | 2025 | 0.7151 | 0.7017 |
| 0.5674 | 28.0 | 2100 | 0.7078 | 0.6817 |
| 0.493 | 29.0 | 2175 | 0.7341 | 0.71 |
| 0.585 | 30.0 | 2250 | 0.7150 | 0.6867 |
| 0.534 | 31.0 | 2325 | 0.7507 | 0.6967 |
| 0.458 | 32.0 | 2400 | 0.7455 | 0.6983 |
| 0.512 | 33.0 | 2475 | 0.6902 | 0.6967 |
| 0.5074 | 34.0 | 2550 | 0.6773 | 0.6983 |
| 0.512 | 35.0 | 2625 | 0.6981 | 0.7083 |
| 0.452 | 36.0 | 2700 | 0.7620 | 0.7083 |
| 0.4013 | 37.0 | 2775 | 0.7597 | 0.7033 |
| 0.4319 | 38.0 | 2850 | 0.7472 | 0.705 |
| 0.4551 | 39.0 | 2925 | 0.8012 | 0.7067 |
| 0.4136 | 40.0 | 3000 | 0.7673 | 0.7133 |
| 0.4092 | 41.0 | 3075 | 0.8184 | 0.7067 |
| 0.412 | 42.0 | 3150 | 0.8145 | 0.7183 |
| 0.4199 | 43.0 | 3225 | 0.8148 | 0.725 |
| 0.3632 | 44.0 | 3300 | 0.8661 | 0.69 |
| 0.2849 | 45.0 | 3375 | 0.9491 | 0.7167 |
| 0.3044 | 46.0 | 3450 | 0.9227 | 0.7017 |
| 0.2713 | 47.0 | 3525 | 0.9951 | 0.6983 |
| 0.22 | 48.0 | 3600 | 1.0641 | 0.7017 |
| 0.2276 | 49.0 | 3675 | 1.1632 | 0.6983 |
| 0.2183 | 50.0 | 3750 | 1.1553 | 0.7017 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
|
sd-dreambooth-library/dog-ppt-model
|
sd-dreambooth-library
| 2023-12-04T07:58:54Z | 12 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"text-to-image",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-11-14T05:57:21Z |
---
license: creativeml-openrail-m
tags:
- text-to-image
---
### dog_PPt_Model on Stable Diffusion via Dreambooth
#### model by LK0608
This is the Stable Diffusion model fine-tuned on the dog_PPt_Model concept, taught to Stable Diffusion with Dreambooth.
It can be used by modifying the `instance_prompt`: **a photo of sks dog**
You can also train your own concepts and upload them to the library by using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_training.ipynb).
And you can run your new concept via `diffusers`: [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb), [Spaces with the Public Concepts loaded](https://huggingface.co/spaces/sd-dreambooth-library/stable-diffusion-dreambooth-concepts)
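For example, a minimal `diffusers` sketch, assuming a CUDA GPU and fp16 inference:
```python
import torch
from diffusers import StableDiffusionPipeline

# Load this Dreambooth checkpoint and generate with the instance prompt shown above.
pipe = StableDiffusionPipeline.from_pretrained(
    "sd-dreambooth-library/dog-ppt-model", torch_dtype=torch.float16
).to("cuda")
image = pipe("a photo of sks dog sitting on a beach").images[0]
image.save("sks_dog.png")
```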
Here are the images used for training this concept:




|
ashioyajotham/falcon-coder
|
ashioyajotham
| 2023-12-04T07:56:38Z | 1 | 0 |
peft
|
[
"peft",
"arxiv:1910.09700",
"base_model:ybelkada/falcon-7b-sharded-bf16",
"base_model:adapter:ybelkada/falcon-7b-sharded-bf16",
"region:us"
] | null | 2023-12-04T06:58:17Z |
---
library_name: peft
base_model: ybelkada/falcon-7b-sharded-bf16
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
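A sketch of rebuilding this configuration when loading the adapter, assuming the base model listed in the card metadata:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

# Rebuild the 4-bit NF4 config listed above (assumption: this mirrors the training-time settings).
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float16,
)
base = AutoModelForCausalLM.from_pretrained(
    "ybelkada/falcon-7b-sharded-bf16",
    quantization_config=bnb_config,
    device_map="auto",
    trust_remote_code=True,
)
tokenizer = AutoTokenizer.from_pretrained("ybelkada/falcon-7b-sharded-bf16")
model = PeftModel.from_pretrained(base, "ashioyajotham/falcon-coder")
```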
### Framework versions
- PEFT 0.6.3.dev0
|
tomytjandra/blip2-opt-2.7b-football-captions-adapters
|
tomytjandra
| 2023-12-04T07:55:50Z | 2 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:ybelkada/blip2-opt-2.7b-fp16-sharded",
"base_model:adapter:ybelkada/blip2-opt-2.7b-fp16-sharded",
"region:us"
] | null | 2023-12-04T07:55:48Z |
---
library_name: peft
base_model: ybelkada/blip2-opt-2.7b-fp16-sharded
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
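A sketch of attaching this adapter to the 8-bit base model, assuming the BLIP-2 checkpoint listed in the card metadata:
```python
from transformers import Blip2ForConditionalGeneration, Blip2Processor
from peft import PeftModel

# Assumption: the adapter targets the fp16-sharded BLIP-2 OPT-2.7B base listed in the card metadata.
processor = Blip2Processor.from_pretrained("ybelkada/blip2-opt-2.7b-fp16-sharded")
base = Blip2ForConditionalGeneration.from_pretrained(
    "ybelkada/blip2-opt-2.7b-fp16-sharded", load_in_8bit=True, device_map="auto"
)
model = PeftModel.from_pretrained(base, "tomytjandra/blip2-opt-2.7b-football-captions-adapters")
```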
### Framework versions
- PEFT 0.6.3.dev0
|
BELLE-2/BELLE-VL
|
BELLE-2
| 2023-12-04T07:55:18Z | 68 | 28 |
transformers
|
[
"transformers",
"pytorch",
"qwen",
"text-generation",
"custom_code",
"license:apache-2.0",
"autotrain_compatible",
"region:us"
] |
text-generation
| 2023-11-23T09:03:43Z |
---
license: apache-2.0
---
# Model Card for Model ID
## Welcome
If you find this model helpful, please *like* this model and star us on https://github.com/LianjiaTech/BELLE !
## 📝Belle-VL
### Background
**The community already has many open-source multimodal large language models, but most of them focus on English, e.g. [LLava](https://github.com/haotian-liu/LLaVA) and [CogVLM](https://github.com/THUDM/CogVLM), while Chinese multimodal LLMs such as [VisualGLM-6B](https://github.com/THUDM/VisualGLM-6B) and [Qwen-VL](https://github.com/QwenLM/Qwen-VL) are built on relatively small language-model backbones and struggle to balance visual and language ability in practice. Belle-VL therefore extends the visual capability of a stronger language-model backbone, giving the community a more flexible option.**
### Model Overview
Architecturally, we mainly follow the [Qwen-VL](https://github.com/QwenLM/Qwen-VL) model. The original Qwen-VL was trained on top of Qwen-7B, whose base capability is relatively weak, so Belle-VL upgrades the language model to [Qwen14B-chat](https://huggingface.co/Qwen/Qwen-14B-Chat), balancing Chinese language ability with visual ability and offering better extensibility.
### Training Strategy
The original Qwen-VL uses a three-stage training scheme (pre-training, multi-task training, and instruction fine-tuning), which requires substantial data and compute. Inspired by LLaVA-1.5, where multimodal instruction fine-tuning matters more than pre-training, we adopt a two-stage training scheme, as shown in the figure below:

### Training Data
* **Pre-training data**: mainly the LLaVA [558k](https://huggingface.co/datasets/liuhaotian/LLaVA-Pretrain) English instruction data and its corresponding Chinese translation; in addition we collected [Flickr30k-CNA](https://zero.so.com/) and 100k samples randomly selected from [AI Challenger](https://tianchi.aliyun.com/dataset/145781?spm=a2c22.12282016.0.0.5c823721PG2nBW).
* **Multimodal instruction data**: for the instruction fine-tuning stage, data mainly come from open-source projects such as [LLava](https://github.com/haotian-liu/LLaVA), [LRV-Instruction](https://github.com/FuxiaoLiu/LRV-Instruction), [LLaVAR](https://github.com/SALT-NLP/LLaVAR), and [LVIS-INSTRUCT4V](https://github.com/X2FD/LVIS-INSTRUCT4V); we also translated part of this data. We sincerely thank these projects for their contributions to open source!
### Using the Model
``` python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, GenerationConfig
model_dir = '/path/to_finetuned_model/'
img_path = 'your_image_path'
tokenizer = AutoTokenizer.from_pretrained(model_dir, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(model_dir, trust_remote_code=True).eval()
model.generation_config = GenerationConfig.from_pretrained(model_dir, trust_remote_code=True)
question = '详细描述一下这张图'  # "Describe this image in detail"
query = tokenizer.from_list_format([
{'image': img_path}, # Either a local path or an url
{'text': question},
])
response, history = model.chat(tokenizer, query=query, history=None)
print(response)
#or
query = f'<img>{img_path}</img>\n{question}'
response, history = model.chat(tokenizer, query=query, history=None)
print(response)
```
### MME Benchmark
[MME](https://github.com/BradyFU/Awesome-Multimodal-Large-Language-Models/tree/Evaluation) is a comprehensive evaluation benchmark for multimodal large language models. It measures perception and cognition across 14 subtasks, covering existence, counting, position, color, posters, celebrities, scenes, landmarks, artwork, OCR, commonsense reasoning, numerical calculation, text translation, and code reasoning. The latest BELLE-VL model scores **1620.10** on the perception dimension, surpassing LLava and Qwen-VL. Details are below:
| Category | Score |
|------------------------|-------|
| **Perception** | **1620.10** |
| --Existence | 195.00 |
| --Count | 173.33 |
| --Position | 1310.00 |
| --Color | 185.00 |
| --Posters | 160.88|
| --Celebrity | 135.88|
| --Scene | 150.00|
| --Landmark | 169.25 |
| --Artwork | 143.50 |
| --OCR | 177.50 |
| Category | Score |
|------------------------|-------|
| **Cognition** | **305.36** |
| --Commonsense Reasoning | 132.86|
| --Numerical Calculation | 42.50 |
| --Text Translation | 72.50 |
| --Code Reasoning | 57.00 |
### Limitations
The current model is trained only on open-source data and still has shortcomings; users can continue fine-tuning to strengthen it for their own needs.
* The model currently supports interaction with a single image only.
* Chinese OCR ability is currently relatively weak.
## Citation
Please cite our paper and github when using our code, data or model.
```
@misc{BELLE,
author = {BELLEGroup},
title = {BELLE: Be Everyone's Large Language model Engine},
year = {2023},
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {\url{https://github.com/LianjiaTech/BELLE}},
}
```
|
JuanMa360/ppo-LunarLander-v2
|
JuanMa360
| 2023-12-04T07:50:37Z | 2 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-12-04T07:44:10Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 267.48 +/- 9.50
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
```python
!pip install shimmy
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub
repo_id = "JuanMa360/ppo-LunarLander-v2"
filename = "ppo-LunarLander-v2.zip"
# When the model was trained on Python 3.8, the pickle protocol is 5,
# but Python 3.6 and 3.7 use protocol 4.
# To stay compatible we need to:
# 1. Install pickle5 (done at the beginning of the Colab)
# 2. Create custom objects to pass as a parameter to PPO.load()
custom_objects = {
"learning_rate": 0.0,
"lr_schedule": lambda _: 0.0,
"clip_range": lambda _: 0.0,
}
checkpoint = load_from_hub(repo_id, filename)
model = PPO.load(checkpoint, custom_objects=custom_objects, print_system_info=True)
```
|
yily/glm-nwfe-sft-70000
|
yily
| 2023-12-04T07:47:13Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-12-04T07:46:25Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0
|
enaitzb/a2c-PandaReach-v3
|
enaitzb
| 2023-12-04T07:45:35Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"PandaReach-v3",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-12-04T07:41:03Z |
---
library_name: stable-baselines3
tags:
- PandaReach-v3
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReach-v3
type: PandaReach-v3
metrics:
- type: mean_reward
value: -2.20 +/- 0.98
name: mean_reward
verified: false
---
# **PPO** Agent playing **PandaReach-v3**
This is a trained model of a **PPO** agent playing **PandaReach-v3**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
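A minimal loading sketch; the checkpoint filename and the A2C algorithm (suggested by the repository name) are assumptions:
```python
import gymnasium as gym
import panda_gym  # registers the PandaReach-v3 environment
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

# Assumptions: the checkpoint file is named "a2c-PandaReach-v3.zip" and was trained with A2C.
checkpoint = load_from_hub("enaitzb/a2c-PandaReach-v3", "a2c-PandaReach-v3.zip")
model = A2C.load(checkpoint)

env = gym.make("PandaReach-v3")
obs, _ = env.reset()
for _ in range(100):
    action, _ = model.predict(obs, deterministic=True)
    obs, reward, terminated, truncated, _ = env.step(action)
    if terminated or truncated:
        obs, _ = env.reset()
```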
|
archerfmy0831/sd-t2i-360panoimage
|
archerfmy0831
| 2023-12-04T07:35:41Z | 0 | 20 |
diffusers
|
[
"diffusers",
"license:apache-2.0",
"region:us"
] | null | 2023-10-25T11:51:46Z |
---
license: apache-2.0
---
This repo stores model files for https://github.com/ArcherFMY/SD-T2I-360PanoImage
|
hkivancoral/smids_1x_deit_small_rms_0001_fold2
|
hkivancoral
| 2023-12-04T07:34:58Z | 9 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:facebook/deit-small-patch16-224",
"base_model:finetune:facebook/deit-small-patch16-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-12-04T03:07:16Z |
---
license: apache-2.0
base_model: facebook/deit-small-patch16-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: smids_1x_deit_small_rms_0001_fold2
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: test
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.757071547420965
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# smids_1x_deit_small_rms_0001_fold2
This model is a fine-tuned version of [facebook/deit-small-patch16-224](https://huggingface.co/facebook/deit-small-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6765
- Accuracy: 0.7571
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.1018 | 1.0 | 75 | 1.0371 | 0.4260 |
| 0.991 | 2.0 | 150 | 0.9921 | 0.4792 |
| 0.9572 | 3.0 | 225 | 0.9534 | 0.4692 |
| 0.9605 | 4.0 | 300 | 0.9410 | 0.4942 |
| 1.0177 | 5.0 | 375 | 0.9782 | 0.4792 |
| 0.8824 | 6.0 | 450 | 0.9530 | 0.4775 |
| 0.9937 | 7.0 | 525 | 1.2068 | 0.4143 |
| 0.9218 | 8.0 | 600 | 0.9562 | 0.4842 |
| 0.9543 | 9.0 | 675 | 0.9220 | 0.4892 |
| 0.9236 | 10.0 | 750 | 0.9222 | 0.4958 |
| 0.9252 | 11.0 | 825 | 0.8952 | 0.5075 |
| 0.8897 | 12.0 | 900 | 0.8977 | 0.5042 |
| 0.8737 | 13.0 | 975 | 0.8116 | 0.5691 |
| 0.8039 | 14.0 | 1050 | 0.7757 | 0.5790 |
| 0.7793 | 15.0 | 1125 | 0.8219 | 0.5824 |
| 0.8231 | 16.0 | 1200 | 0.7679 | 0.6057 |
| 0.8017 | 17.0 | 1275 | 0.7881 | 0.5874 |
| 0.7891 | 18.0 | 1350 | 0.8079 | 0.5990 |
| 0.7545 | 19.0 | 1425 | 0.7312 | 0.6456 |
| 0.7578 | 20.0 | 1500 | 0.7753 | 0.6123 |
| 0.8565 | 21.0 | 1575 | 0.7816 | 0.6073 |
| 0.7262 | 22.0 | 1650 | 0.8273 | 0.5840 |
| 0.7951 | 23.0 | 1725 | 0.7247 | 0.6339 |
| 0.7867 | 24.0 | 1800 | 0.7753 | 0.6173 |
| 0.7108 | 25.0 | 1875 | 0.7213 | 0.6805 |
| 0.6679 | 26.0 | 1950 | 0.7131 | 0.6556 |
| 0.7183 | 27.0 | 2025 | 0.7432 | 0.6456 |
| 0.6589 | 28.0 | 2100 | 0.6919 | 0.6938 |
| 0.6988 | 29.0 | 2175 | 0.7014 | 0.6689 |
| 0.6704 | 30.0 | 2250 | 0.6664 | 0.7038 |
| 0.6348 | 31.0 | 2325 | 0.6647 | 0.7038 |
| 0.6316 | 32.0 | 2400 | 0.6641 | 0.6988 |
| 0.5915 | 33.0 | 2475 | 0.6743 | 0.6839 |
| 0.6102 | 34.0 | 2550 | 0.6568 | 0.7038 |
| 0.5452 | 35.0 | 2625 | 0.6346 | 0.7271 |
| 0.5721 | 36.0 | 2700 | 0.6475 | 0.7255 |
| 0.5908 | 37.0 | 2775 | 0.6240 | 0.7388 |
| 0.6069 | 38.0 | 2850 | 0.6538 | 0.7354 |
| 0.4947 | 39.0 | 2925 | 0.6146 | 0.7438 |
| 0.4469 | 40.0 | 3000 | 0.6694 | 0.7038 |
| 0.5595 | 41.0 | 3075 | 0.5969 | 0.7438 |
| 0.524 | 42.0 | 3150 | 0.6251 | 0.7438 |
| 0.5223 | 43.0 | 3225 | 0.6144 | 0.7338 |
| 0.4414 | 44.0 | 3300 | 0.6374 | 0.7404 |
| 0.5093 | 45.0 | 3375 | 0.6328 | 0.7488 |
| 0.4116 | 46.0 | 3450 | 0.6556 | 0.7537 |
| 0.414 | 47.0 | 3525 | 0.6472 | 0.7604 |
| 0.445 | 48.0 | 3600 | 0.6566 | 0.7488 |
| 0.3661 | 49.0 | 3675 | 0.6775 | 0.7504 |
| 0.3935 | 50.0 | 3750 | 0.6765 | 0.7571 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
|
asas-ai/bloom_3B_8bit_qlora_mlqa_v2
|
asas-ai
| 2023-12-04T07:29:42Z | 0 | 0 | null |
[
"tensorboard",
"safetensors",
"generated_from_trainer",
"base_model:asas-ai/bloom_3B_8bit",
"base_model:finetune:asas-ai/bloom_3B_8bit",
"region:us"
] | null | 2023-12-04T07:28:56Z |
---
base_model: asas-ai/bloom_3B_8bit
tags:
- generated_from_trainer
model-index:
- name: bloom_3B_8bit_qlora_mlqa_v2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bloom_3B_8bit_qlora_mlqa_v2
This model is a fine-tuned version of [asas-ai/bloom_3B_8bit](https://huggingface.co/asas-ai/bloom_3B_8bit) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- training_steps: 2200
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.1+cu121
- Datasets 2.14.6
- Tokenizers 0.15.0
|
brettbbb/llama_finetune_race_20_cot
|
brettbbb
| 2023-12-04T07:27:14Z | 0 | 0 | null |
[
"safetensors",
"generated_from_trainer",
"base_model:meta-llama/Llama-2-7b-hf",
"base_model:finetune:meta-llama/Llama-2-7b-hf",
"region:us"
] | null | 2023-12-04T05:21:29Z |
---
base_model: meta-llama/Llama-2-7b-hf
tags:
- generated_from_trainer
model-index:
- name: llama_finetune_race_20_cot
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# llama_finetune_race_20_cot
This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 3.2366
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 5
- num_epochs: 20
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.2116 | 1.0 | 150 | 1.3188 |
| 0.8601 | 2.0 | 300 | 1.3724 |
| 0.5773 | 3.0 | 450 | 1.4924 |
| 0.6222 | 4.0 | 600 | 1.6729 |
| 0.2511 | 5.0 | 750 | 1.8350 |
| 0.1554 | 6.0 | 900 | 2.0826 |
| 0.1467 | 7.0 | 1050 | 2.2027 |
| 0.0909 | 8.0 | 1200 | 2.2817 |
| 0.0713 | 9.0 | 1350 | 2.3923 |
| 0.0501 | 10.0 | 1500 | 2.6003 |
| 0.0526 | 11.0 | 1650 | 2.5589 |
| 0.0522 | 12.0 | 1800 | 2.5545 |
| 0.0485 | 13.0 | 1950 | 2.7130 |
| 0.0297 | 14.0 | 2100 | 2.8527 |
| 0.03 | 15.0 | 2250 | 2.8907 |
| 0.0327 | 16.0 | 2400 | 3.0280 |
| 0.0351 | 17.0 | 2550 | 3.0299 |
| 0.0381 | 18.0 | 2700 | 3.1626 |
| 0.031 | 19.0 | 2850 | 3.2051 |
| 0.028 | 20.0 | 3000 | 3.2366 |
### Framework versions
- Transformers 4.36.0.dev0
- Pytorch 2.1.0+cu121
- Datasets 2.13.1
- Tokenizers 0.14.1
|
Sapnil/ppo-LunarLander-v2
|
Sapnil
| 2023-12-04T07:25:53Z | 1 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-12-04T07:19:20Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 228.72 +/- 12.72
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
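A minimal loading and evaluation sketch, assuming the checkpoint is stored as `ppo-LunarLander-v2.zip`:
```python
import gymnasium as gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Assumption: the checkpoint file is named "ppo-LunarLander-v2.zip" in this repository.
checkpoint = load_from_hub("Sapnil/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)

# Roll out one episode and report the total reward.
env = gym.make("LunarLander-v2")
obs, _ = env.reset()
done, total_reward = False, 0.0
while not done:
    action, _ = model.predict(obs, deterministic=True)
    obs, reward, terminated, truncated, _ = env.step(action)
    total_reward += reward
    done = terminated or truncated
print(total_reward)
```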
|
filipealmeida/Mistral-7B-Instruct-v0.1-sharded
|
filipealmeida
| 2023-12-04T07:17:15Z | 1,065 | 13 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"finetuned",
"conversational",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-09-28T00:59:50Z |
---
license: apache-2.0
pipeline_tag: text-generation
tags:
- finetuned
---
# Sharded version of Mistral-7B-Instruct-v0.1
This is the sharded version of Mistral-7B-Instruct-v0.1, so you can use it when you have limited CPU memory.
# Model Card for Mistral-7B-Instruct-v0.1
The Mistral-7B-Instruct-v0.1 Large Language Model (LLM) is an instruct fine-tuned version of the [Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) generative text model, trained on a variety of publicly available conversation datasets.
For full details of this model, please read our [release blog post](https://mistral.ai/news/announcing-mistral-7b/).
## Instruction format
In order to leverage instruction fine-tuning, your prompt should be surrounded by `[INST]` and `[/INST]` tokens. The very first instruction should begin with a beginning-of-sentence id; subsequent instructions should not. The assistant's generation will be terminated by the end-of-sentence token id.
E.g.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
device = "cuda" # the device to load the model onto
model = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-Instruct-v0.1")
tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-Instruct-v0.1")
text = "<s>[INST] What is your favourite condiment? [/INST]"
"Well, I'm quite partial to a good squeeze of fresh lemon juice. It adds just the right amount of zesty flavour to whatever I'm cooking up in the kitchen!</s> "
"[INST] Do you have mayonnaise recipes? [/INST]"
encodeds = tokenizer(text, return_tensors="pt", add_special_tokens=False)
model_inputs = encodeds.to(device)
model.to(device)
generated_ids = model.generate(**model_inputs, max_new_tokens=1000, do_sample=True)
decoded = tokenizer.batch_decode(generated_ids)
print(decoded[0])
```
## Model Architecture
This instruction model is based on Mistral-7B-v0.1, a transformer model with the following architecture choices:
- Grouped-Query Attention
- Sliding-Window Attention
- Byte-fallback BPE tokenizer
## The Mistral AI Team
Albert Jiang, Alexandre Sablayrolles, Arthur Mensch, Chris Bamford, Devendra Singh Chaplot, Diego de las Casas, Florian Bressand, Gianna Lengyel, Guillaume Lample, Lélio Renard Lavaud, Lucile Saulnier, Marie-Anne Lachaux, Pierre Stock, Teven Le Scao, Thibaut Lavril, Thomas Wang, Timothée Lacroix, William El Sayed.
|
TigerResearch/tigerbot-13b-base-v3
|
TigerResearch
| 2023-12-04T07:10:44Z | 9 | 2 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"zh",
"en",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-11-30T01:53:20Z |
---
license: apache-2.0
language:
- zh
- en
---
<div style="width: 100%;">
<p align="center" width="20%">
<img src="http://x-pai.algolet.com/bot/img/logo_core.png" alt="TigerBot" width="20%", style="display: block; margin: auto;"></img>
</p>
</div>
<p align="center">
<font face="黑体" size="5"> A cutting-edge foundation for your very own LLM. </font>
</p>
<p align="center">
💻<a href="https://github.com/TigerResearch/TigerBot" target="_blank">Github</a> • 🌐 <a href="https://tigerbot.com/" target="_blank">TigerBot</a> • 🤗 <a href="https://huggingface.co/TigerResearch" target="_blank">Hugging Face</a>
</p>
# 快速开始
- 方法1,通过transformers使用
- 下载 TigerBot Repo
```shell
git clone https://github.com/TigerResearch/TigerBot.git
```
- 启动infer代码
```shell
python infer.py --model_path TigerResearch/tigerbot-13b-base-v3 --model_type base --max_generate_length 64
```
- 方法2:
- 下载 TigerBot Repo
```shell
git clone https://github.com/TigerResearch/TigerBot.git
```
- 安装git lfs: `git lfs install`
- 通过huggingface或modelscope平台下载权重
```shell
git clone https://huggingface.co/TigerResearch/tigerbot-13b-base-v3
git clone https://www.modelscope.cn/TigerResearch/tigerbot-13b-base-v3.git
```
- 启动infer代码
```shell
python infer.py --model_path tigerbot-13b-base-v3 --model_type base --max_generate_length 64
```
------
# Quick Start
- Method 1, use through transformers
- Clone TigerBot Repo
```shell
git clone https://github.com/TigerResearch/TigerBot.git
```
- Run infer script
```shell
python infer.py --model_path TigerResearch/tigerbot-13b-base-v3 --model_type base --max_generate_length 64
```
- Method 2:
- Clone TigerBot Repo
```shell
git clone https://github.com/TigerResearch/TigerBot.git
```
- install git lfs: `git lfs install`
- Download weights from huggingface or modelscope
```shell
git clone https://huggingface.co/TigerResearch/tigerbot-13b-base-v3
git clone https://www.modelscope.cn/TigerResearch/tigerbot-13b-base-v3.git
```
- Run infer script
```shell
python infer.py --model_path tigerbot-13b-base-v3 --model_type base --max_generate_length 64
```
|
yj2773/hinglish11k-sentiment-analysis
|
yj2773
| 2023-12-04T07:01:03Z | 32 | 6 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"bert",
"text-classification",
"en",
"ur",
"hi",
"multilingual",
"license:afl-3.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-04-07T16:11:03Z |
---
language:
- en
- ur
- hi
- multilingual
license: afl-3.0
widget:
- text: Tum bohot badiya ho.
---
## Hinglish-Bert-Class fine-tuned on the Hinglish11K dataset
### MCC = 0.69
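A minimal inference sketch, assuming standard `text-classification` pipeline usage; the example sentence is the widget text from the metadata above:
```python
from transformers import pipeline

# The example sentence comes from the widget text in the card metadata.
classifier = pipeline("text-classification", model="yj2773/hinglish11k-sentiment-analysis")
print(classifier("Tum bohot badiya ho."))
```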
### Citation info
```bibtex
@model{
contributors= {Mohammad Yusuf Jamal Aziz Azmi and
Ayush Agrawal
},
year = {2022},
timestamp = {Sun, 08 May 2022},
}
```
|
yily/glm-nwfe-sft-50000
|
yily
| 2023-12-04T06:57:45Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-12-04T06:53:01Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0
|
praveenkrjha/Testmodel002
|
praveenkrjha
| 2023-12-04T06:57:06Z | 0 | 0 | null |
[
"dataset:fka/awesome-chatgpt-prompts",
"license:apache-2.0",
"region:us"
] | null | 2023-12-04T06:55:08Z |
---
license: apache-2.0
datasets:
- fka/awesome-chatgpt-prompts
---
This is a sample model that actually does nothing.
|
athirdpath/CleverMage-11b
|
athirdpath
| 2023-12-04T06:55:36Z | 11 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"en",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-12-04T03:39:15Z |
---
license: cc-by-nc-4.0
language:
- en
pipeline_tag: text-generation
---
<p align="center"><font size="6"> <b>Also showing off my LoRA.</b></font></p>
<p align="center"><font size="4"> <b>This guy is fun to talk to, if the occult is your thing.</b></font></p>
<p align="center"><font size="5"> <b>4-bit Examples with LoRA (min_p, alpaca)</b></font></p>
<p align="center"><img src="https://iili.io/JzsmBWv.png"/>
<p align="center"><img src="https://iili.io/JzsmqzJ.png"/>
<p align="center"><font size="5"> <b>4-bit Examples without LoRA (min_p, chatML)</b></font></p>
<p align="center"><img src="https://iili.io/JzsmKba.png"/>
<p align="center"><img src="https://iili.io/JzsmCsR.png"/>
An 11B Mistral model, based on the NeverSleep recipe.
### Recipe
slices:
  - sources:
      - model: NeverSleep/Noromaid-7b-v0.1.1
        layer_range: [0, 24]
  - sources:
      - model: chargoddard/loyal-piano-m7
        layer_range: [8, 32]
merge_method: passthrough
|
TigerResearch/tigerbot-13b-chat-v4
|
TigerResearch
| 2023-12-04T06:52:07Z | 27 | 6 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"zh",
"en",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-09-23T05:37:40Z |
---
license: apache-2.0
language:
- zh
- en
---
<div style="width: 100%;">
<p align="center" width="20%">
<img src="http://x-pai.algolet.com/bot/img/logo_core.png" alt="TigerBot" width="20%", style="display: block; margin: auto;"></img>
</p>
</div>
<p align="center">
<font face="黑体" size="5"> A cutting-edge foundation for your very own LLM. </font>
</p>
<p align="center">
💻<a href="https://github.com/TigerResearch/TigerBot" target="_blank">Github</a> • 🌐 <a href="https://tigerbot.com/" target="_blank">TigerBot</a> • 🤗 <a href="https://huggingface.co/TigerResearch" target="_blank">Hugging Face</a>
</p>
# 快速开始
- 方法1,通过transformers使用
- 下载 TigerBot Repo
```shell
git clone https://github.com/TigerResearch/TigerBot.git
```
- 启动infer代码
```shell
python infer.py --model_path TigerResearch/tigerbot-13b-chat-v4
```
- 方法2:
- 下载 TigerBot Repo
```shell
git clone https://github.com/TigerResearch/TigerBot.git
```
- 安装git lfs: `git lfs install`
- 通过huggingface或modelscope平台下载权重
```shell
git clone https://huggingface.co/TigerResearch/tigerbot-13b-chat-v4
git clone https://www.modelscope.cn/TigerResearch/tigerbot-13b-chat-v4.git
```
- 启动infer代码
```shell
python infer.py --model_path tigerbot-13b-chat-v4
```
------
# Quick Start
- Method 1, use through transformers
- Clone TigerBot Repo
```shell
git clone https://github.com/TigerResearch/TigerBot.git
```
- Run infer script
```shell
python infer.py --model_path TigerResearch/tigerbot-13b-chat-v4
```
- Method 2:
- Clone TigerBot Repo
```shell
git clone https://github.com/TigerResearch/TigerBot.git
```
- install git lfs: `git lfs install`
- Download weights from huggingface or modelscope
```shell
git clone https://huggingface.co/TigerResearch/tigerbot-13b-chat-v4
git clone https://www.modelscope.cn/TigerResearch/tigerbot-13b-chat-v4.git
```
- Run infer script
```shell
python infer.py --model_path tigerbot-13b-chat-v4
```
|
yily/glm-nwfe-sft-5000
|
yily
| 2023-12-04T06:47:57Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-12-04T06:46:58Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0
|
ShahzebA/llama2-qlora-finetuned-RomanUrdu
|
ShahzebA
| 2023-12-04T06:33:27Z | 1 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:TinyPixel/Llama-2-7B-bf16-sharded",
"base_model:adapter:TinyPixel/Llama-2-7B-bf16-sharded",
"region:us"
] | null | 2023-12-04T06:32:39Z |
---
library_name: peft
base_model: TinyPixel/Llama-2-7B-bf16-sharded
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
### Framework versions
- PEFT 0.6.3.dev0
|
TusharsinghBaghel/software_lab_billsum_model
|
TusharsinghBaghel
| 2023-12-04T06:21:35Z | 4 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:billsum",
"base_model:google-t5/t5-small",
"base_model:finetune:google-t5/t5-small",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-12-03T22:44:23Z |
---
license: apache-2.0
base_model: t5-small
tags:
- generated_from_trainer
datasets:
- billsum
metrics:
- rouge
model-index:
- name: software_lab_billsum_model
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: billsum
type: billsum
config: default
split: ca_test
args: default
metrics:
- name: Rouge1
type: rouge
value: 0.1422
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# software_lab_billsum_model
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the billsum dataset.
It achieves the following results on the evaluation set:
- Loss: 2.5173
- Rouge1: 0.1422
- Rouge2: 0.0516
- Rougel: 0.1174
- Rougelsum: 0.1175
- Gen Len: 19.0
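A minimal summarization sketch, assuming standard pipeline usage with this checkpoint; the input text is a placeholder:
```python
from transformers import pipeline

# bill_text is a placeholder; pass any bill text to summarize.
summarizer = pipeline("summarization", model="TusharsinghBaghel/software_lab_billsum_model")
bill_text = "The people of the State of California do enact as follows: ..."
print(summarizer(bill_text, max_length=60, min_length=10)[0]["summary_text"])
```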
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| No log | 1.0 | 62 | 2.8088 | 0.1242 | 0.034 | 0.1027 | 0.1028 | 19.0 |
| No log | 2.0 | 124 | 2.6031 | 0.1335 | 0.0437 | 0.1112 | 0.1113 | 19.0 |
| No log | 3.0 | 186 | 2.5356 | 0.1394 | 0.0487 | 0.115 | 0.1149 | 19.0 |
| No log | 4.0 | 248 | 2.5173 | 0.1422 | 0.0516 | 0.1174 | 0.1175 | 19.0 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
|
acctouhou/alpaca-lora-65b
|
acctouhou
| 2023-12-04T06:19:57Z | 0 | 0 | null |
[
"safetensors",
"license:mit",
"region:us"
] | null | 2023-12-04T04:06:34Z |
---
license: mit
---
This is just a test based on the LoRA 65B model, used for the MIT NLP class final project.
The procedure has three steps:
- Calculate and accumulate gradients
- Determine the appropriate rank through gradient computation
- Perform LoRA fine-tuning
## LoRA fine-tuning
For 24G VRAM on the GPT2_SM model (original version of LoRA)
```
python main.py --train_batch_size 8 --valid_batch_size 8 --grad_acc 1 --model_card gpt2.SM --init_checkpoint pretrained_checkpoints/gpt2-pytorch_model.bin --work_dir alpha_sm --index 0
```
For 24G VRAM on the GPT2_SM model (our version of LoRA)
```
python main.py --train_batch_size 8 --valid_batch_size 8 --grad_acc 1 --model_card gpt2.SM --init_checkpoint pretrained_checkpoints/gpt2-pytorch_model.bin --work_dir alpha_sm --index 1
```
|
jackyk07/ElderGPT
|
jackyk07
| 2023-12-04T06:12:33Z | 0 | 0 | null |
[
"region:us"
] | null | 2023-11-28T21:40:45Z |
## Model Description
ElderGPT is an AI application tailored to enhance mobile technology accessibility for the elderly. It aims to simplify smartphone usage, addressing the challenges many seniors face with standard mobile applications. ElderGPT serves as a one-stop solution, integrating with various applications and leveraging large language models to make information extraction easier.
## Intended Use
- **Target audience:** primarily seniors who find smartphone applications challenging to navigate.
- **Applications:** ElderGPT integrates APIs for functions such as maps, news, and reminders, providing a simplified, voice-activated interface.
## Model Details
- **Model architecture:** fine-tuned using LoRA on a Llama2-7b model.
- **Training approach:** fine-tuning with 30 instruct-response pairs per functionality, focusing on the interests and needs of older adults.
- **Dataset:** datasets prepared specifically for functionalities such as food delivery, news, navigation, and reminders.
## Hyperparameters and Training
- Optimal hyperparameters include fewer epochs, smaller batch sizes, and a lower learning rate.
- Plans for further development include more API support, additional demonstration data, and training a reward model for RLHF.
## Accessibility
LangChain code is hosted on GitHub.
|
aisensiy/Qwen-72B-Chat-GGUF
|
aisensiy
| 2023-12-04T06:04:40Z | 0 | 17 | null |
[
"license:mit",
"region:us"
] | null | 2023-12-03T04:33:13Z |
---
license: mit
---
## How to convert
First, `git clone` [llama.cpp](https://github.com/ggerganov/llama.cpp) and build it with `make`.
Then follow the instructions below to generate the GGUF files.
```
# convert Qwen HF models to gguf fp16 format
python convert-hf-to-gguf.py --outfile qwen7b-chat-f16.gguf --outtype f16 Qwen-7B-Chat
# quantize the model to 4-bits (using q4_0 method)
./quantize qwen7b-chat-f16.gguf qwen7b-chat-q4_0.gguf q4_0
# chat with Qwen models
./main -m qwen7b-chat-q4_0.gguf -n 512 --color -i -cml -f prompts/chat-with-qwen.txt
```
## Files are split and require joining
**Note:** HF does not support uploading files larger than 50GB, and uploading a single 41GB file was impractical, so I have uploaded the Q4_0 model split into 5GB chunks.
To join the files, do the following:
Linux and macOS:
```
cat qwen72b-chat-q4_0.gguf-split-* >qwen72b-chat-q4_0.gguf && rm qwen72b-chat-q4_0.gguf-split-*
```
Windows:
```
copy /B qwen72b-chat-q4_0.gguf-split-aa + qwen72b-chat-q4_0.gguf-split-ab + qwen72b-chat-q4_0.gguf-split-ac + qwen72b-chat-q4_0.gguf-split-ad + qwen72b-chat-q4_0.gguf-split-ae + qwen72b-chat-q4_0.gguf-split-af + qwen72b-chat-q4_0.gguf-split-ag + qwen72b-chat-q4_0.gguf-split-ah qwen72b-chat-q4_0.gguf
```
|
sunny2309/bert-finetuned-for-ner
|
sunny2309
| 2023-12-04T06:03:20Z | 11 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"bert",
"token-classification",
"generated_from_trainer",
"dataset:conll2003",
"base_model:google-bert/bert-base-cased",
"base_model:finetune:google-bert/bert-base-cased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-12-04T05:47:02Z |
---
license: apache-2.0
base_model: bert-base-cased
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-finetuned-for-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
config: conll2003
split: validation
args: conll2003
metrics:
- name: Precision
type: precision
value: 0.773250913177859
- name: Recall
type: recall
value: 0.7914869140063273
- name: F1
type: f1
value: 0.7822626492325185
- name: Accuracy
type: accuracy
value: 0.9492727917701312
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-for-ner
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1653
- Precision: 0.7733
- Recall: 0.7915
- F1: 0.7823
- Accuracy: 0.9493
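As a quick usage sketch (not part of the auto-generated card), the checkpoint can be loaded with the standard token-classification pipeline; the example sentence is arbitrary:
```python
# Illustrative inference with the published checkpoint.
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="sunny2309/bert-finetuned-for-ner",
    aggregation_strategy="simple",  # merge sub-word pieces into whole entities
)
print(ner("Hugging Face is based in New York City."))
```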
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 125 | 0.2616 | 0.6787 | 0.7156 | 0.6966 | 0.9261 |
| No log | 2.0 | 250 | 0.1916 | 0.7397 | 0.7650 | 0.7522 | 0.9411 |
| No log | 3.0 | 375 | 0.1653 | 0.7733 | 0.7915 | 0.7823 | 0.9493 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.0.1+cu117
- Datasets 2.15.0
- Tokenizers 0.15.0
|
pavpanda/kt-ss1
|
pavpanda
| 2023-12-04T06:01:28Z | 0 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-12-04T05:52:56Z |
---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### kt-ss1 Dreambooth model trained by pavpanda with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
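Since the repository is a standard `StableDiffusionPipeline`, it can also be tried directly with diffusers. A minimal sketch follows; the prompt wording is an assumption, not the documented instance prompt:
```python
# Illustrative diffusers usage; the prompt wording is an assumption.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("pavpanda/kt-ss1", torch_dtype=torch.float16)
pipe = pipe.to("cuda")

image = pipe("a photo of kt-ss1", num_inference_steps=30).images[0]
image.save("kt-ss1-sample.png")
```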
Sample pictures of this concept:
|
ambarishnarayan/videomae-base-finetuned-ucf101-subset
|
ambarishnarayan
| 2023-12-04T05:47:28Z | 9 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"videomae",
"video-classification",
"generated_from_trainer",
"base_model:MCG-NJU/videomae-base",
"base_model:finetune:MCG-NJU/videomae-base",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] |
video-classification
| 2023-12-04T02:32:48Z |
---
license: cc-by-nc-4.0
base_model: MCG-NJU/videomae-base
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: videomae-base-finetuned-ucf101-subset
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# videomae-base-finetuned-ucf101-subset
This model is a fine-tuned version of [MCG-NJU/videomae-base](https://huggingface.co/MCG-NJU/videomae-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7813
- Accuracy: 0.8549
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 1620
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.128 | 0.17 | 270 | 0.7532 | 0.7295 |
| 0.004 | 1.17 | 540 | 0.9392 | 0.7971 |
| 0.964 | 2.17 | 810 | 0.8220 | 0.8357 |
| 0.0024 | 3.17 | 1080 | 0.8664 | 0.8357 |
| 0.0051 | 4.17 | 1350 | 0.9912 | 0.7826 |
| 0.3863 | 5.17 | 1620 | 0.6859 | 0.8647 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
|
Jarnails1559/misrael_model
|
Jarnails1559
| 2023-12-04T05:43:12Z | 1 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:alexsherstinsky/Mistral-7B-v0.1-sharded",
"base_model:adapter:alexsherstinsky/Mistral-7B-v0.1-sharded",
"region:us"
] | null | 2023-12-04T05:27:54Z |
---
library_name: peft
base_model: alexsherstinsky/Mistral-7B-v0.1-sharded
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: float16
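Expressed as code, the quantization settings listed above correspond roughly to the following `BitsAndBytesConfig`; the model-loading call is only a sketch:
```python
# Rough code equivalent of the bitsandbytes settings listed above (illustrative).
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.float16,
)
model = AutoModelForCausalLM.from_pretrained(
    "alexsherstinsky/Mistral-7B-v0.1-sharded",
    quantization_config=bnb_config,
    device_map="auto",
)
```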
### Framework versions
- PEFT 0.6.3.dev0
|
Parth673/ppo-LunarLander-v2
|
Parth673
| 2023-12-04T05:32:04Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-12-02T10:26:08Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 263.46 +/- 13.81
name: mean_reward
verified: false
---
|
pavpanda/kt-s1
|
pavpanda
| 2023-12-04T05:30:25Z | 2 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-12-04T05:22:34Z |
---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### kt-s1 Dreambooth model trained by pavpanda with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
Sample pictures of this concept:
|
Parth673/Taxi-v3
|
Parth673
| 2023-12-04T05:27:58Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-12-04T05:26:54Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.54 +/- 2.69
name: mean_reward
verified: false
---
## Crazy Taxi
Pick up the peeps and deliver them to their destination - simples ;)
|
ThuyNT03/KLTN_COQE_viT5_PSOAL_v2
|
ThuyNT03
| 2023-12-04T05:22:39Z | 5 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:VietAI/vit5-large",
"base_model:finetune:VietAI/vit5-large",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-12-04T02:41:07Z |
---
license: mit
base_model: VietAI/vit5-large
tags:
- generated_from_trainer
model-index:
- name: KLTN_COQE_viT5_PSOAL_v2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# KLTN_COQE_viT5_PSOAL_v2
This model is a fine-tuned version of [VietAI/vit5-large](https://huggingface.co/VietAI/vit5-large) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.35.0
- Pytorch 2.0.0
- Datasets 2.1.0
- Tokenizers 0.14.1
|
monsterapi/mistral_7b_norobots
|
monsterapi
| 2023-12-04T05:20:48Z | 3 | 4 |
peft
|
[
"peft",
"code",
"instruct",
"mistral",
"dataset:HuggingFaceH4/no_robots",
"base_model:mistralai/Mistral-7B-v0.1",
"base_model:adapter:mistralai/Mistral-7B-v0.1",
"license:apache-2.0",
"region:us"
] | null | 2023-11-22T05:48:15Z |
---
library_name: peft
tags:
- code
- instruct
- mistral
datasets:
- HuggingFaceH4/no_robots
base_model: mistralai/Mistral-7B-v0.1
license: apache-2.0
---
### Finetuning Overview:
**Model Used:** mistralai/Mistral-7B-v0.1
**Dataset:** HuggingFaceH4/no_robots
#### Dataset Insights:
[No Robots](https://huggingface.co/datasets/HuggingFaceH4/no_robots) is a high-quality dataset of 10,000 instructions and demonstrations created by skilled human annotators. This data can be used for supervised fine-tuning (SFT) to make language models follow instructions better.
#### Finetuning Details:
Using [MonsterAPI](https://monsterapi.ai)'s [no-code LLM finetuner](https://monsterapi.ai/finetuning), this finetuning:
- Was achieved cost-effectively.
- Completed in a total duration of 1h 15m 3s for 2 epochs using an A6000 48GB GPU.
- Cost `$2.525` for the entire 2 epochs.
#### Hyperparameters & Additional Details:
- **Epochs:** 2
- **Cost Per Epoch:** $1.26
- **Total Finetuning Cost:** $2.525
- **Model Path:** mistralai/Mistral-7B-v0.1
- **Learning Rate:** 0.0002
- **Data Split:** 100% train
- **Gradient Accumulation Steps:** 64
- **lora r:** 64
- **lora alpha:** 16
#### Prompt Structure
```
<|system|> </s> <|user|> [USER PROMPT] </s> <|assistant|> [ASSISTANT ANSWER] </s>
```
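As a usage sketch, inference with this template could look roughly like the following; loading the adapter on top of the base model is assumed to follow the standard PEFT flow:
```python
# Illustrative inference: base Mistral-7B-v0.1 plus this LoRA adapter,
# prompted with the chat template shown above.
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "mistralai/Mistral-7B-v0.1"
tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.float16, device_map="auto")
model = PeftModel.from_pretrained(model, "monsterapi/mistral_7b_norobots")

prompt = "<|system|> </s> <|user|> Write a short note about staying hydrated. </s> <|assistant|>"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```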
#### Train loss:

### Benchmarking results:

---
license: apache-2.0
|
Asheron/SoccerTwosWSL2
|
Asheron
| 2023-12-04T05:18:06Z | 5 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"SoccerTwos",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SoccerTwos",
"region:us"
] |
reinforcement-learning
| 2023-12-04T05:11:55Z |
---
library_name: ml-agents
tags:
- SoccerTwos
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SoccerTwos
---
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: Asheron/SoccerTwosWSL2
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
SYH99999/checkpoint-390
|
SYH99999
| 2023-12-04T05:15:14Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-12-04T05:14:48Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.4.0
- PEFT 0.4.0
|
codegood/MistralLite_SCQA
|
codegood
| 2023-12-04T04:52:21Z | 3 | 0 |
peft
|
[
"peft",
"tensorboard",
"safetensors",
"arxiv:1910.09700",
"base_model:amazon/MistralLite",
"base_model:adapter:amazon/MistralLite",
"region:us"
] | null | 2023-12-04T04:18:28Z |
---
library_name: peft
base_model: amazon/MistralLite
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.6.2
|
brettbbb/llama_finetune_mc_20
|
brettbbb
| 2023-12-04T04:29:08Z | 0 | 0 | null |
[
"safetensors",
"generated_from_trainer",
"base_model:meta-llama/Llama-2-7b-hf",
"base_model:finetune:meta-llama/Llama-2-7b-hf",
"region:us"
] | null | 2023-12-04T03:11:13Z |
---
base_model: meta-llama/Llama-2-7b-hf
tags:
- generated_from_trainer
model-index:
- name: llama_finetune_mc_20
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# llama_finetune_mc_20
This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 4.4015
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 5
- num_epochs: 20
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.9393 | 1.0 | 70 | 1.8240 |
| 0.2414 | 2.0 | 140 | 2.5937 |
| 0.126 | 3.0 | 210 | 2.8429 |
| 0.0836 | 4.0 | 280 | 2.9219 |
| 0.0872 | 5.0 | 350 | 3.2516 |
| 0.0575 | 6.0 | 420 | 3.1180 |
| 0.0482 | 7.0 | 490 | 3.4019 |
| 0.0356 | 8.0 | 560 | 3.3709 |
| 0.0305 | 9.0 | 630 | 3.5186 |
| 0.0272 | 10.0 | 700 | 3.8218 |
| 0.0243 | 11.0 | 770 | 3.7827 |
| 0.0312 | 12.0 | 840 | 3.9016 |
| 0.0259 | 13.0 | 910 | 4.0432 |
| 0.027 | 14.0 | 980 | 4.1255 |
| 0.0205 | 15.0 | 1050 | 4.1950 |
| 0.0199 | 16.0 | 1120 | 4.2793 |
| 0.0219 | 17.0 | 1190 | 4.3363 |
| 0.0197 | 18.0 | 1260 | 4.3627 |
| 0.0218 | 19.0 | 1330 | 4.3868 |
| 0.0201 | 20.0 | 1400 | 4.4015 |
### Framework versions
- Transformers 4.36.0.dev0
- Pytorch 2.1.0+cu121
- Datasets 2.13.1
- Tokenizers 0.14.1
|
athirdpath/NeuralHermes-11b
|
athirdpath
| 2023-12-04T04:04:28Z | 18 | 1 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-12-04T03:30:51Z |
---
license: apache-2.0
---
An 11B Mistral model, based on the NeverSleep recipe.
### Recipe
```yaml
slices:
  - sources:
      - model: Intel/neural-chat-7b-v3-1
        layer_range: [0, 24]
  - sources:
      - model: teknium/OpenHermes-2.5-Mistral-7B
        layer_range: [8, 32]
merge_method: passthrough
dtype: bfloat16
```
|
Broomva/t5-base-translation-spa-pbb
|
Broomva
| 2023-12-04T03:59:40Z | 9 | 0 |
transformers
|
[
"transformers",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:google-t5/t5-base",
"base_model:finetune:google-t5/t5-base",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-12-04T03:00:48Z |
---
license: apache-2.0
base_model: t5-base
tags:
- generated_from_trainer
metrics:
- bleu
model-index:
- name: t5-base-translation-spa-pbb
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-base-translation-spa-pbb
This model is a fine-tuned version of [t5-base](https://huggingface.co/t5-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.2411
- Bleu: 0.608
- Gen Len: 8.108
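As a quick usage sketch (not part of the auto-generated card), the checkpoint loads as a standard T5 text2text model; the input formatting used during training (e.g., any task prefix) is not documented here, so treat this only as a loading example:
```python
# Illustrative usage of the fine-tuned T5 checkpoint.
from transformers import pipeline

translator = pipeline("text2text-generation", model="Broomva/t5-base-translation-spa-pbb")
print(translator("Buenos días", max_new_tokens=32))
```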
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 10
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|
| 2.6692 | 1.0 | 304 | 2.9825 | 0.8944 | 6.2582 |
| 2.6593 | 2.0 | 608 | 2.7422 | 0.0 | 6.9895 |
| 2.5452 | 3.0 | 912 | 2.6276 | 0.0 | 7.1924 |
| 2.5998 | 4.0 | 1216 | 2.5437 | 0.0 | 7.3347 |
| 3.0987 | 5.0 | 1520 | 2.4819 | 0.0 | 7.5204 |
| 2.3259 | 6.0 | 1824 | 2.4409 | 0.0 | 7.4466 |
| 3.2006 | 7.0 | 2128 | 2.3988 | 0.6694 | 7.4058 |
| 1.989 | 8.0 | 2432 | 2.3669 | 0.6097 | 8.1383 |
| 2.3702 | 9.0 | 2736 | 2.3464 | 0.9537 | 8.1542 |
| 2.3841 | 10.0 | 3040 | 2.3434 | 0.9045 | 7.7852 |
| 2.2193 | 11.0 | 3344 | 2.3119 | 0.9082 | 8.22 |
| 2.4414 | 12.0 | 3648 | 2.2997 | 0.791 | 8.2569 |
| 1.8003 | 13.0 | 3952 | 2.2848 | 1.0315 | 8.2055 |
| 1.9862 | 14.0 | 4256 | 2.2756 | 0.6622 | 8.2134 |
| 2.3814 | 15.0 | 4560 | 2.2678 | 0.6688 | 8.1634 |
| 2.145 | 16.0 | 4864 | 2.2606 | 0.8214 | 8.2754 |
| 2.1513 | 17.0 | 5168 | 2.2605 | 1.0985 | 8.2635 |
| 2.249 | 18.0 | 5472 | 2.2506 | 1.0695 | 8.1726 |
| 2.3972 | 19.0 | 5776 | 2.2477 | 0.663 | 8.22 |
| 2.1375 | 20.0 | 6080 | 2.2458 | 0.612 | 8.1515 |
| 2.4343 | 21.0 | 6384 | 2.2451 | 0.6825 | 8.1871 |
| 2.9682 | 22.0 | 6688 | 2.2361 | 0.6095 | 8.2306 |
| 1.8138 | 23.0 | 6992 | 2.2411 | 0.608 | 8.108 |
### Framework versions
- Transformers 4.35.2
- Pytorch 1.13.1+cu117
- Datasets 2.15.0
- Tokenizers 0.15.0
|
Abe13/zephyr-7b-sft-lora
|
Abe13
| 2023-12-04T03:54:36Z | 0 | 0 | null |
[
"tensorboard",
"safetensors",
"generated_from_trainer",
"base_model:mistralai/Mistral-7B-v0.1",
"base_model:finetune:mistralai/Mistral-7B-v0.1",
"license:apache-2.0",
"region:us"
] | null | 2023-12-03T04:47:59Z |
---
license: apache-2.0
base_model: mistralai/Mistral-7B-v0.1
tags:
- generated_from_trainer
model-index:
- name: zephyr-7b-sft-lora
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# zephyr-7b-sft-lora
This model is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0241
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 128
- total_train_batch_size: 2048
- total_eval_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.0226 | 0.67 | 68 | 1.0242 |
### Framework versions
- Transformers 4.35.0
- Pytorch 2.1.1+cu121
- Datasets 2.14.6
- Tokenizers 0.14.1
|
erixhug/swin-base-patch4-window7-224-finetuned-lora-scenes
|
erixhug
| 2023-12-04T03:50:09Z | 1 | 0 |
peft
|
[
"peft",
"tensorboard",
"safetensors",
"arxiv:1910.09700",
"base_model:microsoft/swin-base-patch4-window7-224",
"base_model:adapter:microsoft/swin-base-patch4-window7-224",
"region:us"
] | null | 2023-12-04T03:13:52Z |
---
library_name: peft
base_model: microsoft/swin-base-patch4-window7-224
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
### Framework versions
- PEFT 0.6.2
|
deepghs/anime_style_ages
|
deepghs
| 2023-12-04T03:49:57Z | 0 | 4 | null |
[
"onnx",
"art",
"image-classification",
"dataset:deepghs/anime_style_ages",
"license:openrail",
"region:us"
] |
image-classification
| 2023-12-02T22:33:38Z |
---
license: openrail
metrics:
- accuracy
pipeline_tag: image-classification
tags:
- art
datasets:
- deepghs/anime_style_ages
---
| Name | FLOPS | Params | Accuracy | AUC | Confusion | Labels |
|:-------------------:|:-------:|:--------:|:----------:|:------:|:-------------------------------------------------------------------------------------------------------------:|:--------------------------------------------------------------:|
| caformer_s36_v0 | 22.10G | 37.22M | 71.03% | 0.9271 | [confusion](https://huggingface.co/deepghs/anime_style_ages/blob/main/caformer_s36_v0/plot_confusion.png) | `1970s-`, `1980s`, `1990s`, `2000s`, `2010s`, `2015s`, `2020s` |
| mobilenetv3_v0_dist | 0.63G | 4.18M | 65.74% | 0.9053 | [confusion](https://huggingface.co/deepghs/anime_style_ages/blob/main/mobilenetv3_v0_dist/plot_confusion.png) | `1970s-`, `1980s`, `1990s`, `2000s`, `2010s`, `2015s`, `2020s` |
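A rough onnxruntime inference sketch is given below. The exact ONNX file name, input resolution, and normalization are assumptions (check the repository files); only the label set is taken from the table above:
```python
# Heavily hedged sketch: the file name, input resolution, and normalization
# are assumptions, not confirmed by this card.
import numpy as np
import onnxruntime as ort
from huggingface_hub import hf_hub_download
from PIL import Image

model_path = hf_hub_download("deepghs/anime_style_ages", "caformer_s36_v0/model.onnx")  # hypothetical file name
session = ort.InferenceSession(model_path)

labels = ["1970s-", "1980s", "1990s", "2000s", "2010s", "2015s", "2020s"]

image = Image.open("sample.jpg").convert("RGB").resize((384, 384))  # input size assumed
x = np.asarray(image, dtype=np.float32).transpose(2, 0, 1)[None] / 255.0
logits = session.run(None, {session.get_inputs()[0].name: x})[0]
print(labels[int(np.argmax(logits))])
```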
|
sglasher/van-gogh-stable-diffusion
|
sglasher
| 2023-12-04T03:47:21Z | 12 | 1 |
diffusers
|
[
"diffusers",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-12-04T03:12:35Z |
---
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
inference: true
---
|
austin/medication-single-t5
|
austin
| 2023-12-04T03:44:38Z | 6 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:google/t5-efficient-small",
"base_model:finetune:google/t5-efficient-small",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-12-04T02:50:14Z |
---
license: apache-2.0
base_model: google/t5-efficient-small
tags:
- generated_from_trainer
model-index:
- name: medication-single-t5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# medication-single-t5
This model is a fine-tuned version of [google/t5-efficient-small](https://huggingface.co/google/t5-efficient-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0134
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.004
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.5257 | 0.08 | 100 | 0.2084 |
| 0.1412 | 0.16 | 200 | 0.0880 |
| 0.0902 | 0.23 | 300 | 0.0543 |
| 0.0791 | 0.31 | 400 | 0.0456 |
| 0.072 | 0.39 | 500 | 0.0392 |
| 0.0567 | 0.47 | 600 | 0.0349 |
| 0.0507 | 0.55 | 700 | 0.0312 |
| 0.0493 | 0.63 | 800 | 0.0285 |
| 0.041 | 0.7 | 900 | 0.0246 |
| 0.0423 | 0.78 | 1000 | 0.0255 |
| 0.0382 | 0.86 | 1100 | 0.0247 |
| 0.0375 | 0.94 | 1200 | 0.0217 |
| 0.0298 | 1.02 | 1300 | 0.0211 |
| 0.0327 | 1.09 | 1400 | 0.0198 |
| 0.0272 | 1.17 | 1500 | 0.0195 |
| 0.0301 | 1.25 | 1600 | 0.0183 |
| 0.0259 | 1.33 | 1700 | 0.0179 |
| 0.0273 | 1.41 | 1800 | 0.0164 |
| 0.0244 | 1.49 | 1900 | 0.0163 |
| 0.0222 | 1.56 | 2000 | 0.0161 |
| 0.0214 | 1.64 | 2100 | 0.0158 |
| 0.0199 | 1.72 | 2200 | 0.0146 |
| 0.0202 | 1.8 | 2300 | 0.0141 |
| 0.0214 | 1.88 | 2400 | 0.0135 |
| 0.018 | 1.95 | 2500 | 0.0134 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.0.1+cu117
- Datasets 2.14.7
- Tokenizers 0.14.1
|
ThuyNT03/KLTN_COQE_viT5_ASOPL_v2
|
ThuyNT03
| 2023-12-04T03:38:44Z | 5 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:VietAI/vit5-large",
"base_model:finetune:VietAI/vit5-large",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-12-04T02:52:39Z |
---
license: mit
base_model: VietAI/vit5-large
tags:
- generated_from_trainer
model-index:
- name: KLTN_COQE_viT5_ASOPL_v2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# KLTN_COQE_viT5_ASOPL_v2
This model is a fine-tuned version of [VietAI/vit5-large](https://huggingface.co/VietAI/vit5-large) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.35.0
- Pytorch 2.0.0
- Datasets 2.1.0
- Tokenizers 0.14.1
|
ThuyNT03/KLTN_COQE_viT5_SOAPL_v2
|
ThuyNT03
| 2023-12-04T03:34:12Z | 5 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:VietAI/vit5-large",
"base_model:finetune:VietAI/vit5-large",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-12-03T18:16:48Z |
---
license: mit
base_model: VietAI/vit5-large
tags:
- generated_from_trainer
model-index:
- name: KLTN_COQE_viT5_SOAPL_v2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# KLTN_COQE_viT5_SOAPL_v2
This model is a fine-tuned version of [VietAI/vit5-large](https://huggingface.co/VietAI/vit5-large) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.35.0
- Pytorch 2.0.0
- Datasets 2.1.0
- Tokenizers 0.14.1
|
Puluming/AISquare-Instruct-llama2-koen-13b-v0.9.18
|
Puluming
| 2023-12-04T03:22:36Z | 2,253 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-12-04T03:08:44Z |
---
license: cc-by-nc-sa-4.0
---
|
FounderOfHuggingface/fresh_gpt2_lora_r16_dbpedia_14_t300_e5_non_member_shadow19
|
FounderOfHuggingface
| 2023-12-04T03:20:28Z | 0 | 0 |
peft
|
[
"peft",
"arxiv:1910.09700",
"base_model:openai-community/gpt2",
"base_model:adapter:openai-community/gpt2",
"region:us"
] | null | 2023-12-04T03:20:26Z |
---
library_name: peft
base_model: gpt2
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
### Framework versions
- PEFT 0.6.2
|
Broomva/bart-large-translation-spa-pbb
|
Broomva
| 2023-12-04T03:11:51Z | 10 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bart",
"text2text-generation",
"generated_from_trainer",
"base_model:facebook/bart-large",
"base_model:finetune:facebook/bart-large",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-12-04T02:56:04Z |
---
license: apache-2.0
base_model: facebook/bart-large
tags:
- generated_from_trainer
metrics:
- bleu
model-index:
- name: bart-large-translation-spa-pbb
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-large-translation-spa-pbb
This model is a fine-tuned version of [facebook/bart-large](https://huggingface.co/facebook/bart-large) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.6252
- Bleu: 0.233
- Gen Len: 11.0184
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 10
- num_epochs: 15
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|
| 2.6025 | 1.0 | 304 | 3.0281 | 0.0 | 7.7339 |
| 3.694 | 2.0 | 608 | 2.8050 | 0.0 | 5.3307 |
| 2.3214 | 3.0 | 912 | 2.6729 | 0.0 | 11.5929 |
| 2.0 | 4.0 | 1216 | 2.6280 | 0.4389 | 10.8669 |
| 2.0676 | 5.0 | 1520 | 2.6142 | 1.5675 | 9.6904 |
| 1.8422 | 6.0 | 1824 | 2.6252 | 0.233 | 11.0184 |
### Framework versions
- Transformers 4.35.2
- Pytorch 1.13.1+cu117
- Datasets 2.15.0
- Tokenizers 0.15.0
|
dolphinz/ccm
|
dolphinz
| 2023-12-04T03:10:13Z | 0 | 1 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-12-03T14:04:32Z |
---
license: creativeml-openrail-m
---
|
Yaxin1992/llama2-13b-leagues-4000-nojson
|
Yaxin1992
| 2023-12-04T03:06:58Z | 0 | 0 | null |
[
"tensorboard",
"safetensors",
"generated_from_trainer",
"base_model:meta-llama/Llama-2-13b-hf",
"base_model:finetune:meta-llama/Llama-2-13b-hf",
"region:us"
] | null | 2023-12-04T01:19:08Z |
---
base_model: meta-llama/Llama-2-13b-hf
tags:
- generated_from_trainer
model-index:
- name: llama2-13b-leagues-4000-nojson
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# llama2-13b-leagues-4000-nojson
This model is a fine-tuned version of [meta-llama/Llama-2-13b-hf](https://huggingface.co/meta-llama/Llama-2-13b-hf) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-06
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 4000
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
|
annabellehuether/topic-legal-bert-base-uncased-supreme-court-32batch_5epoch_2e5lr_1wd
|
annabellehuether
| 2023-12-04T02:59:32Z | 9 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:nlpaueb/legal-bert-base-uncased",
"base_model:finetune:nlpaueb/legal-bert-base-uncased",
"license:cc-by-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-12-04T01:56:12Z |
---
license: cc-by-sa-4.0
base_model: nlpaueb/legal-bert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: topic-legal-bert-base-uncased-supreme-court-32batch_5epoch_2e5lr_1wd
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# topic-legal-bert-base-uncased-supreme-court-32batch_5epoch_2e5lr_1wd
This model is a fine-tuned version of [nlpaueb/legal-bert-base-uncased](https://huggingface.co/nlpaueb/legal-bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8106
- Accuracy: 0.7792
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 7
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.1459 | 1.0 | 660 | 0.7969 | 0.7536 |
| 0.6979 | 2.0 | 1320 | 0.7465 | 0.7766 |
| 0.5716 | 3.0 | 1980 | 0.7352 | 0.7821 |
| 0.3391 | 4.0 | 2640 | 0.7701 | 0.7855 |
| 0.2815 | 5.0 | 3300 | 0.8106 | 0.7792 |
### Framework versions
- Transformers 4.35.1
- Pytorch 2.1.0+cu121
- Datasets 2.14.6
- Tokenizers 0.14.1
|
annabellehuether/topic-bert-base-uncased-supreme-court-32batch_3epoch_5e5lr_01wd
|
annabellehuether
| 2023-12-04T02:58:10Z | 5 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-12-04T02:20:35Z |
---
license: apache-2.0
base_model: bert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: topic-bert-base-uncased-supreme-court-32batch_3epoch_5e5lr_01wd
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# topic-bert-base-uncased-supreme-court-32batch_3epoch_5e5lr_01wd
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8509
- Accuracy: 0.7458
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 7
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.2074 | 1.0 | 660 | 0.8971 | 0.7203 |
| 0.7281 | 2.0 | 1320 | 0.8299 | 0.7406 |
| 0.5553 | 3.0 | 1980 | 0.8509 | 0.7458 |
### Framework versions
- Transformers 4.35.1
- Pytorch 2.1.0+cu121
- Datasets 2.14.6
- Tokenizers 0.14.1
|
annabellehuether/topic-bert-base-uncased-supreme-court-32batch_5epoch_2e5lr_1wd
|
annabellehuether
| 2023-12-04T02:57:20Z | 5 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-12-04T01:54:26Z |
---
license: apache-2.0
base_model: bert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: topic-bert-base-uncased-supreme-court-32batch_5epoch_2e5lr_1wd
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# topic-bert-base-uncased-supreme-court-32batch_5epoch_2e5lr_1wd
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9095
- Accuracy: 0.7392
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 7
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.3067 | 1.0 | 660 | 0.9220 | 0.7103 |
| 0.8105 | 2.0 | 1320 | 0.8366 | 0.7384 |
| 0.6656 | 3.0 | 1980 | 0.8202 | 0.7425 |
| 0.4105 | 4.0 | 2640 | 0.8823 | 0.7384 |
| 0.3359 | 5.0 | 3300 | 0.9095 | 0.7392 |
### Framework versions
- Transformers 4.35.1
- Pytorch 2.1.0+cu121
- Datasets 2.14.6
- Tokenizers 0.14.1
|
PhaniRajT/mistral-finetuned-mental_health
|
PhaniRajT
| 2023-12-04T02:54:32Z | 0 | 0 | null |
[
"tensorboard",
"safetensors",
"generated_from_trainer",
"base_model:TheBloke/Mistral-7B-Instruct-v0.1-GPTQ",
"base_model:finetune:TheBloke/Mistral-7B-Instruct-v0.1-GPTQ",
"license:apache-2.0",
"region:us"
] | null | 2023-12-04T02:05:26Z |
---
license: apache-2.0
base_model: TheBloke/Mistral-7B-Instruct-v0.1-GPTQ
tags:
- generated_from_trainer
model-index:
- name: mistral-finetuned-mental_health
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mistral-finetuned-mental_health
This model is a fine-tuned version of [TheBloke/Mistral-7B-Instruct-v0.1-GPTQ](https://huggingface.co/TheBloke/Mistral-7B-Instruct-v0.1-GPTQ) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- training_steps: 250
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.36.0.dev0
- Pytorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
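## Example inference (illustrative)

No usage code is provided in the card. The snippet below is only a sketch of attaching the fine-tuned adapter to its GPTQ base model; it assumes the repository holds PEFT adapter weights and that `auto-gptq`/`optimum` are installed so Transformers can load the quantized base.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Hedged sketch: load the quantized base model, then apply the adapter from this repo
base = AutoModelForCausalLM.from_pretrained(
    "TheBloke/Mistral-7B-Instruct-v0.1-GPTQ",
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("TheBloke/Mistral-7B-Instruct-v0.1-GPTQ")
model = PeftModel.from_pretrained(base, "PhaniRajT/mistral-finetuned-mental_health")

# Mistral-Instruct v0.1 prompt format
prompt = "<s>[INST] I have been feeling anxious lately. What can I do? [/INST]"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```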
|
gianyrox/Test1DreamBoothWithMorePicsSteps200
|
gianyrox
| 2023-12-04T02:52:06Z | 0 | 1 |
diffusers
|
[
"diffusers",
"tensorboard",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"dreambooth",
"base_model:CompVis/stable-diffusion-v1-4",
"base_model:finetune:CompVis/stable-diffusion-v1-4",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-12-04T02:42:40Z |
---
license: creativeml-openrail-m
base_model: CompVis/stable-diffusion-v1-4
instance_prompt: a photo of a Dr Seuss picture
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- dreambooth
inference: true
---
# DreamBooth - gianyrox/Test1DreamBoothWithMorePicsSteps200
This is a dreambooth model derived from CompVis/stable-diffusion-v1-4. The weights were trained on a photo of a Dr Seuss picture using [DreamBooth](https://dreambooth.github.io/).
You can find some example images in the following.
DreamBooth for the text encoder was enabled: False.
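No sample code is included in the card; a minimal sketch with `diffusers`, assuming the full pipeline weights were pushed to this repository, could look like this:

```python
import torch
from diffusers import StableDiffusionPipeline

# Hedged sketch: load the DreamBooth-tuned pipeline and sample one image
pipe = StableDiffusionPipeline.from_pretrained(
    "gianyrox/Test1DreamBoothWithMorePicsSteps200",
    torch_dtype=torch.float16,
).to("cuda")

image = pipe("a photo of a Dr Seuss picture", num_inference_steps=50).images[0]
image.save("dr_seuss_sample.png")
```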
|
FounderOfHuggingface/fresh_gpt2_lora_r16_dbpedia_14_t300_e5_non_member_shadow15
|
FounderOfHuggingface
| 2023-12-04T02:51:47Z | 0 | 0 |
peft
|
[
"peft",
"arxiv:1910.09700",
"base_model:openai-community/gpt2",
"base_model:adapter:openai-community/gpt2",
"region:us"
] | null | 2023-12-04T02:51:44Z |
---
library_name: peft
base_model: gpt2
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
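Since this section is still a placeholder, the snippet below is only an illustrative sketch of loading the LoRA adapter onto its `gpt2` base with PEFT; the adapter repository id is assumed from the model name.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Hedged sketch: load the gpt2 base model and attach the LoRA adapter from this repo
base = AutoModelForCausalLM.from_pretrained("gpt2")
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = PeftModel.from_pretrained(
    base,
    "FounderOfHuggingface/fresh_gpt2_lora_r16_dbpedia_14_t300_e5_non_member_shadow15",
)

inputs = tokenizer("The Eiffel Tower is located in", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```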
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
### Framework versions
- PEFT 0.6.2
|
annabellehuether/topic-bert-base-uncased-supreme-court-32batch_3epoch_3e5lr_01wd
|
annabellehuether
| 2023-12-04T02:42:23Z | 5 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-12-04T02:04:30Z |
---
license: apache-2.0
base_model: bert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: topic-bert-base-uncased-supreme-court-32batch_3epoch_3e5lr_01wd
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# topic-bert-base-uncased-supreme-court-32batch_3epoch_3e5lr_01wd
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8310
- Accuracy: 0.7358
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 7
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.2377 | 1.0 | 660 | 0.8947 | 0.7169 |
| 0.7602 | 2.0 | 1320 | 0.8383 | 0.7399 |
| 0.6124 | 3.0 | 1980 | 0.8310 | 0.7358 |
### Framework versions
- Transformers 4.35.1
- Pytorch 2.1.0+cu121
- Datasets 2.14.6
- Tokenizers 0.14.1
|
alexsung/Reinforce-CartPole8
|
alexsung
| 2023-12-04T02:41:46Z | 0 | 0 | null |
[
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-12-04T02:41:41Z |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-CartPole8
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 176.70 +/- 18.80
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1**.
To learn how to use this model and train your own, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
Aaron-96/multiNERD_fine-tuned_only_English_roberta
|
Aaron-96
| 2023-12-04T02:41:10Z | 3 | 1 |
span-marker
|
[
"span-marker",
"pytorch",
"token-classification",
"ner",
"named-entity-recognition",
"generated_from_span_marker_trainer",
"model-index",
"region:us"
] |
token-classification
| 2023-12-04T02:08:01Z |
---
library_name: span-marker
tags:
- span-marker
- token-classification
- ner
- named-entity-recognition
- generated_from_span_marker_trainer
metrics:
- precision
- recall
- f1
widget:
- text: The Bengal tiger is the most common subspecies of tiger, constituting approximately
80% of the entire tiger population, and is found in Bangladesh, Bhutan, Myanmar,
Nepal, and India.
- text: In other countries, it is a non-commissioned rank (e.g. Spain, Italy, France,
the Netherlands and the Indonesian Police ranks).
- text: The filling consists of fish, pork and bacon, and is seasoned with salt (unless
the pork is already salted).
- text: This stood until August 20, 1993 when it was beaten by one 1 / 100th of a
second by Colin Jackson of Great Britain in Stuttgart, Germany, a subsequent record
that stood for 13 years.
- text: Ann Patchett ’s novel " Bel Canto ", was another creative influence that helped
her manage a plentiful cast of characters.
pipeline_tag: token-classification
model-index:
- name: SpanMarker
results:
- task:
type: token-classification
name: Named Entity Recognition
dataset:
name: Unknown
type: unknown
split: eval
metrics:
- type: f1
value: 0.9130661114003124
name: F1
- type: precision
value: 0.9148758606300855
name: Precision
- type: recall
value: 0.9112635078969243
name: Recall
---
# SpanMarker
This is a [SpanMarker](https://github.com/tomaarsen/SpanMarkerNER) model that can be used for Named Entity Recognition.
## Model Details
### Model Description
- **Model Type:** SpanMarker
<!-- - **Encoder:** [Unknown](https://huggingface.co/unknown) -->
- **Maximum Sequence Length:** 256 tokens
- **Maximum Entity Length:** 6 words
<!-- - **Training Dataset:** [Unknown](https://huggingface.co/datasets/unknown) -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Repository:** [SpanMarker on GitHub](https://github.com/tomaarsen/SpanMarkerNER)
- **Thesis:** [SpanMarker For Named Entity Recognition](https://raw.githubusercontent.com/tomaarsen/SpanMarkerNER/main/thesis.pdf)
### Model Labels
| Label | Examples |
|:------|:-------------------------------------------------------------------------|
| ANIM | "vertebrate", "moth", "G. firmus" |
| BIO | "Aspergillus", "Cladophora", "Zythiostroma" |
| CEL | "pulsar", "celestial bodies", "neutron star" |
| DIS | "social anxiety disorder", "insulin resistance", "Asperger syndrome" |
| EVE | "Spanish Civil War", "National Junior Angus Show", "French Revolution" |
| FOOD | "Neera", "Bellini ( cocktail )", "soju" |
| INST | "Apple II", "Encyclopaedia of Chess Openings", "Android" |
| LOC | "Kīlauea", "Hungary", "Vienna" |
| MEDIA | "CSI : Crime Scene Investigation", "Big Comic Spirits", "American Idol" |
| MYTH | "Priam", "Oźwiena", "Odysseus" |
| ORG | "San Francisco Giants", "Arm Holdings", "RTÉ One" |
| PER | "Amelia Bence", "Tito Lusiardo", "James Cameron" |
| PLANT | "vernal squill", "Sarracenia purpurea", "Drosera rotundifolia" |
| TIME | "prehistory", "Age of Enlightenment", "annual paid holiday" |
| VEHI | "Short 360", "Ferrari 355 Challenge", "Solution F / Chretien Helicopter" |
## Uses
### Direct Use for Inference
```python
from span_marker import SpanMarkerModel
# Download from the 🤗 Hub
model = SpanMarkerModel.from_pretrained("span_marker_model_id")
# Run inference
entities = model.predict("Ann Patchett ’s novel \" Bel Canto \", was another creative influence that helped her manage a plentiful cast of characters.")
```
### Downstream Use
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
```python
from datasets import load_dataset
from span_marker import SpanMarkerModel, Trainer

# Download from the 🤗 Hub
model = SpanMarkerModel.from_pretrained("span_marker_model_id")
# Specify a Dataset with "tokens" and "ner_tags" columns
dataset = load_dataset("conll2003")  # For example CoNLL2003
# Initialize a Trainer using the pretrained model & dataset
trainer = Trainer(
model=model,
train_dataset=dataset["train"],
eval_dataset=dataset["validation"],
)
trainer.train()
trainer.save_model("span_marker_model_id-finetuned")
```
</details>
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Set Metrics
| Training set | Min | Median | Max |
|:----------------------|:----|:--------|:----|
| Sentence length | 2 | 21.6493 | 237 |
| Entities per sentence | 0 | 1.5369 | 36 |
### Training Hyperparameters
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training Results
| Epoch | Step | Validation Loss | Validation Precision | Validation Recall | Validation F1 | Validation Accuracy |
|:------:|:-----:|:---------------:|:--------------------:|:-----------------:|:-------------:|:-------------------:|
| 0.0576 | 1000 | 0.0142 | 0.8714 | 0.7729 | 0.8192 | 0.9698 |
| 0.1153 | 2000 | 0.0107 | 0.8316 | 0.8815 | 0.8558 | 0.9744 |
| 0.1729 | 3000 | 0.0092 | 0.8717 | 0.8797 | 0.8757 | 0.9780 |
| 0.2306 | 4000 | 0.0082 | 0.8811 | 0.8886 | 0.8848 | 0.9798 |
| 0.2882 | 5000 | 0.0084 | 0.8523 | 0.9163 | 0.8831 | 0.9790 |
| 0.3459 | 6000 | 0.0079 | 0.8700 | 0.9113 | 0.8902 | 0.9802 |
| 0.4035 | 7000 | 0.0070 | 0.9107 | 0.8859 | 0.8981 | 0.9822 |
| 0.4611 | 8000 | 0.0069 | 0.9259 | 0.8797 | 0.9022 | 0.9827 |
| 0.5188 | 9000 | 0.0067 | 0.9061 | 0.8965 | 0.9013 | 0.9829 |
| 0.5764 | 10000 | 0.0066 | 0.9034 | 0.8996 | 0.9015 | 0.9829 |
| 0.6341 | 11000 | 0.0064 | 0.9160 | 0.8996 | 0.9077 | 0.9839 |
| 0.6917 | 12000 | 0.0066 | 0.8952 | 0.9121 | 0.9036 | 0.9832 |
| 0.7494 | 13000 | 0.0062 | 0.9165 | 0.9009 | 0.9086 | 0.9841 |
| 0.8070 | 14000 | 0.0062 | 0.9010 | 0.9121 | 0.9065 | 0.9835 |
| 0.8647 | 15000 | 0.0062 | 0.9084 | 0.9127 | 0.9105 | 0.9842 |
| 0.9223 | 16000 | 0.0060 | 0.9151 | 0.9098 | 0.9125 | 0.9846 |
| 0.9799 | 17000 | 0.0060 | 0.9149 | 0.9113 | 0.9131 | 0.9848 |
### Framework Versions
- Python: 3.8.16
- SpanMarker: 1.5.0
- Transformers: 4.29.0.dev0
- PyTorch: 1.10.1
- Datasets: 2.15.0
- Tokenizers: 0.13.2
## Citation
### BibTeX
```
@software{Aarsen_SpanMarker,
author = {Aarsen, Tom},
license = {Apache-2.0},
title = {{SpanMarker for Named Entity Recognition}},
url = {https://github.com/tomaarsen/SpanMarkerNER}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
FounderOfHuggingface/fresh_gpt2_lora_r16_dbpedia_14_t300_e5_non_member_shadow14
|
FounderOfHuggingface
| 2023-12-04T02:40:11Z | 0 | 0 |
peft
|
[
"peft",
"arxiv:1910.09700",
"base_model:openai-community/gpt2",
"base_model:adapter:openai-community/gpt2",
"region:us"
] | null | 2023-12-04T02:40:09Z |
---
library_name: peft
base_model: gpt2
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
### Framework versions
- PEFT 0.6.2
|
aghent/copiapoasegmentation
|
aghent
| 2023-12-04T02:30:28Z | 9 | 0 |
transformers
|
[
"transformers",
"safetensors",
"segformer",
"vision",
"image-segmentation",
"generated_from_trainer",
"base_model:nvidia/mit-b0",
"base_model:finetune:nvidia/mit-b0",
"license:other",
"endpoints_compatible",
"region:us"
] |
image-segmentation
| 2023-10-01T23:29:51Z |
---
license: other
base_model: nvidia/mit-b0
tags:
- vision
- image-segmentation
- generated_from_trainer
model-index:
- name: copiapoasegmentation
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# copiapoasegmentation
This model is a fine-tuned version of [nvidia/mit-b0](https://huggingface.co/nvidia/mit-b0) on the aghent/copiapoa-semantic-v2 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1039
- Mean Iou: 0.0
- Mean Accuracy: nan
- Overall Accuracy: nan
- Accuracy Copiapoa: nan
- Iou Copiapoa: 0.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.5
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Mean Iou | Mean Accuracy | Overall Accuracy | Accuracy Copiapoa | Iou Copiapoa |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:-------------:|:----------------:|:-----------------:|:------------:|
| 0.2444 | 0.01 | 20 | 5.0470 | 0.0 | nan | nan | nan | 0.0 |
| 0.3612 | 0.02 | 40 | 0.8679 | 0.0 | nan | nan | nan | 0.0 |
| 0.5271 | 0.03 | 60 | 0.8829 | 0.0 | nan | nan | nan | 0.0 |
| 0.0688 | 0.04 | 80 | 0.1301 | 0.0 | nan | nan | nan | 0.0 |
| 0.0651 | 0.05 | 100 | 0.1053 | 0.0 | nan | nan | nan | 0.0 |
| 0.1459 | 0.06 | 120 | 0.1041 | 0.0 | nan | nan | nan | 0.0 |
| 0.1192 | 0.07 | 140 | 0.1044 | 0.0 | nan | nan | nan | 0.0 |
| 0.1747 | 0.08 | 160 | 0.1068 | 0.0 | nan | nan | nan | 0.0 |
| 0.0807 | 0.09 | 180 | 0.1045 | 0.0 | nan | nan | nan | 0.0 |
| 0.0701 | 0.1 | 200 | 0.1039 | 0.0 | nan | nan | nan | 0.0 |
| 0.0909 | 0.11 | 220 | 0.1043 | 0.0 | nan | nan | nan | 0.0 |
| 0.0866 | 0.12 | 240 | 0.1039 | 0.0 | nan | nan | nan | 0.0 |
| 0.1688 | 0.13 | 260 | 0.1040 | 0.0 | nan | nan | nan | 0.0 |
| 0.0664 | 0.14 | 280 | 0.1040 | 0.0 | nan | nan | nan | 0.0 |
| 0.1137 | 0.15 | 300 | 0.1040 | 0.0 | nan | nan | nan | 0.0 |
| 0.1783 | 0.16 | 320 | 0.1044 | 0.0 | nan | nan | nan | 0.0 |
| 0.1267 | 0.17 | 340 | 0.1049 | 0.0 | nan | nan | nan | 0.0 |
| 0.0606 | 0.18 | 360 | 0.1086 | 0.0 | nan | nan | nan | 0.0 |
| 0.0847 | 0.19 | 380 | 0.1065 | 0.0 | nan | nan | nan | 0.0 |
| 0.0734 | 0.2 | 400 | 0.1040 | 0.0 | nan | nan | nan | 0.0 |
| 0.0302 | 0.21 | 420 | 0.1045 | 0.0 | nan | nan | nan | 0.0 |
| 0.0815 | 0.22 | 440 | 0.1062 | 0.0 | nan | nan | nan | 0.0 |
| 0.0639 | 0.23 | 460 | 0.1039 | 0.0 | nan | nan | nan | 0.0 |
| 0.1039 | 0.24 | 480 | 0.1041 | 0.0 | nan | nan | nan | 0.0 |
| 0.0703 | 0.25 | 500 | 0.1046 | 0.0 | nan | nan | nan | 0.0 |
| 0.1696 | 0.26 | 520 | 0.1040 | 0.0 | nan | nan | nan | 0.0 |
| 0.1308 | 0.27 | 540 | 0.1039 | 0.0 | nan | nan | nan | 0.0 |
| 0.0673 | 0.28 | 560 | 0.1070 | 0.0 | nan | nan | nan | 0.0 |
| 0.1913 | 0.29 | 580 | 0.1048 | 0.0 | nan | nan | nan | 0.0 |
| 0.0324 | 0.3 | 600 | 0.1043 | 0.0 | nan | nan | nan | 0.0 |
| 0.1178 | 0.31 | 620 | 0.1053 | 0.0 | nan | nan | nan | 0.0 |
| 0.0977 | 0.32 | 640 | 0.1041 | 0.0 | nan | nan | nan | 0.0 |
| 0.1711 | 0.33 | 660 | 0.1042 | 0.0 | nan | nan | nan | 0.0 |
| 0.1388 | 0.34 | 680 | 0.1059 | 0.0 | nan | nan | nan | 0.0 |
| 0.1434 | 0.35 | 700 | 0.1060 | 0.0 | nan | nan | nan | 0.0 |
| 0.0711 | 0.36 | 720 | 0.1075 | 0.0 | nan | nan | nan | 0.0 |
| 0.1017 | 0.37 | 740 | 0.1060 | 0.0 | nan | nan | nan | 0.0 |
| 0.2191 | 0.38 | 760 | 0.1040 | 0.0 | nan | nan | nan | 0.0 |
| 0.0877 | 0.39 | 780 | 0.1042 | 0.0 | nan | nan | nan | 0.0 |
| 0.1571 | 0.4 | 800 | 0.1039 | 0.0 | nan | nan | nan | 0.0 |
| 0.0726 | 0.41 | 820 | 0.1043 | 0.0 | nan | nan | nan | 0.0 |
| 0.1566 | 0.42 | 840 | 0.1046 | 0.0 | nan | nan | nan | 0.0 |
| 0.1165 | 0.43 | 860 | 0.1069 | 0.0 | nan | nan | nan | 0.0 |
| 0.0921 | 0.44 | 880 | 0.1039 | 0.0 | nan | nan | nan | 0.0 |
| 0.1851 | 0.45 | 900 | 0.1039 | 0.0 | nan | nan | nan | 0.0 |
| 0.0553 | 0.46 | 920 | 0.1046 | 0.0 | nan | nan | nan | 0.0 |
| 0.2055 | 0.47 | 940 | 0.1056 | 0.0 | nan | nan | nan | 0.0 |
| 0.1784 | 0.48 | 960 | 0.1039 | 0.0 | nan | nan | nan | 0.0 |
| 0.0817 | 0.49 | 980 | 0.1045 | 0.0 | nan | nan | nan | 0.0 |
| 0.0789 | 0.5 | 1000 | 0.1041 | 0.0 | nan | nan | nan | 0.0 |
| 0.1644 | 0.51 | 1020 | 0.1041 | 0.0 | nan | nan | nan | 0.0 |
| 0.3311 | 0.52 | 1040 | 0.1045 | 0.0 | nan | nan | nan | 0.0 |
| 0.1518 | 0.53 | 1060 | 0.1045 | 0.0 | nan | nan | nan | 0.0 |
| 0.0654 | 0.54 | 1080 | 0.1049 | 0.0 | nan | nan | nan | 0.0 |
| 0.1069 | 0.55 | 1100 | 0.1043 | 0.0 | nan | nan | nan | 0.0 |
| 0.0489 | 0.56 | 1120 | 0.1044 | 0.0 | nan | nan | nan | 0.0 |
| 0.126 | 0.57 | 1140 | 0.1040 | 0.0 | nan | nan | nan | 0.0 |
| 0.076 | 0.58 | 1160 | 0.1040 | 0.0 | nan | nan | nan | 0.0 |
| 0.0609 | 0.59 | 1180 | 0.1040 | 0.0 | nan | nan | nan | 0.0 |
| 0.0781 | 0.6 | 1200 | 0.1047 | 0.0 | nan | nan | nan | 0.0 |
| 0.0471 | 0.61 | 1220 | 0.1040 | 0.0 | nan | nan | nan | 0.0 |
| 0.0806 | 0.62 | 1240 | 0.1048 | 0.0 | nan | nan | nan | 0.0 |
| 0.0519 | 0.63 | 1260 | 0.1040 | 0.0 | nan | nan | nan | 0.0 |
| 0.0904 | 0.64 | 1280 | 0.1051 | 0.0 | nan | nan | nan | 0.0 |
| 0.0963 | 0.65 | 1300 | 0.1051 | 0.0 | nan | nan | nan | 0.0 |
| 0.1206 | 0.66 | 1320 | 0.1053 | 0.0 | nan | nan | nan | 0.0 |
| 0.1104 | 0.67 | 1340 | 0.1045 | 0.0 | nan | nan | nan | 0.0 |
| 0.062 | 0.68 | 1360 | 0.1042 | 0.0 | nan | nan | nan | 0.0 |
| 0.0895 | 0.69 | 1380 | 0.1040 | 0.0 | nan | nan | nan | 0.0 |
| 0.1593 | 0.7 | 1400 | 0.1042 | 0.0 | nan | nan | nan | 0.0 |
| 0.0922 | 0.71 | 1420 | 0.1044 | 0.0 | nan | nan | nan | 0.0 |
| 0.0676 | 0.72 | 1440 | 0.1039 | 0.0 | nan | nan | nan | 0.0 |
| 0.0854 | 0.73 | 1460 | 0.1046 | 0.0 | nan | nan | nan | 0.0 |
| 0.0498 | 0.74 | 1480 | 0.1042 | 0.0 | nan | nan | nan | 0.0 |
| 0.0677 | 0.75 | 1500 | 0.1041 | 0.0 | nan | nan | nan | 0.0 |
| 0.1298 | 0.76 | 1520 | 0.1049 | 0.0 | nan | nan | nan | 0.0 |
| 0.1202 | 0.77 | 1540 | 0.1044 | 0.0 | nan | nan | nan | 0.0 |
| 0.0737 | 0.78 | 1560 | 0.1041 | 0.0 | nan | nan | nan | 0.0 |
| 0.0238 | 0.79 | 1580 | 0.1040 | 0.0 | nan | nan | nan | 0.0 |
| 0.106 | 0.8 | 1600 | 0.1042 | 0.0 | nan | nan | nan | 0.0 |
| 0.142 | 0.81 | 1620 | 0.1040 | 0.0 | nan | nan | nan | 0.0 |
| 0.0753 | 0.82 | 1640 | 0.1041 | 0.0 | nan | nan | nan | 0.0 |
| 0.157 | 0.83 | 1660 | 0.1040 | 0.0 | nan | nan | nan | 0.0 |
| 0.1181 | 0.84 | 1680 | 0.1040 | 0.0 | nan | nan | nan | 0.0 |
| 0.0758 | 0.85 | 1700 | 0.1041 | 0.0 | nan | nan | nan | 0.0 |
| 0.0966 | 0.86 | 1720 | 0.1041 | 0.0 | nan | nan | nan | 0.0 |
| 0.1137 | 0.87 | 1740 | 0.1043 | 0.0 | nan | nan | nan | 0.0 |
| 0.0362 | 0.88 | 1760 | 0.1040 | 0.0 | nan | nan | nan | 0.0 |
| 0.1495 | 0.89 | 1780 | 0.1040 | 0.0 | nan | nan | nan | 0.0 |
| 0.0933 | 0.9 | 1800 | 0.1039 | 0.0 | nan | nan | nan | 0.0 |
| 0.1285 | 0.91 | 1820 | 0.1039 | 0.0 | nan | nan | nan | 0.0 |
| 0.0479 | 0.92 | 1840 | 0.1039 | 0.0 | nan | nan | nan | 0.0 |
| 0.1065 | 0.93 | 1860 | 0.1040 | 0.0 | nan | nan | nan | 0.0 |
| 0.1133 | 0.94 | 1880 | 0.1040 | 0.0 | nan | nan | nan | 0.0 |
| 0.129 | 0.95 | 1900 | 0.1039 | 0.0 | nan | nan | nan | 0.0 |
| 0.2114 | 0.96 | 1920 | 0.1040 | 0.0 | nan | nan | nan | 0.0 |
| 0.0646 | 0.97 | 1940 | 0.1039 | 0.0 | nan | nan | nan | 0.0 |
| 0.1375 | 0.98 | 1960 | 0.1039 | 0.0 | nan | nan | nan | 0.0 |
| 0.0402 | 0.99 | 1980 | 0.1039 | 0.0 | nan | nan | nan | 0.0 |
| 0.1113 | 1.0 | 2000 | 0.1039 | 0.0 | nan | nan | nan | 0.0 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0
- Datasets 2.15.0
- Tokenizers 0.15.0
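## Example inference (illustrative)

The card ships no usage code, and the reported Mean IoU of 0.0 suggests the checkpoint may not yet produce useful masks; still, a minimal sketch of running the SegFormer checkpoint with Transformers would be:

```python
from PIL import Image
from transformers import AutoImageProcessor, SegformerForSemanticSegmentation

# Hedged sketch: run the fine-tuned SegFormer on one image ("copiapoa.jpg" is a placeholder path)
processor = AutoImageProcessor.from_pretrained("aghent/copiapoasegmentation")
model = SegformerForSemanticSegmentation.from_pretrained("aghent/copiapoasegmentation")

image = Image.open("copiapoa.jpg")
inputs = processor(images=image, return_tensors="pt")
logits = model(**inputs).logits          # (batch, num_labels, height/4, width/4)
mask = logits.argmax(dim=1)[0]           # per-pixel class ids
```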
|
annabellehuether/topic-legal-bert-base-uncased-supreme-court-32batch_3epoch_2e5lr_1wd
|
annabellehuether
| 2023-12-04T02:26:32Z | 5 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:nlpaueb/legal-bert-base-uncased",
"base_model:finetune:nlpaueb/legal-bert-base-uncased",
"license:cc-by-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-12-04T01:48:22Z |
---
license: cc-by-sa-4.0
base_model: nlpaueb/legal-bert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: topic-legal-bert-base-uncased-supreme-court-32batch_3epoch_2e5lr_1wd
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# topic-legal-bert-base-uncased-supreme-court-32batch_3epoch_2e5lr_1wd
This model is a fine-tuned version of [nlpaueb/legal-bert-base-uncased](https://huggingface.co/nlpaueb/legal-bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7220
- Accuracy: 0.7792
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 7
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.1483 | 1.0 | 660 | 0.7968 | 0.7555 |
| 0.7022 | 2.0 | 1320 | 0.7341 | 0.7770 |
| 0.5851 | 3.0 | 1980 | 0.7220 | 0.7792 |
### Framework versions
- Transformers 4.35.1
- Pytorch 2.1.0+cu121
- Datasets 2.14.6
- Tokenizers 0.14.1
|
annabellehuether/topic-bert-base-uncased-supreme-court-32batch_3epoch_2e5lr_1wd
|
annabellehuether
| 2023-12-04T02:21:44Z | 5 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-12-04T01:44:33Z |
---
license: apache-2.0
base_model: bert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: topic-bert-base-uncased-supreme-court-32batch_3epoch_2e5lr_1wd
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# topic-bert-base-uncased-supreme-court-32batch_3epoch_2e5lr_1wd
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8250
- Accuracy: 0.7406
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 7
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.3056 | 1.0 | 660 | 0.9133 | 0.7095 |
| 0.814 | 2.0 | 1320 | 0.8417 | 0.7369 |
| 0.6802 | 3.0 | 1980 | 0.8250 | 0.7406 |
### Framework versions
- Transformers 4.35.1
- Pytorch 2.1.0+cu121
- Datasets 2.14.6
- Tokenizers 0.14.1
|
athirdpath/BigMistral-11b-GLUED
|
athirdpath
| 2023-12-04T02:17:21Z | 7 | 0 |
transformers
|
[
"transformers",
"pytorch",
"mistral",
"text-generation",
"en",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-12-04T01:25:31Z |
---
license: cc-by-nc-4.0
language:
- en
pipeline_tag: text-generation
---
<p align="center"><font size="7"> <b>Okay, here we fuckin' go.</b> </font></p>
<p align="center"><font size="5"> <b>Time to fire up the ol' dare_ties pod.</b></font></p>
<p align="center"><img src="https://iili.io/JzixYiP.png"/>
<p align="center"><font size="6"><b><a href="https://iili.io/Jzix7WB.png">NSFW - Erotic(?) Writing Example - NSFW</font></a></b></p>
<p align="center"><font size="3"> <b>(That's not what it's finetuned for, okay? He's a grower.)</b></font></p>
### Dataset
The 11b glue consists of:
- The entirety of HF No Robots.
- The entirety of TinyPixel/orca-mini
- Enough of the GPT-4 generated Alpaca dataset (randomly chosen) to make it a roughly even three-way split.
A JSONL file of the dataset is available as a separate repo.
|
FounderOfHuggingface/fresh_gpt2_lora_r16_dbpedia_14_t300_e5_non_member_shadow12
|
FounderOfHuggingface
| 2023-12-04T02:16:57Z | 0 | 0 |
peft
|
[
"peft",
"arxiv:1910.09700",
"base_model:openai-community/gpt2",
"base_model:adapter:openai-community/gpt2",
"region:us"
] | null | 2023-12-04T02:16:55Z |
---
library_name: peft
base_model: gpt2
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
### Framework versions
- PEFT 0.6.2
|
empbetty/tangyuan-dreambooth-3
|
empbetty
| 2023-12-04T02:16:14Z | 0 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"dreambooth",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:finetune:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-12-04T01:39:25Z |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
instance_prompt: a photo of sks dog
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- dreambooth
inference: true
---
# DreamBooth - empbetty/tangyuan-dreambooth-3
This is a dreambooth model derived from runwayml/stable-diffusion-v1-5. The weights were trained on a photo of sks dog using [DreamBooth](https://dreambooth.github.io/).
You can find some example images in the following.




DreamBooth for the text encoder was enabled: False.
|
hydrochii/text_classify_model
|
hydrochii
| 2023-12-04T02:11:22Z | 6 | 1 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:imdb",
"base_model:google-bert/bert-base-cased",
"base_model:finetune:google-bert/bert-base-cased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-11-27T00:13:43Z |
---
license: apache-2.0
base_model: bert-base-cased
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
model-index:
- name: text_classify_model
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
config: plain_text
split: test
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.93272
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# text_classify_model
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1926
- Accuracy: 0.9327
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.287 | 1.0 | 782 | 0.2120 | 0.9234 |
| 0.1344 | 2.0 | 1564 | 0.1926 | 0.9327 |
### Framework versions
- Transformers 4.35.0
- Pytorch 2.0.0
- Datasets 2.1.0
- Tokenizers 0.14.1
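## Example inference (illustrative)

The card contains no usage code; a minimal sketch of scoring a review with the fine-tuned IMDB classifier, using the repository id from the model name, might be:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Hedged sketch: load the fine-tuned sentiment classifier and score one review
tokenizer = AutoTokenizer.from_pretrained("hydrochii/text_classify_model")
model = AutoModelForSequenceClassification.from_pretrained("hydrochii/text_classify_model")

inputs = tokenizer("A beautifully shot film with a forgettable plot.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
pred = logits.argmax(dim=-1).item()
print(model.config.id2label.get(pred, pred))
```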
|
annabellehuether/topic-bert-base-uncased-supreme-court-32batch_5epoch_2e5lr_01wd
|
annabellehuether
| 2023-12-04T01:59:00Z | 5 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-12-04T00:56:14Z |
---
license: apache-2.0
base_model: bert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: topic-bert-base-uncased-supreme-court-32batch_5epoch_2e5lr_01wd
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# topic-bert-base-uncased-supreme-court-32batch_5epoch_2e5lr_01wd
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9049
- Accuracy: 0.7414
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 7
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.3078 | 1.0 | 660 | 0.9307 | 0.7073 |
| 0.811 | 2.0 | 1320 | 0.8368 | 0.7429 |
| 0.6684 | 3.0 | 1980 | 0.8197 | 0.7406 |
| 0.4163 | 4.0 | 2640 | 0.8724 | 0.7443 |
| 0.34 | 5.0 | 3300 | 0.9049 | 0.7414 |
### Framework versions
- Transformers 4.35.1
- Pytorch 2.1.0+cu121
- Datasets 2.14.6
- Tokenizers 0.14.1
|
annabellehuether/topic-legal-bert-base-uncased-supreme-court-32batch_5epoch_2e5lr_01wd
|
annabellehuether
| 2023-12-04T01:58:50Z | 12 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:nlpaueb/legal-bert-base-uncased",
"base_model:finetune:nlpaueb/legal-bert-base-uncased",
"license:cc-by-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-12-04T00:56:10Z |
---
license: cc-by-sa-4.0
base_model: nlpaueb/legal-bert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: topic-legal-bert-base-uncased-supreme-court-32batch_5epoch_2e5lr_01wd
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# topic-legal-bert-base-uncased-supreme-court-32batch_5epoch_2e5lr_01wd
This model is a fine-tuned version of [nlpaueb/legal-bert-base-uncased](https://huggingface.co/nlpaueb/legal-bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8083
- Accuracy: 0.7799
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 7
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.146 | 1.0 | 660 | 0.7959 | 0.7525 |
| 0.6965 | 2.0 | 1320 | 0.7491 | 0.7688 |
| 0.5724 | 3.0 | 1980 | 0.7384 | 0.7807 |
| 0.3395 | 4.0 | 2640 | 0.7731 | 0.7847 |
| 0.2824 | 5.0 | 3300 | 0.8083 | 0.7799 |
### Framework versions
- Transformers 4.35.1
- Pytorch 2.1.0+cu121
- Datasets 2.14.6
- Tokenizers 0.14.1
|
aleckeithc/distilroberta-base-mrpc-glue-keith-alec
|
aleckeithc
| 2023-12-04T01:58:08Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-12-04T01:51:39Z |
---
license: apache-2.0
tags:
- text-classification
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
- f1
model-index:
- name: distilroberta-base-mrpc-glue-keith-alec
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
config: mrpc
split: validation
args: mrpc
metrics:
- name: Accuracy
type: accuracy
value: 0.8529411764705882
- name: F1
type: f1
value: 0.8958333333333334
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilroberta-base-mrpc-glue-keith-alec
This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the glue and the mrpc datasets.
It achieves the following results on the evaluation set:
- Loss: 0.6392
- Accuracy: 0.8529
- F1: 0.8958
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.3596 | 1.09 | 500 | 1.1183 | 0.8309 | 0.8848 |
| 0.3241 | 2.18 | 1000 | 0.6392 | 0.8529 | 0.8958 |
| 0.1673 | 3.27 | 1500 | 0.7843 | 0.8431 | 0.8869 |
| 0.0807 | 4.36 | 2000 | 0.9659 | 0.8456 | 0.8916 |
### Framework versions
- Transformers 4.29.0
- Pytorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.13.3
|
DownwardSpiral33/hands_palms_classifier
|
DownwardSpiral33
| 2023-12-04T01:54:39Z | 5 | 0 |
transformers
|
[
"transformers",
"tf",
"vit",
"image-classification",
"generated_from_keras_callback",
"base_model:google/vit-base-patch16-224-in21k",
"base_model:finetune:google/vit-base-patch16-224-in21k",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-12-03T14:58:25Z |
---
license: apache-2.0
base_model: google/vit-base-patch16-224-in21k
tags:
- generated_from_keras_callback
model-index:
- name: DownwardSpiral33/hands_palms_classifier
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# DownwardSpiral33/hands_palms_classifier
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.4367
- Validation Loss: 0.7459
- Train Accuracy: 0.5806
- Epoch: 38
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 1e-05, 'decay_steps': 17400, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Accuracy | Epoch |
|:----------:|:---------------:|:--------------:|:-----:|
| 0.6873 | 0.6761 | 0.6129 | 0 |
| 0.6720 | 0.6625 | 0.6452 | 1 |
| 0.6638 | 0.6577 | 0.6452 | 2 |
| 0.6634 | 0.6547 | 0.6774 | 3 |
| 0.6547 | 0.6507 | 0.6774 | 4 |
| 0.6556 | 0.6423 | 0.6774 | 5 |
| 0.6433 | 0.6346 | 0.6774 | 6 |
| 0.6394 | 0.6293 | 0.7097 | 7 |
| 0.6344 | 0.6239 | 0.7419 | 8 |
| 0.6205 | 0.6206 | 0.7742 | 9 |
| 0.6047 | 0.6115 | 0.7097 | 10 |
| 0.6163 | 0.5970 | 0.7419 | 11 |
| 0.6022 | 0.6069 | 0.7097 | 12 |
| 0.5958 | 0.6009 | 0.7419 | 13 |
| 0.5789 | 0.5971 | 0.6774 | 14 |
| 0.5758 | 0.5962 | 0.6774 | 15 |
| 0.5662 | 0.5976 | 0.6774 | 16 |
| 0.5579 | 0.5926 | 0.6774 | 17 |
| 0.5577 | 0.5811 | 0.6452 | 18 |
| 0.5474 | 0.5880 | 0.6452 | 19 |
| 0.5249 | 0.5921 | 0.6774 | 20 |
| 0.5412 | 0.6075 | 0.6774 | 21 |
| 0.5154 | 0.6266 | 0.7097 | 22 |
| 0.5199 | 0.6063 | 0.6129 | 23 |
| 0.5150 | 0.6054 | 0.5806 | 24 |
| 0.5199 | 0.6107 | 0.6774 | 25 |
| 0.4823 | 0.5959 | 0.6129 | 26 |
| 0.4800 | 0.6581 | 0.6452 | 27 |
| 0.4732 | 0.6620 | 0.6129 | 28 |
| 0.4766 | 0.6284 | 0.6129 | 29 |
| 0.4889 | 0.6978 | 0.5806 | 30 |
| 0.4530 | 0.6636 | 0.5806 | 31 |
| 0.4320 | 0.6348 | 0.6129 | 32 |
| 0.4704 | 0.6326 | 0.6774 | 33 |
| 0.4487 | 0.6937 | 0.6774 | 34 |
| 0.4382 | 0.6423 | 0.5806 | 35 |
| 0.4035 | 0.6926 | 0.5806 | 36 |
| 0.4330 | 0.7225 | 0.5484 | 37 |
| 0.4367 | 0.7459 | 0.5806 | 38 |
### Framework versions
- Transformers 4.35.2
- TensorFlow 2.14.0
- Datasets 2.15.0
- Tokenizers 0.15.0
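## Inference sketch
Since the usage sections of this card are still placeholders, here is a minimal, hypothetical inference sketch for the TF checkpoint in this repository (the image file name is made up; label names come from whatever `id2label` mapping the checkpoint config carries):
```python
from PIL import Image
import tensorflow as tf
from transformers import AutoImageProcessor, TFAutoModelForImageClassification

repo_id = "DownwardSpiral33/hands_palms_classifier"
processor = AutoImageProcessor.from_pretrained(repo_id)
model = TFAutoModelForImageClassification.from_pretrained(repo_id)

# Hypothetical local photo of a hand; any RGB image works the same way.
image = Image.open("example_hand.jpg")
inputs = processor(images=image, return_tensors="tf")
logits = model(**inputs).logits
predicted_id = int(tf.argmax(logits, axis=-1)[0])
print(model.config.id2label[predicted_id])
```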
|
FounderOfHuggingface/fresh_gpt2_lora_r16_dbpedia_14_t300_e5_non_member_shadow10
|
FounderOfHuggingface
| 2023-12-04T01:53:48Z | 0 | 0 |
peft
|
[
"peft",
"arxiv:1910.09700",
"base_model:openai-community/gpt2",
"base_model:adapter:openai-community/gpt2",
"region:us"
] | null | 2023-12-04T01:53:45Z |
---
library_name: peft
base_model: gpt2
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
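As a stand-in while the section above is unfilled, a minimal sketch of attaching this LoRA adapter to the `gpt2` base model with PEFT (the prompt and generation settings are illustrative, not taken from this card):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Load the base model and tokenizer, then attach the adapter from this repository.
base = AutoModelForCausalLM.from_pretrained("gpt2")
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = PeftModel.from_pretrained(
    base,
    "FounderOfHuggingface/fresh_gpt2_lora_r16_dbpedia_14_t300_e5_non_member_shadow10",
)

inputs = tokenizer("The tower is located in", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=30)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```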
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
### Framework versions
- PEFT 0.6.2
|
annabellehuether/topic-legal-bert-base-uncased-supreme-court-16batch_3epoch_2e5lr_01wd
|
annabellehuether
| 2023-12-04T01:52:04Z | 6 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:nlpaueb/legal-bert-base-uncased",
"base_model:finetune:nlpaueb/legal-bert-base-uncased",
"license:cc-by-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-12-04T01:12:28Z |
---
license: cc-by-sa-4.0
base_model: nlpaueb/legal-bert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: topic-legal-bert-base-uncased-supreme-court-16batch_3epoch_2e5lr_01wd
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# topic-legal-bert-base-uncased-supreme-court-16batch_3epoch_2e5lr_01wd
This model is a fine-tuned version of [nlpaueb/legal-bert-base-uncased](https://huggingface.co/nlpaueb/legal-bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7456
- Accuracy: 0.7784
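For reference, a minimal classification sketch (the card does not document the topic label set, so the printed label is whatever `id2label` mapping the checkpoint config contains; the example sentence is made up):
```python
from transformers import pipeline

clf = pipeline(
    "text-classification",
    model="annabellehuether/topic-legal-bert-base-uncased-supreme-court-16batch_3epoch_2e5lr_01wd",
)
print(clf("The petitioner challenges the statute on First Amendment grounds."))
```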
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 7
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.8618 | 1.0 | 1319 | 0.7770 | 0.7625 |
| 0.5796 | 2.0 | 2638 | 0.7247 | 0.7821 |
| 0.4043 | 3.0 | 3957 | 0.7456 | 0.7784 |
### Framework versions
- Transformers 4.35.1
- Pytorch 2.1.0+cu121
- Datasets 2.14.6
- Tokenizers 0.14.1
|
asas-ai/bloom_3B_4bit_qlora_flores_v2
|
asas-ai
| 2023-12-04T01:46:51Z | 0 | 0 | null |
[
"tensorboard",
"safetensors",
"generated_from_trainer",
"base_model:asas-ai/bloom_3B_8bit",
"base_model:finetune:asas-ai/bloom_3B_8bit",
"region:us"
] | null | 2023-12-04T01:46:14Z |
---
base_model: asas-ai/bloom_3B_8bit
tags:
- generated_from_trainer
model-index:
- name: bloom_3B_4bit_qlora_flores_v2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bloom_3B_4bit_qlora_flores_v2
This model is a fine-tuned version of [asas-ai/bloom_3B_8bit](https://huggingface.co/asas-ai/bloom_3B_8bit) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- training_steps: 2200
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.1+cu121
- Datasets 2.4.0
- Tokenizers 0.15.0
|
SaiedAlshahrani/bloom_3B_4bit_qlora_flores_v2
|
SaiedAlshahrani
| 2023-12-04T01:46:17Z | 0 | 0 | null |
[
"tensorboard",
"safetensors",
"generated_from_trainer",
"base_model:asas-ai/bloom_3B_8bit",
"base_model:finetune:asas-ai/bloom_3B_8bit",
"region:us"
] | null | 2023-12-04T00:50:09Z |
---
base_model: asas-ai/bloom_3B_8bit
tags:
- generated_from_trainer
model-index:
- name: bloom_3B_4bit_qlora_flores_v2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bloom_3B_4bit_qlora_flores_v2
This model is a fine-tuned version of [asas-ai/bloom_3B_8bit](https://huggingface.co/asas-ai/bloom_3B_8bit) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- training_steps: 2200
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.1+cu121
- Datasets 2.4.0
- Tokenizers 0.15.0
|
Poliandr/russian-cities
|
Poliandr
| 2023-12-04T01:45:08Z | 7 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"vit",
"image-classification",
"pytorch",
"huggingpics",
"ru",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-12-04T01:10:45Z |
---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: russian-cities
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.6413043737411499
language:
- ru
---
# russian-cities
This model is intended to recognize pictures of the five best-known Russian cities: Moscow, Saint Petersburg, Kaliningrad, Yekaterinburg, and Smolensk.
The model was trained on 150 images per city, found by a search engine using the city's name as the query.
## Example images from the dataset
#### Kaliningrad

#### Moscow

#### Saint-Petersburg

#### Smolensk

#### Yekaterinburg

|
FounderOfHuggingface/fresh_gpt2_lora_r16_dbpedia_14_t300_e5_non_member_shadow9
|
FounderOfHuggingface
| 2023-12-04T01:42:12Z | 0 | 0 |
peft
|
[
"peft",
"arxiv:1910.09700",
"base_model:openai-community/gpt2",
"base_model:adapter:openai-community/gpt2",
"region:us"
] | null | 2023-12-04T01:42:08Z |
---
library_name: peft
base_model: gpt2
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
### Framework versions
- PEFT 0.6.2
|
VitaliiVrublevskyi/bert-large-cased-finetuned-mrpc
|
VitaliiVrublevskyi
| 2023-12-04T01:42:00Z | 10 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-12-03T16:02:52Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
- f1
model-index:
- name: bert-large-cased-finetuned-mrpc
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
config: mrpc
split: validation
args: mrpc
metrics:
- name: Accuracy
type: accuracy
value: 0.8774509803921569
- name: F1
type: f1
value: 0.9134948096885814
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-large-cased-finetuned-mrpc
This model is a fine-tuned version of [bert-large-cased](https://huggingface.co/bert-large-cased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4358
- Accuracy: 0.8775
- F1: 0.9135
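A minimal sketch for MRPC-style paraphrase detection with this checkpoint (the sentence pair is made up; label names such as `equivalent` / `not_equivalent` depend on the checkpoint config, which this card does not spell out):
```python
from transformers import pipeline

clf = pipeline(
    "text-classification",
    model="VitaliiVrublevskyi/bert-large-cased-finetuned-mrpc",
)
# A sentence pair is passed as a dict with "text" and "text_pair".
result = clf({
    "text": "The company reported strong quarterly earnings.",
    "text_pair": "Quarterly profits at the firm came in well above expectations.",
})
print(result)
```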
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 26
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 115 | 0.4797 | 0.7966 | 0.8614 |
| No log | 2.0 | 230 | 0.4097 | 0.8358 | 0.8822 |
| No log | 3.0 | 345 | 0.3815 | 0.8529 | 0.8976 |
| No log | 4.0 | 460 | 0.3961 | 0.8652 | 0.9050 |
| 0.3944 | 5.0 | 575 | 0.4358 | 0.8775 | 0.9135 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.13.3
|
PhaniRajT/mistral-finetuned-samsum
|
PhaniRajT
| 2023-12-04T01:36:08Z | 0 | 0 | null |
[
"tensorboard",
"safetensors",
"generated_from_trainer",
"base_model:TheBloke/Mistral-7B-Instruct-v0.1-GPTQ",
"base_model:finetune:TheBloke/Mistral-7B-Instruct-v0.1-GPTQ",
"license:apache-2.0",
"region:us"
] | null | 2023-12-04T00:52:31Z |
---
license: apache-2.0
base_model: TheBloke/Mistral-7B-Instruct-v0.1-GPTQ
tags:
- generated_from_trainer
model-index:
- name: mistral-finetuned-samsum
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mistral-finetuned-samsum
This model is a fine-tuned version of [TheBloke/Mistral-7B-Instruct-v0.1-GPTQ](https://huggingface.co/TheBloke/Mistral-7B-Instruct-v0.1-GPTQ) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- training_steps: 250
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.36.0.dev0
- Pytorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
|
Reglacia/Miyuki
|
Reglacia
| 2023-12-04T01:30:48Z | 0 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"template:sd-lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:artistic-2.0",
"region:us"
] |
text-to-image
| 2023-12-04T01:23:38Z |
---
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
- template:sd-lora
widget:
- text: '-'
output:
url: images/IMG_1343.jpeg
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: null
license: artistic-2.0
---
# Miyuki Izayoi
<Gallery />
## Model description
This is Miyuki Izayoi. She is a blader and a singer, and a Beyblade OC (original character) for MFB.
## Download model
[Download](/Reglacia/Miyuki/tree/main) the model files in the Files & versions tab.
|
ThuyNT03/KLTN_COQE_viT5_SAOPL
|
ThuyNT03
| 2023-12-04T01:30:05Z | 5 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:ThuyNT03/KLTN_COQE_viT5_SAOPL",
"base_model:finetune:ThuyNT03/KLTN_COQE_viT5_SAOPL",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-12-02T16:31:47Z |
---
license: mit
base_model: ThuyNT03/KLTN_COQE_viT5_SAOPL
tags:
- generated_from_trainer
model-index:
- name: KLTN_COQE_viT5_SAOPL
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# KLTN_COQE_viT5_SAOPL
This model is a fine-tuned version of [ThuyNT03/KLTN_COQE_viT5_SAOPL](https://huggingface.co/ThuyNT03/KLTN_COQE_viT5_SAOPL) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.35.0
- Pytorch 2.0.0
- Datasets 2.1.0
- Tokenizers 0.14.1
|
FounderOfHuggingface/fresh_gpt2_lora_r16_dbpedia_14_t300_e5_non_member_shadow7
|
FounderOfHuggingface
| 2023-12-04T01:18:56Z | 0 | 0 |
peft
|
[
"peft",
"arxiv:1910.09700",
"base_model:openai-community/gpt2",
"base_model:adapter:openai-community/gpt2",
"region:us"
] | null | 2023-12-04T01:18:53Z |
---
library_name: peft
base_model: gpt2
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
### Framework versions
- PEFT 0.6.2
|
annabellehuether/partisan-legal-bert-base-uncased-supreme-court-32batch_3epoch_3e5lr_01wd
|
annabellehuether
| 2023-12-04T01:16:20Z | 7 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:nlpaueb/legal-bert-base-uncased",
"base_model:finetune:nlpaueb/legal-bert-base-uncased",
"license:cc-by-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-12-04T00:38:14Z |
---
license: cc-by-sa-4.0
base_model: nlpaueb/legal-bert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: partisan-legal-bert-base-uncased-supreme-court-32batch_3epoch_3e5lr_01wd
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# partisan-legal-bert-base-uncased-supreme-court-32batch_3epoch_3e5lr_01wd
This model is a fine-tuned version of [nlpaueb/legal-bert-base-uncased](https://huggingface.co/nlpaueb/legal-bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5928
- Accuracy: 0.6670
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 7
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6562 | 1.0 | 660 | 0.5537 | 0.6585 |
| 0.6048 | 2.0 | 1320 | 0.5586 | 0.6615 |
| 0.5644 | 3.0 | 1980 | 0.5928 | 0.6670 |
### Framework versions
- Transformers 4.35.1
- Pytorch 2.1.0+cu121
- Datasets 2.14.6
- Tokenizers 0.14.1
|
LarryAIDraw/CHAR-AuraFrieren
|
LarryAIDraw
| 2023-12-04T01:11:26Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-12-04T01:03:23Z |
---
license: creativeml-openrail-m
---
https://civitai.com/models/217280/aura-or-frieren-beyond-journeys-end
|
LarryAIDraw/ServalLandauV2
|
LarryAIDraw
| 2023-12-04T01:10:46Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-12-04T01:01:31Z |
---
license: creativeml-openrail-m
---
https://civitai.com/models/157125/serval-landau-honkai-star-rail
|
LarryAIDraw/ShizukaV2
|
LarryAIDraw
| 2023-12-04T01:10:27Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-12-04T01:00:45Z |
---
license: creativeml-openrail-m
---
https://civitai.com/models/75924/shizuka-masou-rance-series
|
Kuwon/chkpt
|
Kuwon
| 2023-12-04T01:05:02Z | 4 | 0 |
transformers
|
[
"transformers",
"safetensors",
"electra",
"text-classification",
"generated_from_trainer",
"dataset:generator",
"base_model:monologg/koelectra-small-v3-discriminator",
"base_model:finetune:monologg/koelectra-small-v3-discriminator",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-12-01T04:04:08Z |
---
base_model: monologg/koelectra-small-v3-discriminator
tags:
- generated_from_trainer
datasets:
- generator
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: chkpt
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: generator
type: generator
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.8826086956521739
- name: F1
type: f1
value: 0.8275730495029622
- name: Precision
type: precision
value: 0.7789981096408317
- name: Recall
type: recall
value: 0.8826086956521739
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# chkpt
This model is a fine-tuned version of [monologg/koelectra-small-v3-discriminator](https://huggingface.co/monologg/koelectra-small-v3-discriminator) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2815
- Accuracy: 0.8826
- F1: 0.8276
- Precision: 0.7790
- Recall: 0.8826
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|
| No log | 1.0 | 29 | 1.2815 | 0.8826 | 0.8276 | 0.7790 | 0.8826 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.1
- Datasets 2.15.0
- Tokenizers 0.15.0
|
ij5/pixel
|
ij5
| 2023-12-04T00:57:04Z | 9 | 3 |
diffusers
|
[
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"template:sd-lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"region:us"
] |
text-to-image
| 2023-12-04T00:56:46Z |
---
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
- template:sd-lora
widget:
- text: '-'
output:
url: images/girl.png
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: null
---
# pixel
<Gallery />
## Download model
Weights for this model are available in Safetensors format.
[Download](/ij5/pixel/tree/main) them in the Files & versions tab.
|
kvriza8/blip2-opt-2.7b-AF-captions
|
kvriza8
| 2023-12-04T00:48:19Z | 2 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:Salesforce/blip2-opt-2.7b",
"base_model:adapter:Salesforce/blip2-opt-2.7b",
"region:us"
] | null | 2023-12-04T00:48:13Z |
---
library_name: peft
base_model: Salesforce/blip2-opt-2.7b
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
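As a hedged sketch (not the exact training or inference script), the quantization settings listed above correspond roughly to the following `BitsAndBytesConfig`; loading the BLIP-2 base model in 8-bit and attaching this adapter would look like:
```python
from transformers import AutoProcessor, Blip2ForConditionalGeneration, BitsAndBytesConfig
from peft import PeftModel

# 8-bit loading, mirroring load_in_8bit=True and llm_int8_threshold=6.0 above.
bnb_config = BitsAndBytesConfig(load_in_8bit=True, llm_int8_threshold=6.0)

processor = AutoProcessor.from_pretrained("Salesforce/blip2-opt-2.7b")
base = Blip2ForConditionalGeneration.from_pretrained(
    "Salesforce/blip2-opt-2.7b",
    quantization_config=bnb_config,
    device_map="auto",  # assumes a GPU with bitsandbytes installed
)
model = PeftModel.from_pretrained(base, "kvriza8/blip2-opt-2.7b-AF-captions")
```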
### Framework versions
- PEFT 0.6.3.dev0
|
hkivancoral/smids_1x_deit_small_rms_00001_fold2
|
hkivancoral
| 2023-12-04T00:47:59Z | 7 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:facebook/deit-small-patch16-224",
"base_model:finetune:facebook/deit-small-patch16-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-12-04T00:16:27Z |
---
license: apache-2.0
base_model: facebook/deit-small-patch16-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: smids_1x_deit_small_rms_00001_fold2
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: test
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.870216306156406
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# smids_1x_deit_small_rms_00001_fold2
This model is a fine-tuned version of [facebook/deit-small-patch16-224](https://huggingface.co/facebook/deit-small-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8494
- Accuracy: 0.8702
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
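Not taken from the original training script, but the hyperparameters above map roughly onto the following `TrainingArguments` (the output directory and any omitted options are assumptions; the Trainer's default AdamW already uses the betas and epsilon listed above):
```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="smids_1x_deit_small_rms_00001_fold2",  # assumed
    learning_rate=1e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=50,
)
```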
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.391 | 1.0 | 75 | 0.3306 | 0.8569 |
| 0.2024 | 2.0 | 150 | 0.3078 | 0.8719 |
| 0.1659 | 3.0 | 225 | 0.3046 | 0.8636 |
| 0.1089 | 4.0 | 300 | 0.3233 | 0.8702 |
| 0.0832 | 5.0 | 375 | 0.4345 | 0.8552 |
| 0.0315 | 6.0 | 450 | 0.4227 | 0.8686 |
| 0.0247 | 7.0 | 525 | 0.5432 | 0.8652 |
| 0.0031 | 8.0 | 600 | 0.5857 | 0.8769 |
| 0.0058 | 9.0 | 675 | 0.5689 | 0.8619 |
| 0.0354 | 10.0 | 750 | 0.6368 | 0.8619 |
| 0.0193 | 11.0 | 825 | 0.5921 | 0.8752 |
| 0.0019 | 12.0 | 900 | 0.6514 | 0.8785 |
| 0.0447 | 13.0 | 975 | 0.6838 | 0.8686 |
| 0.0527 | 14.0 | 1050 | 0.6693 | 0.8735 |
| 0.0047 | 15.0 | 1125 | 0.6444 | 0.8735 |
| 0.0064 | 16.0 | 1200 | 0.7052 | 0.8719 |
| 0.0002 | 17.0 | 1275 | 0.7289 | 0.8636 |
| 0.0092 | 18.0 | 1350 | 0.7405 | 0.8669 |
| 0.0001 | 19.0 | 1425 | 0.7743 | 0.8619 |
| 0.0038 | 20.0 | 1500 | 0.7512 | 0.8686 |
| 0.0001 | 21.0 | 1575 | 0.8249 | 0.8602 |
| 0.0001 | 22.0 | 1650 | 0.7832 | 0.8686 |
| 0.0001 | 23.0 | 1725 | 0.8312 | 0.8636 |
| 0.0 | 24.0 | 1800 | 0.7877 | 0.8669 |
| 0.0 | 25.0 | 1875 | 0.7958 | 0.8719 |
| 0.0001 | 26.0 | 1950 | 0.7718 | 0.8752 |
| 0.0055 | 27.0 | 2025 | 0.7918 | 0.8686 |
| 0.0032 | 28.0 | 2100 | 0.8022 | 0.8735 |
| 0.0023 | 29.0 | 2175 | 0.8185 | 0.8735 |
| 0.0031 | 30.0 | 2250 | 0.8365 | 0.8735 |
| 0.0028 | 31.0 | 2325 | 0.7946 | 0.8686 |
| 0.0 | 32.0 | 2400 | 0.8222 | 0.8752 |
| 0.0 | 33.0 | 2475 | 0.7981 | 0.8719 |
| 0.0 | 34.0 | 2550 | 0.8313 | 0.8752 |
| 0.0084 | 35.0 | 2625 | 0.8895 | 0.8702 |
| 0.0 | 36.0 | 2700 | 0.8170 | 0.8686 |
| 0.0 | 37.0 | 2775 | 0.8344 | 0.8752 |
| 0.0 | 38.0 | 2850 | 0.8561 | 0.8735 |
| 0.0022 | 39.0 | 2925 | 0.8329 | 0.8702 |
| 0.0 | 40.0 | 3000 | 0.8473 | 0.8719 |
| 0.0026 | 41.0 | 3075 | 0.8354 | 0.8686 |
| 0.0 | 42.0 | 3150 | 0.8451 | 0.8735 |
| 0.0025 | 43.0 | 3225 | 0.8430 | 0.8735 |
| 0.0025 | 44.0 | 3300 | 0.8484 | 0.8719 |
| 0.0 | 45.0 | 3375 | 0.8461 | 0.8702 |
| 0.0 | 46.0 | 3450 | 0.8473 | 0.8735 |
| 0.0023 | 47.0 | 3525 | 0.8487 | 0.8719 |
| 0.0 | 48.0 | 3600 | 0.8492 | 0.8702 |
| 0.0022 | 49.0 | 3675 | 0.8491 | 0.8686 |
| 0.0022 | 50.0 | 3750 | 0.8494 | 0.8702 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
|
FounderOfHuggingface/fresh_gpt2_lora_r16_dbpedia_14_t300_e5_non_member_shadow4
|
FounderOfHuggingface
| 2023-12-04T00:44:03Z | 0 | 0 |
peft
|
[
"peft",
"arxiv:1910.09700",
"base_model:openai-community/gpt2",
"base_model:adapter:openai-community/gpt2",
"region:us"
] | null | 2023-12-04T00:43:57Z |
---
library_name: peft
base_model: gpt2
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
### Framework versions
- PEFT 0.6.2
|
Seongill/nq_mrc_cbr_checkpoints
|
Seongill
| 2023-12-04T00:37:15Z | 0 | 0 | null |
[
"safetensors",
"generated_from_trainer",
"base_model:meta-llama/Llama-2-7b-hf",
"base_model:finetune:meta-llama/Llama-2-7b-hf",
"region:us"
] | null | 2023-12-03T05:15:30Z |
---
base_model: meta-llama/Llama-2-7b-hf
tags:
- generated_from_trainer
model-index:
- name: nq_mrc_cbr_checkpoints
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# nq_mrc_cbr_checkpoints
This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.1+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0
|
platzi/platzi-vit-model-aleckeith
|
platzi
| 2023-12-04T00:34:41Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:beans",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-12-03T22:22:42Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- beans
metrics:
- accuracy
model-index:
- name: platzi-vit-model-aleckeith
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: beans
type: beans
config: default
split: validation
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9774436090225563
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# platzi-vit-model-aleckeith
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the beans dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0621
- Accuracy: 0.9774
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.1238 | 3.85 | 500 | 0.0621 | 0.9774 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.13.3
|
afrideva/TinyMistral-248M-SFT-v3-GGUF
|
afrideva
| 2023-12-04T00:34:34Z | 10 | 0 | null |
[
"gguf",
"autotrain",
"text-generation",
"ggml",
"quantized",
"q2_k",
"q3_k_m",
"q4_k_m",
"q5_k_m",
"q6_k",
"q8_0",
"dataset:OpenAssistant/oasst_top1_2023-08-25",
"base_model:Felladrin/TinyMistral-248M-Chat-v2",
"base_model:quantized:Felladrin/TinyMistral-248M-Chat-v2",
"license:apache-2.0",
"region:us"
] |
text-generation
| 2023-12-04T00:33:08Z |
---
base_model: Felladrin/TinyMistral-248M-SFT-v3
datasets:
- OpenAssistant/oasst_top1_2023-08-25
inference: false
license: apache-2.0
model_creator: Felladrin
model_name: TinyMistral-248M-SFT-v3
pipeline_tag: text-generation
quantized_by: afrideva
tags:
- autotrain
- text-generation
- gguf
- ggml
- quantized
- q2_k
- q3_k_m
- q4_k_m
- q5_k_m
- q6_k
- q8_0
widget:
- text: '<|im_start|>user
Write the specs of a game about trolls and warriors in a fantasy world.<|im_end|>
<|im_start|>assistant
The game is an adventure game that takes place on a planet, where players must
explore their unique abilities to survive. Players can use different strategies
such as collecting items or trading them for gold or silver coins, but they also
need to learn how to deal with obstacles and find new ways to escape.<|im_end|>
<|im_start|>user
Could you tell me something curious about the Earth?<|im_end|>
<|im_start|>assistant
The planet is a large, rocky world with an atmosphere of 10 billion years old
and a surface area around 25 million miles (36 million kilometers) wide.<|im_end|>
<|im_start|>user
What are some potential applications for quantum computing?<|im_end|>
<|im_start|>assistant'
---
# Felladrin/TinyMistral-248M-SFT-v3-GGUF
Quantized GGUF model files for [TinyMistral-248M-SFT-v3](https://huggingface.co/Felladrin/TinyMistral-248M-SFT-v3) from [Felladrin](https://huggingface.co/Felladrin)
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [tinymistral-248m-sft-v3.fp16.gguf](https://huggingface.co/afrideva/TinyMistral-248M-SFT-v3-GGUF/resolve/main/tinymistral-248m-sft-v3.fp16.gguf) | fp16 | 497.75 MB |
| [tinymistral-248m-sft-v3.q2_k.gguf](https://huggingface.co/afrideva/TinyMistral-248M-SFT-v3-GGUF/resolve/main/tinymistral-248m-sft-v3.q2_k.gguf) | q2_k | 116.20 MB |
| [tinymistral-248m-sft-v3.q3_k_m.gguf](https://huggingface.co/afrideva/TinyMistral-248M-SFT-v3-GGUF/resolve/main/tinymistral-248m-sft-v3.q3_k_m.gguf) | q3_k_m | 131.01 MB |
| [tinymistral-248m-sft-v3.q4_k_m.gguf](https://huggingface.co/afrideva/TinyMistral-248M-SFT-v3-GGUF/resolve/main/tinymistral-248m-sft-v3.q4_k_m.gguf) | q4_k_m | 156.60 MB |
| [tinymistral-248m-sft-v3.q5_k_m.gguf](https://huggingface.co/afrideva/TinyMistral-248M-SFT-v3-GGUF/resolve/main/tinymistral-248m-sft-v3.q5_k_m.gguf) | q5_k_m | 180.16 MB |
| [tinymistral-248m-sft-v3.q6_k.gguf](https://huggingface.co/afrideva/TinyMistral-248M-SFT-v3-GGUF/resolve/main/tinymistral-248m-sft-v3.q6_k.gguf) | q6_k | 205.20 MB |
| [tinymistral-248m-sft-v3.q8_0.gguf](https://huggingface.co/afrideva/TinyMistral-248M-SFT-v3-GGUF/resolve/main/tinymistral-248m-sft-v3.q8_0.gguf) | q8_0 | 265.26 MB |
## Original Model Card:
# Locutusque's TinyMistral-248M trained on OpenAssistant TOP-1 Conversation Threads
- Base model: [Locutusque/TinyMistral-248M](https://huggingface.co/Locutusque/TinyMistral-248M/blob/90b89d18fdf27937dc04ab8a9b543c5af2991c7f/README.md)
- Dataset: [OpenAssistant/oasst_top1_2023-08-25](https://huggingface.co/datasets/OpenAssistant/oasst_top1_2023-08-25)
## Recommended Prompt Format
```
<|im_start|>user
{message}<|im_end|>
<|im_start|>assistant
```
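A minimal sketch of running one of the GGUF files above with `llama-cpp-python`, using the prompt format shown (the chosen quantization, context size, and generation settings are illustrative; any file from the table works the same way):
```python
from llama_cpp import Llama

# Load a local copy of one of the quantized files listed above.
llm = Llama(model_path="tinymistral-248m-sft-v3.q4_k_m.gguf", n_ctx=2048)

prompt = (
    "<|im_start|>user\n"
    "Could you tell me something curious about the Earth?<|im_end|>\n"
    "<|im_start|>assistant\n"
)
output = llm(prompt, max_tokens=128, stop=["<|im_end|>"])
print(output["choices"][0]["text"])
```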
## How it was trained
```ipython
%pip install autotrain-advanced
!autotrain setup
!autotrain llm \
--train \
--trainer "sft" \
--model './TinyMistral-248M/' \
--model_max_length 4096 \
--block-size 1024 \
--project-name 'trained-model' \
--data-path "OpenAssistant/oasst_top1_2023-08-25" \
--train_split "train" \
--valid_split "test" \
--text-column "text" \
--lr 1e-5 \
--train_batch_size 2 \
--epochs 5 \
--evaluation_strategy "steps" \
--save-strategy "steps" \
--save-total-limit 2 \
--warmup-ratio 0.05 \
--weight-decay 0.0 \
--gradient-accumulation 8 \
--logging-steps 10 \
--scheduler "constant"
```
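For completeness (this is an editorial addition, not part of the original card), the full-precision fine-tuned checkpoint should also load directly with `transformers`, without any GGUF conversion:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Felladrin/TinyMistral-248M-SFT-v3"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Use the same ChatML-style prompt format recommended above.
prompt = (
    "<|im_start|>user\n"
    "What are some potential applications for quantum computing?<|im_end|>\n"
    "<|im_start|>assistant\n"
)
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```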
|
JoseGarcia2002/submodel-3
|
JoseGarcia2002
| 2023-12-04T00:33:52Z | 0 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-12-04T00:29:29Z |
---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### submodel_3 Dreambooth model trained by JoseGarcia2002 with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
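For local testing, a minimal `diffusers` sketch is shown below; it assumes the repository loads as a standard `StableDiffusionPipeline` (as the repo tags indicate) and treats `submodel_3` as the concept's trigger token, which is an assumption based on the card title:

```python
import torch
from diffusers import StableDiffusionPipeline

# Load the Dreambooth checkpoint; fp16 keeps VRAM usage low on consumer GPUs.
pipe = StableDiffusionPipeline.from_pretrained(
    "JoseGarcia2002/submodel-3", torch_dtype=torch.float16
).to("cuda")

# "submodel_3" as the trigger token is an assumption, not confirmed by the card.
image = pipe("a photo of submodel_3, highly detailed").images[0]
image.save("submodel_3_sample.png")
```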
Sample pictures of this concept:
|
FounderOfHuggingface/fresh_gpt2_lora_r16_dbpedia_14_t300_e5_non_member_shadow3
|
FounderOfHuggingface
| 2023-12-04T00:32:22Z | 0 | 0 |
peft
|
[
"peft",
"arxiv:1910.09700",
"base_model:openai-community/gpt2",
"base_model:adapter:openai-community/gpt2",
"region:us"
] | null | 2023-12-04T00:32:19Z |
---
library_name: peft
base_model: gpt2
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
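In the absence of an official snippet, a hedged sketch is included here; it assumes the repository holds a LoRA adapter for the `gpt2` base model, as the repo metadata suggests:

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_model_id = "gpt2"
adapter_id = "FounderOfHuggingface/fresh_gpt2_lora_r16_dbpedia_14_t300_e5_non_member_shadow3"

tokenizer = AutoTokenizer.from_pretrained(base_model_id)
base_model = AutoModelForCausalLM.from_pretrained(base_model_id)

# Attach the LoRA adapter weights on top of the frozen gpt2 base model.
model = PeftModel.from_pretrained(base_model, adapter_id)

inputs = tokenizer("The capital of France is", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```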
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
### Framework versions
- PEFT 0.6.2
|