Dataset columns: `modelId` (string, 5–139 chars), `author` (string, 2–42 chars), `last_modified` (timestamp[us, tz=UTC], 2020-02-15 11:33:14 to 2025-06-05 12:28:32), `downloads` (int64, 0 to 223M), `likes` (int64, 0 to 11.7k), `library_name` (string, 468 classes), `tags` (sequence, 1 to 4.05k items), `pipeline_tag` (string, 54 classes), `createdAt` (timestamp[us, tz=UTC], 2022-03-02 23:29:04 to 2025-06-05 12:27:45), `card` (string, 11 chars to 1.01M chars).
modelId | author | last_modified | downloads | likes | library_name | tags | pipeline_tag | createdAt | card
---|---|---|---|---|---|---|---|---|---|
chanwit/flux-7b-v0.3 | chanwit | 2024-02-07T06:42:56Z | 9 | 2 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-01-18T17:54:55Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
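Since the snippet above is still a placeholder, the following is a minimal, hedged sketch of loading this checkpoint with the standard `transformers` text-generation API. Only the repository ID `chanwit/flux-7b-v0.3` comes from this card; the prompt and generation settings are illustrative assumptions, not the authors' recommended usage.
```python
# Hedged sketch (not from the model authors): load the checkpoint and generate text.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "chanwit/flux-7b-v0.3"  # repository ID from this card

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")  # device_map needs `accelerate`

prompt = "Explain what a GitOps workflow is."  # illustrative prompt, not from the card
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```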
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
ubaskota/my_mlm_model_masked | ubaskota | 2024-02-07T06:36:05Z | 5 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"roberta",
"fill-mask",
"generated_from_trainer",
"base_model:distilbert/distilroberta-base",
"base_model:finetune:distilbert/distilroberta-base",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2024-02-07T06:10:23Z | ---
license: apache-2.0
base_model: distilroberta-base
tags:
- generated_from_trainer
model-index:
- name: my_mlm_model_masked
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_mlm_model_masked
This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4053
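As a quick illustration (not part of the original card), the checkpoint can be exercised with the standard `transformers` fill-mask pipeline; the repository ID is taken from this card, and the example sentence is an arbitrary assumption.
```python
# Hedged usage sketch: query the fine-tuned masked-language model with the fill-mask pipeline.
from transformers import pipeline

fill = pipeline("fill-mask", model="ubaskota/my_mlm_model_masked")
# RoBERTa-style checkpoints use "<mask>" as the mask token.
for prediction in fill("The movie was absolutely <mask>."):
    print(prediction["token_str"], prediction["score"])
```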
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 0.4563 | 1.0 | 7300 | 0.4420 |
| 0.434 | 2.0 | 14600 | 0.4119 |
| 0.4114 | 3.0 | 21900 | 0.4039 |
### Framework versions
- Transformers 4.37.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.1
|
nry61/sdxl_businessWoman | nry61 | 2024-02-07T06:35:47Z | 1 | 1 | diffusers | [
"diffusers",
"text-to-image",
"autotrain",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:finetune:stabilityai/stable-diffusion-xl-base-1.0",
"region:us"
] | text-to-image | 2024-02-07T06:35:42Z |
---
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: photo of a sks business woman hijab person
tags:
- text-to-image
- diffusers
- autotrain
inference: true
---
# DreamBooth trained by AutoTrain
Text encoder was not trained.
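The card stops here, so below is only a hedged sketch of generating images with `diffusers`. The repository ID and instance prompt come from the metadata above; whether the repo holds a full SDXL pipeline or LoRA weights is an assumption (the `base_model:finetune` tag suggests the former), and the dtype and step count are illustrative.
```python
# Hedged sketch: text-to-image with the DreamBooth-tuned SDXL checkpoint and its instance prompt.
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "nry61/sdxl_businessWoman",   # repository from this card; assumed to contain a full pipeline
    torch_dtype=torch.float16,
).to("cuda")

prompt = "photo of a sks business woman hijab person"  # instance prompt from the metadata above
image = pipe(prompt, num_inference_steps=30).images[0]
image.save("sdxl_business_woman.png")
```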
|
anjith672/gate-boy | anjith672 | 2024-02-07T06:35:36Z | 2 | 1 | diffusers | [
"diffusers",
"text-to-image",
"autotrain",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:finetune:stabilityai/stable-diffusion-xl-base-1.0",
"region:us"
] | text-to-image | 2024-02-07T05:21:55Z |
---
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: photo of gb
tags:
- text-to-image
- diffusers
- autotrain
inference: true
---
# DreamBooth trained by AutoTrain
Text encoder was not trained.
|
yaneq/jan_bYSe9M1l0pUI1xnDnUr2_SDXL_LoRA_700_9d94_700_1e4_2 | yaneq | 2024-02-07T06:13:20Z | 1 | 1 | diffusers | [
"diffusers",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"text-to-image",
"lora",
"template:sd-lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] | text-to-image | 2024-02-07T06:13:16Z |
---
tags:
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- text-to-image
- diffusers
- lora
- template:sd-lora
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: a photo of MDDL man
license: openrail++
---
# SDXL LoRA DreamBooth - yaneq/jan_bYSe9M1l0pUI1xnDnUr2_SDXL_LoRA_700_9d94_700_1e4_2
<Gallery />
## Model description
These are yaneq/jan_bYSe9M1l0pUI1xnDnUr2_SDXL_LoRA_700_9d94_700_1e4_2 LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0.
The weights were trained using [DreamBooth](https://dreambooth.github.io/).
LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
## Trigger words
You should use `a photo of MDDL man` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](yaneq/jan_bYSe9M1l0pUI1xnDnUr2_SDXL_LoRA_700_9d94_700_1e4_2/tree/main) them in the Files & versions tab.
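For convenience, here is a hedged sketch of applying these LoRA weights with `diffusers`. The base model, VAE, adapter repository, and trigger phrase are all taken from this card; the dtype and step count are assumptions.
```python
# Hedged sketch: SDXL base + this LoRA adapter, using the documented trigger phrase.
import torch
from diffusers import AutoencoderKL, DiffusionPipeline

vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16)
pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    vae=vae,
    torch_dtype=torch.float16,
).to("cuda")
pipe.load_lora_weights("yaneq/jan_bYSe9M1l0pUI1xnDnUr2_SDXL_LoRA_700_9d94_700_1e4_2")

image = pipe("a photo of MDDL man", num_inference_steps=30).images[0]  # trigger phrase from this card
image.save("mddl_man.png")
```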
## Training properties
- max_train_steps: 700
- learning_rate: 0.0001
- base_model_name: stabilityai/stable-diffusion-xl-base-1.0
- class_name: man
- training_images_urls: - https://firebasestorage.googleapis.com/v0/b/axonic-looks.appspot.com/o/models%2FSBGA9KzaKdSZWWzsvHMP%2FSBGA9KzaKdSZWWzsvHMP%2FWF2NGBPUFgu9eyaCYAwB.jpg?alt=media&token=97c1e215-0a96-4fdf-b292-9ee0e497ba72
- https://firebasestorage.googleapis.com/v0/b/axonic-looks.appspot.com/o/models%2FSBGA9KzaKdSZWWzsvHMP%2FSBGA9KzaKdSZWWzsvHMP%2FDAk5k1hGzP9q9y0jpGoO.jpg?alt=media&token=01ed67d1-938a-4f60-bc1a-e1b91412b97e
- https://firebasestorage.googleapis.com/v0/b/axonic-looks.appspot.com/o/models%2FSBGA9KzaKdSZWWzsvHMP%2FSBGA9KzaKdSZWWzsvHMP%2FY7nFiafx8co1nK6cnjWJ.jpg?alt=media&token=a1fe8c9a-4d5e-4043-9a82-9304fd430569
- https://firebasestorage.googleapis.com/v0/b/axonic-looks.appspot.com/o/models%2FSBGA9KzaKdSZWWzsvHMP%2FSBGA9KzaKdSZWWzsvHMP%2F6JW19SVZPczh5B2DEqKD.jpg?alt=media&token=0e0dc94f-957d-4b51-8979-0216c0849cf6
- https://firebasestorage.googleapis.com/v0/b/axonic-looks.appspot.com/o/models%2FSBGA9KzaKdSZWWzsvHMP%2FSBGA9KzaKdSZWWzsvHMP%2FVYOVRhojKt30NzjWRXL0.jpg?alt=media&token=5a3a2afb-4b83-4488-92e5-6651f5173cc0
- https://firebasestorage.googleapis.com/v0/b/axonic-looks.appspot.com/o/models%2FSBGA9KzaKdSZWWzsvHMP%2FSBGA9KzaKdSZWWzsvHMP%2F82McawlxnTeA2vBc4bZg.jpg?alt=media&token=f7cfacb2-2186-4005-9211-b7ef762dafad
- https://firebasestorage.googleapis.com/v0/b/axonic-looks.appspot.com/o/models%2FSBGA9KzaKdSZWWzsvHMP%2FSBGA9KzaKdSZWWzsvHMP%2Fcn54hvM4ahi3MzpCQN5D.jpg?alt=media&token=e096f4dc-e7c5-4e14-88fc-a5562d103127
- https://firebasestorage.googleapis.com/v0/b/axonic-looks.appspot.com/o/models%2FSBGA9KzaKdSZWWzsvHMP%2FSBGA9KzaKdSZWWzsvHMP%2Fz8D9WdMIx4mXcsDGAZm4.jpg?alt=media&token=fded9422-eb7c-4757-8c1f-cb436a348579
- gradient_accumulation_steps: 3
- GPU: T4
- duration: 5399.857093095779
|
Artefact2/Midnight-Rose-70B-v2.0.3-GGUF | Artefact2 | 2024-02-07T06:12:24Z | 322 | 13 | null | [
"gguf",
"en",
"license:llama2",
"endpoints_compatible",
"region:us"
] | null | 2024-02-06T23:07:00Z | ---
license: llama2
language:
- en
---
<img src="data:image/jpg;base64,/9j/4AAQSkZJRgABAQAAAQABAAD/2wBDAAsICAoIBwsKCQoNDAsNERwSEQ8PESIZGhQcKSQrKigkJyctMkA3LTA9MCcnOEw5PUNFSElIKzZPVU5GVEBHSEX/2wBDAQwNDREPESESEiFFLicuRUVFRUVFRUVFRUVFRUVFRUVFRUVFRUVFRUVFRUVFRUVFRUVFRUVFRUVFRUVFRUVFRUX/wAARCAGAA70DASIAAhEBAxEB/8QAGwABAAMBAQEBAAAAAAAAAAAAAAECAwQFBgf/xAA+EAACAQMDAwICCAUDAwUAAwAAAQIDBBESITEFQVETYSJxBhQyQoGRobEjUmLB0RUz4RYk8UNTcpLwB4LC/8QAGgEBAQEBAQEBAAAAAAAAAAAAAAECAwQFBv/EACoRAQEAAgICAwABBQABBQAAAAABAhEDIRIxBEFRYQUTIjJxQhQjkaGx/9oADAMBAAIRAxEAPwD8jABUASQAwCewAgkAASiABOPwIxgsmMeN0ATLLDXsUx4IyFXcccbomM2vf2KqXkthPjkDWElIs2c+8X4ZpGontLnyRdtca1jlmc6c4b7o0pvE0b14v4Yy4e6ZNta3GVGvCdJ06q+PtJ9/mdVpYupJSVVKTeIxZEbGnVt4NrTNrlGFGvU6fXUaizBPO37onv0ute1bq2nSvK1OacZRer8C9K5cEqVdaqT7f4On61Gpd060nqjhwb8rsSlQua9XMHGLjsvDG/01+Ou3rztYp5de3xtJbyiv7opewpXE9UGpao5Ul3PLpXVSzq4i3KPeLOyNSE/4tBLnMocf+GT011Xn1aChrym2/stdn7nPg9K5nGrKGjh92t8nJVp53Wz/AHNyudjAlLKyuSYpOLzyVTaexpldvK35Ky4NNSqezKSWERVUWzsVRL2CIwSlsME9gKtAkARgEkFDBK2BbGQIIJABENAlMCYPDJbeSrZblAS8tFoJvYhfZLwkRVlFpnTbx+L4uO5zueX7nRb3MqE86YzWMYkSrJNup0pQnHK2e6fZno2r0x/E8qlcTnFQlLMVLKXg9CjL+GuftGa6TX0+4+jtJuhTqY2TcsmXX/o5WsHc9SsM1IVtTqU/vRUudPlf5Nfo7e0pWVOhByp1OHDlSflN/sfUTuKdWhpk1GEo7Sclg4+q7e4/KJwtn0WlVhSqRuFWcZzTzGUMZW3n/ByUaVW8zG3bcY74k8YPtPpLbQtVb3NGPp+pKUZVdOYxn2ylt5/M+YtKztepS9NwzLDjp4bWcPf8zcrHjHDRoN1f42tOONl2Po+uu1s3Sh0elOhTlTfrVHvKe3Gp+V4PKpVIVrmbk/iknnfO/Y+voWtsun2tStb16spQVRasJU1y3LPZEt7WST0j6O9FXT7OheXLUbi6y3lcLlROD6Q0HG+uJSwnjVh/JH1d9dU4dLpzor6y9cYxalhJ+77I+H+k3WJ39WnB1HKVNYm4tac90scok7u19PnbjGpnFUi1h+d0bVp7+5zzlhnWOWSuJYk0tvJlJvB0JzdGUnmNLdZa2b8fMwlLKwsGmGbjJJFNLybJuXL4Jik034BJtzyi5TSI0uU/ZHQliLk+4jKMKEm+WNmnJUTk9jPS8m0e7KuW7KyrgS2SRKe+WQ92ETF7GtDGvMjEtF7+wWLVJepN+DBrc1UdmZtbgpjYjBJPBWUYIJyFuAe5HBL2IxsBHJoto+5QvjYKzaGCd28JbnZbUEvjb3ISbUoW+HqqLfx4OypVjThqk/w8mdSoqeMLVJ8RRk6qoy1y/iXD4XaBlv00cI016939r7lH/JnGlVvnOrKaWNty1vbSrzdW4bfs+52fDTg1BJLOcLyTel1ty06saFX0E1pSy2+7IhWpq6qSa17fDjjJRQdeTcktuWzBzVNYgXSWr16s5LTN4iuy7nM25P8AsWSc3lv8SzxFbclZ9pjS0x1N4f7FW8vTHdj4prxHyS5KK0w/+3kojSqcucv2IlJyeWyBnARJUZyWitgK4yTjDLrgowIYwSAIwTjIGrwBD2eAl5IyMgWcuyKgAQCQERgEgCBgkAQSgEBCJAAAAAAAAAAAAASmQAL7P2fkhrHP5lcllIKhr8RnBfClvHZ+CrXnZgSpZWGQ1j5ENNEZYF41XH3Xg64XacXHSnlY37HCE8PKFm1lse5RuKUYRjGXzjIi8hCpSTeH3TPKjUzs+TTXJxUc7LhGPHvbp5biilKjJPlfud9re06UZLGlyzu9/wADjk8pJmTg0vh/Itm2N69OqpSjLM3zj8ytScqU41IYimuEZ0auFplx4ZtNRnRwu3AX36a0/wDu05UWoV4rLi3yjnllSaktL8PY0co0qUZQjiSxhryRVrOtiFeKjU+7NdyRawlDO65/czwapvVpns/JE4qS8M2xpknhkyeSUuz2aKsIIs+CFyWktgKplsZRQut0CISDW5LeGMZQEdipZ7FSiS0diIomS2Io1hlWWTysFXyVAABAtDnBUlAacJomJGchbMjS6xk1TMluy/AHRQ3mezSoqEYxUlKWrdxlmJ4lvLTUR69vUThHCec7vOzM1vF9N0erV9OVvThCTl/Ehr+40t/zXY9y2uqcunVKUY6ZyTi4PZxljjP7HkfRyNGpPVKm3UhunF/F815PXvqKivrdOnGSl/uxg/tRX3l7rwcb7eielrzqtF0p0acfTVxhy9RrQoJLnPc+YdK0p9duqtGlCVrGTVGK3i33a84+I6Op9EuOo1qFtTqU6dGSc3VqT+088RX5fLJ87Vq14V6UaadKVPMaeY8Jbd+O4npPT1bWtS6f1qr/AA0qkpJ0XOPGedj6Snd1Pq9NSuZVa8V8epY0/wD7B8ZCpddToSoOrqqWsHUpqSb1JcrJ7/TI1brpHqVptua0qUZbtds+/Io73dq+oVKKq03GG9SpP7Ckvu++T4+7gnUefhcnu1ukfX0Z06HwQjCOIuMI5y/wXC+fJ8lfzcqspacZ7FhXl3VNxjGprhJTztGWWsPuuxxVDpnOLTymn2aOWby2dI45Xakpy0adT05zjO2TPJeRm+TbktrW+OPcKTK4LRaSCxMpbJN7FZtPOnLSKvcrxlEXaY7iokI8kyWwGWCeBgLncrI/hXuyE9sBvLyyANYPMWZvktB4iyreQqCCeSCoYyyVsQtmXis5z+AFcBlm0kU92QTFZeWWeXtEqk2zeEARNKmopeXyzVyaemn9ru+0Sr2T3xFcyM251IaaUWoZw2u5GvSYpzq+lQeqcuZs1oUHbzdSssOPCfctZ2/o3FKUn8Tz+xW5bpVqkM6oOWcPcn8L/LS9q4jSlRnJKWXhGHr4Tlw328szuKym4/Dp0rCWTDLkxIXLto6stGnOF7dytOm6knjhbtvsRhLPctSc23GDwnyaZ9+yfwTcU8siVLQlKo9393uaSnCjLMPjqd2zCUnOTcm233ZCrTqOe3EfCK8EZIyVEtkckqOfZF0kuAKqPklFiGAbKhshsCSMkABkMAIAAAAAAAAAAAAAAAAgkAAAAAAAAAAAAAAAAAAng0U
01hmYCtHt8vcq4Z4/IhSa+RdYfGzAzaBo8PaSw/JWUGvkBUvCo488GYA6cqSygYKWODWE88kXaZRUvmRCbpvD4L7IhpSQVfWpLSkscl7vFSnGfC8I5nFx+RKllYfHgaNkZtLRUWV7lmtK8x8lpZqNY2S3z4KSko4lHeL5TAlxUluZuLXJq8OGYbr9UQmn7oqWMuGWbyTOGN1wNnBYW/kIzfJeHBVl44wCIlyiy4Kvkt2CqvkglrcRg5tKKbb7IqJixJ7HbS6NeVIavT0r3ZnU6dcQeHBmdxrwy16cie5L3JnTlTeJLDKpmmQABAkJZaRacdMmucAQnuWTKkoDotY06txGFWsqMHn42spbFE9zPBZPHcml26KUlqR6drNKm8nj03iaPVsoylS14elS5xsStYvoul1ZRqUnFtbPddtz6CV3NZnnVq+0n95e/v7nzXTqsVVprxk9u8qQoQS0ynGclFKCy9zjZ29GN6cXWrypd0YxrwmoUfijKUtMksY3+eFz4PDt4QoqNepXcquMJSw8fqR1vqlS+UKMKnw5c54XMm+/yR5EZKrS3cUuMY/uWTpm5Tb36coTqp0a6p66eioo088vfvwfQWklC1p2kYtxoY+3HGr3PzyUfTllPGO8dj7HpHWPU6JUpVKWqvReYSSbc3/42Ys0sy29yvUdb6pN6cqWnEYqK7+D5Lqm1zPtsz6ONzFwpTpxeFNS0uOMex83e1Y3fUlCT9OMn8Un2Xd/l2EMr08avHSo+5yyeGdt9KE6050FP6vqehyeXjtk4Kn2tjrHCqylhlWyZQcX8Sw3vuUyVlaRGcIh8FWwJT3HLKjIVrBZkJc4KKeCNbYNpk8FSG8jIRHcnBGRkolbIgnJAAh8k5I7gCU8EMlbMBjG75JUXJ+xKjl5ZrHCTIulUlE0TSi2/wDyZ6sSTfBdJycZNfDlbEWEISuq0aecJ/obfFZUFOnNNzk0012RWrJUrmThHS1tgxq1fUilN7ReUQ9Np3HqNSg2nF9zllUw3h5fllJTctlsvCIS/E1pLUpOW7/MnKWyEnnbuTCpGmntmXbIGiopR1VXpXjuUnV1bRWmK7LuZyqSnLMnlkZBb+DIJw2W08NceX3CKJFlHHPJbaK2/MIACG8ENgTq8blW/JDf4AA9yAAiQAAAAAAAAAAAAAAAQSAAAAEEkEgAAAAAAAAAAAAAAAAAAAAAFlPs90Xi9vh3XhmQTwF20cVLjZ+DNxa5Lqaf2vzLOW2+68gYhbGjhneJRoC8ank1Tzl9jmLRk48AldBEYKUhComuN/cvGUU98kaUy6Sa+0mTRcGnGpvF7p+C84rLXYxlTxvH8gVtbQ1xlGKcpRTkku6XP6FJQa+LGnO69yKcks74eDSVOpUpQS5jnHuRfcZqWrbh+CGsccBxkktS+TJzjkrLJ4JWMF3HPBTOCoPGScojLYyVGtCkq1VRTxnufR2lC0sIKcsOXlnzVJuLyuTWVSUvtSb+bOeUteni1j3Z2+pfWLeWUmsHn3XVqcc6FlniNmcmZnHHXLnumleu69RyaXyMiAdnit3dpBBOQh3JzuR8ycgSERkAXLwai91lHVbWttdW8Gq06NVZUtUdUG8/mtsFbvptzZJSqQ1U3xUg8xf4k2umWU38PHyPoOkR02M1KeHLeMX3Z85BNyxz8j2LTqd1Yzi9UYLGyUNS/EmUbxunq2tGp9ejKnCTyn8KWcHsX/VLeVvQjCslXoVYyUZrD25PLtrmpf05XE3TguIThJwcpey/8Hq3/SKkrTT1pQeIfDfUd/Tf8s/88fI5ffbr9dPlet3NO5vncU6XpuW84x4cvJw2lldXNxRo0KT1Vntq427/ACNPqlf659Uj8VRy0aecv/B9VYdKo9Icpep6lx9mVTwvCNWyRiTdfGXdGpQu6tGoviozcZafKZ9X0e+p2fQqFOGiVxOqqmE8vnj8kef9IenTuLmV9bR1qazUiuc8ZRwdIpepV9WpT1UaPxTzsn7ZHubJuXT7CtfO7bkqThNbNKWrVvsfN9RoKNXVOaalvhLOEezO/q3lbmVZuOIU6ccRhtxhfkeP1OnWVROrOnSkk805VFqXtgmO2srNOGhVowVShcJ+hLOZQXxR8frg8uTae/J1TSbeMnJP7R1jjVWyvc6bOxuL+uqNtTdSb7Lhe7fYvf2qttMY4bh8E5ReU5eRtHI2QHwQUGyGySoRIyAgJIJIYVADGQi2duAn7FcjIF214K534K5JScmRUv4uxrTjhZwIU0uTdYe2CWtSMVFZy+Ck5rOy2NKy2a/Yh0k4RYLEwpKSUm+XwXrNP4Vslu2ZuelY8GLm5MCXJQ2W7Kby3YaxyTnCKyjGA34/MhyKt5AnPggAILcukvGWQltuTnbC4CrYS3e7IcssjtvsiHLwBL25K6vwQyQAbA7AIgEgAAQBIIJAAAAAAAAAAAAAAAAAAAAAAAAAAAAAQBIAAAAAAAAAAAAAAAIJAAtr4xsyU1LaWz8mZKeGFWlBr3RUvF74/cmpoUIpRal3fkDNMvGp5MwB1KeSTnjLCZdTzx+Q0u1pRzxyTCvOnheOxKeSGs/Mi/8AF5y1WzzypbZI0aYRcnlPl+Ck25NOSy+7L606UoLv2YENafstNexWS1IODg0/JPfw/ARlwwXaz8yj2NI1hwWMlUY1tk06TOSLzl2MyckMrGWWwABkAJAJDBOAALKOEpST0vK2KnVGS+pTUVvKcU9lvhAjo6ZDXRqb8SXPyPTtq87eU4SxKjODUqcllSPM6XPRKSnOMaUuU5LKfZ4O+f2MqWpexitxw39CnTnGpbwdOPDim3h/ieh0zRXpKctktt++DDHq5jjVnbBs7Gt0+UYV/hqxjp0KWdC5x4yZt1NOvFx3PL109qlC2VtUUqSVLnOrB6Np1y2VlO3q0ldU6unVqqaZ4XZ+UfLVbmfpQpbbZe75KwqTynPd8Js4yX3t9WzismPi9KhcK26zK5jQpRjxogto/JHVO4U5Sys6t2eLKrL1ZPVnc66NSVSkpx3jjdpmpu+3j+Rhhh3g6frDjJpb/Fnd8HFa2cKnWYzuElaznqmtWN/BhO6dN/1Mx1es3qi5ya+HD3i/J0k128nu6fpNO96R062/7WVPW1hQpfaZ8JfW0aupSrcSk90ts+WUjXq06UKbynGWcNbomvcTlTb+2vEuDncrvp9Hi4OPw3n7ctHpcZJ+pWwktlBZz+ZwdVtqdC4gqEZqMoL7TzvlnrUeoemszor4Uvs7foc11UlUdXXJNSmtl4NTLLy7Z5ODh/t7w9tqfUHTsXQt4KlThFRSjy33k33Zxzip0pwSTUlyzVRgqWtceCs28tRi2dXzXjSjpbT5WxXgvUeaknzvyRg0yqQWaKhAAAAAFCpYhoCAC0I/EshExg2axSivcZxsiG1H3ZlpbZLMiYKU8NPCyUjD1E5SePYs5enBLgKtL/cltsZzqvSVlVcv8lcbbsaNq7zZKxEsuNiknuVEN5ZDZDeSMhkYAAAJE7IBjJOccfmyucjIVOfxZDeQAgAABBIAAAAQSAIJAAAAAAAAAAAAAA
AAAAgkAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAgAkASpeSCAL4T4K4wETnPIVAzgYARdT8m0Pi7nKXjNx4Cyuj03jLf5FZrDTT3IjNaXlkSecNEVvO4hVhTjKGmUNs+SlT+LNYWH7GeMkqThNSGjaM5XuR8zZRdSjOWY7PsYx4ELENEFiDTKCSCQAAAHsdO6RbXlP1KnUIQXeGh5T8bnjnX0+r6dwoviexL66WPo/8Aprp9ShH0LipKTX2lJPH4Hh33S6llOayqlODw5Lt80ejSqToT1QbUvKJubxVaTjNZnvlp/aT5TRieUrXVfPs6+nRoVpVKFxVdL1EvTnjKUvdHNWp+nUcVuuzfg9/on0etuquMo3FWE4qMpRnTTj77pmrZIzJ2459Kjbwq+pcRlUWNCgsqS758GEINZ01En+R7nWeiXnS6cvq1xGpRlzGfwz38J8nJR6ZTdCKlNOot5Si/0Meck7r08fx8+W3xnpjYS13CjJ7rx3N7i7U7iacnJ6uX3Jq2dK0iq1OLS7PWZqzUo+p6j+J5aayZuWN7ejDg5uO+OLGvPOM5w+Gu5NOqoR1L7S8pFaleM4ypVYYnCWFjujm9enSbT3fsXW4tzmOW9tZXTitK+0+7NKbuKdOVSgp6U/i0v+xwwlOvVlKK+JJtex61rb06NPMVqqS5nLfHsi2zGOUwz58uvT0alKzuPTk4SnpW7hLaXzZ59SjOzdRqP8FvMZJ52Hryi3TcpY8Q2wXgqNahOMnKE0vhqqWWzluz29n9rH/wmqj6x9beYxzUS28GlCtTmnRqNU5NrEm9jyKV7Ws62qKi5Re+eGb3PUfrkFmhSjL+eK3NXD89OWPyJrdvf/66rmMYSkqbVR4w3DdfmcEm28R5eFuVhWqaNGtqC7Z2Ma1bE9K3Xf3N4zTjy8ks29Gdpc0paK1OdJZ+1OOxalYqtOE69dvbLhGWF+PsRDrVednC3unOtRhLVCL3eMY/FJlZ1s13KMnKLisaeTOVy9NcGHFryy7OoWcqs4uk6Tx42OK5s1b28JuqpVG8Sit1HxudcLrEWuE+7L64aVNJJz5TWxMblj07cnHxctuX3W1p9HKMFQqdWvqVrGqlJUk8zcXw/bJr1HpnQKNFStrmvJqTTa3z+aPOqap1HiT2eM5FSEqkYrXt7nWS3vb5ucmN08ypFRliOce5Q3uoqNdpNNJdjBm3IBAAMgkASti6ZRIsv0JViXLHBZLDyyHBN5TwkMuey2RGk60otIrLMn8TJUcEdwKtYQ4W5MppIycmyomU+yKcgBAABAkgAGwAAAAAAAAABBIAAgkAAQSABBIAAdgAAAAAAAAAAAAACCSCQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAQBJBJAEgEATkAgCQAAyXT2KEoK0TJM8kqQF1lZwyE8JruEw1lBUrGn3KtYGdkidWVgIgDAKgAjSnQq1cenTnLfHwxb3AzRZbPK2ZtOzuKeddGpHHmL2MklnGUvmwOyj1Fwg41Kev+rVhnVa2l51OFSrb28vShzNvC+WfJxUKFGcZKVSLeNtMhT6hd2sfRhXn6cX9hv4V+Bm/ws/l6br0+n3Lo1vTr5p6HU0J+k/bPPg1X/Y0fXs5yp1IpP1ae2d/2PEU3W1S5ly8s77eUVbQgq8pZmnKDh8Mc8bmMsXbDOassK15d3txGpXrzqOXw6pvZHXGn9WpOcqm3MY55XY5brT6+mC+CO0UuyMo15Sl6VTLi46XnlGbN+np4M/CdvSoXEKuiE5xSb3bbzj5G9KpSdR0JcptJvwePOjRhhRnJ+yWCdU1JST+NvhckuO3bHnywvbe4tY3nUm1OUYxhjMe7X/k47uxVBLDcm2/iOyjWjazl67xKMdl5bOerXd5JRgm23sjWO5/xw5rhZbf9qmycI2kkk/Ucvib8GquZQlFY2WU0+5nTsbuNRKNCcs8qKz+x2f6VdzlGPoNN9nJIWRMOXKYzXWnG6sksNtJkQrOUXh8vg9X/p/qUqcnHp2N9pKqn+hw3PT7y1/37arTS7uG35jprzt+3HWt51YN04Sm48qKzsZVqNa2loqRlB4Tw+2T7f6J2k4UpVWt626lqWyH0g+j0ZwrXt1f0KEP6o/pnuyy/Tz5zvcfB+rLeOce56tDo0b/AKUriym5XFL/AHaLeX8zy6/oxrr0ZSnFLdtcs6bfqlxY11VtVCmvCXK8M3Z+OW/0hUWmlDDbjHTj9ysf4Uu8WnsyLq6hXqzqUqcoSb1Zb/8A2TGncOOVUWpZz8ieLtjy99uyr6U4pwjFyxu12M9TTWXx5M4TUt05Y9zacJ1KcpRjGSgsteSenW25dxvTi01q7+RVqRoxbknjwcNJxnFJbS+ZSpqhJxnybjx27Z1Ja5yljGXnBQ9u16RbTt1VvLl0nJZjCOMnnXdrToSbpVo1Ids7S/ISypZY5CSAVAlIJF0t9hVkMYwG+yGSEtzKrJPGGXhEhbETqqK25DROWGYyn4Kyk5MgrNqGwSAygAgCQAAAAAAAAAAAAAAAAQSAAAAAAAAAAAAAYAAAAAO4AAAAAABBIAAAAAAAAAAAQSAAAAAAAQSAAAAEAkAAAAIJAAAAAAA7gAAAACJIGQJBGQBZPBOrJXIKqzIGRkgnIILLgAd3S7+dhdwmpzVJv+JGL+0vkcS4BfaS6r7a8rUbmhFW1aPqT+KEs91+x4Ver9Yh6dw9STf3U8fI8qlXnRknFtY8HZbxldRm4RlpjzlpJGJjpu5bZ3vTJ2jTypKSUklJN4/A48N+/wAz0ouWZQhFSWM6orOPmc2r06mqUU87P3NMsfSmpKOluT4S3yei7OFCk6Uqn/dpapxctopfd92dnSalJVXVw9cHiC43857HLfZubpyo0oJy7wb/AHZm5d6dMeO3HyYSnUlFS1aZd2iyam/i3ku5VqtCa9ZOce+Gn+xte21GOmdrU1U2lnPKY6rUuWP0pCEpT040pd2bwUKU04NPHcxtaHqt0oL1Jya4f2T3bTpDofw4uVOct5SgsP8AAldMc79MrTpVW+k/Xjopy8r4n/g9q06PRtqWj00t+VHL/Urbwla3EI1rmUo1HiE2kt/5ZL9mexO7hRhvJbENd7qlKyi4Ll48siFKnOo4/HSqw7Rlh48+6OVfSGhCrplLZrZnmXfXIyrr0niaeYPw/HyZNG4+lV7O2TVw9cEvtJbr5r/BSv1W2UNUZxqJ9vK9j42r9Lq9ZOFWhGnJZxUi+PwPIl1CrKtKaxFT3cY8MsxrNzj627uLKrHVGjRU4yym4bP5iMqHULadKh6dGbW8Gk0j493lTU3q2fZiF7UpVY1IvEo/qa8WfOI6l0u56fWkq9LCb2lH7LOetPEtGPsrT8z6H/XFVpqlcx9Sk+77Hj9StFRqetReqjPvn7LLP5c8pr046TxJe+x3xs4uVOVTdSfCXY4qEVOTUtlzlHuyrUaNG1+JPdJruZztnUej43Hjlu5fSfRt6clGnCGnG7bN6cI+luksS+F45yYYjLVUjJJPbBySutGmMXF6ZasLhHLx2+h/cxwu7FqnS4urKVNum
48vsjguY06NwnKs67TTcdOMrxk3r3FScnVqSbXhcIw6i6VWqq1BqUZxWV3TXlHXHf3Xg57x/wDhiyu76td3Eq02ot8RisJLwc7bcsvn3PpOn9ItKljC4dSVOrBZm5uOPyZ49zJ16k5t68cNrlG5Z9PLZftyYIGQVlaPJZLBRFs4I1E8shtRKueODNvLBtpKo2ZtkAIEkAIAAAAAIBIAgkAAAAAAAAAAAAAAAdgAAAAAEEgCCQAIJAAAAAAAAAAAAAgAAAAAAAAAAAAAAAAAAAAAAD1ra2pdVpSw1TuoLfxNeTz7i2q21V06sXGS/UWtzO0uYVocxfHld0fa1LW2v7aMpQVSnNaovwcMs7x3+H0+Hhx+VhddZT/7fCYB7l90CdJOdtJzj/JLn8+54s4OMmpJprs+x1xzmXp4+Xgz4brOKgEGnBIAAEEgAAAAAAAACCSCQAAAAABkkgFFsklSQLLYkqb0Iww3N4fYCPSzwjW3q1ZU5UacZNJ5aRjWn2T29ha1pUK8ZweH+5Lv6ax8fKeXp6fSqV9Rv6dS3pTW+JNrC0vk6er2jddSlHLly0/BvadepRi00qdThNrKTOfqPX6lzb07abhVjSlmNTCU/GNuUcJllcu49vLw8WOHlhlty0Zxoxnp2WcFnWbpPTp+TXY417bolSlFYxlG7Ns453GaiJfHJ5Wn5HZZ0LapRrSuZyU4r+HFczfg48LZt8EqeiEquMT+zD2z3/I1pyt17fR9IUdLnphCo2klBbG3/UUKN1OnWWuOtxjOPZHzdO+nSxoeNKwvmZUbarXp1ZQi5Rhhya3wPH9Z8/qPa6h1iNeTj917Sx+jOOp1SvUoaZVG2u/k89RyiJJxafZ7GtMXKtJVpvuVU21u+N0Q1sQXSbTKeubcuX3Ixp+RXuWfGewEMqyYc6Xw+GSlmTXcIQl918PY1pV9MXCe9OW0omMlpZXO7C7TJSozcVL8fKNbW7lSnJOMajmtPxdjNaZJas7ePBVpRr/DNNJ7SM3trG3G7jbtgtGLk1jZZxlkqMV8Uk5R5xF8kSm6q1YUYriK7GXeSfbqq0M0ZYq05pLhHPbKhC3uKVxF65x+ColnDMnVlGO0ouPginP1HGKjl+4ks9ryXDPWppnVq1VTjScs04vKS4yRB1JpxT27tnXGzdWhUnConOPMGt2jlp1VFNS2Nzt5spYh289ajBa2+FFb/kVqUqlJL1KcoZ41RaPe6era2i6larD1ZLjPC8Gl/wBUhG3/AIUozcfsxnHMc/iTd30amu3zSeEQ3ktNucnLbLedipWUMAgIAAAQTggACQAAIAkAgCQAAAwAAAAAAAAAAAAAEASQCQAAAAEASAAAAAAEASAAABAEgBAQSAAAAAAAAAAAAAAAAAAAAA7rHqNW1klCo4pfkcIJcZZqunHyZceXli+wt+r07hKFZKEn3+6zO96bRuMqccPtJco+Xp15U1jmPhnr2PWVGKpXDen7sn2PNlxXHvF9fj+ZhzTw5XlXVB29xOk2paXyu5idPUHm/rPOfi2ZzHpx7j5HJJM7IAArmEEkASAAAAAAAAQCQAIAEgG9GhGqm/VjFrs0xbprHG5XUYA0q05Uqjg8PG+U9mZhLLLqhJBJUSjRRk2oxTb8IyPVtqVH0Y1EnKUlv7AcqsKrhqm4w9nyVdrOljVFxb2UpbJHt29rVruMktSj9lv+5PV6FGjYem2pV9Skn3f/AAZ21p5denWuZxX8DVjGuntr/wCTmlZ1qf8AuR0rOM5IVeUZ6c5S/Q1neynDTJJ+7W5UVhFwW72Jz77Er4o5X6kKD3I3NjrqC2WWRVqKpF4+7L+xSpTabfKKU3z7liZW/bWKykff/Qayj9Qua0op+pU07+Ev+T4SjH4c9lg/UvozSVv0O2jjDlHW/m9yZelw9vi/pJ0V9O6jUlRg/q8lrWPu5f8An9zwprVBrufrl5bRr3tspRUlOFSMk+6wv+D84690uPTuo1adu3Kh9pd9H9Lft/dDGmWP3Hkp5S90AvsxzyngdzTmpjlhTTjun80XxhGChKUlGCzJvGF3CxVz35NITzVT8o+v6Z020o0fR/hOpBJznKmpOUnzz2R4nW+l/UriNxRUVRlPS0vuv/DMTOW6dsuHLHHbzqhlgvN5wVzybcVqTxNe5p6Kqa5Y054yYJ4aZ0xnKpQnRjnLecLuRfpzxeiqsvZPsdXw0J7Ykn2NYdOVKi6tZ5fZHPWcEt9/GCXtvHeMUrRTepQUURu2uOMbFYz1YUpPBopJJ7rC7hZZR3FWk04Taku5i608TWVibzJY5I3nLjd9kddCliDXpucnznsX0xld1SlXpKCc3h+EUq1ZXM1CnGT32S5Za5t5acqi17or065Vpd06k1mKe+/AZddL6P31aClohBviM5YZV9B6jFtTtajSWVKOGj6J/SS3p7Wtu5vtOWx4/UfpJe3EJUlUjTTTi1TWP1MS5VuzCfbxKkJUqkoSWJReGigBtzD6LpP0bVzCNa8nKEJLKpx2bXuz55NJptZXg77nrV5cRcPUdOm/uw22+ZjOZXrF34bx43efa/XHZ07z6vY0oxp0lpc023KXfc8skg1Jqac88vLK1IIJKwAAAQSABZItSpTq1I06cXKcuEj6Wx6bRsKaqV9Mq38z4j8jjy804/8Ar2fF+Jn8i9dT9ePbdIuLhapL0oeZ8/kdFXpNCjB6pzlLzsj0rjqMVlQWfdnmVq8qrbkzljnyZXd6fRy+P8Xhx1P8q8uvTjSquMZZXlmZ03TXwrCzu8nMerH0+NySTKyAAK5gAAAACASAIBJAEgACCQAAAAAAAAAAAAgkAACCQAAAAAAAAAAAAAAAAAAAAAAAAAAAtJ5Udt8c+SpMpOTWXnCwQFAAEAABBPcAAAQBIIJAEEgAAAIJTaeVsAFS5uWMvOCAEm3hbsHsR223S7y7gp0becoPiXCOI9XonVJ2VyqUqmmhU2ab2i/IpHXS+itacE6txCE391LVg561jc9JqxlUxWoLvF7fkfSTuPTpNZcm+HHk8246vWSUatOnUqU/v1I//wCeDEtrVkcK+kc4wahQSfbfg8ytXrVpylOblOpu37CMKbus1Yy9JSzKNPnHselcztaydza2UbShD4U5TcnI16T28eMdvASwzWb1ZcFz47FUnGag+TSLKrLxk1VRfJ+5CSS8FJRy9iaamdizy4VJdlF/4Oen4N5NulNY8ZMIrcqO+3aqU1Rin6lSpFfM/U7OcaVCMcpRhHGW+Ej826JSVTqkHJYjCWce6PrL61r3kYQVR/V1vOmvvPtn2OeV7duPHca1+uSvr+Tt6zpWVOPpurHZz7vS+y2W/scF/wBZsKfpQi4yUW04pfda3X7HJTsal51GlYzqTp0tLlPS8ZLV+h29LTbQg3Wg260+0Uvfvnt8xj433WspljZ4zb56sqc69SVGDhScm4xb4Ri/9z8DuuoaaNu8YcqK/dnC/wDdXyOk9OGU7Gtj1vo/Z069d1ajeIPjHs9/zPKZ7n0YqabitT/nh+3/AJM5el4/9nppfVL6cZQ1JpS1QnznycH0
muIRs3BPPq1FpzzsejTsqFO/uK9aq5SlFOFNbKK43f8A+7nyHWL/AOv9QlKH+1D4Ka9vJyxm69PJnrHTmluyi5Zd92Ujwd3jPJ0W9WVKopw3kvJzG9tLTXpvjEl3JSL3NzXuJ4qSb8LGDN0sQblJLwjtvq6rVW4Rj8L0ua5kzma1NQ4xvkT0uu9MoY4xv5JcJVNSysRWRKTjLGEmu6RpF04xc98/y55GzxYUtpp5x7o6lNasvdfM5k22WUU/YExtdruNFBY+KXZS7Hn1KlJ0nCENMnLMnnZrwb+j6tFJTerPjOCJdNqRTfqQx5JuN/28r6jjUnH7La+TIe7BBXJIAAgkACCQO4EEkEgAQSAOm3tXcRbVSnBJ4eqW/wCRyk5aeVyK1jZL3Nvet3QsYfwW6lVreo1+xnVuJ1HmUm/meVC4qR+9leGX+uSx9hZOM4pvf2+h/wCs3jMJ1Px2uRnKSSbbOOVxUl3wvYyy2bmLhl8j8Xqz9Sba47FADby27u6AAIAAB3AAEEgAAAABBIAgkgCR3AAdwAAAAAAACCSAJACAhEgAAAAAAAAAAMN8Bxa5QAAAAAAAAAAAAAAAAAAAAB3AgEgB3IAA1pU41G4uahL7urh/j2IqUalJ/wASEo/NGZ6VGtOVCOW2sYeTOVsd+PDHPq9V5wOutQjLeC0y8dmcjTTaawyy7Yz47hewAFcwEEgDqt683BR9SS0cYfBylqb0zTA9FaKyca0NeeJY+JHHcWsqDeU9Plo3pqdScYwxl932Nr2jOMo0XVdWb3aS7BXNb3tSgkozkkvfJpVup1U5VHmXnyY1abppZjhPgmjRndVo06e85PCQGeuWpv7OecHRK4qVKVOhFvTHiMT0q3SKfS4Sq13G4ko7ReyT/uU6P0+rVrfW6jcNPxQa2y/PyJvpdX082DdOTlFtN84fJRwcpuSZ6/UrulWpSpXFnTp3kJJxr0tlUX9SPMUWlkSrMd1WTlhJ/mW1be5E+xGPcuy4/iKiaWX3RSC2+ZrUTdJ/0mUN5RRUd9vUlaPXBfFnZrk+66FVqXfT4Vq0NEnlY+TPj7Czq391St6GNc3jVLiK8v2P0mVjR6fThRt68KsIpJY5OWWnpx36jgrWynNSS0tctdznvKSo2Vb044nJaYpd5S2/ueqo5PNv6VxfSjStkoU9813wuz0ru+3jkw3u60+O6lDVSo1IL+FShGjnzLGp/ujymm5qS4TPqPpFb07PptvbUliNOtJLPL+HOX+Z41naevZ9Qqf+zTjJfPUv8M6y9PPlj/k4G9/wPT6DPRdtrlL90zypHf0WWL7HlI1l6Zw/2jW4ur67rVqNKtJRqt01FPGVn7L9meLVoulc6Gmmucnv9RoVLScKlulpk1HblP5nhy+KvUk3nG2fJjFrkVltEqtkkWlu/kV7vPJ0ckLdkxfxNER5ZGcSYHXBNxi1xkvCKi3N/FJ/oZKo4qON4vlG1OLlUSytL3yzNdsNIlSU5amVo04RruFR8r4X7m22Gnx7nPVp7488MjeUk7aV4xlP7Ol9zJU5PYpUrtzSayorDfk1jNY+Fp+w9M7mVbxpujlRllN8ozvLjTQdNN5l29iYV2lhJfizGs3VrJvDWOcCRrLLWOo4wejoinmnDCKfV6dVvUnGXlGnm04AXnB05OMuUVCAAAAAAAAHYAAQCQBAJAAgkACCSAJAAAAAAAAAAAAAAAAA7AAAAAIJAAAAAAAAAAAACCQAAAAAAAAN7ZJywen/AKa61LKR48JuMk0fUdKvIVKSjLkzldR145Mrqvmq9vOhNxkvxMj7C/sad1BtLc8Gp0mpFvHAmcrWfDlPTzQdU7CrHtkylb1I8xNbcrjZ7ZAlxa5WCAyAAAAAAAAAAACCQIJBAEgACDvsk6lOUUm3HsjhNKNadvUVSlJxku5MpudOvFnMM9309uhY68Sq/CuyK9R6bGdLXSWKkVx/Mjnj1l1WvXjuu8eDpj1e304lJ/8A1Z5LOSXb7eOfxOTC4b9/rwCTa5lSnXlKjlRe+Gu5geyXc2+Dlj45Wb2kABkAAHd0+hO4rQUnKNLO8kj6+3sKNtTlFR1uf2pS5Z8PO5rTwnUkkuFHZI6J9VvqtH053VRx4xkzZa3LI9Dq19Tq1PSt4pRWzk1+xXpFxRtHWr1k8QSxhZ3fY86pcepGmpPeENJm6spQUMvSnnBrX0m+9vT6x1NXz/h/YznL7nJZ9WurJaack44xpkspHK+MIiMdUsDX0m7vbpld3FxFRqVG4x7GqS0ZcsPwc0V6ba5LpubwkvxY06Y5Se12k4Pcyy1LyWa4/sVTzPxkrGWW63nLNlLDWe5zUf8AcT8Ez+yxQ++RZdvU6ffVbG6VSk09sSi+JI+v6X1WHUtaptxlBrVHhr/KPhabxJN+Dt6bdVrSvKpQlpb5TWcnPLHb0YZ6mn6PJ04081XFR/qMaVw7madCOaEdnOSaz/8AHz8+Dwul3H1lxlVmpVW92/u/I+mppKCS4RzdrJI+S+l7/i0If/Kf6JHP0O3X/T/VKz5qJpfhH/k6fpHRle9ft7Wny6cU34Tbbf5GVOtCy6Nf2ieJQqVIY/b9De+tOOv8rXymc5Z29Ei5dQj4SZwy2TS8s9Loz9C4lLS6k3T2gvL8vsdL6cMf9np9UqRp2snLlcfPsfNxp6Y7nbf3NW5uGnKLhB/d4ycksfNLuxjNQ5Mt1i2k9tzN/aZfZtvGxVLKyaYFsiho9kUUQNqOJrSzbKhScZ5ymsJdzmpvTKL9zsm1mlmCmlP7LeM5CyqSTnST0vHy4MG2mlnZdj2KU6VtSU6cVOL2cXvjfdM825xKtOcIKMW9kuxFrllvItjG/DK4xLJZ8FZQ5vHJNKbw3jOhZIxjghylGMktlJYeO5DbqpX0I4UqbXuhO/jhqFPL8vY4QDbStXdaSbSWPBkSAIBpCn6jwpQT/qeDafT7qENboylD+aPxL9CbjUwys3I5QSQVhIIJAAgkAAQBIAAAAAAAAAAAACCQAAAAAAAAAAASAA2jQlJZM505Q5Q2141UEEhk7gAAAAAAAglEEgQiQAAAAAAAAABvbXMqE008GADUurt9HbdS1pKTPQpVYVOcM+Spza4Z2UbydNrc4ZYfj6fB8iTrJ9TG1pVVwhLpNOS4PLtOqbrLPaoXsZpbnC+WL6OOPDyvJueix3xE8e46VKm3pTPspSU+GYVKEZ8oTms9mf8ATcM5vF8LOjKD3TMz6656XGom0jxLrpkqbeEenDlxyfI5/g8nE8wFp05QeGsFDq8NmvaQAEAABBIAAAAAQSAAAAgkgAT3AAnDxlp48kHRRvJUaFWi4xnTqLGJdn5RzhQABAlLYgsk2tkBUvTWWxGDlLHHzNoqKTS3aCtKdtracpKK8vuJW3ozzq1EuprllvH9i1XGhLXl9sdyOvjjrpjOLbyRHJpKLaz2fgzitVXR+pYxljo3XHBONTz3N1FLsUnFJ58lY0pKnKME3wzKDcZPw1g66k4zglHZ+DKnSdaEoR+3FOaXlLn9N/wI3rvpZPVJfI67XaLb5ZxQfjujqpPS0vY
SLt6lnX9C4pVE8JSWfddz76k8xWD87s6NS6rQo0knObws8fj7H6V0ey6b062hG4r1L6olvKq3oXtGPj55Zyz078e9Pjb29p0OsX13OSctXo00ucRSTPClUne3U69ODwvjqRcvtKPdn7XTuOjzWHb2iT7OjD/BzXH0W+jfUVOSsaEJTWJStpem3+C2/Qk0ZZX1p+Gentqnw98eS8K84KooNpz2bXjwfoXW/wD+M3GnKr0m6nVa39KtjV+DXJ8NW6VdW7lGVKTcXh45T+R1mUrhcbO44m3/AFY+RSbqTSWhpfIvJSi8Sck/D2KvX7mmEaGl4KtaYk4l74KvHcCre3uyeEThZzyWUd1n8giqjybtqdJp+DKLzMsnp1r2wFUpNxksNpPsdCxqWrjO5hFaaiws43waYaWWsZBouqGjLX6dznXB1JyklqeUuERKnFrL/Qm2/C62wlhJEao+nKLhlvGJeCJcZLLDSK5sMA0qxxhozIAAAg1o3FW3lqpVJQ+TMyBZtqW43cayxVbkn8T3ZnwE2nnwb1FTq09cWlNcx8k9Na8u/tgQSQVzCSCQAAAdwAAAAAAAAAAAAAAAAAAAAAAADptaDqzWxjTg5ySR9D0uwxhtGc8tR34eO55aaUOnfw02jkvrHTF7H1lKglBLBx31snB7HjnLfJ9vP4mP9t8JOGiTRU7uoUfTqPY4T2y7j4GePjloABWAAAAAAAAEEgAAAAAAAAAAABMXhm8Xk58mkJErpjW6bT2Z1UL2dJrLyjkTySjNkvt6cc7jdx71t1JSwmz06V1GS5PkYtp5Wx00b2dPGXlHDPil9PpcHzbj1k+r1Rktjnr0Y1E8o82h1FSxud8LiNRcnnuFxfVw5uPlmq8i96enlpHi1reVNvY+wnBSRw3NnGaeUdePm11Xg+V/T5n/AJYPlwdt1ZSpNtLY4mmmeyWX0/PcnHlx3WQACuYAQBJBJAEggkAQSAAAAAAAAAAAKBvCpDjgwAGsuchTafG5RS2w+C7aWNK3A2eyXkjbGc7+BCWt4k9yJSUJYW7Mum/tqlsvBSLxV1Lgt6inBpLc3pWqrQSjnX8h67rVly6xTGOTGrhPfdnp07JpfGm/ZGrs6dShKLppRfEl9pPyYvLjHfH4XJlHk0qXqRypJPw2Yy10akZRbjKLymux7Vt0qhLm4lJ+I7HmX8dNZRis42LjnjldRjl+NycOMyz+1FLXNScIptcx2/Q1jnWjCDzFNco6YfFho6PP7fRfRyh/uVsb/YT/AH/sfSRk8bs8rocNNhTf82ZfqeskcMu69eNsxkSWjOcHmE3F+zKpE4Ibrvt+tXNDCqPXH3OTrFK16k3d0Uqdyl8S/n/5M2UlHYJXj1bGhcxxVpRn81weJf8AQHSTnat//CT5+TPq5QxLKM6sFKOcGpbGbJX57nDcZJxktmn2Ikso+g6z0tVk61FJVY8/1I+ccnF4e39jrLtwyx0pKU08YJSe7b3f6EuT75RV7rZlZSnh7LYvNfFkotl7I1dOpGhCq4tQqZ0vzgCZrRPbfPJMaib42M1LZEbphZlZ6byWlORkqmdnsRLhPsQ8R92TTVzt9M6qwn4Mk2joccowaw2iuY5tpp8FcbZ7AEAAAAABBIIAkAAQSCAJAAAAAAAAAAAAAAAAAAAAAAAAJSy8EHb0+2daqnjYW67axlyuo7+mdPziTWWfUWlsoJbGNjbKEVsenFKKwfP5eTdfo/h/GnHju+0pYRhcxTgzZs4ryuowe5yx9vdn1i+W6xBang8Q9Tqlxrm0jyz6WHp+V+TZeS6AAbeYAAAAAAAABBIAAAAAAAAAAAMBbAAawmbJ5OTODWEyWO2ObcFU8ljLtKnLW6Z0UL2dN7vKOchksl9tY55Y9x71vfxmludinGaPlVJxeYvDOy36hKGFJ7eThnw/cfS4P6hZ1m9evbRqJrGTxL3p7g3KKPXo3kZpbm04Rqx7HPHLLjvb18vDxfKx3Pb4+UXF4ZB6/ULDTmUUeQ04vDPbhlMpuPznPwZcOXjQAGnnAABBIAEEgAAAAAGQAGRkAME5Yy/IEYfhmsIbZZEYyaz2L87ZKIai+wUUnnGxrCKjzuVnNLIFJLfKIUHKSS3b7BT8l4U5zadNPbv4Ism7p6VDpde3SncU8Q5a7myuNVeHp0u+0UuTlh1CtJaas20lg6PrdOUNThmXZI898r7fXw/tSf8At3X/AH26Zzm4yqcSjzHOcovTuVWjBR/I831KdKLcZSlOS3S4RFvJwg5RzqeyTJcOnXHnvlJ/8ui3lKFy05KKXOe5yXiceoOLzjlfiatxuLiEZNJJfFLyTf01WnCpTqRUo/Dg1jdZOPLj58VmP1emdnGlTu8XGPTaz57HbKwaUqtrmcEsum/tL/J56qUq84Qk4UnB71Hnc9SldWtom1XlXl/LTjoX5suXlvcceKcdnhlrX79vpukx02FFPZ6FsejE+DuOs3dw0o1PSpriFLbH48nVafSO9oYVRqvHxPn8xrJLMfqvtCTx7P6RWlxhVG6M/E+PzPWjOM4KUWmnw0yM6GZuW5M56UzmhUc5t9glbuOUZyhszWPBEsYKw8m5WM5Pk+r26p1vVivhns17n1XUZaGsdzyOqUNdo13xkuN1Uym4+cxts9iNL7YIzjfsyU3J4TwdnndPTenVepX9O2g8at5y/lj3Z39bvKVarC1toqNvafBD3Oi2qLpHQpVI/DdXvwxfeMP/AN/Y8Oo0lhEjV6j07mwtlZxr2tZzejXOLXGyyl8tziqW86cVKUWoyWU+zPT6I7a6oVKF1LTj7MnLGDosZRha1LS4jFyoycGn3XYzvTXjL2+dk2lhoqt2e5/079YuJK3uKVOEd6nqv7C8+6K9R+jdz062VzGpC5tX/wCtS4XzXY15Ri415UUtJz1Us7Gik4to9nolWyoRrK8lKM54xhbYRb6STb50H0HXb+nojbW8MJreUuceDwsrwiQs0oC6a8RGr+lBFAXUlh5Sz4wRr/pQVUFtXsiNXsEQCc+wyBAAyAAAAAAAAAAAAAAAAAAAAAAAABMVqaR9N0i20xWx4lhQdWqnjbJ9hY0VTgjz8+epp9T+n/HvJl5X076MdEUaajLWkjGrdxgt2eKdv0PWLerVUIttnz3U71JNZLX/AFVJNJnzdzdSrSe+x6OLit7r5vzPlzGeMZVajqTbZQA9r87bu7AAEAAAAAAAAAQiQAAAAAAAAAAAAAAE8AgDSE8G8ZpnKWUsEsdMc9OoGMahopJkdZltJOAtySNLU6kqbymelbXyaSlyeYhwZyxmXt34ubLiu496U41Y45PFv7LS3OC2L0rmUHu9jq9aNWPk5TG8d3Hsz5MPk46y9vAawwdl3bqLco8HGemXcfF5MLhdUABXMAAAAAB3AAAAAAABrCoow0+lTbf3pLcxLAaeo3/wUb3CD5AvDGMyf4CVRPZLYyNKdPVv2KKnoWN0lD0pJZ7M58R8FHFRlmLwZs3NOnHncMtx11nSm2l8Ml+pnGTSxgp9pZfJeGeM4RNad7l5XaVJZ+LKXsXip3NRRisJcJcJFc
I1g5Z9OnnEtm/JK1j/ACq2nrjT+zH7z7ltFHTpjOrlrdtLb5FKsdFZxfKSM28/MsnTlyclmVi87ejjZzfzwZfV47/E1gl1HulnZFIttP8APDN6cLWkacoPMZ6l4ZrGq1JRnB5fjfJhFNQk4y+Jb48llJ61qjJPs4saWZ2OlTTeFLfwzptb+5spZoVZRXePKf4HF6qeMrV7lZ1XTaxnHhksdZy/r6il9Io1oabiHpz/AJov4X/g9K1q6oLG65yfEKopLPC8nZ0/qdSxrLEm6Ofihz+Ri4/jp5SvuVLYhvY5aVzGpCM4tOMllNdy06uEYNOG9SqXEY9uTivFmLR0a/Vuaku0Fg5byWwhXylRaK04PjUzqsLVVriKk1GmvinJ8RiuWYVVquaku2p/ibur6dt6MOZvNRr9Ednm+1+oXf1y5c4rFNfDTi+0UcM3l7GyWPc53sypWlGp6c09Kks7xlwzr+vSneOs4xip4TjHhHAmXbygbe9RrTuK9KnSelYcZ4+9HG8T0ruNTp3QL71KmKV2/ToUm/6ufyR5XQYyueo2sYrFOjL1Kkn+x1fTCvK5vKMozUqEIaUl92XfP6GNdum+tvHo2Oq1ndOrTlGHMVLdfgcs6qa+FfidFCo3JOKTqLhNZz5XyMLl0al1L6tBwp52i3nH/Bpz+mFapKrPVN5fBnjJ6lpZUry2qw+zVhvGa4+TPOqU5UarhJfFEbNfamCCxD4CIGCfAwBAJZAAAAAAAAAAAAAAAAAAAAAAAAAAABQtCLnJJEJZZ2W1NRkmyW6jfHhc7p63S7ZU4ps9r1404nhwvI0YcnLcdTcsqLPFcMs7t+i4ufi+Nx6j2LrqagnueLc9TlNvDOCrXlUe7MXud8OKR835Hz887qNKlWVR7szAO8mnzcsrld0AAZAAAAAAAAAABCJAAAAAAAAAAAAAAAAAAgEgBklSaIAXbaFXBqppnIFJomm5nY7kSckarRrGsnyTTrM5WojJxezKqafckje/xq561hnFWhiWVwdBWUcoTpM75ztyAtOGllTby2aAAEAAAAIAkAACASAJGGzejZXFdpU6M5fJBZLfTFIM6qvTrqj/ALlCcfmjncJJ7rHzG1uNntTDLwk488BIlrJWWsWms5Kt/EUUGW07+4VpBx1fE8LyXjKLeDLTkmK0yTayiabnJXSkksvgmlV01E4vDXcxnNz9l2Rva2dSq1KU4UaXepVeEv8AJNfrV5Lv/FWacqkpPcwSbqvdYNK9WFJ1KdOoqjUmlOK2l7nGpNZNRyttu63y5yko/C15KrZaZbPyZqo203z5LxbccdvlkIvFTbzGO0e/gZWlYe2e3gr8OHhJPykTCSSw3KfsuComMsrxh9yK32ljZYJUKjmpcCUWsObz22CqwlKDynguqkcrPwP24M5PDWMFGsv3Isunq2vU7mxxGLUqec6Xun8j1n1ulXhiDcaj+7I+Wp1ZUnwmvD4Zo5QqNKK05+63/cxcduuPI+toL0qD1fae7Z53UbpU4ZW8uyPLh1C4to6HNyivuyMaly7hqb5XKJMVyzRL4Fl89v8AJSGW8sicnJ7vLZeL22R0cUyelGEuTfQ5e78I2h027qLVG3mo/wA0lpX6g1tx6XtsaQp1K9WMKNOU54woxWWz2qH0fp+nGrfX9GhS5ljfSvGfPsjap13p3S4ypdGtHJvmtV5l/f8AYzv8amH69Tp3RKtC3hRVSNHPxVJPeUmY38+gdObjcyd7WX3FLOPy2R81edXv73KrXElB/ch8KOBR8mZjfut3PGdYx6F91iV0pUra3pWlu9nClHeS933OajCnjnLKUmo1c42SLySbyvhflGvTGre3oSs7iz6Y7inVxGosyivB48pOU25btnVVv7iVJUXLFNLTp8nJjcRKlkPgkhlZOw3CJAqyCXyQAAAAnBGD1bDpErmn6lWThB8YW7M5ZzCbrtw8OfNl44TdeZpDi0s4eM4yfRfUbS1g5SpqWPvVHk8S7ufrFX4cRpR2hFLCRjDk87078/xLwY7zvf45gAdXiAQSAAAADsAABAEgFkmwqMFlBs1jS8mmlRRLXXHj/WcYKIdVrgrUnjYybyTW1uXj1F5VW+WUbyQDWnO5WgADIAAiCe4AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAACCQAAAEqTRZVWihAalsbxreS6qpnMEyaamddMkpowlHSI1Gi0pKSBdZMwQSVzAAAIJAAAACYLMgll4RvCOle4bxx3W1CsqDT9OMn7np0fpDUoL4aMfzPHckjOVTPBi4y+3eclw9PTvuu3F099MV4R5c6s6jzJlXySnuakk9OOfJcvYmA9hvyac0lovchcBPcDaJ69l9Hb28UZaFRpy4lU2z+B4ik0XnXq1XmrVqTf9U2xd/SzX2+iu+hUbCmtF9ZzqtbutUwo/KPf8TzZdOp156rnrVmn5zKX7I81JZ4RZEmN/Vuc/HovpXTYSz/rdtNePSmclajRpzxTnTqR/miZLDeMJ/NGqo0pLOhL5GpjWbnPxVUW1lU0aOwqyoqrGjmD21RkuSf4sMaKmV4nuXV1UUJQnS+GXOndfMllhLjXHOhUpNOSqQf9SI9Sov5WvkejDqU3iEZJvj4i9ScZQaqW9Fz5yljJnbXj+PNVTO0ouPujOWze+UzrbtZxyo1IP2eUY+iqk3GGHLspbZNbZ1WD2C3L1KMqf2oSjnuUxjhgMkNjDNKFvVuaip0oOUu+O3u/AFHKU8R5a4ZvbWla6n6VtSlUljL0o+j6b0C2tqeu8nGtUnjSo/ZX+T141aFupNRwqeYtxWM/gc7n+O049+3zFl9Hp11Gpc14Uact0l8Tf9kerbdEtKcFKdJzePvzyn+Xc3neKtH+BCMqecpwm4HPedTlRpVPUnCMpLEMS1v5mbcq1McY67ajRtNdWCjCnJLG2DyeqdbpVZ/CvVcdopv4V/k8296nUuoRprMKMFhRzz8zzpS1M3MfusZcn1GlxdVbmeqrNyxwuy+SMtSxhL8SCO5tz20S2J4WSurbCIbbwgjSm1pfklbsKm+UtyEzLpMutFTbfBnCemak4xl7NbG0sOJzlhl1du2NS3qrDowg/Y5KsdE2k8rsVzgN55JJoyz8p2Ikgkrmq+SCXyQAAAHb0u0V1dLX/tw3l7+x7dz1SharTnMv5Y9jxKVy7Wz009qlV5b8I45Nt5b3PPlxf3Mt5en1eL5c+LxePHP8r7rqveoVLyW/wwXETkAO+OMxmo+dycmXJlcs7ugAK5gAAAAAAQBICWTanSzyGscbVYU3I6IU0i0YpIltIxa9GOEhwjGrUwhUq42RzttvJZGc89dRDeWADTgAAIAAAAAAAAAgkAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAQSAAAAAAAQSAATwAFAAEAAAAAAABWsElyTKquEZamyBpry61E5yyVyVRbBWdpwguSCQg9y0XhNMqFyABOdy2yWWwsm17ag7itGmpKOe8nsjW8tfqtd01NTWM6o8HJreS0ZZ7iey60t3LIgjUlyzTDVNF1Jrgw9RIlVY9s/k
XaWOqMsolyOeEsPOHgtKqlyma8mfGryjGf2kmWjOrSXwtTj/LP/ACYKtFctr5lvWi1tJMl1fazc9InKLm5RTpt8wlx+BWKaWp8ou5KSw1lGLg0npeF4Zi4/jcy/XVWquq4yyltwjCc9XJEZOTw1hnbQoQptSqxTfh8RMXp0natl053eZ1JKlSW7k+X8j1Kc5WdGP1dpUXtiL3fzx/crTqYTi3JQfeE917nTChBvVOPqPhvjV4zg5279ukx16dHrKjGjOrc4pSynGa+1n9mYXdeagoas1IvaWPh52S8kObTqf9vlz7S04k/J4d3eynN06Uk1w5R2S9l/kSbXK6b3N3GnFU4S1zXLxsmedKTk2292RhJFJS7I6yacbdonPOy4KllDPOxZYS2KyootkNNFnLwE33AiL3Lpbpldk8pbhSbYHRRq6Ki1bxexStiFR4Wz3IccRcnwuPdnXKwlPpMLpPLTeV7Eajhm8xM8E5CKlu0MBgIEvggZAhkEgggkgkC05ann9PBQALbs7AAIAAAAAAAAAF6UNTz2CybaUafdm6RXUoorKskZ9vTNYxeUtKMKlXOyM5VHJlSyOeXJ+DeSACuKSCe4AAAAAAAA7gAAAAAAAAAABBICAAgkAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFyABaWOxUAKAE4KiASCCMAkAQCQUFySIrcnAAAkCASMAQyMbFiUBXBOE02nuJcYRTJGtraW1yXhTX3v0Kwlthl1sVlOIrhEqX4EZAFm3yzKcm2Wm+xXS++y9wI1ZW+5GnPG5ZJfP5k4b2y0vZAZ7xZeMZNZbaXzKy05xHMn7msI4gs7p8kXTqttMUnJLbyTN1KlZenNZxjDMcOH3nhhVmpL90Z01t6EIzpwzUmvfTsa0Lmpqzq+CL1Sb4x7nlyuZKfnwvJnUuqlWHp5SjnOEuf8k8WvLTq6j1Od3N0qOY0uPeRyJKKEY6VnuVll/I3Jpzt2OWXhceSMxRGyJTS7FRLllbIrjyXxq7lXH3Ah4XBXJZoq0BBtFKMdT57IzUe74EpZCkpOWEexO+p0FGg45iopZPF7lnJy3f6kNtbmnGFTMHmEt0YkttoJFEEFmgkEQC2CGBXBBZkAQCWQAZBLIAAAgAAAQSAAAAJbmnqYWEZgNS6S5tkZACbAO4CAIJAAAAAAAAAAAAAAABAEgAACCQAQCAIAAAAAAAAAAAAAAAAAAAAAAAAAYAAnDGAIBbBGAIBJOCioJADGwJXAaAgE4GAAJIwAIwTgkCASRgCYokRWCQBBbAwBAJAEMJNvGC2AsxeU8AUksTxkrLHYmcnKTb5IwRUZNIz8meC2Co0ymSimrCwiMtgaJ4ZEpRby1llcY55CWpgXi0Rq1TS7e4w3suEMZXhruQWjBRntjfybfC4rb8CmPhTzl+xDk0/2QaWrSa+zjBnFYWe7J3b347IzlLLYTaJSzsvzCWle5KWFkrnJReDyn5KtsmHJDTy9giCSCUBeK2DJjwRLkCjGMbssklyQ9wIc/Y1trdV225JIxaITa3WwVetRlRnpkvxI+4iXXlKOmb1L37FAie5ZI66PTXc28alvWhOf3qecSRhOlOlJxqRcZLs0NmrGTQW5aSCQEFWXZVoCpBbBAEAACASAIBJAEAsGBUAEAAAAAAAGAAGBgAAAAAAAAAAAAAAAAAAAAAAAAAECUBAAAAnAwBAJwMAQCcACASCiASCCASMFEAkEEAkYAkAFE5IzkYAVBIGAgASBAJAEAkAAAAAABInAROQIwEi2Q8oAlktjBESwFcAtjI9N6c9gSW+lcE4ICAsRP7LLbYKz+yBm4YSfkhIsxyBUlIthLnkjOOAJSRLwirkWi4PZxfzCybV5ZdLPBTK1bPbyaxawEVWxZZcudiHjOxpFYQELK+XgnYNlGBEiuEtySrAh5l7EqK+YYi34AutiJTUZeRn2IeG+MgPUg+UNKe6/UjdbpJEPPfcC62IyI8E4AgYJwAKtEYNMFQM2gXaIALMXlPD8o1dzUksTk5LxLczZCWWBdyTCIaQQB8kMl/IgCMkMkYYFSCzIAgFiAIBIAhBkjsBUEgCASMAQQWwQBGScjBIEDIwABBOBgCCRgEEYBIwBAGCcAQBgkCAAAAwAAAAAAASiCUAwST+AKKgkgAACACSAJ2GwGCgBgYAAYJw8ZAgAAASkWUE/vYApgtpNY2+t4U1kv9TqeSbXTDTtyNCfc3+p1CPqlQbNMdC8jSvJv9TqMn6lVGzTmaQOn6jV9g7Gr7Daac6jnuQ44Oj6nV8G9t0+UqiU5Rh7sWyNY43K6jgwSo5eD2J9JrSfwypSivbByVun1IVGoL4fmZmcvp1z+PyYTeUZQtFJZdSPyMpUcPCeTf6lV8pE/UanlGtuOq5/Rl3aX4keklzJHT9Sq+xP1WouVF/gNmq5NHiSLKm5PCa/M7PQkvu0/wAUTGhN/dp/hHI2ac0rOrFZUdS/peTJprZnrQUaa+Oo4/Kl/wAnPKjZ5bdWeP8A4jZquahSVSSTnGK8s6J2tGEW1cxk/CRZULNtYrtL3izVWto+LiP4jZpw+mn95L5kaMr7f4Hq07CjNJxqKS9max6XTxnTKWPA2arx4W0Z/wDqY/8A6m07XRHGcv3jg9iHTVGPwaopi4tKlVJy0ya23JtqSx4lOynN4/bBhUpODafKPYdlXjl0lTi37nM+kXMnvKH/ANi7Z081QzwyHGUX4PUh0a6eylFfIv8A6HcvOZwz7jcNV5WlYy+Rs+x6j6HcL70CP9FrrmcBuGq8tx9i0XHGMbHpLo9b/wByCLR6LPHxVl+ERuLjbHkSj8Txx7EpHsx6PBSWuvLHtA6l0uzi3hTl82NxNWvAXyNI8HtfUqMXmEfzwYVLaCbWmLY2aeXJFGj0J2k0tkjGVrXXH7lTTja3JUTd2tbnQ2QoSp4dSnLGeAaY43wluMNex1unRck9L+SZFZKpCMaVNJpmfJ2y4fGW2uKSZC53Oj6tVbxp/Un6nXzjR+qNbcdMNsB9kdH1G4fFJ/mif9Pum8ei/wAwaYaEuJfmVbw8HV/p14v/AEZfmR/pt1/7D/Fkavf05sjJ307e8pxwreL92isrC6qScnTgn4i8DZcZrpxZG52Lply/uL8zSPR7lvfSvxG4zqvPwIx7vg9J9IrLmUEWXRp/+7D8mLYsn8PMlBNbPAwscnsR6XUUdKqQXl6clX0iT5rL8Imdxv8A5HlqEWsuol7YJ9KEd/Wj+TPXj0huKi6zSXgiXRo5y6kpfMvlGfGvGlhfeT+RCWcnsPpFGOMz58h9Iim8TQ8oeNeTGlKSyovB0SsJtJwX5tHW+mpPaqk/2FSxnJLVcvYbNPPdjWXLj+ZSVrOOMuP5nVKzgnvXyYyt6cf/AFX+Rds6YSouK3lH8GRoz95fiaulSyv4m3yDpUe1V/8A1KMdKzjUhpW/xE4UZprdEN5b2AjHuMLyABGPcEt+xGpfygCCc+yGfYCATq9l+Q1MCMDAyxkAAAAAAAAAQSAAAAgEgCASAIAAAEkAQCQACBKIP//Z" />
These are GGUF quantized versions of [sophosympatheia/Midnight-Rose-70B-v2.0.3](https://huggingface.co/sophosympatheia/Midnight-Rose-70B-v2.0.3).
The importance matrix was trained for 100K tokens (200 batches of 512 tokens) using `wiki.train.raw`.
The IQ2_XXS and IQ2_XS versions are compatible with llama.cpp, version `147b17a` or later. The IQ3_XXS requires version `f4d7e54` or later.
Some model files above 50GB are split into smaller files. To concatenate them, use `cat`, e.g. `cat foo-Q6_K.gguf.* > foo-Q6_K.gguf` (on Windows, prefer `copy /b` in the Command Prompt, since PowerShell's text-based redirection can corrupt binary files). |
yaneq/jan_zdRM8UdoamtJ6kdZKNKS_SDXL_LoRA_700_9d94_700_1e4 | yaneq | 2024-02-07T06:10:46Z | 1 | 1 | diffusers | [
"diffusers",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"text-to-image",
"lora",
"template:sd-lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] | text-to-image | 2024-02-07T06:10:43Z |
---
tags:
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- text-to-image
- diffusers
- lora
- template:sd-lora
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: a photo of MDDL man
license: openrail++
---
# SDXL LoRA DreamBooth - yaneq/jan_zdRM8UdoamtJ6kdZKNKS_SDXL_LoRA_700_9d94_700_1e4
<Gallery />
## Model description
These are yaneq/jan_zdRM8UdoamtJ6kdZKNKS_SDXL_LoRA_700_9d94_700_1e4 LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0.
The weights were trained using [DreamBooth](https://dreambooth.github.io/).
LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
## Trigger words
You should use `a photo of MDDL man` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](yaneq/jan_zdRM8UdoamtJ6kdZKNKS_SDXL_LoRA_700_9d94_700_1e4/tree/main) them in the Files & versions tab.
## Training properties
- max_train_steps: 700
- learning_rate: 0.0001
- base_model_name: stabilityai/stable-diffusion-xl-base-1.0
- class_name: man
- training_images_urls: - https://firebasestorage.googleapis.com/v0/b/axonic-looks.appspot.com/o/models%2FSBGA9KzaKdSZWWzsvHMP%2FSBGA9KzaKdSZWWzsvHMP%2FWF2NGBPUFgu9eyaCYAwB.jpg?alt=media&token=97c1e215-0a96-4fdf-b292-9ee0e497ba72
- https://firebasestorage.googleapis.com/v0/b/axonic-looks.appspot.com/o/models%2FSBGA9KzaKdSZWWzsvHMP%2FSBGA9KzaKdSZWWzsvHMP%2Fcn54hvM4ahi3MzpCQN5D.jpg?alt=media&token=e096f4dc-e7c5-4e14-88fc-a5562d103127
- https://firebasestorage.googleapis.com/v0/b/axonic-looks.appspot.com/o/models%2FSBGA9KzaKdSZWWzsvHMP%2FSBGA9KzaKdSZWWzsvHMP%2Fz8D9WdMIx4mXcsDGAZm4.jpg?alt=media&token=fded9422-eb7c-4757-8c1f-cb436a348579
- https://firebasestorage.googleapis.com/v0/b/axonic-looks.appspot.com/o/models%2FSBGA9KzaKdSZWWzsvHMP%2FSBGA9KzaKdSZWWzsvHMP%2F6JW19SVZPczh5B2DEqKD.jpg?alt=media&token=0e0dc94f-957d-4b51-8979-0216c0849cf6
- https://firebasestorage.googleapis.com/v0/b/axonic-looks.appspot.com/o/models%2FSBGA9KzaKdSZWWzsvHMP%2FSBGA9KzaKdSZWWzsvHMP%2FDAk5k1hGzP9q9y0jpGoO.jpg?alt=media&token=01ed67d1-938a-4f60-bc1a-e1b91412b97e
- https://firebasestorage.googleapis.com/v0/b/axonic-looks.appspot.com/o/models%2FSBGA9KzaKdSZWWzsvHMP%2FSBGA9KzaKdSZWWzsvHMP%2F82McawlxnTeA2vBc4bZg.jpg?alt=media&token=f7cfacb2-2186-4005-9211-b7ef762dafad
- https://firebasestorage.googleapis.com/v0/b/axonic-looks.appspot.com/o/models%2FSBGA9KzaKdSZWWzsvHMP%2FSBGA9KzaKdSZWWzsvHMP%2FY7nFiafx8co1nK6cnjWJ.jpg?alt=media&token=a1fe8c9a-4d5e-4043-9a82-9304fd430569
- https://firebasestorage.googleapis.com/v0/b/axonic-looks.appspot.com/o/models%2FSBGA9KzaKdSZWWzsvHMP%2FSBGA9KzaKdSZWWzsvHMP%2FVYOVRhojKt30NzjWRXL0.jpg?alt=media&token=5a3a2afb-4b83-4488-92e5-6651f5173cc0
- gradient_accumulation_steps: 3
- GPU: T4
- duration: 5284.340887546539
|
logeeshanv/Llama-2-7b-chat-hf-sharded-bf16-5GB-fine-tuned-adapters | logeeshanv | 2024-02-07T06:07:59Z | 0 | 0 | peft | [
"peft",
"arxiv:1910.09700",
"base_model:Trelis/Llama-2-7b-chat-hf-sharded-bf16-5GB",
"base_model:adapter:Trelis/Llama-2-7b-chat-hf-sharded-bf16-5GB",
"region:us"
] | null | 2024-02-07T05:46:50Z | ---
library_name: peft
base_model: Trelis/Llama-2-7b-chat-hf-sharded-bf16-5GB
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
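The snippet is missing from the card, so here is a minimal, hedged sketch of attaching the adapter to its base model with `peft`; only the two repository IDs are taken from the card, everything else is an assumption.
```python
# Hedged sketch: load the LoRA adapter on top of the sharded Llama-2 chat base model.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "Trelis/Llama-2-7b-chat-hf-sharded-bf16-5GB"   # base model from this card
adapter_id = "logeeshanv/Llama-2-7b-chat-hf-sharded-bf16-5GB-fine-tuned-adapters"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")  # needs `accelerate`
model = PeftModel.from_pretrained(base, adapter_id)

inputs = tokenizer("Hello, how can I help you today?", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0], skip_special_tokens=True))
```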
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.8.2 |
ZiHDeng/peft-lora-starcoder1B-Instruction-ny8-ALL | ZiHDeng | 2024-02-07T06:07:53Z | 5 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"generated_from_trainer",
"base_model:bigcode/starcoderbase-1b",
"base_model:adapter:bigcode/starcoderbase-1b",
"license:bigcode-openrail-m",
"region:us"
] | null | 2024-02-07T03:55:10Z | ---
license: bigcode-openrail-m
library_name: peft
tags:
- generated_from_trainer
base_model: bigcode/starcoderbase-1b
model-index:
- name: peft-lora-starcoder1B-Instruction-ny8-ALL
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# peft-lora-starcoder1B-Instruction-ny8-ALL
This model is a fine-tuned version of [bigcode/starcoderbase-1b](https://huggingface.co/bigcode/starcoderbase-1b) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0870
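The card does not show how to run the adapter, so the sketch below is a hedged illustration of code completion with `peft`; the base and adapter repository IDs come from this card, and the prompt is an arbitrary assumption.
```python
# Hedged sketch: code completion with the StarCoder-1B LoRA adapter.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "bigcode/starcoderbase-1b"                              # base model from this card
adapter_id = "ZiHDeng/peft-lora-starcoder1B-Instruction-ny8-ALL"

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = PeftModel.from_pretrained(AutoModelForCausalLM.from_pretrained(base_id), adapter_id)

prompt = "def fibonacci(n):"  # illustrative prompt, not from the card
inputs = tokenizer(prompt, return_tensors="pt")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0], skip_special_tokens=True))
```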
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 30
- training_steps: 2000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.1891 | 0.05 | 100 | 0.1452 |
| 0.1244 | 0.1 | 200 | 0.1096 |
| 0.1077 | 0.15 | 300 | 0.1006 |
| 0.0996 | 0.2 | 400 | 0.0958 |
| 0.0953 | 0.25 | 500 | 0.0927 |
| 0.0916 | 0.3 | 600 | 0.0882 |
| 0.0875 | 0.35 | 700 | 0.0867 |
| 0.0845 | 0.4 | 800 | 0.0873 |
| 0.0818 | 0.45 | 900 | 0.0863 |
| 0.0788 | 0.5 | 1000 | 0.0848 |
| 0.0781 | 0.55 | 1100 | 0.0844 |
| 0.0749 | 0.6 | 1200 | 0.0847 |
| 0.0726 | 0.65 | 1300 | 0.0849 |
| 0.0688 | 0.7 | 1400 | 0.0867 |
| 0.0701 | 0.75 | 1500 | 0.0861 |
| 0.0662 | 0.8 | 1600 | 0.0863 |
| 0.0658 | 0.85 | 1700 | 0.0867 |
| 0.0647 | 0.9 | 1800 | 0.0869 |
| 0.0644 | 0.95 | 1900 | 0.0870 |
| 0.0657 | 1.0 | 2000 | 0.0870 |
### Framework versions
- PEFT 0.7.1
- Transformers 4.36.2
- Pytorch 2.1.2+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0 |
shnl/llama2-7b-vicoqa | shnl | 2024-02-07T06:01:03Z | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:manhtt-079/llama-2-7b",
"base_model:adapter:manhtt-079/llama-2-7b",
"region:us"
] | null | 2024-02-07T05:59:24Z | ---
library_name: peft
base_model: manhtt-079/llama-2-7b
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
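For reference, the same settings can be expressed with `transformers.BitsAndBytesConfig`. This is an illustrative sketch rather than the original training code; the base checkpoint name below is only an assumed example.

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# Values mirror the quantization list above.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
    llm_int8_threshold=6.0,
)

# "mistralai/Mistral-7B-v0.1" is an assumed example base checkpoint, not taken from this card.
model = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mistral-7B-v0.1",
    quantization_config=bnb_config,
    device_map="auto",
)
```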
### Framework versions
- PEFT 0.6.2
|
shnl/llama2-13b-vimmrc2.0 | shnl | 2024-02-07T05:57:58Z | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:manhtt-079/llama-2-13b",
"base_model:adapter:manhtt-079/llama-2-13b",
"region:us"
] | null | 2024-02-07T05:56:13Z | ---
library_name: peft
base_model: manhtt-079/llama-2-13b
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.6.2
|
yeye776/OndeviceAI-large | yeye776 | 2024-02-07T05:57:09Z | 4 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:paust/pko-t5-large",
"base_model:finetune:paust/pko-t5-large",
"license:cc-by-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2024-02-07T05:54:55Z | ---
license: cc-by-4.0
base_model: paust/pko-t5-large
tags:
- generated_from_trainer
model-index:
- name: OndeviceAI-large
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# OndeviceAI-large
This model is a fine-tuned version of [paust/pko-t5-large](https://huggingface.co/paust/pko-t5-large) on an unspecified dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0007
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.06
- num_epochs: 10
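For orientation, the list above maps onto `Seq2SeqTrainingArguments` roughly as follows (an untested sketch, not the original training script; the output directory is assumed):

```python
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="OndeviceAI-large",     # assumed output directory
    learning_rate=7e-4,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    gradient_accumulation_steps=8,     # 4 x 8 = total train batch size 32
    lr_scheduler_type="cosine",
    warmup_ratio=0.06,
    num_train_epochs=10,
    seed=42,                           # Adam betas/epsilon left at their defaults (0.9, 0.999, 1e-08)
)
```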
### Training results
### Framework versions
- Transformers 4.37.2
- Pytorch 2.2.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.1
|
shnl/llama2-7b-vimmrc2.0 | shnl | 2024-02-07T05:55:27Z | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:manhtt-079/llama-2-7b",
"base_model:adapter:manhtt-079/llama-2-7b",
"region:us"
] | null | 2024-02-07T05:54:02Z | ---
library_name: peft
base_model: manhtt-079/llama-2-7b
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.6.2
|
shnl/llama2-7b-vimmrc1.0 | shnl | 2024-02-07T05:50:01Z | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:manhtt-079/llama-2-7b",
"base_model:adapter:manhtt-079/llama-2-7b",
"region:us"
] | null | 2024-02-07T05:48:59Z | ---
library_name: peft
base_model: manhtt-079/llama-2-7b
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.6.2
|
shnl/llama2-13b-viquad | shnl | 2024-02-07T05:47:51Z | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:manhtt-079/llama-2-13b",
"base_model:adapter:manhtt-079/llama-2-13b",
"region:us"
] | null | 2024-02-07T05:33:01Z | ---
library_name: peft
base_model: manhtt-079/llama-2-13b
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.6.2
|
ideepankarsharma2003/AI_GenImageClassifier_MidJourney | ideepankarsharma2003 | 2024-02-07T05:45:48Z | 0 | 0 | null | [
"safetensors",
"region:us"
] | null | 2024-01-30T11:28:52Z | # **Not a MODEL, just a practice repo** |
shnl/llama2-7b-viquad | shnl | 2024-02-07T05:32:14Z | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:manhtt-079/llama-2-7b",
"base_model:adapter:manhtt-079/llama-2-7b",
"region:us"
] | null | 2024-02-07T05:31:04Z | ---
library_name: peft
base_model: manhtt-079/llama-2-7b
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.6.2
|
nopperl/emissions-extraction-lora-merged-GGUF | nopperl | 2024-02-07T05:28:33Z | 6 | 0 | transformers | [
"transformers",
"gguf",
"mistral",
"text-generation",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-02-06T20:14:46Z | ---
license: apache-2.0
---
[emissions-extraction-lora](https://huggingface.co/nopperl/emissions-extraction-lora) merged into [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2), converted to GGUF format and quantized. It can be used with llama.cpp.
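A minimal usage sketch with the `llama-cpp-python` bindings; the GGUF filename and the prompt are placeholders, not taken from this repository:

```python
from llama_cpp import Llama

# Point model_path at the quantized GGUF file downloaded from this repo (filename assumed).
llm = Llama(model_path="emissions-extraction-lora-merged.Q4_K_M.gguf", n_ctx=4096)
out = llm("Extract the scope 1, 2 and 3 emissions from the following report: ...", max_tokens=256)
print(out["choices"][0]["text"])
```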
|
AnithaThilak/Cyberbullying-detection-tweet-comment | AnithaThilak | 2024-02-07T05:25:21Z | 34 | 0 | transformers | [
"transformers",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:sreeniketh/cyberbullying_sentiment_dsce_2023",
"base_model:finetune:sreeniketh/cyberbullying_sentiment_dsce_2023",
"license:gpl-3.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-02-06T09:24:03Z | ---
license: gpl-3.0
base_model: sreeniketh/cyberbullying_sentiment_dsce_2023
tags:
- generated_from_trainer
model-index:
- name: Cyberbullying-detection-tweet-comment
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Cyberbullying-detection-tweet-comment
This model is a fine-tuned version of [sreeniketh/cyberbullying_sentiment_dsce_2023](https://huggingface.co/sreeniketh/cyberbullying_sentiment_dsce_2023) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Framework versions
- Transformers 4.37.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.1
|
shazzz/ppo-LunarLander-v2 | shazzz | 2024-02-07T05:23:39Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2024-02-07T05:23:17Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 248.23 +/- 20.14
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (untested; the checkpoint filename is assumed to follow the usual `huggingface_sb3` naming):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Download the checkpoint from this repo; the filename below is an assumption.
checkpoint = load_from_hub(repo_id="shazzz/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
cvzion/mistral-dqg-v3 | cvzion | 2024-02-07T05:21:52Z | 0 | 0 | null | [
"safetensors",
"autotrain",
"text-generation",
"conversational",
"license:other",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-02-07T04:24:52Z | ---
tags:
- autotrain
- text-generation
widget:
- text: "I love AutoTrain because "
license: other
---
# Model Trained Using AutoTrain
This model was trained using AutoTrain. For more information, please visit [AutoTrain](https://hf.co/docs/autotrain).
# Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_path = "PATH_TO_THIS_REPO"
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
model_path,
device_map="auto",
torch_dtype='auto'
).eval()
# Prompt content: "hi"
messages = [
{"role": "user", "content": "hi"}
]
input_ids = tokenizer.apply_chat_template(conversation=messages, tokenize=True, add_generation_prompt=True, return_tensors='pt')
output_ids = model.generate(input_ids.to('cuda'))
response = tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True)
# Model response: "Hello! How can I assist you today?"
print(response)
``` |
debajyotidasgupta/convnextv2-base-22k-384 | debajyotidasgupta | 2024-02-07T05:20:08Z | 179 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"convnextv2",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:facebook/convnextv2-base-22k-384",
"base_model:finetune:facebook/convnextv2-base-22k-384",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2024-02-04T15:27:03Z | ---
license: apache-2.0
base_model: facebook/convnextv2-base-22k-384
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- f1
model-index:
- name: convnextv2-base-22k-384
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: train
args: default
metrics:
- name: F1
type: f1
value: 0.9913113141099743
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# convnextv2-base-22k-384
This model is a fine-tuned version of [facebook/convnextv2-base-22k-384](https://huggingface.co/facebook/convnextv2-base-22k-384) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0069
- F1: 0.9913
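Since the card does not include a usage snippet, here is a minimal inference sketch with the `transformers` pipeline API (the image path is a placeholder):

```python
from transformers import pipeline

classifier = pipeline("image-classification", model="debajyotidasgupta/convnextv2-base-22k-384")
print(classifier("example.jpg"))  # replace with a real image path
```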
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.1521 | 1.0 | 202 | 0.0982 | 0.8278 |
| 0.0664 | 2.0 | 404 | 0.0626 | 0.9079 |
| 0.1053 | 3.0 | 606 | 0.0356 | 0.9537 |
| 0.0432 | 4.0 | 808 | 0.0302 | 0.9703 |
| 0.0552 | 5.0 | 1010 | 0.0114 | 0.9827 |
| 0.0352 | 6.0 | 1212 | 0.0131 | 0.9824 |
| 0.0221 | 7.0 | 1414 | 0.0063 | 0.9943 |
| 0.0018 | 8.0 | 1616 | 0.0169 | 0.9824 |
| 0.0283 | 9.0 | 1818 | 0.0028 | 0.9971 |
| 0.0429 | 10.0 | 2020 | 0.0069 | 0.9913 |
### Framework versions
- Transformers 4.37.2
- Pytorch 1.12.1+cu102
- Datasets 2.16.1
- Tokenizers 0.15.1
|
chenhaodev/mistral-7b-mmlu-v1 | chenhaodev | 2024-02-07T05:17:54Z | 0 | 0 | peft | [
"peft",
"safetensors",
"llama-factory",
"lora",
"generated_from_trainer",
"base_model:mistralai/Mistral-7B-v0.1",
"base_model:adapter:mistralai/Mistral-7B-v0.1",
"license:other",
"region:us"
] | null | 2024-02-07T05:03:57Z | ---
license: other
library_name: peft
tags:
- llama-factory
- lora
- generated_from_trainer
base_model: mistralai/Mistral-7B-v0.1
model-index:
- name: mistral-7b-mmlu-v1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mistral-7b-mmlu-v1
This model is a fine-tuned version of mistralai/Mistral-7B-v0.1 on the medical_meadow_mmmlu dataset.
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 20
- num_epochs: 1.0
- mixed_precision_training: Native AMP
### Framework versions
- PEFT 0.8.2
- Transformers 4.37.2
- Pytorch 2.1.1+cu121
- Datasets 2.16.1
- Tokenizers 0.15.1
### Performance
hf (pretrained=mistralai/Mistral-7B-v0.1,parallelize=True,load_in_4bit=True,peft=chenhugging/mistral-7b-mmlu-v1), gen_kwargs: (None), limit: 100.0, num_fewshot: None, batch_size: 1
| Tasks |Version|Filter|n-shot| Metric |Value| |Stderr|
|---------------------|-------|------|-----:|--------|----:|---|-----:|
|pubmedqa | 1|none | 0|acc | 0.98|± |0.0141|
|medmcqa |Yaml |none | 0|acc | 0.47|± |0.0502|
|professional_medicine| 0|none | 0|acc | 0.79|± |0.0409|
|college_medicine | 0|none | 0|acc | 0.72|± |0.0451|
|clinical_knowledge | 0|none | 0|acc | 0.72|± |0.0451|
|aocnp |Yaml |none | 0|acc | 0.56|± |0.0499|
|ocn |Yaml |none | 0|acc | 0.66|± |0.0476|
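The adapter can be loaded on top of the base model roughly as follows (an untested sketch mirroring the 4-bit eval command above; the adapter repo id is taken from this card):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

# Base model in 4-bit, then the LoRA adapter applied on top.
base = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mistral-7B-v0.1",
    quantization_config=BitsAndBytesConfig(load_in_4bit=True),
    device_map="auto",
)
model = PeftModel.from_pretrained(base, "chenhaodev/mistral-7b-mmlu-v1")
tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-v0.1")
```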
|
theidoldaily/maki-nishikino | theidoldaily | 2024-02-07T05:17:44Z | 7 | 1 | diffusers | [
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"template:sd-lora",
"base_model:cagliostrolab/animagine-xl-3.0",
"base_model:adapter:cagliostrolab/animagine-xl-3.0",
"license:mit",
"region:us"
] | text-to-image | 2024-02-05T05:18:09Z | ---
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
- template:sd-lora
widget:
- text: >-
defined eyes, masterpiece, high quality, defined pupil, looking at viewer,
rounded pupil,
parameters:
negative_prompt: >-
bad_anatomy, deformation, amputation, deformity, deformed_nipples,
duplicated_torso, deformed_torso, long_torso, large_torso,
unproportioned_torso, (deformed_pussy:1.2), (deformed_hands:1.2),
unproportioned_eyes, unproportioned_head, small_head, duplicated_nose,
big_nose, fusioned_clothes, fusioned_arms, undefined_limbs, divided_pussy,
red_pussy, duplicated_pussy, deformed_anus, deformed_pussy,
output:
url: demo-1.png
base_model: cagliostrolab/animagine-xl-3.0
instance_prompt: id_maki_nishikino
license: mit
---
# Maki Nishikino
<Gallery />
## Model description
This model was trained to generate high quality images based on SIFAS cards.
To get better quality, use hako-mikan's Regional Prompter extension in Latent mode, which changes the way Stable Diffusion isolates the LoRA and yields a significant improvement.
## Trigger words
You should use `id_maki_nishikino` to trigger the image generation.
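A loading sketch with `diffusers` (illustrative only; the LoRA weight file is assumed to be resolved automatically, and the prompt is just an example):

```python
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "cagliostrolab/animagine-xl-3.0", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("theidoldaily/maki-nishikino")

# Include the trigger word from above in the prompt.
image = pipe("id_maki_nishikino, masterpiece, high quality, looking at viewer").images[0]
image.save("maki.png")
```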
## Download model
Weights for this model are available in Safetensors format.
[Download](/theidoldaily/maki-nishikino/tree/main) them in the Files & versions tab.
|
heshamourad/marian-finetuned-kde4-en-to-fr | heshamourad | 2024-02-07T05:12:53Z | 11 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"marian",
"text2text-generation",
"translation",
"generated_from_trainer",
"dataset:kde4",
"base_model:Helsinki-NLP/opus-mt-en-fr",
"base_model:finetune:Helsinki-NLP/opus-mt-en-fr",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | translation | 2024-02-07T03:29:08Z | ---
license: apache-2.0
base_model: Helsinki-NLP/opus-mt-en-fr
tags:
- translation
- generated_from_trainer
datasets:
- kde4
metrics:
- bleu
model-index:
- name: marian-finetuned-kde4-en-to-fr
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: kde4
type: kde4
config: en-fr
split: train
args: en-fr
metrics:
- name: Bleu
type: bleu
value: 52.930569776237235
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# marian-finetuned-kde4-en-to-fr
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-fr](https://huggingface.co/Helsinki-NLP/opus-mt-en-fr) on the kde4 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8552
- Bleu: 52.9306
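A quick usage sketch with the `transformers` pipeline API (the input sentence is only an example):

```python
from transformers import pipeline

translator = pipeline("translation", model="heshamourad/marian-finetuned-kde4-en-to-fr")
print(translator("Default to expanded threads"))
```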
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.37.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.1
|
ealvaradob/bert-finetuned-phishing | ealvaradob | 2024-02-07T05:11:47Z | 3,247 | 13 | transformers | [
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"phishing",
"BERT",
"en",
"dataset:ealvaradob/phishing-dataset",
"base_model:google-bert/bert-large-uncased",
"base_model:finetune:google-bert/bert-large-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-12-20T18:31:54Z | ---
license: apache-2.0
base_model: bert-large-uncased
tags:
- generated_from_trainer
- phishing
- BERT
metrics:
- accuracy
- precision
- recall
model-index:
- name: bert-finetuned-phishing
results: []
widget:
- text: https://www.verif22.com
example_title: Phishing URL
- text: Dear colleague, An important update about your email has exceeded your
storage limit. You will not be able to send or receive all of your messages.
We will close all older versions of our Mailbox as of Friday, June 12, 2023.
To activate and complete the required information click here (https://ec-ec.squarespace.com).
Account must be reactivated today to regenerate new space. Management Team
example_title: Phishing Email
- text: You have access to FREE Video Streaming in your plan. REGISTER with your email, password and
then select the monthly subscription option. https://bit.ly/3vNrU5r
example_title: Phishing SMS
- text: if(data.selectedIndex > 0){$('#hidCflag').val(data.selectedData.value);};;
var sprypassword1 = new Spry.Widget.ValidationPassword("sprypassword1");
var sprytextfield1 = new Spry.Widget.ValidationTextField("sprytextfield1", "email");
example_title: Phishing Script
- text: Hi, this model is really accurate :)
example_title: Benign message
datasets:
- ealvaradob/phishing-dataset
language:
- en
pipeline_tag: text-classification
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# BERT FINETUNED ON PHISHING DETECTION
This model is a fine-tuned version of [bert-large-uncased](https://huggingface.co/bert-large-uncased) on a [phishing dataset](https://huggingface.co/datasets/ealvaradob/phishing-dataset),
capable of detecting phishing in its four most common forms: URLs, Emails, SMS messages and even websites.
It achieves the following results on the evaluation set:
- Loss: 0.1953
- Accuracy: 0.9717
- Precision: 0.9658
- Recall: 0.9670
- False Positive Rate: 0.0249
## Model description
BERT is a transformers model pretrained on a large corpus of English data in a self-supervised fashion.
This means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why
it can use lots of publicly available data) with an automatic process to generate inputs and labels from
those texts.
This model has the following configuration:
- 24-layer
- 1024 hidden dimension
- 16 attention heads
- 336M parameters
## Motivation and Purpose
Phishing is one of the most frequent and most expensive cyber-attacks according to several security reports.
This model aims to efficiently and accurately prevent phishing attacks against individuals and organizations.
To achieve this, BERT was trained on a diverse and robust dataset containing URLs, SMS messages, emails and websites, which extends its detection capability beyond a single format and allows it to be used in various contexts.
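A minimal inference sketch; the example inputs mirror the widget examples above:

```python
from transformers import pipeline

detector = pipeline("text-classification", model="ealvaradob/bert-finetuned-phishing")
print(detector("https://www.verif22.com"))               # labeled "Phishing URL" in the widget above
print(detector("Hi, this model is really accurate :)"))  # labeled "Benign message" in the widget above
```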
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | False Positive Rate |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:---------:|:------:|:-------------------:|
| 0.1487 | 1.0 | 3866 | 0.1454 | 0.9596 | 0.9709 | 0.9320 | 0.0203 |
| 0.0805 | 2.0 | 7732 | 0.1389 | 0.9691 | 0.9663 | 0.9601 | 0.0243 |
| 0.0389 | 3.0 | 11598 | 0.1779 | 0.9683 | 0.9778 | 0.9461 | 0.0156 |
| 0.0091 | 4.0 | 15464 | 0.1953 | 0.9717 | 0.9658 | 0.9670 | 0.0249 |
### Framework versions
- Transformers 4.34.1
- Pytorch 2.1.1+cu121
- Datasets 2.14.6
- Tokenizers 0.14.1 |
FinancialSupport/saiga-70b | FinancialSupport | 2024-02-07T05:11:15Z | 8 | 0 | null | [
"gguf",
"it",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-02-06T22:56:40Z | ---
license: apache-2.0
language:
- it
---
The saiga is a strange antelope crossbreed that lives in the Siberian steppes.
The name comes from the fact that it is a relative of fauno/camoscio and a distant cousin of cerbero (other Italian open-source models).
It is a project carried out on weekends with little money and time available.
 |
ybzz/detr-pothole-augment | ybzz | 2024-02-07T04:56:57Z | 4 | 0 | transformers | [
"transformers",
"safetensors",
"detr",
"object-detection",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | object-detection | 2024-02-07T04:56:47Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
viai957/CodeLlama_7B-Fientuned | viai957 | 2024-02-07T04:56:22Z | 4 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"8-bit",
"bitsandbytes",
"region:us"
] | text-generation | 2024-02-07T04:45:13Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
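Pending an official snippet from the authors, here is a minimal sketch of loading the checkpoint, based only on the repository tags (`llama`, `text-generation`, `8-bit`, `bitsandbytes`); the prompt and generation settings are placeholder assumptions:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "viai957/CodeLlama_7B-Fientuned"

# Load in 8-bit, matching the bitsandbytes tag on the repository
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=BitsAndBytesConfig(load_in_8bit=True),
    device_map="auto",
)

# Placeholder prompt; the expected prompt format is not documented in this card
prompt = "Write a Python function that reverses a string."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```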
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
fionazhang/mistral-finetune-short | fionazhang | 2024-02-07T04:49:37Z | 0 | 0 | peft | [
"peft",
"safetensors",
"mistral",
"trl",
"sft",
"generated_from_trainer",
"base_model:mistralai/Mistral-7B-v0.1",
"base_model:adapter:mistralai/Mistral-7B-v0.1",
"license:apache-2.0",
"region:us"
] | null | 2024-01-29T00:07:01Z | ---
license: apache-2.0
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: mistralai/Mistral-7B-v0.1
model-index:
- name: mistral-finetune-short
results: []
---
# mistral-finetune-short
This model is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1).
It achieves the following results on the evaluation set:
- Loss: 2.0377
## Model description
This model is fine-tuned to specialize in generating content related to the environment and sustainability domain. The training involved Supervised Fine-Tuning (SFT), Parameter Efficient Fine-Tuning (PEFT), and Low-Rank Adaptation (LoRA) techniques to optimize model performance. The motivation behind this research is to explore the feasibility and effectiveness of Semantically Sufficient Private Large Language Models (LLMs) for secure, domain-specific knowledge extraction in the context of environment and sustainability.
## Intended uses
The model is intended for information retrieval and knowledge extraction tasks within the domain of environment and sustainability.
## Training and evaluation data
The training data consists of domain-specific text collected from Wikipedia pages related to environmental topics.
This model was trained using the Short dataset. [Model trained with the Long dataset](https://huggingface.co/fionazhang/mistral-finetune-long).
| **Dataset** | **URLs** | **Number of Rows** | **Number of Words** | **Number of Sentences** |
|-------------|----------|--------------------|----------------------|--------------------------|
| Short | 11 | 577 | 51,526 | 2,150 |
| Long | 23 | 1,431 | 124,682 | 5,209 |
**Table 1:** Summary of Dataset Information
### Environment and Sustainability
This model is tailored for the environment and sustainability domain, with a focus on assisting researchers and enterprises, particularly in alignment with the work of the Commonwealth Scientific and Industrial Research Organisation (CSIRO).
### Data Collection Process
The training data was collected through a Python program that extracted and cleaned text content from specific Wikipedia pages related to environmental topics. The program utilized various libraries, such as `requests`, `BeautifulSoup`, and `nltk`, for efficient web scraping, HTML parsing, and natural language processing.
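A minimal sketch of that kind of collection pipeline is shown below; the page URL, CSS selector, and cleaning steps are illustrative assumptions rather than the authors' exact script:

```python
import requests
from bs4 import BeautifulSoup
import nltk

nltk.download("punkt", quiet=True)

# Illustrative page; the actual list of environmental Wikipedia pages is not published in this card
url = "https://en.wikipedia.org/wiki/Sustainability"
html = requests.get(url, timeout=30).text

# Keep only paragraph text from the article body
soup = BeautifulSoup(html, "html.parser")
paragraphs = [p.get_text(" ", strip=True) for p in soup.select("div.mw-parser-output > p")]
text = " ".join(paragraphs)

# Split into sentences for filtering and the dataset statistics reported above
sentences = nltk.sent_tokenize(text)
print(f"Collected {len(sentences)} sentences from {url}")
```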
## Training procedure
## Fine-tuning
The fine-tuning process involved Soft Fine-Tuning, PEFT, and LoRA techniques. Soft Fine-Tuning utilized continuous-valued probabilities as labels, suitable for generation models. PEFT focused on updating a small subset of parameters during fine-tuning to prevent catastrophic forgetting. LoRA, a lightweight training technique, reduced the number of trainable parameters for faster and memory-efficient training.
#### Low-Rank Adaptation (LoRA) Parameters
- lora_alpha: 16
- lora_dropout: 0.1
- r: 8
#### Training Parameters
- num_train_epochs: 2
- per_device_train_batch_size: 3
- per_device_eval_batch_size: 3
- gradient_accumulation_steps: 1
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- learning_rate: 5e-05
- weight_decay: 0.001
- max_grad_norm: 0.3
- max_steps: -1
- warmup_ratio: 0.03
- group_by_length: True
- lr_scheduler_type: constant
- seed: 42
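Expressed with the `peft` and `transformers` APIs, the settings above correspond roughly to the sketch below; the `target_modules` list and any omitted fields are assumptions, since the authors' training script is not included here:

```python
from peft import LoraConfig
from transformers import TrainingArguments

# LoRA settings listed above; target_modules is an assumption (typical Mistral attention projections)
lora_config = LoraConfig(
    r=8,
    lora_alpha=16,
    lora_dropout=0.1,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)

# Training settings listed above
training_args = TrainingArguments(
    output_dir="mistral-finetune-short",
    num_train_epochs=2,
    per_device_train_batch_size=3,
    per_device_eval_batch_size=3,
    gradient_accumulation_steps=1,
    learning_rate=5e-5,
    weight_decay=0.001,
    max_grad_norm=0.3,
    warmup_ratio=0.03,
    group_by_length=True,
    lr_scheduler_type="constant",
    seed=42,
)
```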
### Training results
#### Training Loss

*Figure 1: Training loss curve of model fionazhang/mistral-finetune-short (logging step = 10)*
In the training process, the observed training losses exhibit jittery yet overall decreasing trends. The final evaluation loss reaches a satisfactory value of 2.0377, indicating successful learning and adaptation to the nuances of the provided data.
### Framework versions
- PEFT 0.7.1
- Transformers 4.36.2
- Pytorch 2.1.0a0+git7bcf7da
- Datasets 2.16.1
- Tokenizers 0.15.0 |
varun-v-rao/t5-large-bn-adapter-6.34M-snli-model1 | varun-v-rao | 2024-02-07T04:47:48Z | 0 | 0 | null | [
"tensorboard",
"generated_from_trainer",
"base_model:google-t5/t5-large",
"base_model:finetune:google-t5/t5-large",
"license:apache-2.0",
"region:us"
] | null | 2024-02-06T21:11:35Z | ---
license: apache-2.0
base_model: t5-large
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: t5-large-bn-adapter-6.34M-snli-model1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-large-bn-adapter-6.34M-snli-model1
This model is a fine-tuned version of [t5-large](https://huggingface.co/t5-large) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6034
- Accuracy: 0.8005
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 40
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.3118 | 1.0 | 17168 | 0.2381 | 0.9150 |
| 0.2742 | 2.0 | 34336 | 0.2299 | 0.9171 |
| 0.2725 | 3.0 | 51504 | 0.2277 | 0.9197 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.1+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0
|
varun-v-rao/bert-large-cased-bn-adapter-3.17M-snli-model2 | varun-v-rao | 2024-02-07T04:46:51Z | 0 | 0 | null | [
"tensorboard",
"generated_from_trainer",
"base_model:google-bert/bert-large-cased",
"base_model:finetune:google-bert/bert-large-cased",
"license:apache-2.0",
"region:us"
] | null | 2024-02-07T02:22:08Z | ---
license: apache-2.0
base_model: bert-large-cased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: bert-large-cased-bn-adapter-3.17M-snli-model2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-large-cased-bn-adapter-3.17M-snli-model2
This model is a fine-tuned version of [bert-large-cased](https://huggingface.co/bert-large-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7747
- Accuracy: 0.731
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.4017 | 1.0 | 8584 | 0.3327 | 0.8763 |
| 0.3769 | 2.0 | 17168 | 0.3069 | 0.8881 |
| 0.3641 | 3.0 | 25752 | 0.3005 | 0.8895 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.1+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0
|
varun-v-rao/t5-base-bn-adapter-1.79M-snli-model3 | varun-v-rao | 2024-02-07T04:42:15Z | 0 | 0 | null | [
"tensorboard",
"generated_from_trainer",
"base_model:google-t5/t5-base",
"base_model:finetune:google-t5/t5-base",
"license:apache-2.0",
"region:us"
] | null | 2024-02-07T02:16:46Z | ---
license: apache-2.0
base_model: t5-base
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: t5-base-bn-adapter-1.79M-snli-model3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-base-bn-adapter-1.79M-snli-model3
This model is a fine-tuned version of [t5-base](https://huggingface.co/t5-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7044
- Accuracy: 0.7455
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 79
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.4101 | 1.0 | 8584 | 0.3336 | 0.8763 |
| 0.3814 | 2.0 | 17168 | 0.3112 | 0.8858 |
| 0.3695 | 3.0 | 25752 | 0.3061 | 0.8883 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.1+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0
|
ealvaradob/bert-phishing-text | ealvaradob | 2024-02-07T04:37:15Z | 7 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"dataset:ealvaradob/phishing-dataset",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-01-28T19:06:47Z | ---
license: apache-2.0
datasets:
- ealvaradob/phishing-dataset
---
<strong><span style="color:red">WARNING ...</span></strong>
This is **NOT** the final BERT model trained for phishing detection. It only corresponds to an evaluation of BERT performance against email and SMS samples.
This model has the following performance in email and SMS phishing detection:
- Accuracy: 0.990318
- Precision: 0.990170
- Recall: 0.984365
- AUC: 0.999146
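For quick experimentation only (see the warning above), the checkpoint can be tried as a standard text-classification pipeline. This is a sketch based on the repository tags, not an official usage example:

```python
from transformers import pipeline

# Sketch only; the returned label names come from the model's config
classifier = pipeline("text-classification", model="ealvaradob/bert-phishing-text")
print(classifier("Your account has been suspended. Click here to verify your password."))
```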
👇 CHECK THE FINAL BERT MODEL FINE-TUNED FOR PHISHING DETECTION AT THE FOLLOWING LINK! 👇
_https://huggingface.co/ealvaradob/bert-finetuned-phishing_ |
ealvaradob/bert-phishing-url | ealvaradob | 2024-02-07T04:36:27Z | 4 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"dataset:ealvaradob/phishing-dataset",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-01-28T19:02:38Z | ---
license: apache-2.0
datasets:
- ealvaradob/phishing-dataset
---
<strong><span style="color:red">WARNING ...</span></strong>
This is **NOT** the final BERT model trained for phishing detection. It only corresponds to an evaluation of BERT performance against URL samples.
This model has the following performance in URL phishing detection:
- Accuracy: 0.976815
- Precision: 0.985979
- Recall: 0.964295
- AUC: 0.996684
👇 CHECK THE FINAL BERT MODEL FINE-TUNED FOR PHISHING DETECTION AT THE FOLLOWING LINK! 👇
_https://huggingface.co/ealvaradob/bert-finetuned-phishing_ |
spsither/wav2vec2_run9.15 | spsither | 2024-02-07T04:33:30Z | 4 | 0 | transformers | [
"transformers",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2024-02-07T04:32:57Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
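Pending details from the authors, a minimal sketch based on the repository tags (`wav2vec2`, `automatic-speech-recognition`); the audio path is a placeholder, and the expected sampling rate and language are not documented in this card:

```python
from transformers import pipeline

# Sketch only; replace the placeholder path with a real audio file
asr = pipeline("automatic-speech-recognition", model="spsither/wav2vec2_run9.15")
result = asr("path/to/audio.wav")
print(result["text"])
```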
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Telugu-LLM-Labs/Telugu-Llama2-7B-v0-Instruct | Telugu-LLM-Labs | 2024-02-07T04:24:52Z | 173 | 13 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"conversational",
"te",
"en",
"dataset:Telugu-LLM-Labs/yahma_alpaca_cleaned_telugu_filtered_and_romanized",
"dataset:Telugu-LLM-Labs/teknium_GPTeacher_general_instruct_telugu_filtered_and_romanized",
"license:llama2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-02-06T12:07:42Z | ---
license: llama2
datasets:
- Telugu-LLM-Labs/yahma_alpaca_cleaned_telugu_filtered_and_romanized
- >-
Telugu-LLM-Labs/teknium_GPTeacher_general_instruct_telugu_filtered_and_romanized
language:
- te
- en
---
# Telugu-Llama2-7B-v0-Instruct
This model is based on [Telugu-Llama2-7B-v0-Base](https://huggingface.co/Telugu-LLM-Labs/Telugu-Llama2-7B-v0-Base) and has been fine-tuned on the following instruction datasets:
1. [yahma_alpaca_cleaned_telugu_filtered_and_romanized](https://huggingface.co/datasets/Telugu-LLM-Labs/yahma_alpaca_cleaned_telugu_filtered_and_romanized)
2. [teknium_GPTeacher_general_instruct_telugu_filtered_and_romanized](https://huggingface.co/datasets/Telugu-LLM-Labs/teknium_GPTeacher_general_instruct_telugu_filtered_and_romanized)
# Input Text Format
```
### Instruction: {instruction}
### Input: {input}
## Response: {response}
```
# Usage
## With Romanized Telugu
```python3
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
model_name = "Telugu-LLM-Labs/Telugu-Llama2-7B-v0-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_name, padding_side="right")
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.bfloat16).to(device)
instruction = "Krindi samaacharam prakaram google app eppudu release ayyindi?"
input ="Google News is a news aggregator service developed by Google. It presents a continuous flow of links to articles organized from thousands of publishers and magazines. Google News is available as an app on Android, iOS, and the Web. Google released a beta version in September 2002 and the official app in January 2006."
text = f"""Instruction: {instruction} \nInput: {input} \nResponse:"""
encodings = tokenizer(text, padding=True, return_tensors="pt")
encodings = encodings.to(device)
with torch.inference_mode():
outputs = model.generate(encodings.input_ids, do_sample=False, max_new_tokens=500)
output = tokenizer.batch_decode(outputs.detach(), skip_special_tokens=True)
```
### Sample Output:
```
1. September 2002 Google released a beta version of Google News.
2. January 2006 Google released the official version of Google News.
```
## With Native Telugu
```python3
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
model_name = "Telugu-LLM-Labs/Telugu-Llama2-7B-v0-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_name, padding_side="right")
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.bfloat16).to(device)
instruction = "కింది వచనాన్ని సంగ్రహించండి"
input="గూగుల్ వార్తలు అనేది గూగుల్ ద్వారా అభివృద్ధి చేయబడిన వార్తా అగ్రిగేటర్ సేవ. ఇది వేలకొద్దీ ప్రచురణకర్తలు మరియు మ్యాగజైన్ల నుండి నిర్వహించబడిన కథనాలకు నిరంతర లింక్లను అందిస్తుంది. గూగుల్ వార్తలు Android, iOS మరియు వెబ్లో యాప్గా అందుబాటులో ఉన్నాయి. గూగుల్ సెప్టెంబరు 2002లో బీటా వెర్షన్ను మరియు జనవరి 2006లో అధికారిక యాప్ను విడుదల చేసింది."
text = f"""Instruction: {instruction} \nInput: {input} \nResponse:"""
encodings = tokenizer(text, padding=True, return_tensors="pt")
encodings = encodings.to(device)
with torch.inference_mode():
outputs = model.generate(encodings.input_ids, do_sample=False, max_new_tokens=500)
output = tokenizer.batch_decode(outputs.detach(), skip_special_tokens=True)
```
### Sample Output:
1. గూగుల్ వార్తలు అనేది గూగుల్ ద్వారా అభివృద్ధి చేయబడిన వార్తా అగ్రిగేటర్ సేవ, వేలకొద్దీ ప్రచురణకర్తలు మరియు మ్యాగజైన్ల నుండి నిర్వహించబడిన కథనాలకు నిరంతర లింక్లను అందిస్తుంది.
2. గూగుల్ సెప్టెంబరు 2002లో బీటా వెర్షన్ మరియు జనవరి 2006లో అధికారిక యాప్ ను విడుదల చేసింది.
# Developers:
The model is a collaborative effort by [Ravi Theja](https://twitter.com/ravithejads) and [Ramsri Goutham](https://twitter.com/ramsri_goutham). Feel free to DM either of us if you have any questions.
# Note:
The model is quite sensitive to parameters and inputs and is not yet ready for production. It remains in the experimental phase, and we recommend using it accordingly. |
sneakykilli/Emirates_BERTopic | sneakykilli | 2024-02-07T04:18:55Z | 3 | 0 | bertopic | [
"bertopic",
"text-classification",
"region:us"
] | text-classification | 2024-02-07T03:53:01Z |
---
tags:
- bertopic
library_name: bertopic
pipeline_tag: text-classification
---
# Emirates_BERTopic
This is a [BERTopic](https://github.com/MaartenGr/BERTopic) model.
BERTopic is a flexible and modular topic modeling framework that allows for the generation of easily interpretable topics from large datasets.
## Usage
To use this model, please install BERTopic:
```
pip install -U bertopic
```
You can use the model as follows:
```python
from bertopic import BERTopic
topic_model = BERTopic.load("sneakykilli/Emirates_BERTopic")
topic_model.get_topic_info()
```
## Topic overview
* Number of topics: 11
* Number of training documents: 375
<details>
<summary>Click here for an overview of all topics.</summary>
| Topic ID | Topic Keywords | Topic Frequency | Label |
|----------|----------------|-----------------|-------|
| -1 | emirates - airline - airlines - flights - refund | 9 | -1_emirates_airline_airlines_flights |
| 0 | emirates - airlines - airline - dubai - flights | 100 | 0_emirates_airlines_airline_dubai |
| 1 | airline - airlines - flights - aviation - planes | 68 | 1_airline_airlines_flights_aviation |
| 2 | emirates - meals - meal - attendant - airline | 35 | 2_emirates_meals_meal_attendant |
| 3 | emirates - refund - cancel - booking - ticket | 34 | 3_emirates_refund_cancel_booking |
| 4 | airline - refunded - refund - ticket - booking | 28 | 4_airline_refunded_refund_ticket |
| 5 | emirates - dubai - baggage - luggage - airline | 26 | 5_emirates_dubai_baggage_luggage |
| 6 | emirates - airline - refund - seats - flights | 26 | 6_emirates_airline_refund_seats |
| 7 | emirates - airlines - airline - booking - fees | 23 | 7_emirates_airlines_airline_booking |
| 8 | passengers - airline - emirates - stewardess - aisle | 14 | 8_passengers_airline_emirates_stewardess |
| 9 | emirates - delayed - dubai - delays - flights | 12 | 9_emirates_delayed_dubai_delays |
</details>
## Training hyperparameters
* calculate_probabilities: False
* language: None
* low_memory: False
* min_topic_size: 5
* n_gram_range: (1, 1)
* nr_topics: None
* seed_topic_list: None
* top_n_words: 10
* verbose: False
* zeroshot_min_similarity: 0.7
* zeroshot_topic_list: None
## Framework versions
* Numpy: 1.24.3
* HDBSCAN: 0.8.33
* UMAP: 0.5.5
* Pandas: 2.0.3
* Scikit-Learn: 1.2.2
* Sentence-transformers: 2.3.1
* Transformers: 4.36.2
* Numba: 0.57.1
* Plotly: 5.16.1
* Python: 3.10.12
|
wentingzhao/question-evaluator | wentingzhao | 2024-02-07T04:12:53Z | 4 | 1 | transformers | [
"transformers",
"safetensors",
"llama",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-02-05T04:50:08Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
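Pending details from the authors, a minimal sketch based on the repository tags (a Llama-based `text-classification` model); the input text and the meaning of the output labels are placeholder assumptions:

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "wentingzhao/question-evaluator"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

# Placeholder input; the expected prompt format is not documented in this card
inputs = tokenizer("Is this a well-formed question?", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(logits.softmax(dim=-1))
```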
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
chenhaodev/mistral-7b-medmcqa-inst-v1 | chenhaodev | 2024-02-07T04:06:07Z | 7 | 0 | peft | [
"peft",
"safetensors",
"llama-factory",
"lora",
"generated_from_trainer",
"base_model:mistralai/Mistral-7B-v0.1",
"base_model:adapter:mistralai/Mistral-7B-v0.1",
"license:other",
"region:us"
] | null | 2024-02-07T03:31:34Z | ---
license: other
library_name: peft
tags:
- llama-factory
- lora
- generated_from_trainer
base_model: mistralai/Mistral-7B-v0.1
model-index:
- name: mistral-7b-medmcqa-inst-v1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mistral-7b-medmcqa-inst-v1
This model is a fine-tuned version of mistralai/Mistral-7B-v0.1 on the medmcqa_instruct dataset.
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 20
- num_epochs: 1.0
- mixed_precision_training: Native AMP
### Framework versions
- PEFT 0.8.2
- Transformers 4.37.2
- Pytorch 2.1.1+cu121
- Datasets 2.16.1
- Tokenizers 0.15.1
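A minimal sketch of loading this LoRA adapter on top of the base model for inference; the prompt format and generation settings are assumptions:

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "mistralai/Mistral-7B-v0.1"
adapter_id = "chenhaodev/mistral-7b-medmcqa-inst-v1"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.float16, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)

# Placeholder prompt; the exact medmcqa_instruct formatting is an assumption
prompt = "Question: Which vitamin deficiency causes scurvy?\nOptions: A) Vitamin A B) Vitamin B12 C) Vitamin C D) Vitamin D\nAnswer:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```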
### Performance
hf (pretrained=mistralai/Mistral-7B-v0.1,parallelize=True,load_in_4bit=True,peft=chenhugging/mistral-7b-medmcqa-inst-v1), gen_kwargs: (None), limit: 100.0, num_fewshot: None, batch_size: 1
| Tasks |Version|Filter|n-shot| Metric |Value| |Stderr|
|---------------------|-------|------|-----:|--------|----:|---|-----:|
|pubmedqa | 1|none | 0|acc | 0.98|± |0.0141|
|medmcqa |Yaml |none | 0|acc | 0.48|± |0.0502|
|professional_medicine| 0|none | 0|acc | 0.61|± |0.0490|
|college_medicine | 0|none | 0|acc | 0.57|± |0.0498|
|clinical_knowledge | 0|none | 0|acc | 0.65|± |0.0479|
|ocn |Yaml |none | 0|acc | 0.68|± |0.0469|
|aocnp |Yaml |none | 0|acc | 0.56|± |0.0499|
### Original Performance (mistralai/Mistral-7B-v0.1)
hf (pretrained=mistralai/Mistral-7B-v0.1,parallelize=True,load_in_4bit=True), gen_kwargs: (None), limit: 100.0, num_fewshot: None, batch_size: 1
| Tasks |Version|Filter|n-shot| Metric |Value| |Stderr|
|---------------------|-------|------|-----:|--------|----:|---|-----:|
|pubmedqa | 1|none | 0|acc | 0.98|± |0.0141|
|medmcqa |Yaml |none | 0|acc | 0.45|± |0.0500|
|professional_medicine| 0|none | 0|acc | 0.64|± |0.0482|
|college_medicine | 0|none | 0|acc | 0.65|± |0.0479|
|clinical_knowledge | 0|none | 0|acc | 0.68|± |0.0469|
|ocn |Yaml |none | 0|acc | 0.62|± |0.0488|
|aocnp |Yaml |none | 0|acc | 0.47|± |0.0502|
|
cvzion/mistral-dqg-v2 | cvzion | 2024-02-07T04:00:22Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-02-07T03:58:34Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
LoneStriker/DeepMagic-Coder-7b-GPTQ | LoneStriker | 2024-02-07T03:57:36Z | 6 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-02-07T03:55:46Z | ---
license: other
license_name: deepseek
license_link: https://github.com/deepseek-ai/DeepSeek-Coder/blob/main/LICENSE-MODEL
---
DeepMagic-Coder-7b
Alternate version:
- https://huggingface.co/rombodawg/DeepMagic-Coder-7b-Alt

This is an extremely successful merge of the deepseek-coder-6.7b-instruct and Magicoder-S-DS-6.7B models, bringing an uplift in overall coding performance without any compromise to the model's integrity (at least in limited testing).
This is the first of my models to use mergekit's *task_arithmetic* merging method. The method is described below, and it is clearly very useful for merging AI models that were fine-tuned from a common base:
Task Arithmetic:
```
Computes "task vectors" for each model by subtracting a base model.
Merges the task vectors linearly and adds back the base.
Works great for models that were fine tuned from a common ancestor.
Also a super useful mental framework for several of the more involved
merge methods.
```
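In plain terms, for each weight tensor this amounts to the following toy sketch (NumPy stand-ins for illustration, not mergekit's actual implementation):

```python
import numpy as np

# Toy stand-ins for one weight tensor from each checkpoint
base = np.array([0.10, -0.20, 0.30])           # common base model
deepseek_inst = np.array([0.15, -0.25, 0.28])  # fine-tune A
magicoder = np.array([0.12, -0.18, 0.35])      # fine-tune B

# Task vectors: what each fine-tune changed relative to the base
tv_a = deepseek_inst - base
tv_b = magicoder - base

# task_arithmetic with weight 1 for each model: add both deltas back onto the base
merged = base + 1.0 * tv_a + 1.0 * tv_b
print(merged)
```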
The original models used in this merge can be found here:
- https://huggingface.co/ise-uiuc/Magicoder-S-DS-6.7B
- https://huggingface.co/deepseek-ai/deepseek-coder-6.7b-instruct
The merge was created using Mergekit and the parameters can be found below:
```yaml
models:
- model: deepseek-ai_deepseek-coder-6.7b-instruct
parameters:
weight: 1
- model: ise-uiuc_Magicoder-S-DS-6.7B
parameters:
weight: 1
merge_method: task_arithmetic
base_model: ise-uiuc_Magicoder-S-DS-6.7B
parameters:
normalize: true
int8_mask: true
dtype: float16
``` |
houdini001/nep-spell-mbart-epoch5 | houdini001 | 2024-02-07T03:55:54Z | 4 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"mbart",
"text2text-generation",
"generated_from_trainer",
"base_model:houdini001/nep-spell-mbart-epoch3",
"base_model:finetune:houdini001/nep-spell-mbart-epoch3",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2024-02-06T19:18:48Z | ---
tags:
- generated_from_trainer
base_model: houdini001/nep-spell-mbart-epoch3
model-index:
- name: nep-spell-mbart-epoch5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# nep-spell-mbart-epoch5
This model is a fine-tuned version of [houdini001/nep-spell-mbart-epoch3](https://huggingface.co/houdini001/nep-spell-mbart-epoch3) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0000
## Model description
More information needed
## Intended uses & limitations
More information needed
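Pending details from the authors, a sketch of trying the checkpoint as a text2text pipeline; the model name suggests Nepali spelling correction, and the input below is only a placeholder:

```python
from transformers import pipeline

# Sketch only; replace the placeholder with a real Nepali sentence to correct
corrector = pipeline("text2text-generation", model="houdini001/nep-spell-mbart-epoch5")
print(corrector("<Nepali sentence with a spelling mistake>"))
```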
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 0.0026 | 0.32 | 2000 | 0.0001 |
| 0.0 | 0.63 | 4000 | 0.0001 |
| 0.0 | 0.95 | 6000 | 0.0000 |
| 0.0 | 1.27 | 8000 | 0.0000 |
| 0.0 | 1.58 | 10000 | 0.0000 |
| 0.0 | 1.9 | 12000 | 0.0000 |
### Framework versions
- Transformers 4.37.0
- Pytorch 2.1.2
- Datasets 2.1.0
- Tokenizers 0.15.1
|
frntcx/Reinforce | frntcx | 2024-02-07T03:50:28Z | 0 | 0 | null | [
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] | reinforcement-learning | 2024-02-07T03:50:21Z | ---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 348.70 +/- 57.73
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1**.
To learn to use this model and train your own, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
weijie210/zephyr-7b-UFB-0 | weijie210 | 2024-02-07T03:49:39Z | 6 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"mistral",
"text-generation",
"trl",
"dpo",
"generated_from_trainer",
"conversational",
"base_model:alignment-handbook/zephyr-7b-sft-full",
"base_model:finetune:alignment-handbook/zephyr-7b-sft-full",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-02-07T01:25:02Z | ---
license: apache-2.0
base_model: alignment-handbook/zephyr-7b-sft-full
tags:
- trl
- dpo
- generated_from_trainer
model-index:
- name: zephyr-7b-UFB-0
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# zephyr-7b-UFB-0
This model is a fine-tuned version of [alignment-handbook/zephyr-7b-sft-full](https://huggingface.co/alignment-handbook/zephyr-7b-sft-full) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1492
- Rewards/chosen: -1.5452
- Rewards/rejected: -7.2115
- Rewards/accuracies: 0.8359
- Rewards/margins: 5.6663
- Logps/rejected: -171.0846
- Logps/chosen: -143.6666
- Logits/rejected: -2.3237
- Logits/chosen: -2.3692
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-07
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- total_train_batch_size: 32
- total_eval_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.36.1
- Pytorch 2.0.1+cu117
- Datasets 2.16.1
- Tokenizers 0.15.0
|
LoneStriker/DeepMagic-Coder-7b-6.0bpw-h6-exl2 | LoneStriker | 2024-02-07T03:31:53Z | 4 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-02-07T03:29:42Z | ---
license: other
license_name: deepseek
license_link: https://github.com/deepseek-ai/DeepSeek-Coder/blob/main/LICENSE-MODEL
---
DeepMagic-Coder-7b
Alternate version:
- https://huggingface.co/rombodawg/DeepMagic-Coder-7b-Alt

This is an extremely successful merge of the deepseek-coder-6.7b-instruct and Magicoder-S-DS-6.7B models, bringing an uplift in overall coding performance without any compromise to the model's integrity (at least in limited testing).
This is the first of my models to use mergekit's *task_arithmetic* merging method. The method is described below, and it is clearly very useful for merging AI models that were fine-tuned from a common base:
Task Arithmetic:
```
Computes "task vectors" for each model by subtracting a base model.
Merges the task vectors linearly and adds back the base.
Works great for models that were fine tuned from a common ancestor.
Also a super useful mental framework for several of the more involved
merge methods.
```
The original models used in this merge can be found here:
- https://huggingface.co/ise-uiuc/Magicoder-S-DS-6.7B
- https://huggingface.co/deepseek-ai/deepseek-coder-6.7b-instruct
The merge was created using Mergekit and the parameters can be found below:
```yaml
models:
- model: deepseek-ai_deepseek-coder-6.7b-instruct
parameters:
weight: 1
- model: ise-uiuc_Magicoder-S-DS-6.7B
parameters:
weight: 1
merge_method: task_arithmetic
base_model: ise-uiuc_Magicoder-S-DS-6.7B
parameters:
normalize: true
int8_mask: true
dtype: float16
``` |
car13mesquita/bert-finetuned-sem_eval-rest14-english-2 | car13mesquita | 2024-02-07T03:30:42Z | 6 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-02-07T02:51:04Z | ---
license: apache-2.0
base_model: bert-base-uncased
tags:
- generated_from_trainer
metrics:
- f1
- accuracy
model-index:
- name: bert-finetuned-sem_eval-rest14-english-2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-sem_eval-rest14-english-2
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0972
- F1: 0.3594
- Accuracy: 0.6088
## Model description
More information needed
## Intended uses & limitations
More information needed
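Pending details from the authors, a minimal sketch of running the checkpoint as an aspect-category classifier; reading it as multi-label with a sigmoid and a 0.5 threshold is an assumption inferred from the F1/accuracy metrics, not something documented in this card:

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "car13mesquita/bert-finetuned-sem_eval-rest14-english-2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

text = "The food was great but the service was painfully slow."
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Multi-label reading: sigmoid scores with an assumed 0.5 threshold
probs = torch.sigmoid(logits)[0]
labels = [model.config.id2label[i] for i, p in enumerate(probs) if p > 0.5]
print(labels)
```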
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:------:|:--------:|
| No log | 1.0 | 127 | 0.2075 | 0.0 | 0.0 |
| No log | 2.0 | 254 | 0.1641 | 0.0802 | 0.2338 |
| No log | 3.0 | 381 | 0.1376 | 0.1519 | 0.395 |
| 0.1978 | 4.0 | 508 | 0.1233 | 0.1850 | 0.4213 |
| 0.1978 | 5.0 | 635 | 0.1115 | 0.2654 | 0.5238 |
| 0.1978 | 6.0 | 762 | 0.1052 | 0.3145 | 0.565 |
| 0.1978 | 7.0 | 889 | 0.1023 | 0.3371 | 0.5787 |
| 0.0922 | 8.0 | 1016 | 0.0988 | 0.3549 | 0.6025 |
| 0.0922 | 9.0 | 1143 | 0.0980 | 0.3561 | 0.6 |
| 0.0922 | 10.0 | 1270 | 0.0972 | 0.3594 | 0.6088 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.1
|
nightdude/ddpm-butterflies-128 | nightdude | 2024-02-07T03:29:40Z | 0 | 0 | diffusers | [
"diffusers",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"lora",
"license:creativeml-openrail-m",
"region:us"
] | text-to-image | 2024-02-07T03:27:23Z |
---
license: creativeml-openrail-m
base_model: anton_l/ddpm-butterflies-128
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA text2image fine-tuning - ddpm-butterflies-128
These are LoRA adaptation weights for anton_l/ddpm-butterflies-128. The weights were fine-tuned on the huggan/smithsonian_butterflies_subset dataset. Some example images can be found below.
|
LoneStriker/DeepMagic-Coder-7b-5.0bpw-h6-exl2 | LoneStriker | 2024-02-07T03:29:39Z | 4 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-02-07T03:27:46Z | ---
license: other
license_name: deepseek
license_link: https://github.com/deepseek-ai/DeepSeek-Coder/blob/main/LICENSE-MODEL
---
DeepMagic-Coder-7b
Alternate version:
- https://huggingface.co/rombodawg/DeepMagic-Coder-7b-Alt

This is an extremely successful merge of the deepseek-coder-6.7b-instruct and Magicoder-S-DS-6.7B models, bringing an uplift in overall coding performance without any compromise to the model's integrity (at least in limited testing).
This is the first of my models to use mergekit's *task_arithmetic* merging method. The method is described below, and it is clearly very useful for merging AI models that were fine-tuned from a common base:
Task Arithmetic:
```
Computes "task vectors" for each model by subtracting a base model.
Merges the task vectors linearly and adds back the base.
Works great for models that were fine tuned from a common ancestor.
Also a super useful mental framework for several of the more involved
merge methods.
```
The original models used in this merge can be found here:
- https://huggingface.co/ise-uiuc/Magicoder-S-DS-6.7B
- https://huggingface.co/deepseek-ai/deepseek-coder-6.7b-instruct
The merge was created using Mergekit and the parameters can be found below:
```yaml
models:
- model: deepseek-ai_deepseek-coder-6.7b-instruct
parameters:
weight: 1
- model: ise-uiuc_Magicoder-S-DS-6.7B
parameters:
weight: 1
merge_method: task_arithmetic
base_model: ise-uiuc_Magicoder-S-DS-6.7B
parameters:
normalize: true
int8_mask: true
dtype: float16
``` |
LoneStriker/DeepMagic-Coder-7b-4.0bpw-h6-exl2 | LoneStriker | 2024-02-07T03:27:43Z | 7 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-02-07T03:26:09Z | ---
license: other
license_name: deepseek
license_link: https://github.com/deepseek-ai/DeepSeek-Coder/blob/main/LICENSE-MODEL
---
DeepMagic-Coder-7b
Alternate version:
- https://huggingface.co/rombodawg/DeepMagic-Coder-7b-Alt

This is an extremely successful merge of the deepseek-coder-6.7b-instruct and Magicoder-S-DS-6.7B models, bringing an uplift in overall coding performance without any compromise to the model's integrity (at least in limited testing).
This is the first of my models to use mergekit's *task_arithmetic* merging method. The method is described below, and it is clearly very useful for merging AI models that were fine-tuned from a common base:
Task Arithmetic:
```
Computes "task vectors" for each model by subtracting a base model.
Merges the task vectors linearly and adds back the base.
Works great for models that were fine tuned from a common ancestor.
Also a super useful mental framework for several of the more involved
merge methods.
```
The original models used in this merge can be found here:
- https://huggingface.co/ise-uiuc/Magicoder-S-DS-6.7B
- https://huggingface.co/deepseek-ai/deepseek-coder-6.7b-instruct
The merge was created using Mergekit and the parameters can be found below:
```yaml
models:
- model: deepseek-ai_deepseek-coder-6.7b-instruct
parameters:
weight: 1
- model: ise-uiuc_Magicoder-S-DS-6.7B
parameters:
weight: 1
merge_method: task_arithmetic
base_model: ise-uiuc_Magicoder-S-DS-6.7B
parameters:
normalize: true
int8_mask: true
dtype: float16
``` |
LoneStriker/DeepMagic-Coder-7b-3.0bpw-h6-exl2 | LoneStriker | 2024-02-07T03:26:07Z | 4 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-02-07T03:24:51Z | ---
license: other
license_name: deepseek
license_link: https://github.com/deepseek-ai/DeepSeek-Coder/blob/main/LICENSE-MODEL
---
DeepMagic-Coder-7b
Alternate version:
- https://huggingface.co/rombodawg/DeepMagic-Coder-7b-Alt

This is an extremely successful merge of the deepseek-coder-6.7b-instruct and Magicoder-S-DS-6.7B models, bringing an uplift in overall coding performance without any compromise to the model's integrity (at least in limited testing).
This is the first of my models to use mergekit's *task_arithmetic* merging method. The method is detailed below, and it is clearly very useful for merging AI models that were fine-tuned from a common base:
Task Arithmetic:
```
Computes "task vectors" for each model by subtracting a base model.
Merges the task vectors linearly and adds back the base.
Works great for models that were fine tuned from a common ancestor.
Also a super useful mental framework for several of the more involved
merge methods.
```
The original models used in this merge can be found here:
- https://huggingface.co/ise-uiuc/Magicoder-S-DS-6.7B
- https://huggingface.co/deepseek-ai/deepseek-coder-6.7b-instruct
The merge was created using mergekit, and the parameters can be found below:
```yaml
models:
- model: deepseek-ai_deepseek-coder-6.7b-instruct
parameters:
weight: 1
- model: ise-uiuc_Magicoder-S-DS-6.7B
parameters:
weight: 1
merge_method: task_arithmetic
base_model: ise-uiuc_Magicoder-S-DS-6.7B
parameters:
normalize: true
int8_mask: true
dtype: float16
``` |
theofcks/MATUE30PRAUM | theofcks | 2024-02-07T03:25:17Z | 0 | 0 | null | [
"license:other",
"region:us"
] | null | 2024-02-07T03:25:15Z | ---
license: other
license_name: nothing
license_link: LICENSE
---
|
trinath/LunarLander-v5 | trinath | 2024-02-07T03:23:09Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2024-02-07T03:21:49Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 270.79 +/- 17.31
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal usage sketch (the checkpoint filename below is an assumption; check the repository's file list):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Hypothetical filename; replace it with the actual .zip stored in this repo.
checkpoint = load_from_hub("trinath/LunarLander-v5", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
asadmasad/output-67b-11k-test | asadmasad | 2024-02-07T03:18:20Z | 4 | 1 | peft | [
"peft",
"safetensors",
"generated_from_trainer",
"text-generation",
"conversational",
"base_model:deepseek-ai/deepseek-coder-6.7b-instruct",
"base_model:adapter:deepseek-ai/deepseek-coder-6.7b-instruct",
"license:other",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-02-07T01:38:20Z | ---
license: other
library_name: peft
tags:
- generated_from_trainer
base_model: deepseek-ai/deepseek-coder-6.7b-instruct
model-index:
- name: output-67b-11k-test
results: []
pipeline_tag: text-generation
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# output-67b-11k-test
This model is a fine-tuned version of [deepseek-ai/deepseek-coder-6.7b-instruct](https://huggingface.co/deepseek-ai/deepseek-coder-6.7b-instruct) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0811
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.0051 | 1.0 | 1 | 0.0813 |
| 0.0051 | 2.0 | 2 | 0.0813 |
| 0.0051 | 3.0 | 3 | 0.0811 |
### Framework versions
- PEFT 0.8.2
- Transformers 4.37.2
- Pytorch 2.2.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.1 |
Sacbe/ViT_SAM_Classification | Sacbe | 2024-02-07T03:17:54Z | 0 | 0 | transformers | [
"transformers",
"biology",
"image-classification",
"arxiv:2010.11929",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | image-classification | 2024-02-07T02:31:37Z | ---
license: apache-2.0
metrics:
- accuracy
- f1
- precision
- recall
library_name: transformers
pipeline_tag: image-classification
tags:
- biology
---
# Summary
The model was trained using the VisionTransformer base model together with Google's SAM optimizer and the negative log likelihood loss, on the [Wildfire](https://drive.google.com/file/d/1TlF8DIBLAccd0AredDUimQQ54sl_DwCE/view?usp=sharing) data. The results show that the classifier reached 97% accuracy with only 10 training epochs.
The underlying theory is shown below.

# VisionTransformer
**Attention-based neural networks such as the Vision Transformer** (ViT) have recently attained state-of-the-art results on many computer vision benchmarks. Scale is a primary ingredient in attaining excellent results, therefore, understanding a model's scaling properties is a key to designing future generations effectively. While the laws for scaling Transformer language models have been studied, it is unknown how Vision Transformers scale. To address this, we scale ViT models and data, both up and down, and characterize the relationships between error rate, data, and compute. Along the way, we refine the architecture and training of ViT, reducing memory consumption and increasing accuracy of the resulting models. As a result, we successfully train a ViT model with two billion parameters, which attains a new state-of-the-art on ImageNet of 90.45% top-1 accuracy. The model also performs well for few-shot transfer, for example, reaching 84.86% top-1 accuracy on ImageNet with only 10 examples per class.
[1] A. Dosovitskiy et al., "An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale". arXiv, June 3, 2021. Accessed: November 12, 2023. [Online]. Available: http://arxiv.org/abs/2010.11929
# Sharpness Aware Minimization (SAM)
SAM simultaneously minimizes loss value and loss sharpness. In particular, it seeks parameters that lie in neighborhoods having uniformly low loss. SAM improves model generalization and yields SoTA performance for several datasets. Additionally, it provides robustness to label noise on par with that provided by SoTA procedures that specifically target learning with noisy labels.

*ResNet loss landscape at the end of training with and without SAM. Sharpness-aware updates lead to a significantly wider minimum, which then leads to better generalization properties.*
[2] P. Foret, A. Kleiner, and H. Mobahi, "Sharpness-Aware Minimization For Efficiently Improving Generalization", 2021.
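Not part of the original card: a simplified PyTorch sketch of a single SAM update, assuming `loss_fn`, `inputs`, `targets`, and a base optimizer wrapping `model.parameters()`. The reference implementation wraps these two passes in an optimizer class; `rho` controls the neighborhood radius.
```python
import torch

def sam_step(model, loss_fn, inputs, targets, base_optimizer, rho=0.05):
    # 1) First forward/backward: gradient at the current weights w
    base_optimizer.zero_grad()
    loss = loss_fn(model(inputs), targets)
    loss.backward()

    # 2) Climb to the approximate worst point w + e within an L2 ball of radius rho
    params = [p for p in model.parameters() if p.grad is not None]
    grad_norm = torch.norm(torch.stack([p.grad.norm(p=2) for p in params]), p=2) + 1e-12
    eps = []
    with torch.no_grad():
        for p in params:
            e = (rho / grad_norm) * p.grad
            p.add_(e)
            eps.append(e)

    # 3) Second forward/backward: gradient at the perturbed weights
    base_optimizer.zero_grad()
    loss_fn(model(inputs), targets).backward()

    # 4) Restore the original weights and apply the base optimizer update
    with torch.no_grad():
        for p, e in zip(params, eps):
            p.sub_(e)
    base_optimizer.step()
    return loss.item()
```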
# The negative log likelihood loss
It is useful to train a classification problem with $C$ classes.
If provided, the optional argument weight should be a 1D Tensor assigning weight to each of the classes. This is particularly useful when you have an unbalanced training set.
The input given through a forward call is expected to contain log-probabilities of each class. input has to be a Tensor of size either $(\text{minibatch}, C)$ or $(\text{minibatch}, C, d_1, d_2, \ldots, d_K)$ with $K \geq 1$ for the $K$-dimensional case. The latter is useful for higher dimension inputs, such as computing NLL loss per-pixel for 2D images.
Obtaining log-probabilities in a neural network is easily achieved by adding a LogSoftmax layer in the last layer of your network. You may use CrossEntropyLoss instead, if you prefer not to add an extra layer.
The target that this loss expects should be a class index in the range $\[0, C-1\]$, where $C$ is the number of classes; if ignore_index is specified, this loss also accepts that class index (which may not necessarily be in the class range).
The unreduced (i.e. with reduction set to 'none') loss can be described as:
$$
\ell(x, y)=L=\left\{l_1, \ldots, l_N\right\}^{\top}, \quad l_n=-w_{y_n} x_{n, y_n}, \quad w_c=\text{weight}[c] \cdot \mathbb{1}\{c \neq \text{ignore\_index}\}
$$
where $x$ is the input, $y$ is the target, $w$ is the weight, and $N$ is the batch size. If reduction is not 'none' (default 'mean'), then
$$
\ell(x, y)= \begin{cases}\sum_{n=1}^N \frac{1}{\sum_{n=1}^N w_{y_n}} l_n, & \text { if reduction }=\text { 'mean' } \\ \sum_{n=1}^N l_n, & \text { if reduction }=\text { 'sum' }\end{cases}
$$
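For illustration (this snippet is not from the training code), a minimal PyTorch example of the LogSoftmax + NLLLoss pairing described above, together with its single-step CrossEntropyLoss equivalent:
```python
import torch
import torch.nn as nn

logits = torch.randn(4, 3)            # raw outputs for a batch of 4 samples, C = 3 classes
targets = torch.tensor([0, 2, 1, 2])  # class indices in [0, C-1]

log_probs = nn.LogSoftmax(dim=1)(logits)   # log-probabilities expected by NLLLoss
loss = nn.NLLLoss()(log_probs, targets)    # reduction='mean' by default

# Equivalent single step: CrossEntropyLoss = LogSoftmax + NLLLoss
ce = nn.CrossEntropyLoss()(logits, targets)
print(loss.item(), ce.item())              # the two values match
```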
# Results
<img src="https://cdn-uploads.huggingface.co/production/uploads/64ff2131f7f3fa2d7fe256fc/CO6vFEjt3FkxB8JgZTbEd.png" width="500" /> |
Deepnoid/OPEN-SOLAR-KO-10.7B | Deepnoid | 2024-02-07T03:11:36Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"llama",
"text-generation",
"generated_from_trainer",
"base_model:beomi/OPEN-SOLAR-KO-10.7B",
"base_model:finetune:beomi/OPEN-SOLAR-KO-10.7B",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-02-07T01:46:52Z | ---
license: apache-2.0
base_model: beomi/OPEN-SOLAR-KO-10.7B
tags:
- generated_from_trainer
model-index:
- name: beomidpo-out-v2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.0`
```yaml
base_model: beomi/OPEN-SOLAR-KO-10.7B
load_in_8bit: false
load_in_4bit: false
strict: false
rl: dpo
datasets:
- path: datasets/dposet/dpodatav2.jsonl
ds_type: json
data_files:
- datasets/dposet/dpodatav2.jsonl
split: train
dataset_prepared_path:
val_set_size: 0.0
output_dir: ./beomidpo-out-v2
adapter: lora
lora_model_dir:
sequence_len: 2048
sample_packing: false
pad_to_sequence_len: false
lora_r: 8
lora_alpha: 32
lora_dropout: 0.05
lora_target_linear: true
lora_fan_in_fan_out:
lora_target_modules:
- q_proj
- v_proj
- k_proj
- o_proj
gradient_accumulation_steps: 1
micro_batch_size: 1
num_epochs: 1
optimizer: paged_adamw_8bit
lr_scheduler: cosine
learning_rate: 2e-5
train_on_inputs: false
group_by_length: false
bf16: false
fp16: true
tf32: false
gradient_checkpointing: true
early_stopping_patience:
resume_from_checkpoint:
local_rank:
logging_steps: 1
xformers_attention:
flash_attention: false
warmup_steps: 10
save_steps: 100
save_total_limit: 3
debug:
deepspeed: deepspeed_configs/zero2.json
weight_decay: 0.0
fsdp:
fsdp_config:
special_tokens:
save_safetensors: false
```
</details><br>
# beomidpo-out-v2
This model is a fine-tuned version of [beomi/OPEN-SOLAR-KO-10.7B](https://huggingface.co/beomi/OPEN-SOLAR-KO-10.7B) on an unspecified dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- total_train_batch_size: 8
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 2645
### Training results
### Framework versions
- Transformers 4.38.0.dev0
- Pytorch 2.0.1+cu118
- Datasets 2.16.1
- Tokenizers 0.15.0
|
gokulraj/whisper-small-trail-5-preon | gokulraj | 2024-02-07T03:05:00Z | 4 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"ta",
"dataset:whisper-small-preon-test-1",
"base_model:openai/whisper-small",
"base_model:finetune:openai/whisper-small",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2024-02-07T02:17:45Z | ---
language:
- ta
license: apache-2.0
base_model: openai/whisper-small
tags:
- generated_from_trainer
datasets:
- whisper-small-preon-test-1
metrics:
- wer
model-index:
- name: Whisper small
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: custom dataset
type: whisper-small-preon-test-1
metrics:
- name: Wer
type: wer
value: 11.920529801324504
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper small
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on a custom dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1046
- Wer Ortho: 11.8421
- Wer: 11.9205
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant_with_warmup
- lr_scheduler_warmup_steps: 50
- training_steps: 500
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer Ortho | Wer |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:-------:|
| 0.4335 | 5.0 | 100 | 0.1326 | 11.8421 | 9.2715 |
| 0.0049 | 10.0 | 200 | 0.1332 | 15.7895 | 13.9073 |
| 0.0001 | 15.0 | 300 | 0.1019 | 11.8421 | 11.9205 |
| 0.0 | 20.0 | 400 | 0.1041 | 11.8421 | 11.9205 |
| 0.0 | 25.0 | 500 | 0.1046 | 11.8421 | 11.9205 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.1
|
Peverell/mnist-resnet18 | Peverell | 2024-02-07T03:02:19Z | 4 | 0 | transformers | [
"transformers",
"endpoints_compatible",
"region:us"
] | null | 2024-02-07T02:52:40Z | Dataset: MNIST
Model architecture: ResNet-18
Training accuracy: 0.9988
Test accuracy: 0.9934 |
matr1xx/scibert_scivocab_uncased-finetuned-molstmraw-mlm-0.3-5epochs | matr1xx | 2024-02-07T02:57:03Z | 6 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"fill-mask",
"generated_from_trainer",
"base_model:allenai/scibert_scivocab_uncased",
"base_model:finetune:allenai/scibert_scivocab_uncased",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2024-02-07T01:58:18Z | ---
base_model: allenai/scibert_scivocab_uncased
tags:
- generated_from_trainer
model-index:
- name: scibert_scivocab_uncased-finetuned-molstmraw-mlm-0.3-5epochs
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# scibert_scivocab_uncased-finetuned-molstmraw-mlm-0.3-5epochs
This model is a fine-tuned version of [allenai/scibert_scivocab_uncased](https://huggingface.co/allenai/scibert_scivocab_uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5085
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.8095 | 1.0 | 1265 | 0.6320 |
| 0.6481 | 2.0 | 2530 | 0.5629 |
| 0.5938 | 3.0 | 3795 | 0.5315 |
| 0.5664 | 4.0 | 5060 | 0.5132 |
| 0.5526 | 5.0 | 6325 | 0.5084 |
### Framework versions
- Transformers 4.37.2
- Pytorch 2.0.1
- Datasets 2.16.1
- Tokenizers 0.15.1
|
rhplus0831/maid-yuzu-v5 | rhplus0831 | 2024-02-07T02:52:28Z | 7 | 0 | transformers | [
"transformers",
"safetensors",
"mixtral",
"text-generation",
"conversational",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-02-06T18:20:26Z | This model was created because I was curious whether an 8x7B model assembled randomly by a user could be merged with other existing 8x7B models.
Perhaps this was not suitable for the MoE design: a problem occurred during the quantization process |
Krisbiantoro/merged_mixtral_id | Krisbiantoro | 2024-02-07T02:42:24Z | 0 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"mixtral",
"arxiv:1910.09700",
"base_model:mistralai/Mixtral-8x7B-v0.1",
"base_model:adapter:mistralai/Mixtral-8x7B-v0.1",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2024-01-25T04:23:59Z | ---
library_name: peft
base_model: mistralai/Mixtral-8x7B-v0.1
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.2.dev0 |
SparseLLM/reglu-95B | SparseLLM | 2024-02-07T02:34:40Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"llama",
"text-generation",
"en",
"arxiv:2402.03804",
"license:llama2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-01-14T07:12:12Z | ---
language:
- en
library_name: transformers
license: llama2
---
### Background
Sparse computation is increasingly recognized as an important direction in enhancing the computational efficiency of large language models (LLMs).
Prior research has demonstrated that LLMs utilizing the ReLU activation function exhibit sparse activations. Interestingly, our findings indicate that models based on SwiGLU also manifest sparse activations.
This phenomenon prompts an essential question: Which activation function is optimal for sparse LLMs? Although previous works on activation function selection have focused on the performance of LLMs, we argue that the efficiency of sparse computation should also be considered so that the LLMs can proceed with efficient inference while preserving performance.
To answer this question, we pretrain 4 LLMs with different activation functions, including ReLU, SwiGLU, ReGLU, and Squared ReLU to do more comprehensive experiments.
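For reference only (not part of the released checkpoints or code), a minimal sketch of the feed-forward variants these four activations correspond to; the hidden sizes and exact weight layout of the released models may differ:
```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FFN(nn.Module):
    """Feed-forward block covering the four activations compared here.
    "relu" and "relu2" (Squared ReLU) use a plain up/down projection;
    "reglu" and "swiglu" add a gate projection, LLaMA-style."""
    def __init__(self, hidden, inner, act="swiglu"):
        super().__init__()
        self.act = act
        self.up = nn.Linear(hidden, inner, bias=False)
        self.gate = nn.Linear(hidden, inner, bias=False) if act in ("reglu", "swiglu") else None
        self.down = nn.Linear(inner, hidden, bias=False)

    def forward(self, x):
        h = self.up(x)
        if self.act == "relu":
            h = F.relu(h)
        elif self.act == "relu2":            # Squared ReLU
            h = F.relu(h) ** 2
        elif self.act == "reglu":            # ReLU-gated linear unit
            h = F.relu(self.gate(x)) * h
        elif self.act == "swiglu":           # SiLU (Swish)-gated linear unit
            h = F.silu(self.gate(x)) * h
        return self.down(h)

x = torch.randn(2, 16, 512)
print(FFN(512, 1376, act="reglu")(x).shape)  # torch.Size([2, 16, 512])
```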
### Dataset
We pretrain the model on 100 billion tokens, including:
* Refinedweb
* SlimPajama
### Training Hyper-parameters
| Parameter | Value |
|-----------------------|-------------|
| Batch_Size | 4M |
| GPUs | 64xA100(80G)|
| LR_Scheduler | cosine |
| LR | 3e-4 |
### Citation:
Please kindly cite using the following BibTeX:
```bibtex
@article{zhang2024relu2,
title={ReLU$^2$ Wins: Discovering Efficient Activation Functions for Sparse LLMs},
author={Zhengyan Zhang and Yixin Song and Guanghui Yu and Xu Han and Yankai Lin and Chaojun Xiao and Chenyang Song and Zhiyuan Liu and Zeyu Mi and Maosong Sun},
journal = {arXiv preprint arXiv:2402.03804},
year={2024},
}
```
|
SparseLLM/reglu-70B | SparseLLM | 2024-02-07T02:31:59Z | 6 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"llama",
"text-generation",
"en",
"arxiv:2402.03804",
"license:llama2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-01-14T06:44:43Z | ---
language:
- en
library_name: transformers
license: llama2
---
### Background
Sparse computation is increasingly recognized as an important direction in enhancing the computational efficiency of large language models (LLMs).
Prior research has demonstrated that LLMs utilizing the ReLU activation function exhibit sparse activations. Interestingly, our findings indicate that models based on SwiGLU also manifest sparse activations.
This phenomenon prompts an essential question: Which activation function is optimal for sparse LLMs? Although previous works on activation function selection have focused on the performance of LLMs, we argue that the efficiency of sparse computation should also be considered so that the LLMs can proceed with efficient inference while preserving performance.
To answer this question, we pretrain 4 LLMs with different activation functions, including ReLU, SwiGLU, ReGLU, and Squared ReLU to do more comprehensive experiments.
### Dataset
We pretrain the model on 100 billion tokens, including:
* Refinedweb
* SlimPajama
### Training Hyper-parameters
| Parameter | Value |
|-----------------------|-------------|
| Batch_Size | 4M |
| GPUs | 64xA100(80G)|
| LR_Scheduler | cosine |
| LR | 3e-4 |
### Citation:
Please kindly cite using the following BibTeX:
```bibtex
@article{zhang2024relu2,
title={ReLU$^2$ Wins: Discovering Efficient Activation Functions for Sparse LLMs},
author={Zhengyan Zhang and Yixin Song and Guanghui Yu and Xu Han and Yankai Lin and Chaojun Xiao and Chenyang Song and Zhiyuan Liu and Zeyu Mi and Maosong Sun},
journal = {arXiv preprint arXiv:2402.03804},
year={2024},
}
```
|
SparseLLM/reglu-65B | SparseLLM | 2024-02-07T02:31:37Z | 6 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"llama",
"text-generation",
"en",
"arxiv:2402.03804",
"license:llama2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-01-14T06:41:43Z | ---
language:
- en
library_name: transformers
license: llama2
---
### Background
Sparse computation is increasingly recognized as an important direction in enhancing the computational efficiency of large language models (LLMs).
Prior research has demonstrated that LLMs utilizing the ReLU activation function exhibit sparse activations. Interestingly, our findings indicate that models based on SwiGLU also manifest sparse activations.
This phenomenon prompts an essential question: Which activation function is optimal for sparse LLMs? Although previous works on activation function selection have focused on the performance of LLMs, we argue that the efficiency of sparse computation should also be considered so that the LLMs can proceed with efficient inference while preserving performance.
To answer this question, we pretrain 4 LLMs with different activation functions, including ReLU, SwiGLU, ReGLU, and Squared ReLU to do more comprehensive experiments.
### Dataset
We pretrain the model on 100 billion tokens, including:
* Refinedweb
* SlimPajama
### Training Hyper-parameters
| Parameter | Value |
|-----------------------|-------------|
| Batch_Size | 4M |
| GPUs | 64xA100(80G)|
| LR_Scheduler | cosine |
| LR | 3e-4 |
### Citation:
Please kindly cite using the following BibTeX:
```bibtex
@article{zhang2024relu2,
title={ReLU$^2$ Wins: Discovering Efficient Activation Functions for Sparse LLMs},
author={Zhengyan Zhang and Yixin Song and Guanghui Yu and Xu Han and Yankai Lin and Chaojun Xiao and Chenyang Song and Zhiyuan Liu and Zeyu Mi and Maosong Sun},
journal = {arXiv preprint arXiv:2402.03804},
year={2024},
}
```
|
SparseLLM/reglu-50B | SparseLLM | 2024-02-07T02:30:46Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"llama",
"text-generation",
"en",
"arxiv:2402.03804",
"license:llama2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-01-14T06:26:08Z | ---
language:
- en
library_name: transformers
license: llama2
---
### Background
Sparse computation is increasingly recognized as an important direction in enhancing the computational efficiency of large language models (LLMs).
Prior research has demonstrated that LLMs utilizing the ReLU activation function exhibit sparse activations. Interestingly, our findings indicate that models based on SwiGLU also manifest sparse activations.
This phenomenon prompts an essential question: Which activation function is optimal for sparse LLMs? Although previous works on activation function selection have focused on the performance of LLMs, we argue that the efficiency of sparse computation should also be considered so that the LLMs can proceed with efficient inference while preserving performance.
To answer this question, we pretrain 4 LLMs with different activation functions, including ReLU, SwiGLU, ReGLU, and Squared ReLU to do more comprehensive experiments.
### Dataset
We pretrain the model on 100 billion tokens, including:
* Refinedweb
* SlimPajama
### Training Hyper-parameters
| Parameter | Value |
|-----------------------|-------------|
| Batch_Size | 4M |
| GPUs | 64xA100(80G)|
| LR_Scheduler | cosine |
| LR | 3e-4 |
### Citation:
Please kindly cite using the following BibTeX:
```bibtex
@article{zhang2024relu2,
title={ReLU$^2$ Wins: Discovering Efficient Activation Functions for Sparse LLMs},
author={Zhengyan Zhang and Yixin Song and Guanghui Yu and Xu Han and Yankai Lin and Chaojun Xiao and Chenyang Song and Zhiyuan Liu and Zeyu Mi and Maosong Sun},
journal = {arXiv preprint arXiv:2402.03804},
year={2024},
}
```
|
SparseLLM/reglu-40B | SparseLLM | 2024-02-07T02:30:17Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"llama",
"text-generation",
"en",
"arxiv:2402.03804",
"license:llama2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-01-14T05:47:31Z | ---
language:
- en
library_name: transformers
license: llama2
---
### Background
Sparse computation is increasingly recognized as an important direction in enhancing the computational efficiency of large language models (LLMs).
Prior research has demonstrated that LLMs utilizing the ReLU activation function exhibit sparse activations. Interestingly, our findings indicate that models based on SwiGLU also manifest sparse activations.
This phenomenon prompts an essential question: Which activation function is optimal for sparse LLMs? Although previous works on activation function selection have focused on the performance of LLMs, we argue that the efficiency of sparse computation should also be considered so that the LLMs can proceed with efficient inference while preserving performance.
To answer this question, we pretrain 4 LLMs with different activation functions, including ReLU, SwiGLU, ReGLU, and Squared ReLU to do more comprehensive experiments.
### Dataset
We pretrain the model on 100 billion tokens, including:
* Refinedweb
* SlimPajama
### Training Hyper-parameters
| Parameter | Value |
|-----------------------|-------------|
| Batch_Size | 4M |
| GPUs | 64xA100(80G)|
| LR_Scheduler | cosine |
| LR | 3e-4 |
### Citation:
Please kindly cite using the following BibTeX:
```bibtex
@article{zhang2024relu2,
title={ReLU$^2$ Wins: Discovering Efficient Activation Functions for Sparse LLMs},
author={Zhengyan Zhang and Yixin Song and Guanghui Yu and Xu Han and Yankai Lin and Chaojun Xiao and Chenyang Song and Zhiyuan Liu and Zeyu Mi and Maosong Sun},
journal = {arXiv preprint arXiv:2402.03804},
year={2024},
}
```
|
SparseLLM/reglu-30B | SparseLLM | 2024-02-07T02:29:49Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"llama",
"text-generation",
"en",
"arxiv:2402.03804",
"license:llama2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-01-14T05:40:16Z | ---
language:
- en
library_name: transformers
license: llama2
---
### Background
Sparse computation is increasingly recognized as an important direction in enhancing the computational efficiency of large language models (LLMs).
Prior research has demonstrated that LLMs utilizing the ReLU activation function exhibit sparse activations. Interestingly, our findings indicate that models based on SwiGLU also manifest sparse activations.
This phenomenon prompts an essential question: Which activation function is optimal for sparse LLMs? Although previous works on activation function selection have focused on the performance of LLMs, we argue that the efficiency of sparse computation should also be considered so that the LLMs can proceed with efficient inference while preserving performance.
To answer this question, we pretrain 4 LLMs with different activation functions, including ReLU, SwiGLU, ReGLU, and Squared ReLU to do more comprehensive experiments.
### Dataset
We pretrain the model on 100 billion tokens, including:
* Refinedweb
* SlimPajama
### Training Hyper-parameters
| Parameter | Value |
|-----------------------|-------------|
| Batch_Size | 4M |
| GPUs | 64xA100(80G)|
| LR_Scheduler | cosine |
| LR | 3e-4 |
### Citation:
Please kindly cite using the following BibTeX:
```bibtex
@article{zhang2024relu2,
title={ReLU$^2$ Wins: Discovering Efficient Activation Functions for Sparse LLMs},
author={Zhengyan Zhang and Yixin Song and Guanghui Yu and Xu Han and Yankai Lin and Chaojun Xiao and Chenyang Song and Zhiyuan Liu and Zeyu Mi and Maosong Sun},
journal = {arXiv preprint arXiv:2402.03804},
year={2024},
}
```
|
SparseLLM/reglu-20B | SparseLLM | 2024-02-07T02:29:17Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"llama",
"text-generation",
"en",
"arxiv:2402.03804",
"license:llama2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-01-14T05:33:06Z | ---
language:
- en
library_name: transformers
license: llama2
---
### Background
Sparse computation is increasingly recognized as an important direction in enhancing the computational efficiency of large language models (LLMs).
Prior research has demonstrated that LLMs utilizing the ReLU activation function exhibit sparse activations. Interestingly, our findings indicate that models based on SwiGLU also manifest sparse activations.
This phenomenon prompts an essential question: Which activation function is optimal for sparse LLMs? Although previous works on activation function selection have focused on the performance of LLMs, we argue that the efficiency of sparse computation should also be considered so that the LLMs can proceed with efficient inference while preserving performance.
To answer this question, we pretrain 4 LLMs with different activation functions, including ReLU, SwiGLU, ReGLU, and Squared ReLU to do more comprehensive experiments.
### Dataset
We pretrain the model on 100 billion tokens, including:
* Refinedweb
* SlimPajama
### Training Hyper-parameters
| Parameter | Value |
|-----------------------|-------------|
| Batch_Size | 4M |
| GPUs | 64xA100(80G)|
| LR_Scheduler | cosine |
| LR | 3e-4 |
### Citation:
Please kindly cite using the following BibTeX:
```bibtex
@article{zhang2024relu2,
title={ReLU$^2$ Wins: Discovering Efficient Activation Functions for Sparse LLMs},
author={Zhengyan Zhang and Yixin Song and Guanghui Yu and Xu Han and Yankai Lin and Chaojun Xiao and Chenyang Song and Zhiyuan Liu and Zeyu Mi and Maosong Sun},
journal = {arXiv preprint arXiv:2402.03804},
year={2024},
}
```
|
BAAI/EVA-CLIP-8B-448 | BAAI | 2024-02-07T02:29:15Z | 27 | 12 | transformers | [
"transformers",
"pytorch",
"clip",
"feature-extraction",
"custom_code",
"dataset:laion/laion2B-en",
"dataset:kakaobrain/coyo-700m",
"arxiv:2402.04252",
"license:apache-2.0",
"region:us"
] | feature-extraction | 2024-02-05T15:58:42Z | ---
license: apache-2.0
datasets:
- laion/laion2B-en
- kakaobrain/coyo-700m
---
<div align="center">
<h2><a href="https://arxiv.org/abs/2402.04252">EVA-CLIP-18B: Scaling CLIP to 18 Billion Parameters</a></h2>
[Quan Sun](https://github.com/Quan-Sun)<sup>1*</sup>, [Jinsheng Wang](https://github.com/Wolfwjs/)<sup>1*</sup>, [Qiying Yu](https://yqy2001.github.io)<sup>1,2*</sup>, [Yufeng Cui](https://scholar.google.com/citations?hl=en&user=5Ydha2EAAAAJ)<sup>1</sup>, [Fan Zhang](https://scholar.google.com/citations?user=VsJ39HMAAAAJ)<sup>1</sup>, [Xiaosong Zhang](https://zhangxiaosong18.github.io)<sup>1</sup>, [Xinlong Wang](https://www.xloong.wang/)<sup>1</sup>
<sup>1</sup> [BAAI](https://www.baai.ac.cn/english.html), <sup>2</sup> [THU](https://air.tsinghua.edu.cn) <br><sup>*</sup> equal contribution
[Paper](https://arxiv.org/abs/2402.04252) | [Github](https://github.com/baaivision/EVA/tree/master/EVA-CLIP-18B)
</div>
Scaling up contrastive language-image pretraining (CLIP) is critical for empowering both vision and multimodal models. We present EVA-CLIP-18B, the largest and most powerful open-source CLIP model to date, with 18-billion parameters. With only 6-billion training samples seen, EVA-CLIP-18B achieves an exceptional **80.7%** zero-shot top-1 accuracy averaged across 27 widely recognized image classification benchmarks, outperforming its forerunner EVA-CLIP (5-billion parameters) and other open-source CLIP models by a large margin. Remarkably, we observe a consistent performance improvement with the model size scaling of EVA-CLIP, despite maintaining a constant training dataset of 2-billion image-text pairs from LAION-2B and COYO-700M. This dataset is openly available and much smaller than the in-house datasets (e.g., DFN-5B, WebLI-10B) employed in other state-of-the-art CLIP models. EVA-CLIP-18B demonstrates the potential of EVA-style weak-to-strong visual model scaling. With our model weights made publicly available, we hope to facilitate future research in vision and multimodal foundation models.
**Table of Contents**
- [Summary of EVA-CLIP performance](#summary-of-eva-clip-performance)
- [Model Card](#model-card)
- [EVA-CLIP-8B](#eva-clip-8b)
- [EVA-CLIP-18B](#eva-clip-18b)
- [Usage](#usage)
- [BibTeX \& Citation](#bibtex--citation)
## Summary of EVA-CLIP performance

Scaling behavior of EVA-CLIP, shown as zero-shot classification performance averaged across 27 image classification benchmarks, compared with the current state-of-the-art and largest CLIP models (224px). The diameter of each circle represents the forward GFLOPs × the number of training samples seen. The performance of EVA-CLIP consistently improves as the model scales up.
## Model Card
### EVA-8B
<div align="center">
| model name | total #params | seen samples | pytorch weight |
|:-----------|:------:|:------:|:------:|
| `EVA_8B_psz14` | 7.5B | 6B | [PT](https://huggingface.co/BAAI/EVA-CLIP-8B/resolve/main/EVA_8B_psz14.bin) (`31.0GB`) |
</div>
### EVA-CLIP-8B
> Image encoder MIM teacher: [EVA02_CLIP_E_psz14_plus_s9B](https://huggingface.co/QuanSun/EVA-CLIP/blob/main/EVA02_CLIP_E_psz14_s4B.pt).
<div align="center">
| model name | image enc. init. ckpt | text enc. init. ckpt | total #params | training data | training batch size | gpus for training | img. cls. avg. acc. | video cls. avg. acc. | retrieval MR | hf weight | pytorch weight |
|:-----|:-----|:-----------|:------:|:------:|:------:|:------:|:------:|:------:|:------:|:------:|:------:|
| `EVA-CLIP-8B` | `EVA_8B_psz14` | `EVA02_CLIP_E_psz14_plus_s9B` | 8.1B | Merged-2B | 178K | 384 A100(40GB) | **79.4** | **73.6** | **86.2**| [🤗 HF](https://huggingface.co/BAAI/EVA-CLIP-8B) | [PT](https://huggingface.co/BAAI/EVA-CLIP-8B/resolve/main/EVA_CLIP_8B_psz14_s9B.pt) (`32.9GB`)|
| `EVA-CLIP-8B-448` | `EVA-CLIP-8B` | `EVA-CLIP-8B` | 8.1B | Merged-2B | 24K | 384 A100(40GB) | **80.0** | **73.7** | **86.4** | [🤗 HF](https://huggingface.co/BAAI/EVA-CLIP-8B-448) | [PT](https://huggingface.co/BAAI/EVA-CLIP-8B-448/resolve/main/EVA_CLIP_8B_psz14_plus_s0.6B.pt) (`32.9GB`)|
</div>
### EVA-CLIP-18B
> Image encoder MIM teacher: [EVA02_CLIP_E_psz14_plus_s9B](https://huggingface.co/QuanSun/EVA-CLIP/blob/main/EVA02_CLIP_E_psz14_s4B.pt).
<div align="center">
| model name | image enc. init. ckpt | text enc. init. ckpt | total #params | training data | training batch size | gpus for training | img. cls. avg. acc. | video cls. avg. acc. | retrieval MR | hf weight | pytorch weight |
|:-----|:-----|:-----------|:------:|:------:|:------:|:------:|:------:|:------:|:------:|:------:|:------:|
| `EVA-CLIP-18B` | `EVA_18B_psz14` | `EVA02_CLIP_E_psz14_plus_s9B` | 18.1B | Merged-2B+ | 108K | 360 A100(40GB) | **80.7** | **75.0** | **87.8**| stay tuned | stay tuned |
</div>
- To construct Merged-2B, we merged 1.6 billion samples from [LAION-2B](https://laion.ai/blog/laion-5b/) dataset with 0.4 billion samples from [COYO-700M](https://github.com/kakaobrain/coyo-dataset).
- The Merged-2B+ consists of all samples from Merged-2B, along with 20 million samples from [LAION-COCO](https://laion.ai/blog/laion-coco/) and 23 million samples from Merged-video including [VideoCC](https://github.com/google-research-datasets/videoCC-data), [InternVid](https://huggingface.co/datasets/OpenGVLab/InternVid) and [WebVid-10M](https://maxbain.com/webvid-dataset/). Merged-video was added at the end of the training process.
**It's important to note that all results presented in the paper are evaluated using PyTorch weights. There may be differences in performance when using Hugging Face (hf) models.**
## Zero-Shot Evaluation
We use [CLIP-Benchmark](https://github.com/LAION-AI/CLIP_benchmark) to evaluate the zero-shot performance of EVA-CLIP models. Following [vissl](https://github.com/facebookresearch/vissl/blob/main/extra_scripts/datasets/create_k700_data_files.py), we evaluate zero-shot video classification using one middle frame. Further details regarding the evaluation datasets can be found in our paper, particularly in Table 11.
## Usage
### Huggingface Version
```python
from PIL import Image
from transformers import AutoModel, AutoConfig
from transformers import CLIPImageProcessor, pipeline, CLIPTokenizer
import torch
import torchvision.transforms as T
from torchvision.transforms import InterpolationMode
image_path = "CLIP.png"
model_name_or_path = "BAAI/EVA-CLIP-8B" # or /path/to/local/EVA-CLIP-8B
image_size = 448
# use the image processor with its config
processor = CLIPImageProcessor(size={"shortest_edge":image_size}, do_center_crop=True, crop_size=image_size)
## you can also directly use the image processor by torchvision
## squash
# processor = T.Compose(
# [
# T.Lambda(lambda img: img.convert('RGB') if img.mode != 'RGB' else img),
# T.Resize((image_size, image_size), interpolation=InterpolationMode.BICUBIC),
# T.ToTensor(),
# T.Normalize(mean=(0.48145466, 0.4578275, 0.40821073), std=(0.26862954, 0.26130258, 0.27577711))
# ]
# )
## shortest
# processor = T.Compose(
# [
# T.Lambda(lambda img: img.convert('RGB') if img.mode != 'RGB' else img),
# T.Resize(image_size, interpolation=InterpolationMode.BICUBIC),
# T.CenterCrop(image_size),
# T.ToTensor(),
# T.Normalize(mean=(0.48145466, 0.4578275, 0.40821073), std=(0.26862954, 0.26130258, 0.27577711))
# ]
# )
model = AutoModel.from_pretrained(
model_name_or_path,
torch_dtype=torch.float16,
trust_remote_code=True).to('cuda').eval()
image = Image.open(image_path)
captions = ["a diagram", "a dog", "a cat"]
tokenizer = CLIPTokenizer.from_pretrained(model_name_or_path)
input_ids = tokenizer(captions, return_tensors="pt", padding=True).input_ids.to('cuda')
input_pixels = processor(images=image, return_tensors="pt", padding=True).pixel_values.to('cuda')
with torch.no_grad(), torch.cuda.amp.autocast():
image_features = model.encode_image(input_pixels)
text_features = model.encode_text(input_ids)
image_features /= image_features.norm(dim=-1, keepdim=True)
text_features /= text_features.norm(dim=-1, keepdim=True)
label_probs = (100.0 * image_features @ text_features.T).softmax(dim=-1)
print(f"Label probs: {label_probs}")
```
### Pytorch version
Go to [GitHub](https://github.com/baaivision/EVA/tree/master/EVA-CLIP-18B)
```python
import torch
from eva_clip import create_model_and_transforms, get_tokenizer
from PIL import Image
model_name = "EVA-CLIP-8B-plus"
pretrained = "eva_clip" # or "/path/to/EVA_CLIP_8B_psz14_plus_s0.6B.pt"
image_path = "CLIP.png"
caption = ["a diagram", "a dog", "a cat"]
device = "cuda" if torch.cuda.is_available() else "cpu"
model, _, processor = create_model_and_transforms(model_name, pretrained, force_custom_clip=True)
tokenizer = get_tokenizer(model_name)
model = model.to(device)
image = processor(Image.open(image_path)).unsqueeze(0).to(device)
text = tokenizer(["a diagram", "a dog", "a cat"]).to(device)
with torch.no_grad(), torch.cuda.amp.autocast():
image_features = model.encode_image(image)
text_features = model.encode_text(text)
image_features /= image_features.norm(dim=-1, keepdim=True)
text_features /= text_features.norm(dim=-1, keepdim=True)
text_probs = (100.0 * image_features @ text_features.T).softmax(dim=-1)
print("Label probs:", text_probs)
```
You can leverage [deepspeed.zero.Init()](https://deepspeed.readthedocs.io/en/stable/zero3.html#constructing-massive-models) with deepspeed zero stage 3 if you have limited CPU memory. For loading a pretrained checkpoint in the context of using deepspeed.zero.Init(), it's advised to use the `load_zero_partitions()` function in `eva_clip/factory.py`.
## BibTeX & Citation
```
@article{EVA-CLIP-18B,
title={EVA-CLIP-18B: Scaling CLIP to 18 Billion Parameters},
author={Quan Sun and Jinsheng Wang and Qiying Yu and Yufeng Cui and Fan Zhang and Xiaosong Zhang and Xinlong Wang},
journal={arXiv preprint arXiv:2402.04252},
year={2023}
}
``` |
SparseLLM/swiglu-10B | SparseLLM | 2024-02-07T02:23:00Z | 6 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"llama",
"text-generation",
"en",
"arxiv:2402.03804",
"license:llama2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-01-13T14:26:59Z | ---
language:
- en
library_name: transformers
license: llama2
---
### Background
Sparse computation is increasingly recognized as an important direction in enhancing the computational efficiency of large language models (LLMs).
Prior research has demonstrated that LLMs utilizing the ReLU activation function exhibit sparse activations. Interestingly, our findings indicate that models based on SwiGLU also manifest sparse activations.
This phenomenon prompts an essential question: Which activation function is optimal for sparse LLMs? Although previous works on activation function selection have focused on the performance of LLMs, we argue that the efficiency of sparse computation should also be considered so that the LLMs can proceed with efficient inference while preserving performance.
To answer this question, we pretrain 4 LLMs with different activation functions, including ReLU, SwiGLU, ReGLU, and Squared ReLU to do more comprehensive experiments.
### Dataset
We pretrain the model on 100 billion tokens, including:
* Refinedweb
* SlimPajama
### Training Hyper-parameters
| Parameter | Value |
|-----------------------|-------------|
| Batch_Size | 4M |
| GPUs | 64xA100(80G)|
| LR_Scheduler | cosine |
| LR | 3e-4 |
### Citation:
Please kindly cite using the following BibTeX:
```bibtex
@article{zhang2024relu2,
title={ReLU$^2$ Wins: Discovering Efficient Activation Functions for Sparse LLMs},
author={Zhengyan Zhang and Yixin Song and Guanghui Yu and Xu Han and Yankai Lin and Chaojun Xiao and Chenyang Song and Zhiyuan Liu and Zeyu Mi and Maosong Sun},
journal = {arXiv preprint arXiv:2402.03804},
year={2024},
}
```
|
SparseLLM/swiglu-15B | SparseLLM | 2024-02-07T02:22:39Z | 6 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"llama",
"text-generation",
"en",
"arxiv:2402.03804",
"license:llama2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-01-13T14:22:19Z | ---
language:
- en
library_name: transformers
license: llama2
---
### Background
Sparse computation is increasingly recognized as an important direction in enhancing the computational efficiency of large language models (LLMs).
Prior research has demonstrated that LLMs utilizing the ReLU activation function exhibit sparse activations. Interestingly, our findings indicate that models based on SwiGLU also manifest sparse activations.
This phenomenon prompts an essential question: Which activation function is optimal for sparse LLMs? Although previous works on activation function selection have focused on the performance of LLMs, we argue that the efficiency of sparse computation should also be considered so that the LLMs can proceed with efficient inference while preserving performance.
To answer this question, we pretrain 4 LLMs with different activation functions, including ReLU, SwiGLU, ReGLU, and Squared ReLU to do more comprehensive experiments.
### Dataset
We pretrain the model on 100 billion tokens, including:
* Refinedweb
* SlimPajama
### Training Hyper-parameters
| Parameter | Value |
|-----------------------|-------------|
| Batch_Size | 4M |
| GPUs | 64xA100(80G)|
| LR_Scheduler | cosine |
| LR | 3e-4 |
### Citation:
Please kindly cite using the following BibTeX:
```bibtex
@article{zhang2024relu2,
title={ReLU$^2$ Wins: Discovering Efficient Activation Functions for Sparse LLMs},
author={Zhengyan Zhang and Yixin Song and Guanghui Yu and Xu Han and Yankai Lin and Chaojun Xiao and Chenyang Song and Zhiyuan Liu and Zeyu Mi and Maosong Sun},
journal = {arXiv preprint arXiv:2402.03804},
year={2024},
}
```
|
SparseLLM/swiglu-20B | SparseLLM | 2024-02-07T02:22:23Z | 6 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"llama",
"text-generation",
"en",
"arxiv:2402.03804",
"license:llama2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-01-13T14:18:04Z | ---
language:
- en
library_name: transformers
license: llama2
---
### Background
Sparse computation is increasingly recognized as an important direction in enhancing the computational efficiency of large language models (LLMs).
Prior research has demonstrated that LLMs utilizing the ReLU activation function exhibit sparse activations. Interestingly, our findings indicate that models based on SwiGLU also manifest sparse activations.
This phenomenon prompts an essential question: Which activation function is optimal for sparse LLMs? Although previous works on activation function selection have focused on the performance of LLMs, we argue that the efficiency of sparse computation should also be considered so that the LLMs can proceed with efficient inference while preserving performance.
To answer this question, we pretrain 4 LLMs with different activation functions, including ReLU, SwiGLU, ReGLU, and Squared ReLU to do more comprehensive experiments.
### Dataset
We pretrain the model on 100 billion tokens, including:
* Refinedweb
* SlimPajama
### Training Hyper-parameters
| Parameter | Value |
|-----------------------|-------------|
| Batch_Size | 4M |
| GPUs | 64xA100(80G)|
| LR_Scheduler | cosine |
| LR | 3e-4 |
### Citation:
Please kindly cite using the following BibTeX:
```bibtex
@article{zhang2024relu2,
title={ReLU$^2$ Wins: Discovering Efficient Activation Functions for Sparse LLMs},
author={Zhengyan Zhang and Yixin Song and Guanghui Yu and Xu Han and Yankai Lin and Chaojun Xiao and Chenyang Song and Zhiyuan Liu and Zeyu Mi and Maosong Sun},
journal = {arXiv preprint arXiv:2402.03804},
year={2024},
}
```
|
SparseLLM/swiglu-35B | SparseLLM | 2024-02-07T02:21:35Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"llama",
"text-generation",
"en",
"arxiv:2402.03804",
"license:llama2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-01-13T14:00:50Z | ---
language:
- en
library_name: transformers
license: llama2
---
### Background
Sparse computation is increasingly recognized as an important direction in enhancing the computational efficiency of large language models (LLMs).
Prior research has demonstrated that LLMs utilizing the ReLU activation function exhibit sparse activations. Interestingly, our findings indicate that models based on SwiGLU also manifest sparse activations.
This phenomenon prompts an essential question: Which activation function is optimal for sparse LLMs? Although previous works on activation function selection have focused on the performance of LLMs, we argue that the efficiency of sparse computation should also be considered so that the LLMs can proceed with efficient inference while preserving performance.
To answer this question, we pretrain 4 LLMs with different activation functions, including ReLU, SwiGLU, ReGLU, and Squared ReLU to do more comprehensive experiments.
### Dataset
We pretrain the model on 100 billion tokens, including:
* Refinedweb
* SlimPajama
### Training Hyper-parameters
| Parameter | Value |
|-----------------------|-------------|
| Batch_Size | 4M |
| GPUs | 64xA100(80G)|
| LR_Scheduler | cosine |
| LR | 3e-4 |
### Citation:
Please kindly cite using the following BibTeX:
```bibtex
@article{zhang2024relu2,
title={ReLU$^2$ Wins: Discovering Efficient Activation Functions for Sparse LLMs},
author={Zhengyan Zhang and Yixin Song and Guanghui Yu and Xu Han and Yankai Lin and Chaojun Xiao and Chenyang Song and Zhiyuan Liu and Zeyu Mi and Maosong Sun},
journal = {arXiv preprint arXiv:2402.03804},
year={2024},
}
```
|
SparseLLM/swiglu-50B | SparseLLM | 2024-02-07T02:20:49Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"llama",
"text-generation",
"en",
"arxiv:2402.03804",
"license:llama2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-01-13T13:52:38Z | ---
language:
- en
library_name: transformers
license: llama2
---
### Background
Sparse computation is increasingly recognized as an important direction in enhancing the computational efficiency of large language models (LLMs).
Prior research has demonstrated that LLMs utilizing the ReLU activation function exhibit sparse activations. Interestingly, our findings indicate that models based on SwiGLU also manifest sparse activations.
This phenomenon prompts an essential question: Which activation function is optimal for sparse LLMs? Although previous works on activation function selection have focused on the performance of LLMs, we argue that the efficiency of sparse computation should also be considered so that the LLMs can proceed with efficient inference while preserving performance.
To answer this question, we pretrain 4 LLMs with different activation functions, including ReLU, SwiGLU, ReGLU, and Squared ReLU to do more comprehensive experiments.
### Dataset
We pretrain the model on 100 billion tokens, including:
* Refinedweb
* SlimPajama
### Training Hyper-parameters
| Parameter | Value |
|-----------------------|-------------|
| Batch_Size | 4M |
| GPUs | 64xA100(80G)|
| LR_Scheduler | cosine |
| LR | 3e-4 |
### Citation:
Please kindly cite using the following BibTeX:
```bibtex
@article{zhang2024relu2,
title={ReLU$^2$ Wins: Discovering Efficient Activation Functions for Sparse LLMs},
author={Zhengyan Zhang and Yixin Song and Guanghui Yu and Xu Han and Yankai Lin and Chaojun Xiao and Chenyang Song and Zhiyuan Liu and Zeyu Mi and Maosong Sun},
journal = {arXiv preprint arXiv:2402.03804},
year={2024},
}
```
|
SparseLLM/swiglu-55B | SparseLLM | 2024-02-07T02:20:35Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"llama",
"text-generation",
"en",
"arxiv:2402.03804",
"license:llama2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-01-13T13:46:17Z | ---
language:
- en
library_name: transformers
license: llama2
---
### Background
Sparse computation is increasingly recognized as an important direction in enhancing the computational efficiency of large language models (LLMs).
Prior research has demonstrated that LLMs utilizing the ReLU activation function exhibit sparse activations. Interestingly, our findings indicate that models based on SwiGLU also manifest sparse activations.
This phenomenon prompts an essential question: Which activation function is optimal for sparse LLMs? Although previous work on activation function selection has focused on the performance of LLMs, we argue that the efficiency of sparse computation should also be considered, so that LLMs can perform efficient inference while preserving performance.
To answer this question, we pretrain four LLMs with different activation functions, namely ReLU, SwiGLU, ReGLU, and Squared ReLU, to enable a more comprehensive comparison.
### Dataset
We pretrain the model on 100 billion tokens, including:
* RefinedWeb
* SlimPajama
### Training Hyper-parameters
| Parameter | Value |
|-----------------------|-------------|
| Batch_Size | 4M |
| GPUs | 64xA100(80G)|
| LR_Scheduler | cosine |
| LR | 3e-4 |
### Citation:
Please kindly cite using the following BibTeX:
```bibtex
@article{zhang2024relu2,
title={ReLU$^2$ Wins: Discovering Efficient Activation Functions for Sparse LLMs},
author={Zhengyan Zhang and Yixin Song and Guanghui Yu and Xu Han and Yankai Lin and Chaojun Xiao and Chenyang Song and Zhiyuan Liu and Zeyu Mi and Maosong Sun},
journal = {arXiv preprint arXiv:2402.03804},
year={2024},
}
```
|
SparseLLM/swiglu-65B | SparseLLM | 2024-02-07T02:20:05Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"llama",
"text-generation",
"en",
"arxiv:2402.03804",
"license:llama2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-01-13T13:36:56Z | ---
language:
- en
library_name: transformers
license: llama2
---
### Background
Sparse computation is increasingly recognized as an important direction in enhancing the computational efficiency of large language models (LLMs).
Prior research has demonstrated that LLMs utilizing the ReLU activation function exhibit sparse activations. Interestingly, our findings indicate that models based on SwiGLU also manifest sparse activations.
This phenomenon prompts an essential question: Which activation function is optimal for sparse LLMs? Although previous work on activation function selection has focused on the performance of LLMs, we argue that the efficiency of sparse computation should also be considered, so that LLMs can perform efficient inference while preserving performance.
To answer this question, we pretrain four LLMs with different activation functions, namely ReLU, SwiGLU, ReGLU, and Squared ReLU, to enable a more comprehensive comparison.
### Dataset
We pretrain the model on 100 billion tokens, including:
* RefinedWeb
* SlimPajama
### Training Hyper-parameters
| Parameter | Value |
|-----------------------|-------------|
| Batch_Size | 4M |
| GPUs | 64xA100(80G)|
| LR_Scheduler | cosine |
| LR | 3e-4 |
### Citation:
Please kindly cite using the following BibTeX:
```bibtex
@article{zhang2024relu2,
title={ReLU$^2$ Wins: Discovering Efficient Activation Functions for Sparse LLMs},
author={Zhengyan Zhang and Yixin Song and Guanghui Yu and Xu Han and Yankai Lin and Chaojun Xiao and Chenyang Song and Zhiyuan Liu and Zeyu Mi and Maosong Sun},
journal = {arXiv preprint arXiv:2402.03804},
year={2024},
}
```
|
SparseLLM/swiglu-80B | SparseLLM | 2024-02-07T02:18:57Z | 6 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"en",
"arxiv:2402.03804",
"license:llama2",
"endpoints_compatible",
"region:us"
] | null | 2024-01-13T13:08:15Z | ---
language:
- en
library_name: transformers
license: llama2
---
### Background
Sparse computation is increasingly recognized as an important direction in enhancing the computational efficiency of large language models (LLMs).
Prior research has demonstrated that LLMs utilizing the ReLU activation function exhibit sparse activations. Interestingly, our findings indicate that models based on SwiGLU also manifest sparse activations.
This phenomenon prompts an essential question: Which activation function is optimal for sparse LLMs? Although previous work on activation function selection has focused on the performance of LLMs, we argue that the efficiency of sparse computation should also be considered, so that LLMs can perform efficient inference while preserving performance.
To answer this question, we pretrain four LLMs with different activation functions, namely ReLU, SwiGLU, ReGLU, and Squared ReLU, to enable a more comprehensive comparison.
### Dataset
We pretrain the model on 100 billion tokens, including:
* RefinedWeb
* SlimPajama
### Training Hyper-parameters
| Parameter | Value |
|-----------------------|-------------|
| Batch_Size | 4M |
| GPUs | 64xA100(80G)|
| LR_Scheduler | cosine |
| LR | 3e-4 |
### Citation:
Please kindly cite using the following BibTeX:
```bibtex
@article{zhang2024relu2,
title={ReLU$^2$ Wins: Discovering Efficient Activation Functions for Sparse LLMs},
author={Zhengyan Zhang and Yixin Song and Guanghui Yu and Xu Han and Yankai Lin and Chaojun Xiao and Chenyang Song and Zhiyuan Liu and Zeyu Mi and Maosong Sun},
journal = {arXiv preprint arXiv:2402.03804},
year={2024},
}
```
|
SparseLLM/relu2-5B | SparseLLM | 2024-02-07T02:17:02Z | 6 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"llama",
"text-generation",
"en",
"arxiv:2402.03804",
"license:llama2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-01-14T07:15:10Z | ---
language:
- en
library_name: transformers
license: llama2
---
### Background
Sparse computation is increasingly recognized as an important direction in enhancing the computational efficiency of large language models (LLMs).
Prior research has demonstrated that LLMs utilizing the ReLU activation function exhibit sparse activations. Interestingly, our findings indicate that models based on SwiGLU also manifest sparse activations.
This phenomenon prompts an essential question: Which activation function is optimal for sparse LLMs? Although previous work on activation function selection has focused on the performance of LLMs, we argue that the efficiency of sparse computation should also be considered, so that LLMs can perform efficient inference while preserving performance.
To answer this question, we pretrain four LLMs with different activation functions, namely ReLU, SwiGLU, ReGLU, and Squared ReLU, to enable a more comprehensive comparison.
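To make the notion of "sparse activations" concrete, here is a hedged sketch that measures the fraction of near-zero intermediate activations by hooking an FFN activation module. The module path `model.model.layers[0].mlp.act_fn` is an assumption about the checkpoint's layer naming and may need adjusting.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "SparseLLM/relu2-5B"  # this repository
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16)

sparsity = {}

def make_hook(name, threshold=0.0):
    def hook(module, inputs, output):
        # fraction of activation values at or below the threshold, i.e. effectively inactive
        sparsity[name] = (output.abs() <= threshold).float().mean().item()
    return hook

# Assumed module path for the activation inside the first FFN block; adjust to the real layer names.
model.model.layers[0].mlp.act_fn.register_forward_hook(make_hook("layer0.mlp"))

with torch.no_grad():
    batch = tokenizer("Sparse activation example.", return_tensors="pt")
    model(**batch)

print(sparsity)
```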
### Dataset
We pretrain the model on 100 billion tokens, including:
* RefinedWeb
* SlimPajama
### Training Hyper-parameters
| Parameter | Value |
|-----------------------|-------------|
| Batch_Size | 4M |
| GPUs | 64xA100(80G)|
| LR_Scheduler | cosine |
| LR | 3e-4 |
### Citation:
Please kindly cite using the following BibTeX:
```bibtex
@article{zhang2024relu2,
title={ReLU$^2$ Wins: Discovering Efficient Activation Functions for Sparse LLMs},
author={Zhengyan Zhang and Yixin Song and Guanghui Yu and Xu Han and Yankai Lin and Chaojun Xiao and Chenyang Song and Zhiyuan Liu and Zeyu Mi and Maosong Sun},
journal = {arXiv preprint arXiv:2402.03804},
year={2024},
}
```
|
SparseLLM/relu2-25B | SparseLLM | 2024-02-07T02:16:49Z | 6 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"llama",
"text-generation",
"en",
"arxiv:2402.03804",
"license:llama2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-01-14T07:31:21Z | ---
language:
- en
library_name: transformers
license: llama2
---
### Background
Sparse computation is increasingly recognized as an important direction in enhancing the computational efficiency of large language models (LLMs).
Prior research has demonstrated that LLMs utilizing the ReLU activation function exhibit sparse activations. Interestingly, our findings indicate that models based on SwiGLU also manifest sparse activations.
This phenomenon prompts an essential question: Which activation function is optimal for sparse LLMs? Although previous work on activation function selection has focused on the performance of LLMs, we argue that the efficiency of sparse computation should also be considered, so that LLMs can perform efficient inference while preserving performance.
To answer this question, we pretrain four LLMs with different activation functions, namely ReLU, SwiGLU, ReGLU, and Squared ReLU, to enable a more comprehensive comparison.
### Dataset
We pretrain the model on 100 billion tokens, including:
* RefinedWeb
* SlimPajama
### Training Hyper-parameters
| Parameter | Value |
|-----------------------|-------------|
| Batch_Size | 4M |
| GPUs | 64xA100(80G)|
| LR_Scheduler | cosine |
| LR | 3e-4 |
### Citation:
Please kindly cite using the following BibTeX:
```bibtex
@article{zhang2024relu2,
title={ReLU$^2$ Wins: Discovering Efficient Activation Functions for Sparse LLMs},
author={Zhengyan Zhang and Yixin Song and Guanghui Yu and Xu Han and Yankai Lin and Chaojun Xiao and Chenyang Song and Zhiyuan Liu and Zeyu Mi and Maosong Sun},
journal = {arXiv preprint arXiv:2402.03804},
year={2024},
}
```
|
SparseLLM/relu2-20B | SparseLLM | 2024-02-07T02:16:34Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"llama",
"text-generation",
"en",
"arxiv:2402.03804",
"license:llama2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-01-14T07:26:23Z | ---
language:
- en
library_name: transformers
license: llama2
---
### Background
Sparse computation is increasingly recognized as an important direction in enhancing the computational efficiency of large language models (LLMs).
Prior research has demonstrated that LLMs utilizing the ReLU activation function exhibit sparse activations. Interestingly, our findings indicate that models based on SwiGLU also manifest sparse activations.
This phenomenon prompts an essential question: Which activation function is optimal for sparse LLMs? Although previous work on activation function selection has focused on the performance of LLMs, we argue that the efficiency of sparse computation should also be considered, so that LLMs can perform efficient inference while preserving performance.
To answer this question, we pretrain four LLMs with different activation functions, namely ReLU, SwiGLU, ReGLU, and Squared ReLU, to enable a more comprehensive comparison.
### Dataset
We pretrain the model on 100 billion tokens, including:
* RefinedWeb
* SlimPajama
### Training Hyper-parameters
| Parameter | Value |
|-----------------------|-------------|
| Batch_Size | 4M |
| GPUs | 64xA100(80G)|
| LR_Scheduler | cosine |
| LR | 3e-4 |
### Citation:
Please kindly cite using the following BibTeX:
```bibtex
@article{zhang2024relu2,
title={ReLU$^2$ Wins: Discovering Efficient Activation Functions for Sparse LLMs},
author={Zhengyan Zhang and Yixin Song and Guanghui Yu and Xu Han and Yankai Lin and Chaojun Xiao and Chenyang Song and Zhiyuan Liu and Zeyu Mi and Maosong Sun},
journal = {arXiv preprint arXiv:2402.03804},
year={2024},
}
```
|
SparseLLM/relu2-45B | SparseLLM | 2024-02-07T02:15:25Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"llama",
"text-generation",
"en",
"arxiv:2402.03804",
"license:llama2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-01-14T07:41:09Z | ---
language:
- en
library_name: transformers
license: llama2
---
### Background
Sparse computation is increasingly recognized as an important direction in enhancing the computational efficiency of large language models (LLMs).
Prior research has demonstrated that LLMs utilizing the ReLU activation function exhibit sparse activations. Interestingly, our findings indicate that models based on SwiGLU also manifest sparse activations.
This phenomenon prompts an essential question: Which activation function is optimal for sparse LLMs? Although previous work on activation function selection has focused on the performance of LLMs, we argue that the efficiency of sparse computation should also be considered, so that LLMs can perform efficient inference while preserving performance.
To answer this question, we pretrain four LLMs with different activation functions, namely ReLU, SwiGLU, ReGLU, and Squared ReLU, to enable a more comprehensive comparison.
### Dataset
We pretrain the model on 100 billion tokens, including:
* RefinedWeb
* SlimPajama
### Training Hyper-parameters
| Parameter | Value |
|-----------------------|-------------|
| Batch_Size | 4M |
| GPUs | 64xA100(80G)|
| LR_Scheduler | cosine |
| LR | 3e-4 |
### Citation:
Please kindly cite using the following BibTeX:
```bibtex
@article{zhang2024relu2,
title={ReLU$^2$ Wins: Discovering Efficient Activation Functions for Sparse LLMs},
author={Zhengyan Zhang and Yixin Song and Guanghui Yu and Xu Han and Yankai Lin and Chaojun Xiao and Chenyang Song and Zhiyuan Liu and Zeyu Mi and Maosong Sun},
journal = {arXiv preprint arXiv:2402.03804},
year={2024},
}
```
|
hxgrace/model_6_20 | hxgrace | 2024-02-07T02:15:16Z | 1 | 0 | diffusers | [
"diffusers",
"tensorboard",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"controlnet",
"base_model:stabilityai/stable-diffusion-2-1-base",
"base_model:adapter:stabilityai/stable-diffusion-2-1-base",
"license:creativeml-openrail-m",
"region:us"
] | text-to-image | 2024-01-11T02:58:10Z |
---
license: creativeml-openrail-m
base_model: stabilityai/stable-diffusion-2-1-base
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- controlnet
inference: true
---
# controlnet-hxgrace/model_6_20
These are ControlNet weights trained on stabilityai/stable-diffusion-2-1-base with a new type of conditioning, based on the dataset found at [hxgrace/augmentedSketches](https://huggingface.co/datasets/hxgrace/augmentedSketches). The model was trained with a batch size of 6 over 20 epochs.
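A hedged usage sketch with `diffusers`, assuming these weights load as a standard `ControlNetModel` on top of Stable Diffusion 2.1 base; the conditioning image path and prompt are placeholders.

```python
import torch
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline
from diffusers.utils import load_image

controlnet = ControlNetModel.from_pretrained("hxgrace/model_6_20", torch_dtype=torch.float16)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1-base",
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")

# Placeholder conditioning image; replace with an input matching the training conditioning.
condition = load_image("sketch.png")
image = pipe("a simple illustration", image=condition, num_inference_steps=30).images[0]
image.save("output.png")
```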
|
hxgrace/model_2_20 | hxgrace | 2024-02-07T02:14:27Z | 1 | 0 | diffusers | [
"diffusers",
"tensorboard",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"controlnet",
"base_model:stabilityai/stable-diffusion-2-1-base",
"base_model:adapter:stabilityai/stable-diffusion-2-1-base",
"license:creativeml-openrail-m",
"region:us"
] | text-to-image | 2024-01-10T17:08:17Z |
---
license: creativeml-openrail-m
base_model: stabilityai/stable-diffusion-2-1-base
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- controlnet
inference: true
---
# controlnet-hxgrace/model20
These are ControlNet weights trained on stabilityai/stable-diffusion-2-1-base with a new type of conditioning, based on the dataset found at [hxgrace/augmentedSketches](https://huggingface.co/datasets/hxgrace/augmentedSketches). The model was trained with a batch size of 2 over 20 epochs.
|
SparseLLM/relu2-60B | SparseLLM | 2024-02-07T02:12:34Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"llama",
"text-generation",
"en",
"arxiv:2402.03804",
"license:llama2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-01-14T07:53:42Z | ---
language:
- en
library_name: transformers
license: llama2
---
### Background
Sparse computation is increasingly recognized as an important direction in enhancing the computational efficiency of large language models (LLMs).
Prior research has demonstrated that LLMs utilizing the ReLU activation function exhibit sparse activations. Interestingly, our findings indicate that models based on SwiGLU also manifest sparse activations.
This phenomenon prompts an essential question: Which activation function is optimal for sparse LLMs? Although previous work on activation function selection has focused on the performance of LLMs, we argue that the efficiency of sparse computation should also be considered, so that LLMs can perform efficient inference while preserving performance.
To answer this question, we pretrain four LLMs with different activation functions, namely ReLU, SwiGLU, ReGLU, and Squared ReLU, to enable a more comprehensive comparison.
### Dataset
We pretrain the model on 100 billion tokens, including:
* RefinedWeb
* SlimPajama
### Training Hyper-parameters
| Parameter | Value |
|-----------------------|-------------|
| Batch_Size | 4M |
| GPUs | 64xA100(80G)|
| LR_Scheduler | cosine |
| LR | 3e-4 |
### Citation:
Please kindly cite using the following BibTeX:
```bibtex
@article{zhang2024relu2,
title={ReLU$^2$ Wins: Discovering Efficient Activation Functions for Sparse LLMs},
author={Zhengyan Zhang and Yixin Song and Guanghui Yu and Xu Han and Yankai Lin and Chaojun Xiao and Chenyang Song and Zhiyuan Liu and Zeyu Mi and Maosong Sun},
journal = {arXiv preprint arXiv:2402.03804},
year={2024},
}
```
|
ShinojiResearch/Senku-70B | ShinojiResearch | 2024-02-07T02:11:12Z | 3 | 10 | peft | [
"peft",
"llama",
"generated_from_trainer",
"base_model:152334H/miqu-1-70b-sf",
"base_model:adapter:152334H/miqu-1-70b-sf",
"license:cc0-1.0",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2024-02-06T13:02:23Z | ---
library_name: peft
tags:
- generated_from_trainer
base_model: 152334H/miqu-1-70b-sf
model-index:
- name: qlora-out
results: []
license: cc0-1.0
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.0`
```yaml
base_model: 152334H/miqu-1-70b-sf
model_type: MistralForCausalLM
tokenizer_type: LlamaTokenizer
is_mistral_derived_model: true
load_in_8bit: false
load_in_4bit: true
strict: false
datasets:
- path: Open-Orca/SlimOrca
type: sharegpt
conversation: chatml
dataset_prepared_path: last_run_prepared
val_set_size: 0.1
output_dir: ./qlora-out
adapter: qlora
lora_model_dir:
sequence_len: 8192
sample_packing: true
pad_to_sequence_len: true
lora_r: 32
lora_alpha: 16
lora_dropout: 0.05
lora_target_linear: true
lora_fan_in_fan_out:
lora_target_modules:
- gate_proj
- down_proj
- up_proj
- q_proj
- v_proj
- k_proj
- o_proj
wandb_project:
wandb_entity:
wandb_watch:
wandb_name:
wandb_log_model:
gradient_accumulation_steps: 4
micro_batch_size: 2
num_epochs: 1
optimizer: adamw_bnb_8bit
lr_scheduler: cosine
learning_rate: 0.0002
train_on_inputs: false
group_by_length: false
bf16: auto
fp16:
tf32: false
gradient_checkpointing: true
early_stopping_patience:
resume_from_checkpoint:
local_rank:
logging_steps: 1
xformers_attention:
flash_attention: true
loss_watchdog_threshold: 5.0
loss_watchdog_patience: 3
warmup_steps: 10
evals_per_epoch: 4
eval_table_size:
eval_table_max_new_tokens: 128
saves_per_epoch: 1
debug:
deepspeed:
weight_decay: 0.0
fsdp:
fsdp_config:
special_tokens:
bos_token: "<s>"
eos_token: "</s>"
unk_token: "<unk>"
```
</details><br>
# qlora-out
This model is a fine-tuned version of [152334H/miqu-1-70b-sf](https://huggingface.co/152334H/miqu-1-70b-sf) on the SlimOrca dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3110
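A hedged sketch of attaching this adapter to the base model with `peft`; the 4-bit settings mirror the QLoRA configuration above, but the exact quantization and generation parameters are assumptions.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

base_id = "152334H/miqu-1-70b-sf"
adapter_id = "ShinojiResearch/Senku-70B"  # this repository

bnb = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.bfloat16)
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, quantization_config=bnb, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)  # attach the LoRA weights

prompt = "Give a one-sentence summary of the SlimOrca dataset."
inputs = tokenizer(prompt, return_tensors="pt").to(base.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0], skip_special_tokens=True))
```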
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.9043 | 0.0 | 1 | 0.6387 |
| 0.5612 | 0.25 | 881 | 0.3279 |
| 0.6044 | 0.5 | 1762 | 0.3177 |
| 0.6592 | 0.75 | 2643 | 0.3110 |
### Framework versions
- PEFT 0.8.2
- Transformers 4.38.0.dev0
- Pytorch 2.1.2+cu118
- Datasets 2.16.1
- Tokenizers 0.15.0 |
SparseLLM/relu2-80B | SparseLLM | 2024-02-07T02:10:58Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"llama",
"text-generation",
"en",
"arxiv:2402.03804",
"license:llama2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-01-14T08:08:09Z | ---
language:
- en
library_name: transformers
license: llama2
---
### Background
Sparse computation is increasingly recognized as an important direction in enhancing the computational efficiency of large language models (LLMs).
Prior research has demonstrated that LLMs utilizing the ReLU activation function exhibit sparse activations. Interestingly, our findings indicate that models based on SwiGLU also manifest sparse activations.
This phenomenon prompts an essential question: Which activation function is optimal for sparse LLMs? Although previous work on activation function selection has focused on the performance of LLMs, we argue that the efficiency of sparse computation should also be considered, so that LLMs can perform efficient inference while preserving performance.
To answer this question, we pretrain four LLMs with different activation functions, namely ReLU, SwiGLU, ReGLU, and Squared ReLU, to enable a more comprehensive comparison.
### Dataset
We pretrain the model on 100 billion tokens, including:
* RefinedWeb
* SlimPajama
### Training Hyper-parameters
| Parameter | Value |
|-----------------------|-------------|
| Batch_Size | 4M |
| GPUs | 64xA100(80G)|
| LR_Scheduler | cosine |
| LR | 3e-4 |
### Citation:
Please kindly cite using the following BibTeX:
```bibtex
@article{zhang2024relu2,
title={ReLU$^2$ Wins: Discovering Efficient Activation Functions for Sparse LLMs},
author={Zhengyan Zhang and Yixin Song and Guanghui Yu and Xu Han and Yankai Lin and Chaojun Xiao and Chenyang Song and Zhiyuan Liu and Zeyu Mi and Maosong Sun},
journal = {arXiv preprint arXiv:2402.03804},
year={2024},
}
```
|
SparseLLM/relu2-85B | SparseLLM | 2024-02-07T02:10:42Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"llama",
"text-generation",
"en",
"arxiv:2402.03804",
"license:llama2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-01-14T08:11:02Z | ---
language:
- en
library_name: transformers
license: llama2
---
### Background
Sparse computation is increasingly recognized as an important direction in enhancing the computational efficiency of large language models (LLMs).
Prior research has demonstrated that LLMs utilizing the ReLU activation function exhibit sparse activations. Interestingly, our findings indicate that models based on SwiGLU also manifest sparse activations.
This phenomenon prompts an essential question: Which activation function is optimal for sparse LLMs? Although previous work on activation function selection has focused on the performance of LLMs, we argue that the efficiency of sparse computation should also be considered, so that LLMs can perform efficient inference while preserving performance.
To answer this question, we pretrain four LLMs with different activation functions, namely ReLU, SwiGLU, ReGLU, and Squared ReLU, to enable a more comprehensive comparison.
### Dataset
We pretrain the model on 100 billion tokens, including:
* RefinedWeb
* SlimPajama
### Training Hyper-parameters
| Parameter | Value |
|-----------------------|-------------|
| Batch_Size | 4M |
| GPUs | 64xA100(80G)|
| LR_Scheduler | cosine |
| LR | 3e-4 |
### Citation:
Please kindly cite using the following BibTeX:
```bibtex
@article{zhang2024relu2,
title={ReLU$^2$ Wins: Discovering Efficient Activation Functions for Sparse LLMs},
author={Zhengyan Zhang and Yixin Song and Guanghui Yu and Xu Han and Yankai Lin and Chaojun Xiao and Chenyang Song and Zhiyuan Liu and Zeyu Mi and Maosong Sun},
journal = {arXiv preprint arXiv:2402.03804},
year={2024},
}
```
|
SparseLLM/relu2-90B | SparseLLM | 2024-02-07T02:10:28Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"llama",
"text-generation",
"en",
"arxiv:2402.03804",
"license:llama2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-01-14T08:16:12Z | ---
language:
- en
library_name: transformers
license: llama2
---
### Background
Sparse computation is increasingly recognized as an important direction in enhancing the computational efficiency of large language models (LLMs).
Prior research has demonstrated that LLMs utilizing the ReLU activation function exhibit sparse activations. Interestingly, our findings indicate that models based on SwiGLU also manifest sparse activations.
This phenomenon prompts an essential question: Which activation function is optimal for sparse LLMs? Although previous work on activation function selection has focused on the performance of LLMs, we argue that the efficiency of sparse computation should also be considered, so that LLMs can perform efficient inference while preserving performance.
To answer this question, we pretrain four LLMs with different activation functions, namely ReLU, SwiGLU, ReGLU, and Squared ReLU, to enable a more comprehensive comparison.
### Dataset
We pretrain the model on 100 billion tokens, including:
* RefinedWeb
* SlimPajama
### Training Hyper-parameters
| Parameter | Value |
|-----------------------|-------------|
| Batch_Size | 4M |
| GPUs | 64xA100(80G)|
| LR_Scheduler | cosine |
| LR | 3e-4 |
### Citation:
Please kindly cite using the following BibTeX:
```bibtex
@article{zhang2024relu2,
title={ReLU$^2$ Wins: Discovering Efficient Activation Functions for Sparse LLMs},
author={Zhengyan Zhang and Yixin Song and Guanghui Yu and Xu Han and Yankai Lin and Chaojun Xiao and Chenyang Song and Zhiyuan Liu and Zeyu Mi and Maosong Sun},
journal = {arXiv preprint arXiv:2402.03804},
year={2024},
}
```
|
SparseLLM/relu2-95B | SparseLLM | 2024-02-07T02:10:15Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"llama",
"text-generation",
"en",
"arxiv:2402.03804",
"license:llama2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-01-14T08:18:54Z | ---
language:
- en
library_name: transformers
license: llama2
---
### Background
Sparse computation is increasingly recognized as an important direction in enhancing the computational efficiency of large language models (LLMs).
Prior research has demonstrated that LLMs utilizing the ReLU activation function exhibit sparse activations. Interestingly, our findings indicate that models based on SwiGLU also manifest sparse activations.
This phenomenon prompts an essential question: Which activation function is optimal for sparse LLMs? Although previous work on activation function selection has focused on the performance of LLMs, we argue that the efficiency of sparse computation should also be considered, so that LLMs can perform efficient inference while preserving performance.
To answer this question, we pretrain four LLMs with different activation functions, namely ReLU, SwiGLU, ReGLU, and Squared ReLU, to enable a more comprehensive comparison.
### Dataset
We pretrain the model on 100 billion tokens, including:
* RefinedWeb
* SlimPajama
### Training Hyper-parameters
| Parameter | Value |
|-----------------------|-------------|
| Batch_Size | 4M |
| GPUs | 64xA100(80G)|
| LR_Scheduler | cosine |
| LR | 3e-4 |
### Citation:
Please kindly cite using the following BibTeX:
```bibtex
@article{zhang2024relu2,
title={ReLU$^2$ Wins: Discovering Efficient Activation Functions for Sparse LLMs},
author={Zhengyan Zhang and Yixin Song and Guanghui Yu and Xu Han and Yankai Lin and Chaojun Xiao and Chenyang Song and Zhiyuan Liu and Zeyu Mi and Maosong Sun},
journal = {arXiv preprint arXiv:2402.03804},
year={2024},
}
```
|
SparseLLM/relu2-100B | SparseLLM | 2024-02-07T02:10:01Z | 6 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"llama",
"text-generation",
"en",
"arxiv:2402.03804",
"license:llama2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-01-14T08:21:57Z | ---
language:
- en
library_name: transformers
license: llama2
---
### Background
Sparse computation is increasingly recognized as an important direction in enhancing the computational efficiency of large language models (LLMs).
Prior research has demonstrated that LLMs utilizing the ReLU activation function exhibit sparse activations. Interestingly, our findings indicate that models based on SwiGLU also manifest sparse activations.
This phenomenon prompts an essential question: Which activation function is optimal for sparse LLMs? Although previous work on activation function selection has focused on the performance of LLMs, we argue that the efficiency of sparse computation should also be considered, so that LLMs can perform efficient inference while preserving performance.
To answer this question, we pretrain four LLMs with different activation functions, namely ReLU, SwiGLU, ReGLU, and Squared ReLU, to enable a more comprehensive comparison.
### Dataset
We pretrain the model on 100 billion tokens, including:
* RefinedWeb
* SlimPajama
### Training Hyper-parameters
| Parameter | Value |
|-----------------------|-------------|
| Batch_Size | 4M |
| GPUs | 64xA100(80G)|
| LR_Scheduler | cosine |
| LR | 3e-4 |
### Citation:
Please kindly cite using the following BibTeX:
```bibtex
@article{zhang2024relu2,
title={ReLU$^2$ Wins: Discovering Efficient Activation Functions for Sparse LLMs},
author={Zhengyan Zhang and Yixin Song and Guanghui Yu and Xu Han and Yankai Lin and Chaojun Xiao and Chenyang Song and Zhiyuan Liu and Zeyu Mi and Maosong Sun},
journal = {arXiv preprint arXiv:2402.03804},
year={2024},
}
```
|
SparseLLM/reglu-100B | SparseLLM | 2024-02-07T02:09:44Z | 10 | 1 | transformers | [
"transformers",
"pytorch",
"safetensors",
"llama",
"text-generation",
"en",
"arxiv:2402.03804",
"license:llama2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-01-14T08:27:19Z | ---
language:
- en
library_name: transformers
license: llama2
---
### Background
Sparse computation is increasingly recognized as an important direction in enhancing the computational efficiency of large language models (LLMs).
Prior research has demonstrated that LLMs utilizing the ReLU activation function exhibit sparse activations. Interestingly, our findings indicate that models based on SwiGLU also manifest sparse activations.
This phenomenon prompts an essential question: Which activation function is optimal for sparse LLMs? Although previous work on activation function selection has focused on the performance of LLMs, we argue that the efficiency of sparse computation should also be considered, so that LLMs can perform efficient inference while preserving performance.
To answer this question, we pretrain four LLMs with different activation functions, namely ReLU, SwiGLU, ReGLU, and Squared ReLU, to enable a more comprehensive comparison.
### Dataset
We pretrain the model on 100 billion tokens, including:
* RefinedWeb
* SlimPajama
### Training Hyper-parameters
| Parameter | Value |
|-----------------------|-------------|
| Batch_Size | 4M |
| GPUs | 64xA100(80G)|
| LR_Scheduler | cosine |
| LR | 3e-4 |
### Citation:
Please kindly cite using the following BibTeX:
```bibtex
@article{zhang2024relu2,
title={ReLU$^2$ Wins: Discovering Efficient Activation Functions for Sparse LLMs},
author={Zhengyan Zhang and Yixin Song and Guanghui Yu and Xu Han and Yankai Lin and Chaojun Xiao and Chenyang Song and Zhiyuan Liu and Zeyu Mi and Maosong Sun},
journal = {arXiv preprint arXiv:2402.03804},
year={2024},
}
```
|
tsunemoto/Senku-70B-Full-GGUF | tsunemoto | 2024-02-07T02:09:38Z | 17 | 5 | null | [
"gguf",
"GGUF",
"en",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-02-07T01:19:40Z | ---
title: "Senku-70B-Full Quantized in GGUF"
tags:
- GGUF
language: en
---

# Tsunemoto GGUF's of Senku-70B-Full
This is a GGUF quantization of Senku-70B-Full.
[Q8 is available here](https://huggingface.co/ShinojiResearch/Senku-70B-Q8)
## Original Repo Link:
[Original Repository](https://huggingface.co/ShinojiResearch/Senku-70B-Full)
## Original Model Card:
---
Finetune of miqu-1-70b-sf, a dequantization of miqudev's leak of Mistral-70B (allegedly an early Mistral Medium). My diffs are available under CC-0; this is a merge with the leaked model, so you can use the other repository to save bandwidth.
EQ-Bench: 84.89
Will run more benches later.
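As a usage sketch not taken from the original card: the GGUF files in this repository can be run with `llama-cpp-python`. The quantization filename below is a placeholder and should be replaced with one of the files actually present in the repo.

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Placeholder filename; pick an actual GGUF file listed in the repository.
path = hf_hub_download("tsunemoto/Senku-70B-Full-GGUF", "senku-70b-full.Q4_K_M.gguf")
llm = Llama(model_path=path, n_ctx=4096)

out = llm("Q: What does EQ-Bench measure?\nA:", max_tokens=64)
print(out["choices"][0]["text"])
```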
|
SparseLLM/training-log | SparseLLM | 2024-02-07T02:08:59Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"en",
"arxiv:2402.03804",
"license:llama2",
"endpoints_compatible",
"region:us"
] | null | 2024-01-14T08:37:40Z | ---
language:
- en
library_name: transformers
license: llama2
---
### Background
Sparse computation is increasingly recognized as an important direction in enhancing the computational efficiency of large language models (LLMs).
Prior research has demonstrated that LLMs utilizing the ReLU activation function exhibit sparse activations. Interestingly, our findings indicate that models based on SwiGLU also manifest sparse activations.
This phenomenon prompts an essential question: Which activation function is optimal for sparse LLMs? Although previous work on activation function selection has focused on the performance of LLMs, we argue that the efficiency of sparse computation should also be considered, so that LLMs can perform efficient inference while preserving performance.
To answer this question, we pretrain four LLMs with different activation functions, namely ReLU, SwiGLU, ReGLU, and Squared ReLU, to enable a more comprehensive comparison.
### Dataset
We pretrain the model on 100 billion tokens, including:
* RefinedWeb
* SlimPajama
### Training Hyper-parameters
| Parameter | Value |
|-----------------------|-------------|
| Batch_Size | 4M |
| GPUs | 64xA100(80G)|
| LR_Scheduler | cosine |
| LR | 3e-4 |
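Since this repository is tagged `tensorboard`, a hedged way to inspect the training curves locally is to download the repo and point TensorBoard at it; the exact layout of event files inside the repo is an assumption.

```python
from huggingface_hub import snapshot_download

# Download the logs (assumed to contain TensorBoard event files).
local_dir = snapshot_download("SparseLLM/training-log")
print(local_dir)

# Then, from a shell:
#   tensorboard --logdir <local_dir>
```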
### Citation:
Please kindly cite using the following BibTeX:
```bibtex
@article{zhang2024relu2,
title={ReLU$^2$ Wins: Discovering Efficient Activation Functions for Sparse LLMs},
author={Zhengyan Zhang and Yixin Song and Guanghui Yu and Xu Han and Yankai Lin and Chaojun Xiao and Chenyang Song and Zhiyuan Liu and Zeyu Mi and Maosong Sun},
journal = {arXiv preprint arXiv:2402.03804},
year={2024},
}
```
|
SparseLLM/relu-5B | SparseLLM | 2024-02-07T02:08:42Z | 6 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"llama",
"text-generation",
"en",
"arxiv:2402.03804",
"license:llama2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-01-15T01:25:05Z | ---
language:
- en
library_name: transformers
license: llama2
---
### Background
Sparse computation is increasingly recognized as an important direction in enhancing the computational efficiency of large language models (LLMs).
Prior research has demonstrated that LLMs utilizing the ReLU activation function exhibit sparse activations. Interestingly, our findings indicate that models based on SwiGLU also manifest sparse activations.
This phenomenon prompts an essential question: Which activation function is optimal for sparse LLMs? Although previous work on activation function selection has focused on the performance of LLMs, we argue that the efficiency of sparse computation should also be considered, so that LLMs can perform efficient inference while preserving performance.
To answer this question, we pretrain four LLMs with different activation functions, namely ReLU, SwiGLU, ReGLU, and Squared ReLU, to enable a more comprehensive comparison.
### Dataset
We pretrain the model on 100 billion tokens, including:
* RefinedWeb
* SlimPajama
### Training Hyper-parameters
| Parameter | Value |
|-----------------------|-------------|
| Batch_Size | 4M |
| GPUs | 64xA100(80G)|
| LR_Scheduler | cosine |
| LR | 3e-4 |
### Citation:
Please kindly cite using the following BibTeX:
```bibtex
@article{zhang2024relu2,
title={ReLU$^2$ Wins: Discovering Efficient Activation Functions for Sparse LLMs},
author={Zhengyan Zhang and Yixin Song and Guanghui Yu and Xu Han and Yankai Lin and Chaojun Xiao and Chenyang Song and Zhiyuan Liu and Zeyu Mi and Maosong Sun},
journal = {arXiv preprint arXiv:2402.03804},
year={2024},
}
```
|
SparseLLM/relu-10B | SparseLLM | 2024-02-07T02:08:27Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"llama",
"text-generation",
"en",
"arxiv:2402.03804",
"license:llama2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-01-15T01:53:06Z | ---
language:
- en
library_name: transformers
license: llama2
---
### Background
Sparse computation is increasingly recognized as an important direction in enhancing the computational efficiency of large language models (LLMs).
Prior research has demonstrated that LLMs utilizing the ReLU activation function exhibit sparse activations. Interestingly, our findings indicate that models based on SwiGLU also manifest sparse activations.
This phenomenon prompts an essential question: Which activation function is optimal for sparse LLMs? Although previous work on activation function selection has focused on the performance of LLMs, we argue that the efficiency of sparse computation should also be considered, so that LLMs can perform efficient inference while preserving performance.
To answer this question, we pretrain four LLMs with different activation functions, namely ReLU, SwiGLU, ReGLU, and Squared ReLU, to enable a more comprehensive comparison.
### Dataset
We pretrain the model on 100 billion tokens, including:
* RefinedWeb
* SlimPajama
### Training Hyper-parameters
| Parameter | Value |
|-----------------------|-------------|
| Batch_Size | 4M |
| GPUs | 64xA100(80G)|
| LR_Scheduler | cosine |
| LR | 3e-4 |
### Citation:
Please kindly cite using the following BibTeX:
```bibtex
@article{zhang2024relu2,
title={ReLU$^2$ Wins: Discovering Efficient Activation Functions for Sparse LLMs},
author={Zhengyan Zhang and Yixin Song and Guanghui Yu and Xu Han and Yankai Lin and Chaojun Xiao and Chenyang Song and Zhiyuan Liu and Zeyu Mi and Maosong Sun},
journal = {arXiv preprint arXiv:2402.03804},
year={2024},
}
```
|
SparseLLM/relu-20B | SparseLLM | 2024-02-07T02:08:12Z | 6 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"llama",
"text-generation",
"en",
"arxiv:2402.03804",
"license:llama2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-01-15T02:13:59Z | ---
language:
- en
library_name: transformers
license: llama2
---
### Background
Sparse computation is increasingly recognized as an important direction in enhancing the computational efficiency of large language models (LLMs).
Prior research has demonstrated that LLMs utilizing the ReLU activation function exhibit sparse activations. Interestingly, our findings indicate that models based on SwiGLU also manifest sparse activations.
This phenomenon prompts an essential question: Which activation function is optimal for sparse LLMs? Although previous work on activation function selection has focused on the performance of LLMs, we argue that the efficiency of sparse computation should also be considered, so that LLMs can perform efficient inference while preserving performance.
To answer this question, we pretrain four LLMs with different activation functions, namely ReLU, SwiGLU, ReGLU, and Squared ReLU, to enable a more comprehensive comparison.
### Dataset
We pretrain the model on 100 billion tokens, including:
* RefinedWeb
* SlimPajama
### Training Hyper-parameters
| Parameter | Value |
|-----------------------|-------------|
| Batch_Size | 4M |
| GPUs | 64xA100(80G)|
| LR_Scheduler | cosine |
| LR | 3e-4 |
### Citation:
Please kindly cite using the following BibTeX:
```bibtex
@article{zhang2024relu2,
title={ReLU$^2$ Wins: Discovering Efficient Activation Functions for Sparse LLMs},
author={Zhengyan Zhang and Yixin Song and Guanghui Yu and Xu Han and Yankai Lin and Chaojun Xiao and Chenyang Song and Zhiyuan Liu and Zeyu Mi and Maosong Sun},
journal = {arXiv preprint arXiv:2402.03804},
year={2024},
}
```
|
SparseLLM/relu-30B | SparseLLM | 2024-02-07T02:06:47Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"llama",
"text-generation",
"en",
"arxiv:2402.03804",
"license:llama2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-01-15T02:30:21Z | ---
language:
- en
library_name: transformers
license: llama2
---
### Background
Sparse computation is increasingly recognized as an important direction in enhancing the computational efficiency of large language models (LLMs).
Prior research has demonstrated that LLMs utilizing the ReLU activation function exhibit sparse activations. Interestingly, our findings indicate that models based on SwiGLU also manifest sparse activations.
This phenomenon prompts an essential question: Which activation function is optimal for sparse LLMs? Although previous work on activation function selection has focused on the performance of LLMs, we argue that the efficiency of sparse computation should also be considered, so that LLMs can perform efficient inference while preserving performance.
To answer this question, we pretrain four LLMs with different activation functions, namely ReLU, SwiGLU, ReGLU, and Squared ReLU, to enable a more comprehensive comparison.
### Dataset
We pretrain the model on 100 billion tokens, including:
* RefinedWeb
* SlimPajama
### Training Hyper-parameters
| Parameter | Value |
|-----------------------|-------------|
| Batch_Size | 4M |
| GPUs | 64xA100(80G)|
| LR_Scheduler | cosine |
| LR | 3e-4 |
### Citation:
Please kindly cite using the following BibTeX:
```bibtex
@article{zhang2024relu2,
title={ReLU$^2$ Wins: Discovering Efficient Activation Functions for Sparse LLMs},
author={Zhengyan Zhang and Yixin Song and Guanghui Yu and Xu Han and Yankai Lin and Chaojun Xiao and Chenyang Song and Zhiyuan Liu and Zeyu Mi and Maosong Sun},
journal = {arXiv preprint arXiv:2402.03804},
year={2024},
}
```
|
SparseLLM/relu-40B | SparseLLM | 2024-02-07T02:06:07Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"llama",
"text-generation",
"en",
"arxiv:2402.03804",
"license:llama2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-01-15T02:42:49Z | ---
language:
- en
library_name: transformers
license: llama2
---
### Background
Sparse computation is increasingly recognized as an important direction in enhancing the computational efficiency of large language models (LLMs).
Prior research has demonstrated that LLMs utilizing the ReLU activation function exhibit sparse activations. Interestingly, our findings indicate that models based on SwiGLU also manifest sparse activations.
This phenomenon prompts an essential question: Which activation function is optimal for sparse LLMs? Although previous work on activation function selection has focused on the performance of LLMs, we argue that the efficiency of sparse computation should also be considered, so that LLMs can perform efficient inference while preserving performance.
To answer this question, we pretrain four LLMs with different activation functions, namely ReLU, SwiGLU, ReGLU, and Squared ReLU, to enable a more comprehensive comparison.
### Dataset
We pretrain the model on 100 billion tokens, including:
* RefinedWeb
* SlimPajama
### Training Hyper-parameters
| Parameter | Value |
|-----------------------|-------------|
| Batch_Size | 4M |
| GPUs | 64xA100(80G)|
| LR_Scheduler | cosine |
| LR | 3e-4 |
### Citation:
Please kindly cite using the following BibTeX:
```bibtex
@article{zhang2024relu2,
title={ReLU$^2$ Wins: Discovering Efficient Activation Functions for Sparse LLMs},
author={Zhengyan Zhang and Yixin Song and Guanghui Yu and Xu Han and Yankai Lin and Chaojun Xiao and Chenyang Song and Zhiyuan Liu and Zeyu Mi and Maosong Sun},
journal = {arXiv preprint arXiv:2402.03804},
year={2024},
}
```
|
SparseLLM/relu-50B | SparseLLM | 2024-02-07T02:05:39Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"llama",
"text-generation",
"en",
"arxiv:2402.03804",
"license:llama2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-01-15T02:52:53Z | ---
language:
- en
library_name: transformers
license: llama2
---
### Background
Sparse computation is increasingly recognized as an important direction in enhancing the computational efficiency of large language models (LLMs).
Prior research has demonstrated that LLMs utilizing the ReLU activation function exhibit sparse activations. Interestingly, our findings indicate that models based on SwiGLU also manifest sparse activations.
This phenomenon prompts an essential question: Which activation function is optimal for sparse LLMs? Although previous work on activation function selection has focused on the performance of LLMs, we argue that the efficiency of sparse computation should also be considered, so that LLMs can perform efficient inference while preserving performance.
To answer this question, we pretrain four LLMs with different activation functions, namely ReLU, SwiGLU, ReGLU, and Squared ReLU, to enable a more comprehensive comparison.
### Dataset
We pretrain the model on 100 billion tokens, including:
* RefinedWeb
* SlimPajama
### Training Hyper-parameters
| Parameter | Value |
|-----------------------|-------------|
| Batch_Size | 4M |
| GPUs | 64xA100(80G)|
| LR_Scheduler | cosine |
| LR | 3e-4 |
### Citation:
Please kindly cite using the following BibTeX:
```bibtex
@article{zhang2024relu2,
title={ReLU$^2$ Wins: Discovering Efficient Activation Functions for Sparse LLMs},
author={Zhengyan Zhang and Yixin Song and Guanghui Yu and Xu Han and Yankai Lin and Chaojun Xiao and Chenyang Song and Zhiyuan Liu and Zeyu Mi and Maosong Sun},
journal = {arXiv preprint arXiv:2402.03804},
year={2024},
}
```
|
SparseLLM/relu-60B | SparseLLM | 2024-02-07T02:05:13Z | 6 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"llama",
"text-generation",
"en",
"arxiv:2402.03804",
"license:llama2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-01-15T03:12:08Z | ---
language:
- en
library_name: transformers
license: llama2
---
### Background
Sparse computation is increasingly recognized as an important direction in enhancing the computational efficiency of large language models (LLMs).
Prior research has demonstrated that LLMs utilizing the ReLU activation function exhibit sparse activations. Interestingly, our findings indicate that models based on SwiGLU also manifest sparse activations.
This phenomenon prompts an essential question: Which activation function is optimal for sparse LLMs? Although previous work on activation function selection has focused on the performance of LLMs, we argue that the efficiency of sparse computation should also be considered, so that LLMs can perform efficient inference while preserving performance.
To answer this question, we pretrain four LLMs with different activation functions, namely ReLU, SwiGLU, ReGLU, and Squared ReLU, to enable a more comprehensive comparison.
### Dataset
We pretrain the model on 100 billion tokens, including:
* RefinedWeb
* SlimPajama
### Training Hyper-parameters
| Parameter | Value |
|-----------------------|-------------|
| Batch_Size | 4M |
| GPUs | 64xA100(80G)|
| LR_Scheduler | cosine |
| LR | 3e-4 |
### Citation:
Please kindly cite using the following BibTeX:
```bibtex
@article{zhang2024relu2,
title={ReLU$^2$ Wins: Discovering Efficient Activation Functions for Sparse LLMs},
author={Zhengyan Zhang and Yixin Song and Guanghui Yu and Xu Han and Yankai Lin and Chaojun Xiao and Chenyang Song and Zhiyuan Liu and Zeyu Mi and Maosong Sun},
journal = {arXiv preprint arXiv:2402.03804},
year={2024},
}
```
|