| modelId (string, 5–139 chars) | author (string, 2–42 chars) | last_modified (timestamp[us, tz=UTC], 2020-02-15 11:33:14 – 2025-06-23 12:29:03) | downloads (int64, 0–223M) | likes (int64, 0–11.7k) | library_name (string, 492 classes) | tags (sequence, 1–4.05k items) | pipeline_tag (string, 54 classes) | createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 – 2025-06-23 12:24:08) | card (string, 11–1.01M chars) |
---|---|---|---|---|---|---|---|---|---|
azharsultan/Meta-Llama-3-8B-orpo | azharsultan | 2024-05-21T00:52:36Z | 9 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"dataset:mlabonne/orpo-dpo-mix-40k",
"arxiv:1910.09700",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-20T09:49:47Z | ---
library_name: transformers
license: apache-2.0
datasets:
- mlabonne/orpo-dpo-mix-40k
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** Azhar Sultan
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** Llama-3-8B
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
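In the meantime, a minimal sketch using the standard 🤗 Transformers text-generation pipeline (the repo id comes from this listing; generation settings are illustrative):
```python
from transformers import pipeline

# Hypothetical quick-start: the model is tagged "text-generation",
# so the generic pipeline should apply.
generator = pipeline(
    "text-generation",
    model="azharsultan/Meta-Llama-3-8B-orpo",
    device_map="auto",
)
print(generator("Explain ORPO fine-tuning in one sentence.", max_new_tokens=64)[0]["generated_text"])
```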
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
abdiharyadi/opus-mt-ft-4 | abdiharyadi | 2024-05-21T00:51:56Z | 110 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"marian",
"text2text-generation",
"generated_from_trainer",
"base_model:Helsinki-NLP/opus-mt-en-id",
"base_model:finetune:Helsinki-NLP/opus-mt-en-id",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2024-05-21T00:51:08Z | ---
license: apache-2.0
tags:
- generated_from_trainer
base_model: Helsinki-NLP/opus-mt-en-id
model-index:
- name: opus-mt-ft-4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# opus-mt-ft-4
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-id](https://huggingface.co/Helsinki-NLP/opus-mt-en-id) on an unspecified dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.2295
- eval_exact_match: 1.0
- eval_runtime: 24.6982
- eval_samples_per_second: 6.802
- eval_steps_per_second: 0.85
- epoch: 7.0
- step: 147
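Since the base model translates English to Indonesian, inference presumably works through the standard translation pipeline; a minimal sketch (the translation direction is an assumption the card does not confirm):
```python
from transformers import pipeline

# Marian checkpoints work with the generic translation pipeline.
translator = pipeline("translation", model="abdiharyadi/opus-mt-ft-4")
print(translator("The weather is nice today.")[0]["translation_text"])
```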
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
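These settings map directly onto the 🤗 Transformers training API; a hedged reconstruction (anything not listed above, such as `output_dir`, is an assumption):
```python
from transformers import Seq2SeqTrainingArguments

# Illustrative mapping of the listed hyperparameters; unstated
# options are left at their defaults.
args = Seq2SeqTrainingArguments(
    output_dir="opus-mt-ft-4",  # assumed, not stated in the card
    learning_rate=5e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=10,
)
```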
### Framework versions
- Transformers 4.40.2
- Pytorch 2.2.1+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
|
reynaldhavard/xlm-roberta-base-finetuned-panx-all | reynaldhavard | 2024-05-21T00:48:44Z | 104 | 0 | transformers | [
"transformers",
"safetensors",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2024-05-21T00:33:35Z | ---
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-all
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-all
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1758
- F1: 0.8558
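PAN-X is a multilingual named-entity recognition benchmark, so the checkpoint can be queried with a token-classification pipeline; a minimal sketch (the example sentence is illustrative):
```python
from transformers import pipeline

# aggregation_strategy="simple" merges word pieces into whole entities.
ner = pipeline(
    "token-classification",
    model="reynaldhavard/xlm-roberta-base-finetuned-panx-all",
    aggregation_strategy="simple",
)
print(ner("Jeff Dean works at Google in Mountain View."))
```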
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.299 | 1.0 | 835 | 0.2074 | 0.8078 |
| 0.1587 | 2.0 | 1670 | 0.1705 | 0.8461 |
| 0.1012 | 3.0 | 2505 | 0.1758 | 0.8558 |
### Framework versions
- Transformers 4.40.2
- Pytorch 2.2.1+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
|
Zoyd/01-ai_Yi-1.5-34B-Chat-16K-6_5bpw_exl2 | Zoyd | 2024-05-21T00:48:30Z | 4 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:2403.04652",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"exl2",
"region:us"
] | text-generation | 2024-05-21T00:11:26Z | ---
license: apache-2.0
---
**Exllamav2** quant (**exl2** / **6.5 bpw**) made with ExLlamaV2 v0.0.21
Other EXL2 quants:
| **Quant** | **Model Size** | **lm_head** |
| ----- | ---------- | ------- |
|<center>**[2.2](https://huggingface.co/Zoyd/01-ai_Yi-1.5-34B-Chat-16K-2_2bpw_exl2)**</center> | <center>10049 MB</center> | <center>6</center> |
|<center>**[2.5](https://huggingface.co/Zoyd/01-ai_Yi-1.5-34B-Chat-16K-2_5bpw_exl2)**</center> | <center>11195 MB</center> | <center>6</center> |
|<center>**[3.0](https://huggingface.co/Zoyd/01-ai_Yi-1.5-34B-Chat-16K-3_0bpw_exl2)**</center> | <center>13193 MB</center> | <center>6</center> |
|<center>**[3.5](https://huggingface.co/Zoyd/01-ai_Yi-1.5-34B-Chat-16K-3_5bpw_exl2)**</center> | <center>15187 MB</center> | <center>6</center> |
|<center>**[3.75](https://huggingface.co/Zoyd/01-ai_Yi-1.5-34B-Chat-16K-3_75bpw_exl2)**</center> | <center>16186 MB</center> | <center>6</center> |
|<center>**[4.0](https://huggingface.co/Zoyd/01-ai_Yi-1.5-34B-Chat-16K-4_0bpw_exl2)**</center> | <center>17183 MB</center> | <center>6</center> |
|<center>**[4.25](https://huggingface.co/Zoyd/01-ai_Yi-1.5-34B-Chat-16K-4_25bpw_exl2)**</center> | <center>18179 MB</center> | <center>6</center> |
|<center>**[5.0](https://huggingface.co/Zoyd/01-ai_Yi-1.5-34B-Chat-16K-5_0bpw_exl2)**</center> | <center>21171 MB</center> | <center>6</center> |
|<center>**[6.0](https://huggingface.co/Zoyd/01-ai_Yi-1.5-34B-Chat-16K-6_0bpw_exl2)**</center> | <center>25231 MB</center> | <center>8</center> |
|<center>**[6.5](https://huggingface.co/Zoyd/01-ai_Yi-1.5-34B-Chat-16K-6_5bpw_exl2)**</center> | <center>27111 MB</center> | <center>8</center> |
|<center>**[8.0](https://huggingface.co/Zoyd/01-ai_Yi-1.5-34B-Chat-16K-8_0bpw_exl2)**</center> | <center>29540 MB</center> | <center>8</center> |
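As a rough illustration of consuming one of these quants, a minimal loading sketch patterned on the ExLlamaV2 example scripts (local path, cache, and sampling values are illustrative assumptions):
```python
from exllamav2 import ExLlamaV2, ExLlamaV2Cache, ExLlamaV2Config, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2BaseGenerator, ExLlamaV2Sampler

config = ExLlamaV2Config()
config.model_dir = "./01-ai_Yi-1.5-34B-Chat-16K-6_5bpw_exl2"  # local download of this repo
config.prepare()

model = ExLlamaV2(config)
cache = ExLlamaV2Cache(model, lazy=True)  # allocate the KV cache as layers load
model.load_autosplit(cache)               # split weights across available GPUs

tokenizer = ExLlamaV2Tokenizer(config)
generator = ExLlamaV2BaseGenerator(model, cache, tokenizer)

settings = ExLlamaV2Sampler.Settings()
settings.temperature = 0.7

print(generator.generate_simple("Write a haiku about quantization.", settings, 64))
```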
<div align="center">
<picture>
<img src="https://raw.githubusercontent.com/01-ai/Yi/main/assets/img/Yi_logo_icon_light.svg" width="150px">
</picture>
</div>
<p align="center">
<a href="https://github.com/01-ai">π GitHub</a> β’
<a href="https://discord.gg/hYUwWddeAu">πΎ Discord</a> β’
<a href="https://twitter.com/01ai_yi">π€ Twitter</a> β’
<a href="https://github.com/01-ai/Yi-1.5/issues/2">π¬ WeChat</a>
<br/>
<a href="https://arxiv.org/abs/2403.04652">π Paper</a> β’
<a href="https://github.com/01-ai/Yi/tree/main?tab=readme-ov-file#faq">π FAQ</a> β’
<a href="https://github.com/01-ai/Yi/tree/main?tab=readme-ov-file#learning-hub">π Learning Hub</a>
</p>
# Intro
Yi-1.5 is an upgraded version of Yi. It is continuously pre-trained on Yi with a high-quality corpus of 500B tokens and fine-tuned on 3M diverse fine-tuning samples.
Compared with Yi, Yi-1.5 delivers stronger performance in coding, math, reasoning, and instruction-following capability, while still maintaining excellent capabilities in language understanding, commonsense reasoning, and reading comprehension.
<div align="center">
Model | Context Length | Pre-trained Tokens
| :------------: | :------------: | :------------: |
| Yi-1.5 | 4K, 16K, 32K | 3.6T
</div>
# Models
- Chat models
<div align="center">
| Name | Download |
| --------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| Yi-1.5-34B-Chat | β’ [π€ Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) β’ [π€ ModelScope](https://www.modelscope.cn/organization/01ai) |
| Yi-1.5-34B-Chat-16K | β’ [π€ Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) β’ [π€ ModelScope](https://www.modelscope.cn/organization/01ai) |
| Yi-1.5-9B-Chat | β’ [π€ Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) β’ [π€ ModelScope](https://www.modelscope.cn/organization/01ai) |
| Yi-1.5-9B-Chat-16K | β’ [π€ Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) β’ [π€ ModelScope](https://www.modelscope.cn/organization/01ai) |
| Yi-1.5-6B-Chat | β’ [π€ Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) β’ [π€ ModelScope](https://www.modelscope.cn/organization/01ai) |
</div>
- Base models
<div align="center">
| Name | Download |
| ---------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| Yi-1.5-34B | β’ [π€ Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) β’ [π€ ModelScope](https://www.modelscope.cn/organization/01ai) |
| Yi-1.5-34B-32K | β’ [π€ Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) β’ [π€ ModelScope](https://www.modelscope.cn/organization/01ai) |
| Yi-1.5-9B | β’ [π€ Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) β’ [π€ ModelScope](https://www.modelscope.cn/organization/01ai) |
| Yi-1.5-9B-32K | β’ [π€ Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) β’ [π€ ModelScope](https://www.modelscope.cn/organization/01ai) |
| Yi-1.5-6B | β’ [π€ Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) β’ [π€ ModelScope](https://www.modelscope.cn/organization/01ai) |
</div>
# Benchmarks
- Chat models
Yi-1.5-34B-Chat performs on par with, or better than, larger models on most benchmarks.

Yi-1.5-9B-Chat is the top performer among similarly sized open-source models.

- Base models
Yi-1.5-34B performs on par with, or better than, larger models on some benchmarks.

Yi-1.5-9B is the top performer among similarly sized open-source models.

# Quick Start
To get up and running with Yi-1.5 models quickly, see the [README](https://github.com/01-ai/Yi-1.5).
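For the unquantized upstream model, a minimal 🤗 Transformers chat sketch (the repo id is inferred from this quant's name; the EXL2 files above require the ExLlamaV2 loader shown earlier):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "01-ai/Yi-1.5-34B-Chat-16K"  # upstream model this quant was made from
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, device_map="auto", torch_dtype="auto")

# Build a chat prompt with the tokenizer's template and generate a reply.
messages = [{"role": "user", "content": "hi"}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
outputs = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```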
|
Zoyd/01-ai_Yi-1.5-34B-Chat-16K-6_0bpw_exl2 | Zoyd | 2024-05-21T00:48:23Z | 6 | 2 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:2403.04652",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"6-bit",
"exl2",
"region:us"
] | text-generation | 2024-05-20T23:38:38Z | ---
license: apache-2.0
---
**Exllamav2** quant (**exl2** / **6.0 bpw**) made with ExLlamaV2 v0.0.21
Other EXL2 quants:
| **Quant** | **Model Size** | **lm_head** |
| ----- | ---------- | ------- |
|<center>**[2.2](https://huggingface.co/Zoyd/01-ai_Yi-1.5-34B-Chat-16K-2_2bpw_exl2)**</center> | <center>10049 MB</center> | <center>6</center> |
|<center>**[2.5](https://huggingface.co/Zoyd/01-ai_Yi-1.5-34B-Chat-16K-2_5bpw_exl2)**</center> | <center>11195 MB</center> | <center>6</center> |
|<center>**[3.0](https://huggingface.co/Zoyd/01-ai_Yi-1.5-34B-Chat-16K-3_0bpw_exl2)**</center> | <center>13193 MB</center> | <center>6</center> |
|<center>**[3.5](https://huggingface.co/Zoyd/01-ai_Yi-1.5-34B-Chat-16K-3_5bpw_exl2)**</center> | <center>15187 MB</center> | <center>6</center> |
|<center>**[3.75](https://huggingface.co/Zoyd/01-ai_Yi-1.5-34B-Chat-16K-3_75bpw_exl2)**</center> | <center>16186 MB</center> | <center>6</center> |
|<center>**[4.0](https://huggingface.co/Zoyd/01-ai_Yi-1.5-34B-Chat-16K-4_0bpw_exl2)**</center> | <center>17183 MB</center> | <center>6</center> |
|<center>**[4.25](https://huggingface.co/Zoyd/01-ai_Yi-1.5-34B-Chat-16K-4_25bpw_exl2)**</center> | <center>18179 MB</center> | <center>6</center> |
|<center>**[5.0](https://huggingface.co/Zoyd/01-ai_Yi-1.5-34B-Chat-16K-5_0bpw_exl2)**</center> | <center>21171 MB</center> | <center>6</center> |
|<center>**[6.0](https://huggingface.co/Zoyd/01-ai_Yi-1.5-34B-Chat-16K-6_0bpw_exl2)**</center> | <center>25231 MB</center> | <center>8</center> |
|<center>**[6.5](https://huggingface.co/Zoyd/01-ai_Yi-1.5-34B-Chat-16K-6_5bpw_exl2)**</center> | <center>27111 MB</center> | <center>8</center> |
|<center>**[8.0](https://huggingface.co/Zoyd/01-ai_Yi-1.5-34B-Chat-16K-8_0bpw_exl2)**</center> | <center>29540 MB</center> | <center>8</center> |
<div align="center">
<picture>
<img src="https://raw.githubusercontent.com/01-ai/Yi/main/assets/img/Yi_logo_icon_light.svg" width="150px">
</picture>
</div>
<p align="center">
<a href="https://github.com/01-ai">π GitHub</a> β’
<a href="https://discord.gg/hYUwWddeAu">πΎ Discord</a> β’
<a href="https://twitter.com/01ai_yi">π€ Twitter</a> β’
<a href="https://github.com/01-ai/Yi-1.5/issues/2">π¬ WeChat</a>
<br/>
<a href="https://arxiv.org/abs/2403.04652">π Paper</a> β’
<a href="https://github.com/01-ai/Yi/tree/main?tab=readme-ov-file#faq">π FAQ</a> β’
<a href="https://github.com/01-ai/Yi/tree/main?tab=readme-ov-file#learning-hub">π Learning Hub</a>
</p>
# Intro
Yi-1.5 is an upgraded version of Yi. It is continuously pre-trained on Yi with a high-quality corpus of 500B tokens and fine-tuned on 3M diverse fine-tuning samples.
Compared with Yi, Yi-1.5 delivers stronger performance in coding, math, reasoning, and instruction-following capability, while still maintaining excellent capabilities in language understanding, commonsense reasoning, and reading comprehension.
<div align="center">
Model | Context Length | Pre-trained Tokens
| :------------: | :------------: | :------------: |
| Yi-1.5 | 4K, 16K, 32K | 3.6T
</div>
# Models
- Chat models
<div align="center">
| Name | Download |
| --------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| Yi-1.5-34B-Chat | β’ [π€ Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) β’ [π€ ModelScope](https://www.modelscope.cn/organization/01ai) |
| Yi-1.5-34B-Chat-16K | β’ [π€ Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) β’ [π€ ModelScope](https://www.modelscope.cn/organization/01ai) |
| Yi-1.5-9B-Chat | β’ [π€ Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) β’ [π€ ModelScope](https://www.modelscope.cn/organization/01ai) |
| Yi-1.5-9B-Chat-16K | β’ [π€ Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) β’ [π€ ModelScope](https://www.modelscope.cn/organization/01ai) |
| Yi-1.5-6B-Chat | β’ [π€ Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) β’ [π€ ModelScope](https://www.modelscope.cn/organization/01ai) |
</div>
- Base models
<div align="center">
| Name | Download |
| ---------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| Yi-1.5-34B | β’ [π€ Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) β’ [π€ ModelScope](https://www.modelscope.cn/organization/01ai) |
| Yi-1.5-34B-32K | β’ [π€ Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) β’ [π€ ModelScope](https://www.modelscope.cn/organization/01ai) |
| Yi-1.5-9B | β’ [π€ Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) β’ [π€ ModelScope](https://www.modelscope.cn/organization/01ai) |
| Yi-1.5-9B-32K | β’ [π€ Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) β’ [π€ ModelScope](https://www.modelscope.cn/organization/01ai) |
| Yi-1.5-6B | β’ [π€ Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) β’ [π€ ModelScope](https://www.modelscope.cn/organization/01ai) |
</div>
# Benchmarks
- Chat models
Yi-1.5-34B-Chat performs on par with, or better than, larger models on most benchmarks.

Yi-1.5-9B-Chat is the top performer among similarly sized open-source models.

- Base models
Yi-1.5-34B performs on par with, or better than, larger models on some benchmarks.

Yi-1.5-9B is the top performer among similarly sized open-source models.

# Quick Start
To get up and running with Yi-1.5 models quickly, see the [README](https://github.com/01-ai/Yi-1.5).
|
Zoyd/01-ai_Yi-1.5-34B-Chat-16K-5_0bpw_exl2 | Zoyd | 2024-05-21T00:48:13Z | 4 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:2403.04652",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"5-bit",
"exl2",
"region:us"
] | text-generation | 2024-05-20T23:06:00Z | ---
license: apache-2.0
---
**Exllamav2** quant (**exl2** / **5.0 bpw**) made with ExLlamaV2 v0.0.21
Other EXL2 quants:
| **Quant** | **Model Size** | **lm_head** |
| ----- | ---------- | ------- |
|<center>**[2.2](https://huggingface.co/Zoyd/01-ai_Yi-1.5-34B-Chat-16K-2_2bpw_exl2)**</center> | <center>10049 MB</center> | <center>6</center> |
|<center>**[2.5](https://huggingface.co/Zoyd/01-ai_Yi-1.5-34B-Chat-16K-2_5bpw_exl2)**</center> | <center>11195 MB</center> | <center>6</center> |
|<center>**[3.0](https://huggingface.co/Zoyd/01-ai_Yi-1.5-34B-Chat-16K-3_0bpw_exl2)**</center> | <center>13193 MB</center> | <center>6</center> |
|<center>**[3.5](https://huggingface.co/Zoyd/01-ai_Yi-1.5-34B-Chat-16K-3_5bpw_exl2)**</center> | <center>15187 MB</center> | <center>6</center> |
|<center>**[3.75](https://huggingface.co/Zoyd/01-ai_Yi-1.5-34B-Chat-16K-3_75bpw_exl2)**</center> | <center>16186 MB</center> | <center>6</center> |
|<center>**[4.0](https://huggingface.co/Zoyd/01-ai_Yi-1.5-34B-Chat-16K-4_0bpw_exl2)**</center> | <center>17183 MB</center> | <center>6</center> |
|<center>**[4.25](https://huggingface.co/Zoyd/01-ai_Yi-1.5-34B-Chat-16K-4_25bpw_exl2)**</center> | <center>18179 MB</center> | <center>6</center> |
|<center>**[5.0](https://huggingface.co/Zoyd/01-ai_Yi-1.5-34B-Chat-16K-5_0bpw_exl2)**</center> | <center>21171 MB</center> | <center>6</center> |
|<center>**[6.0](https://huggingface.co/Zoyd/01-ai_Yi-1.5-34B-Chat-16K-6_0bpw_exl2)**</center> | <center>25231 MB</center> | <center>8</center> |
|<center>**[6.5](https://huggingface.co/Zoyd/01-ai_Yi-1.5-34B-Chat-16K-6_5bpw_exl2)**</center> | <center>27111 MB</center> | <center>8</center> |
|<center>**[8.0](https://huggingface.co/Zoyd/01-ai_Yi-1.5-34B-Chat-16K-8_0bpw_exl2)**</center> | <center>29540 MB</center> | <center>8</center> |
<div align="center">
<picture>
<img src="https://raw.githubusercontent.com/01-ai/Yi/main/assets/img/Yi_logo_icon_light.svg" width="150px">
</picture>
</div>
<p align="center">
<a href="https://github.com/01-ai">π GitHub</a> β’
<a href="https://discord.gg/hYUwWddeAu">πΎ Discord</a> β’
<a href="https://twitter.com/01ai_yi">π€ Twitter</a> β’
<a href="https://github.com/01-ai/Yi-1.5/issues/2">π¬ WeChat</a>
<br/>
<a href="https://arxiv.org/abs/2403.04652">π Paper</a> β’
<a href="https://github.com/01-ai/Yi/tree/main?tab=readme-ov-file#faq">π FAQ</a> β’
<a href="https://github.com/01-ai/Yi/tree/main?tab=readme-ov-file#learning-hub">π Learning Hub</a>
</p>
# Intro
Yi-1.5 is an upgraded version of Yi. It is continuously pre-trained on Yi with a high-quality corpus of 500B tokens and fine-tuned on 3M diverse fine-tuning samples.
Compared with Yi, Yi-1.5 delivers stronger performance in coding, math, reasoning, and instruction-following capability, while still maintaining excellent capabilities in language understanding, commonsense reasoning, and reading comprehension.
<div align="center">
Model | Context Length | Pre-trained Tokens
| :------------: | :------------: | :------------: |
| Yi-1.5 | 4K, 16K, 32K | 3.6T
</div>
# Models
- Chat models
<div align="center">
| Name | Download |
| --------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| Yi-1.5-34B-Chat | β’ [π€ Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) β’ [π€ ModelScope](https://www.modelscope.cn/organization/01ai) |
| Yi-1.5-34B-Chat-16K | β’ [π€ Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) β’ [π€ ModelScope](https://www.modelscope.cn/organization/01ai) |
| Yi-1.5-9B-Chat | β’ [π€ Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) β’ [π€ ModelScope](https://www.modelscope.cn/organization/01ai) |
| Yi-1.5-9B-Chat-16K | β’ [π€ Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) β’ [π€ ModelScope](https://www.modelscope.cn/organization/01ai) |
| Yi-1.5-6B-Chat | β’ [π€ Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) β’ [π€ ModelScope](https://www.modelscope.cn/organization/01ai) |
</div>
- Base models
<div align="center">
| Name | Download |
| ---------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| Yi-1.5-34B | β’ [π€ Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) β’ [π€ ModelScope](https://www.modelscope.cn/organization/01ai) |
| Yi-1.5-34B-32K | β’ [π€ Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) β’ [π€ ModelScope](https://www.modelscope.cn/organization/01ai) |
| Yi-1.5-9B | β’ [π€ Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) β’ [π€ ModelScope](https://www.modelscope.cn/organization/01ai) |
| Yi-1.5-9B-32K | β’ [π€ Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) β’ [π€ ModelScope](https://www.modelscope.cn/organization/01ai) |
| Yi-1.5-6B | β’ [π€ Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) β’ [π€ ModelScope](https://www.modelscope.cn/organization/01ai) |
</div>
# Benchmarks
- Chat models
Yi-1.5-34B-Chat performs on par with, or better than, larger models on most benchmarks.

Yi-1.5-9B-Chat is the top performer among similarly sized open-source models.

- Base models
Yi-1.5-34B performs on par with, or better than, larger models on some benchmarks.

Yi-1.5-9B is the top performer among similarly sized open-source models.

# Quick Start
To get up and running with Yi-1.5 models quickly, see the [README](https://github.com/01-ai/Yi-1.5).
|
Zoyd/01-ai_Yi-1.5-34B-Chat-16K-4_25bpw_exl2 | Zoyd | 2024-05-21T00:48:06Z | 5 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:2403.04652",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"exl2",
"region:us"
] | text-generation | 2024-05-20T22:32:44Z | ---
license: apache-2.0
---
**Exllamav2** quant (**exl2** / **4.25 bpw**) made with ExLlamaV2 v0.0.21
Other EXL2 quants:
| **Quant** | **Model Size** | **lm_head** |
| ----- | ---------- | ------- |
|<center>**[2.2](https://huggingface.co/Zoyd/01-ai_Yi-1.5-34B-Chat-16K-2_2bpw_exl2)**</center> | <center>10049 MB</center> | <center>6</center> |
|<center>**[2.5](https://huggingface.co/Zoyd/01-ai_Yi-1.5-34B-Chat-16K-2_5bpw_exl2)**</center> | <center>11195 MB</center> | <center>6</center> |
|<center>**[3.0](https://huggingface.co/Zoyd/01-ai_Yi-1.5-34B-Chat-16K-3_0bpw_exl2)**</center> | <center>13193 MB</center> | <center>6</center> |
|<center>**[3.5](https://huggingface.co/Zoyd/01-ai_Yi-1.5-34B-Chat-16K-3_5bpw_exl2)**</center> | <center>15187 MB</center> | <center>6</center> |
|<center>**[3.75](https://huggingface.co/Zoyd/01-ai_Yi-1.5-34B-Chat-16K-3_75bpw_exl2)**</center> | <center>16186 MB</center> | <center>6</center> |
|<center>**[4.0](https://huggingface.co/Zoyd/01-ai_Yi-1.5-34B-Chat-16K-4_0bpw_exl2)**</center> | <center>17183 MB</center> | <center>6</center> |
|<center>**[4.25](https://huggingface.co/Zoyd/01-ai_Yi-1.5-34B-Chat-16K-4_25bpw_exl2)**</center> | <center>18179 MB</center> | <center>6</center> |
|<center>**[5.0](https://huggingface.co/Zoyd/01-ai_Yi-1.5-34B-Chat-16K-5_0bpw_exl2)**</center> | <center>21171 MB</center> | <center>6</center> |
|<center>**[6.0](https://huggingface.co/Zoyd/01-ai_Yi-1.5-34B-Chat-16K-6_0bpw_exl2)**</center> | <center>25231 MB</center> | <center>8</center> |
|<center>**[6.5](https://huggingface.co/Zoyd/01-ai_Yi-1.5-34B-Chat-16K-6_5bpw_exl2)**</center> | <center>27111 MB</center> | <center>8</center> |
|<center>**[8.0](https://huggingface.co/Zoyd/01-ai_Yi-1.5-34B-Chat-16K-8_0bpw_exl2)**</center> | <center>29540 MB</center> | <center>8</center> |
<div align="center">
<picture>
<img src="https://raw.githubusercontent.com/01-ai/Yi/main/assets/img/Yi_logo_icon_light.svg" width="150px">
</picture>
</div>
<p align="center">
<a href="https://github.com/01-ai">π GitHub</a> β’
<a href="https://discord.gg/hYUwWddeAu">πΎ Discord</a> β’
<a href="https://twitter.com/01ai_yi">π€ Twitter</a> β’
<a href="https://github.com/01-ai/Yi-1.5/issues/2">π¬ WeChat</a>
<br/>
<a href="https://arxiv.org/abs/2403.04652">π Paper</a> β’
<a href="https://github.com/01-ai/Yi/tree/main?tab=readme-ov-file#faq">π FAQ</a> β’
<a href="https://github.com/01-ai/Yi/tree/main?tab=readme-ov-file#learning-hub">π Learning Hub</a>
</p>
# Intro
Yi-1.5 is an upgraded version of Yi. It is continuously pre-trained on Yi with a high-quality corpus of 500B tokens and fine-tuned on 3M diverse fine-tuning samples.
Compared with Yi, Yi-1.5 delivers stronger performance in coding, math, reasoning, and instruction-following capability, while still maintaining excellent capabilities in language understanding, commonsense reasoning, and reading comprehension.
<div align="center">
Model | Context Length | Pre-trained Tokens
| :------------: | :------------: | :------------: |
| Yi-1.5 | 4K, 16K, 32K | 3.6T
</div>
# Models
- Chat models
<div align="center">
| Name | Download |
| --------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| Yi-1.5-34B-Chat | β’ [π€ Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) β’ [π€ ModelScope](https://www.modelscope.cn/organization/01ai) |
| Yi-1.5-34B-Chat-16K | β’ [π€ Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) β’ [π€ ModelScope](https://www.modelscope.cn/organization/01ai) |
| Yi-1.5-9B-Chat | β’ [π€ Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) β’ [π€ ModelScope](https://www.modelscope.cn/organization/01ai) |
| Yi-1.5-9B-Chat-16K | β’ [π€ Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) β’ [π€ ModelScope](https://www.modelscope.cn/organization/01ai) |
| Yi-1.5-6B-Chat | β’ [π€ Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) β’ [π€ ModelScope](https://www.modelscope.cn/organization/01ai) |
</div>
- Base models
<div align="center">
| Name | Download |
| ---------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| Yi-1.5-34B | β’ [π€ Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) β’ [π€ ModelScope](https://www.modelscope.cn/organization/01ai) |
| Yi-1.5-34B-32K | β’ [π€ Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) β’ [π€ ModelScope](https://www.modelscope.cn/organization/01ai) |
| Yi-1.5-9B | β’ [π€ Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) β’ [π€ ModelScope](https://www.modelscope.cn/organization/01ai) |
| Yi-1.5-9B-32K | β’ [π€ Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) β’ [π€ ModelScope](https://www.modelscope.cn/organization/01ai) |
| Yi-1.5-6B | β’ [π€ Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) β’ [π€ ModelScope](https://www.modelscope.cn/organization/01ai) |
</div>
# Benchmarks
- Chat models
Yi-1.5-34B-Chat performs on par with, or better than, larger models on most benchmarks.

Yi-1.5-9B-Chat is the top performer among similarly sized open-source models.

- Base models
Yi-1.5-34B performs on par with, or better than, larger models on some benchmarks.

Yi-1.5-9B is the top performer among similarly sized open-source models.

# Quick Start
To get up and running with Yi-1.5 models quickly, see the [README](https://github.com/01-ai/Yi-1.5).
|
Zoyd/01-ai_Yi-1.5-34B-Chat-16K-3_0bpw_exl2 | Zoyd | 2024-05-21T00:47:38Z | 4 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:2403.04652",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"3-bit",
"exl2",
"region:us"
] | text-generation | 2024-05-20T20:21:20Z | ---
license: apache-2.0
---
**Exllamav2** quant (**exl2** / **3.0 bpw**) made with ExLlamaV2 v0.0.21
Other EXL2 quants:
| **Quant** | **Model Size** | **lm_head** |
| ----- | ---------- | ------- |
|<center>**[2.2](https://huggingface.co/Zoyd/01-ai_Yi-1.5-34B-Chat-16K-2_2bpw_exl2)**</center> | <center>10049 MB</center> | <center>6</center> |
|<center>**[2.5](https://huggingface.co/Zoyd/01-ai_Yi-1.5-34B-Chat-16K-2_5bpw_exl2)**</center> | <center>11195 MB</center> | <center>6</center> |
|<center>**[3.0](https://huggingface.co/Zoyd/01-ai_Yi-1.5-34B-Chat-16K-3_0bpw_exl2)**</center> | <center>13193 MB</center> | <center>6</center> |
|<center>**[3.5](https://huggingface.co/Zoyd/01-ai_Yi-1.5-34B-Chat-16K-3_5bpw_exl2)**</center> | <center>15187 MB</center> | <center>6</center> |
|<center>**[3.75](https://huggingface.co/Zoyd/01-ai_Yi-1.5-34B-Chat-16K-3_75bpw_exl2)**</center> | <center>16186 MB</center> | <center>6</center> |
|<center>**[4.0](https://huggingface.co/Zoyd/01-ai_Yi-1.5-34B-Chat-16K-4_0bpw_exl2)**</center> | <center>17183 MB</center> | <center>6</center> |
|<center>**[4.25](https://huggingface.co/Zoyd/01-ai_Yi-1.5-34B-Chat-16K-4_25bpw_exl2)**</center> | <center>18179 MB</center> | <center>6</center> |
|<center>**[5.0](https://huggingface.co/Zoyd/01-ai_Yi-1.5-34B-Chat-16K-5_0bpw_exl2)**</center> | <center>21171 MB</center> | <center>6</center> |
|<center>**[6.0](https://huggingface.co/Zoyd/01-ai_Yi-1.5-34B-Chat-16K-6_0bpw_exl2)**</center> | <center>25231 MB</center> | <center>8</center> |
|<center>**[6.5](https://huggingface.co/Zoyd/01-ai_Yi-1.5-34B-Chat-16K-6_5bpw_exl2)**</center> | <center>27111 MB</center> | <center>8</center> |
|<center>**[8.0](https://huggingface.co/Zoyd/01-ai_Yi-1.5-34B-Chat-16K-8_0bpw_exl2)**</center> | <center>29540 MB</center> | <center>8</center> |
<div align="center">
<picture>
<img src="https://raw.githubusercontent.com/01-ai/Yi/main/assets/img/Yi_logo_icon_light.svg" width="150px">
</picture>
</div>
<p align="center">
<a href="https://github.com/01-ai">π GitHub</a> β’
<a href="https://discord.gg/hYUwWddeAu">πΎ Discord</a> β’
<a href="https://twitter.com/01ai_yi">π€ Twitter</a> β’
<a href="https://github.com/01-ai/Yi-1.5/issues/2">π¬ WeChat</a>
<br/>
<a href="https://arxiv.org/abs/2403.04652">π Paper</a> β’
<a href="https://github.com/01-ai/Yi/tree/main?tab=readme-ov-file#faq">π FAQ</a> β’
<a href="https://github.com/01-ai/Yi/tree/main?tab=readme-ov-file#learning-hub">π Learning Hub</a>
</p>
# Intro
Yi-1.5 is an upgraded version of Yi. It is continuously pre-trained on Yi with a high-quality corpus of 500B tokens and fine-tuned on 3M diverse fine-tuning samples.
Compared with Yi, Yi-1.5 delivers stronger performance in coding, math, reasoning, and instruction-following capability, while still maintaining excellent capabilities in language understanding, commonsense reasoning, and reading comprehension.
<div align="center">
Model | Context Length | Pre-trained Tokens
| :------------: | :------------: | :------------: |
| Yi-1.5 | 4K, 16K, 32K | 3.6T
</div>
# Models
- Chat models
<div align="center">
| Name | Download |
| --------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| Yi-1.5-34B-Chat | β’ [π€ Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) β’ [π€ ModelScope](https://www.modelscope.cn/organization/01ai) |
| Yi-1.5-34B-Chat-16K | β’ [π€ Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) β’ [π€ ModelScope](https://www.modelscope.cn/organization/01ai) |
| Yi-1.5-9B-Chat | β’ [π€ Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) β’ [π€ ModelScope](https://www.modelscope.cn/organization/01ai) |
| Yi-1.5-9B-Chat-16K | β’ [π€ Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) β’ [π€ ModelScope](https://www.modelscope.cn/organization/01ai) |
| Yi-1.5-6B-Chat | β’ [π€ Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) β’ [π€ ModelScope](https://www.modelscope.cn/organization/01ai) |
</div>
- Base models
<div align="center">
| Name | Download |
| ---------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| Yi-1.5-34B | β’ [π€ Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) β’ [π€ ModelScope](https://www.modelscope.cn/organization/01ai) |
| Yi-1.5-34B-32K | β’ [π€ Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) β’ [π€ ModelScope](https://www.modelscope.cn/organization/01ai) |
| Yi-1.5-9B | β’ [π€ Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) β’ [π€ ModelScope](https://www.modelscope.cn/organization/01ai) |
| Yi-1.5-9B-32K | β’ [π€ Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) β’ [π€ ModelScope](https://www.modelscope.cn/organization/01ai) |
| Yi-1.5-6B | β’ [π€ Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) β’ [π€ ModelScope](https://www.modelscope.cn/organization/01ai) |
</div>
# Benchmarks
- Chat models
Yi-1.5-34B-Chat performs on par with, or better than, larger models on most benchmarks.

Yi-1.5-9B-Chat is the top performer among similarly sized open-source models.

- Base models
Yi-1.5-34B performs on par with, or better than, larger models on some benchmarks.

Yi-1.5-9B is the top performer among similarly sized open-source models.

# Quick Start
To get up and running with Yi-1.5 models quickly, see the [README](https://github.com/01-ai/Yi-1.5).
|
Zoyd/01-ai_Yi-1.5-34B-Chat-16K-2_2bpw_exl2 | Zoyd | 2024-05-21T00:47:18Z | 4 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:2403.04652",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"exl2",
"region:us"
] | text-generation | 2024-05-20T19:17:17Z | ---
license: apache-2.0
---
**Exllamav2** quant (**exl2** / **2.2 bpw**) made with ExLlamaV2 v0.0.21
Other EXL2 quants:
| **Quant** | **Model Size** | **lm_head** |
| ----- | ---------- | ------- |
|<center>**[2.2](https://huggingface.co/Zoyd/01-ai_Yi-1.5-34B-Chat-16K-2_2bpw_exl2)**</center> | <center>10049 MB</center> | <center>6</center> |
|<center>**[2.5](https://huggingface.co/Zoyd/01-ai_Yi-1.5-34B-Chat-16K-2_5bpw_exl2)**</center> | <center>11195 MB</center> | <center>6</center> |
|<center>**[3.0](https://huggingface.co/Zoyd/01-ai_Yi-1.5-34B-Chat-16K-3_0bpw_exl2)**</center> | <center>13193 MB</center> | <center>6</center> |
|<center>**[3.5](https://huggingface.co/Zoyd/01-ai_Yi-1.5-34B-Chat-16K-3_5bpw_exl2)**</center> | <center>15187 MB</center> | <center>6</center> |
|<center>**[3.75](https://huggingface.co/Zoyd/01-ai_Yi-1.5-34B-Chat-16K-3_75bpw_exl2)**</center> | <center>16186 MB</center> | <center>6</center> |
|<center>**[4.0](https://huggingface.co/Zoyd/01-ai_Yi-1.5-34B-Chat-16K-4_0bpw_exl2)**</center> | <center>17183 MB</center> | <center>6</center> |
|<center>**[4.25](https://huggingface.co/Zoyd/01-ai_Yi-1.5-34B-Chat-16K-4_25bpw_exl2)**</center> | <center>18179 MB</center> | <center>6</center> |
|<center>**[5.0](https://huggingface.co/Zoyd/01-ai_Yi-1.5-34B-Chat-16K-5_0bpw_exl2)**</center> | <center>21171 MB</center> | <center>6</center> |
|<center>**[6.0](https://huggingface.co/Zoyd/01-ai_Yi-1.5-34B-Chat-16K-6_0bpw_exl2)**</center> | <center>25231 MB</center> | <center>8</center> |
|<center>**[6.5](https://huggingface.co/Zoyd/01-ai_Yi-1.5-34B-Chat-16K-6_5bpw_exl2)**</center> | <center>27111 MB</center> | <center>8</center> |
|<center>**[8.0](https://huggingface.co/Zoyd/01-ai_Yi-1.5-34B-Chat-16K-8_0bpw_exl2)**</center> | <center>29540 MB</center> | <center>8</center> |
<div align="center">
<picture>
<img src="https://raw.githubusercontent.com/01-ai/Yi/main/assets/img/Yi_logo_icon_light.svg" width="150px">
</picture>
</div>
<p align="center">
<a href="https://github.com/01-ai">π GitHub</a> β’
<a href="https://discord.gg/hYUwWddeAu">πΎ Discord</a> β’
<a href="https://twitter.com/01ai_yi">π€ Twitter</a> β’
<a href="https://github.com/01-ai/Yi-1.5/issues/2">π¬ WeChat</a>
<br/>
<a href="https://arxiv.org/abs/2403.04652">π Paper</a> β’
<a href="https://github.com/01-ai/Yi/tree/main?tab=readme-ov-file#faq">π FAQ</a> β’
<a href="https://github.com/01-ai/Yi/tree/main?tab=readme-ov-file#learning-hub">π Learning Hub</a>
</p>
# Intro
Yi-1.5 is an upgraded version of Yi. It is continuously pre-trained on Yi with a high-quality corpus of 500B tokens and fine-tuned on 3M diverse fine-tuning samples.
Compared with Yi, Yi-1.5 delivers stronger performance in coding, math, reasoning, and instruction-following capability, while still maintaining excellent capabilities in language understanding, commonsense reasoning, and reading comprehension.
<div align="center">
Model | Context Length | Pre-trained Tokens
| :------------: | :------------: | :------------: |
| Yi-1.5 | 4K, 16K, 32K | 3.6T
</div>
# Models
- Chat models
<div align="center">
| Name | Download |
| --------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| Yi-1.5-34B-Chat | β’ [π€ Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) β’ [π€ ModelScope](https://www.modelscope.cn/organization/01ai) |
| Yi-1.5-34B-Chat-16K | β’ [π€ Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) β’ [π€ ModelScope](https://www.modelscope.cn/organization/01ai) |
| Yi-1.5-9B-Chat | β’ [π€ Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) β’ [π€ ModelScope](https://www.modelscope.cn/organization/01ai) |
| Yi-1.5-9B-Chat-16K | β’ [π€ Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) β’ [π€ ModelScope](https://www.modelscope.cn/organization/01ai) |
| Yi-1.5-6B-Chat | β’ [π€ Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) β’ [π€ ModelScope](https://www.modelscope.cn/organization/01ai) |
</div>
- Base models
<div align="center">
| Name | Download |
| ---------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| Yi-1.5-34B | β’ [π€ Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) β’ [π€ ModelScope](https://www.modelscope.cn/organization/01ai) |
| Yi-1.5-34B-32K | β’ [π€ Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) β’ [π€ ModelScope](https://www.modelscope.cn/organization/01ai) |
| Yi-1.5-9B | β’ [π€ Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) β’ [π€ ModelScope](https://www.modelscope.cn/organization/01ai) |
| Yi-1.5-9B-32K | β’ [π€ Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) β’ [π€ ModelScope](https://www.modelscope.cn/organization/01ai) |
| Yi-1.5-6B | β’ [π€ Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) β’ [π€ ModelScope](https://www.modelscope.cn/organization/01ai) |
</div>
# Benchmarks
- Chat models
Yi-1.5-34B-Chat performs on par with, or better than, larger models on most benchmarks.

Yi-1.5-9B-Chat is the top performer among similarly sized open-source models.

- Base models
Yi-1.5-34B performs on par with, or better than, larger models on some benchmarks.

Yi-1.5-9B is the top performer among similarly sized open-source models.

# Quick Start
To get up and running with Yi-1.5 models quickly, see the [README](https://github.com/01-ai/Yi-1.5).
|
MaziyarPanahi/Shadowm7expMeliodaspercival_01_experiment26t3q-7B-GGUF | MaziyarPanahi | 2024-05-21T00:39:03Z | 85 | 0 | transformers | [
"transformers",
"gguf",
"mistral",
"quantized",
"2-bit",
"3-bit",
"4-bit",
"5-bit",
"6-bit",
"8-bit",
"GGUF",
"safetensors",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"automerger",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us",
"base_model:automerger/Shadowm7expMeliodaspercival_01_experiment26t3q-7B",
"base_model:quantized:automerger/Shadowm7expMeliodaspercival_01_experiment26t3q-7B"
] | text-generation | 2024-05-21T00:07:34Z | ---
tags:
- quantized
- 2-bit
- 3-bit
- 4-bit
- 5-bit
- 6-bit
- 8-bit
- GGUF
- transformers
- safetensors
- mistral
- text-generation
- merge
- mergekit
- lazymergekit
- automerger
- license:apache-2.0
- autotrain_compatible
- endpoints_compatible
- text-generation-inference
- region:us
- text-generation
model_name: Shadowm7expMeliodaspercival_01_experiment26t3q-7B-GGUF
base_model: automerger/Shadowm7expMeliodaspercival_01_experiment26t3q-7B
inference: false
model_creator: automerger
pipeline_tag: text-generation
quantized_by: MaziyarPanahi
---
# [MaziyarPanahi/Shadowm7expMeliodaspercival_01_experiment26t3q-7B-GGUF](https://huggingface.co/MaziyarPanahi/Shadowm7expMeliodaspercival_01_experiment26t3q-7B-GGUF)
- Model creator: [automerger](https://huggingface.co/automerger)
- Original model: [automerger/Shadowm7expMeliodaspercival_01_experiment26t3q-7B](https://huggingface.co/automerger/Shadowm7expMeliodaspercival_01_experiment26t3q-7B)
## Description
[MaziyarPanahi/Shadowm7expMeliodaspercival_01_experiment26t3q-7B-GGUF](https://huggingface.co/MaziyarPanahi/Shadowm7expMeliodaspercival_01_experiment26t3q-7B-GGUF) contains GGUF format model files for [automerger/Shadowm7expMeliodaspercival_01_experiment26t3q-7B](https://huggingface.co/automerger/Shadowm7expMeliodaspercival_01_experiment26t3q-7B).
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st, 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and an OpenAI-compatible API server.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. A Linux version is available, in beta as of November 27th, 2023.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for storytelling.
* [GPT4All](https://gpt4all.io/index.html), a free and open-source locally running GUI, supporting Windows, Linux, and macOS with full GPU accel.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy-to-use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and an OpenAI-compatible AI server. Note that, as of the time of writing (November 27th, 2023), ctransformers has not been updated in a long time and does not support many recent models.
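For a concrete starting point, a minimal llama-cpp-python sketch (the `.gguf` filename is a guess from this repo's naming convention; use whichever quant file you downloaded):
```python
from llama_cpp import Llama

# Load a downloaded GGUF file; n_ctx sets the context window.
llm = Llama(
    model_path="Shadowm7expMeliodaspercival_01_experiment26t3q-7B.Q4_K_M.gguf",  # assumed filename
    n_ctx=4096,
)
out = llm("Q: What is GGUF?\nA:", max_tokens=64, stop=["Q:"])
print(out["choices"][0]["text"])
```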
## Special thanks
🙏 Special thanks to [Georgi Gerganov](https://github.com/ggerganov) and the whole team working on [llama.cpp](https://github.com/ggerganov/llama.cpp/) for making all of this possible. |
soumi-maiti/ParallelWaveGAN_VoxtLM | soumi-maiti | 2024-05-21T00:37:19Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2024-05-21T00:10:12Z | ---
license: apache-2.0
---
|
JianKim3293/llama3-KoEn-8B-Instruct-mergelaw | JianKim3293 | 2024-05-21T00:37:08Z | 6 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-21T00:32:40Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
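Pending details from the author, a hedged chat sketch via 🤗 Transformers (the repo id comes from this listing; a bundled chat template is assumed since the model is tagged "conversational"):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "JianKim3293/llama3-KoEn-8B-Instruct-mergelaw"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, device_map="auto", torch_dtype="auto")

# Build a chat prompt with the tokenizer's template and generate a reply.
inputs = tokenizer.apply_chat_template(
    [{"role": "user", "content": "Summarize the key duties of a contract reviewer."}],
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```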
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
DUAL-GPO-2/phi-2-irepo-chatml-v16-i1 | DUAL-GPO-2 | 2024-05-21T00:33:49Z | 4 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"phi",
"alignment-handbook",
"generated_from_trainer",
"trl",
"dpo",
"custom_code",
"dataset:HuggingFaceH4/ultrafeedback_binarized",
"base_model:DUAL-GPO/phi-2-irepo-chatml-merged-i0",
"base_model:adapter:DUAL-GPO/phi-2-irepo-chatml-merged-i0",
"region:us"
] | null | 2024-05-20T23:49:20Z | ---
library_name: peft
tags:
- alignment-handbook
- generated_from_trainer
- trl
- dpo
- generated_from_trainer
base_model: DUAL-GPO/phi-2-irepo-chatml-merged-i0
datasets:
- HuggingFaceH4/ultrafeedback_binarized
model-index:
- name: phi-2-irepo-chatml-v16-i1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# phi-2-irepo-chatml-v16-i1
This model is a fine-tuned version of [DUAL-GPO/phi-2-irepo-chatml-merged-i0](https://huggingface.co/DUAL-GPO/phi-2-irepo-chatml-merged-i0) on the HuggingFaceH4/ultrafeedback_binarized dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- distributed_type: multi-GPU
- num_devices: 2
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- total_eval_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
### Framework versions
- PEFT 0.7.1
- Transformers 4.36.2
- Pytorch 2.1.2+cu121
- Datasets 2.14.6
- Tokenizers 0.15.2 |
reynaldhavard/xlm-roberta-base-finetuned-panx-en | reynaldhavard | 2024-05-21T00:33:32Z | 104 | 0 | transformers | [
"transformers",
"safetensors",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2024-05-21T00:31:11Z | ---
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-en
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-en
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3905
- F1: 0.6861
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 1.0479 | 1.0 | 50 | 0.4854 | 0.5857 |
| 0.4604 | 2.0 | 100 | 0.3995 | 0.6605 |
| 0.3797 | 3.0 | 150 | 0.3905 | 0.6861 |
### Framework versions
- Transformers 4.40.2
- Pytorch 2.2.1+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
|
LampOfSocrates/bert-cased-plodcw-sourav | LampOfSocrates | 2024-05-21T00:32:30Z | 121 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"token-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2024-05-19T23:38:28Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a π€ transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
HaitameLaframe/Llama-3-8B-StoryGenerator | HaitameLaframe | 2024-05-21T00:31:59Z | 47 | 2 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"base_model:unsloth/llama-3-8b-Instruct-bnb-4bit",
"base_model:finetune:unsloth/llama-3-8b-Instruct-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-20T23:46:49Z | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
base_model: unsloth/llama-3-8b-Instruct-bnb-4bit
---
# Uploaded model
- **Developed by:** HaitameLaf
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-Instruct-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Skoba777/Skoba | Skoba777 | 2024-05-21T00:22:53Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2024-05-21T00:20:15Z | ---
license: apache-2.0
---
|
reynaldhavard/xlm-roberta-base-finetuned-panx-de-fr | reynaldhavard | 2024-05-21T00:22:38Z | 104 | 0 | transformers | [
"transformers",
"safetensors",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2024-05-21T00:09:09Z | ---
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de-fr
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de-fr
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1639
- F1: 0.8591
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2836 | 1.0 | 715 | 0.1859 | 0.8212 |
| 0.1484 | 2.0 | 1430 | 0.1632 | 0.8487 |
| 0.0953 | 3.0 | 2145 | 0.1639 | 0.8591 |
### Framework versions
- Transformers 4.40.2
- Pytorch 2.2.1+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
|
HaitameLaframe/Meta-Llama-3-8B-LoRA | HaitameLaframe | 2024-05-21T00:14:11Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/llama-3-8b-Instruct-bnb-4bit",
"base_model:finetune:unsloth/llama-3-8b-Instruct-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-05-20T23:44:14Z | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
base_model: unsloth/llama-3-8b-Instruct-bnb-4bit
---
# Uploaded model
- **Developed by:** HaitameLaf
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-Instruct-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
aztro/sdxl-maba | aztro | 2024-05-21T00:13:57Z | 4 | 0 | diffusers | [
"diffusers",
"autotrain",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"lora",
"template:sd-lora",
"en",
"base_model:Yntec/DreamPhotoGASM",
"base_model:adapter:Yntec/DreamPhotoGASM",
"license:openrail++",
"region:us"
] | text-to-image | 2024-05-15T11:53:35Z | ---
tags:
- autotrain
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- lora
- template:sd-lora
base_model: Yntec/DreamPhotoGASM
instance_prompt: tessy
license: openrail++
language:
- en
---
# AutoTrain LoRA DreamBooth - ovieyra21/autotrain-begg7-ozit5
These are LoRA adaptation weights for Yntec/DreamPhotoGASM. The weights were trained on the instance prompt "tessy" using [DreamBooth](https://dreambooth.github.io/).
LoRA for the text encoder was enabled: False. |
marua15/mistral_fine_tune | marua15 | 2024-05-21T00:08:53Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"mistral",
"text-generation",
"unsloth",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-20T22:47:33Z | ---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a π€ transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Habaznya/p_model | Habaznya | 2024-05-21T00:01:07Z | 109 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-05-04T21:51:51Z | ---
license: apache-2.0
base_model: google-bert/bert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: p_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# p_model
This model is a fine-tuned version of [google-bert/bert-base-uncased](https://huggingface.co/google-bert/bert-base-uncased) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3591
- Accuracy: 0.9066
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.7455 | 1.0 | 832 | 0.4333 | 0.8517 |
| 0.3497 | 2.0 | 1664 | 0.3821 | 0.8785 |
| 0.2924 | 3.0 | 2496 | 0.3402 | 0.8955 |
| 0.2017 | 4.0 | 3328 | 0.3417 | 0.9066 |
| 0.1476 | 5.0 | 4160 | 0.3591 | 0.9066 |
### Framework versions
- Transformers 4.40.2
- Pytorch 2.2.1+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
|
worldboss/idefics-9b-doodles-v1 | worldboss | 2024-05-20T23:56:21Z | 64 | 0 | transformers | [
"transformers",
"safetensors",
"idefics",
"image-text-to-text",
"arxiv:1910.09700",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | image-text-to-text | 2024-05-20T23:29:31Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a π€ transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Prolux/FFire | Prolux | 2024-05-20T23:49:37Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"base_model:finetune:unsloth/llama-3-8b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-05-20T23:49:30Z | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
base_model: unsloth/llama-3-8b-bnb-4bit
---
# Uploaded model
- **Developed by:** Prolux
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
damgomz/ft_bs32_lr6_mlm | damgomz | 2024-05-20T23:47:25Z | 106 | 0 | transformers | [
"transformers",
"safetensors",
"albert",
"text-classification",
"fill-mask",
"en",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2024-05-20T20:39:31Z | ---
language: en
tags:
- fill-mask
kwargs:
timestamp: '2024-05-21T01:47:18'
project_name: ft_bs32_lr6_mlm_emissions_tracker
run_id: 67fdf341-7766-43ef-b8f2-0ca8fd67de3e
duration: 14551.246319770811
emissions: 0.0088051864348589
emissions_rate: 6.051156197456002e-07
cpu_power: 42.5
gpu_power: 0.0
ram_power: 3.75
cpu_energy: 0.1717852426302099
gpu_energy: 0
ram_energy: 0.015157421747595
energy_consumed: 0.1869426643778051
country_name: Switzerland
country_iso_code: CHE
region: .nan
cloud_provider: .nan
cloud_region: .nan
os: Linux-5.14.0-70.30.1.el9_0.x86_64-x86_64-with-glibc2.34
python_version: 3.10.4
codecarbon_version: 2.3.4
cpu_count: 2
cpu_model: Intel(R) Xeon(R) Platinum 8360Y CPU @ 2.40GHz
gpu_count: .nan
gpu_model: .nan
longitude: .nan
latitude: .nan
ram_total_size: 10
tracking_mode: machine
on_cloud: N
pue: 1.0
---
## Environmental Impact (CODE CARBON DEFAULT)
| Metric | Value |
|--------------------------|---------------------------------|
| Duration (in seconds) | 14551.246319770811 |
| Emissions (Co2eq in kg) | 0.0088051864348589 |
| CPU power (W) | 42.5 |
| GPU power (W) | [No GPU] |
| RAM power (W) | 3.75 |
| CPU energy (kWh) | 0.1717852426302099 |
| GPU energy (kWh) | [No GPU] |
| RAM energy (kWh) | 0.015157421747595 |
| Consumed energy (kWh) | 0.1869426643778051 |
| Country name | Switzerland |
| Cloud provider | nan |
| Cloud region | nan |
| CPU count | 2 |
| CPU model | Intel(R) Xeon(R) Platinum 8360Y CPU @ 2.40GHz |
| GPU count | nan |
| GPU model | nan |
## Environmental Impact (for one core)
| Metric | Value |
|--------------------------|---------------------------------|
| CPU energy (kWh) | 0.028011149165558812 |
| Emissions (Co2eq in kg) | 0.005699238141910234 |
## Note
20 May 2024
## My Config
| Config | Value |
|--------------------------|-----------------|
| checkpoint | damgomz/ThunBERT_bs16_lr5_MLM |
| model_name | ft_bs32_lr6_mlm |
| sequence_length | 400 |
| num_epoch | 6 |
| learning_rate | 5e-06 |
| batch_size | 32 |
| weight_decay | 0.0 |
| warm_up_prop | 0.0 |
| drop_out_prob | 0.1 |
| packing_length | 100 |
| train_test_split | 0.2 |
| num_steps | 32580 |
## Training and Testing steps
| Epoch | Train Loss | Test Loss | Accuracy | Recall |
|---|---|---|---|---|
| 0 | 0.481592 | 0.361436 | 0.838733 | 0.880368 |
| 1 | 0.339060 | 0.335583 | 0.852725 | 0.881902 |
| 2 | 0.298181 | 0.333528 | 0.855670 | 0.881902 |
| 3 | 0.259961 | 0.338860 | 0.855670 | 0.875767 |
| 4 | 0.214407 | 0.369720 | 0.849043 | 0.904908 |
| 5 | 0.144635 | 0.418665 | 0.837261 | 0.815951 |
|
lleticiasilvaa/TinyLlama1B-spider-v3-25500steps | lleticiasilvaa | 2024-05-20T23:46:26Z | 136 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-16T18:40:24Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
Fine-tuned on Spider.
Some prompts include the schema together with example rows; when the full prompt stays within 2048 tokens, sample SQL queries are also included, like:
Below are some sample questions and their corresponding SQL queries that might help you answer the user's question:
[QUESTION]What is the number of cars with a greater accelerate than the one with the most horsepower?[/QUESTION]
[SQL]SELECT COUNT(*)
FROM CARS_DATA
WHERE Accelerate >
(SELECT Accelerate
FROM CARS_DATA
ORDER BY Horsepower DESC
LIMIT 1);[/SQL]
# <|ddl|>
The query will run on a database with the following schema:
CREATE TABLE car_makers (
"Id" INTEGER,
"Maker" TEXT,
"FullName" TEXT,
"Country" TEXT,
PRIMARY KEY ("Id"),
FOREIGN KEY("Country") REFERENCES countries ("CountryId")
)
/*
2 rows from car_makers table:
Id Maker FullName Country
1 amc American Motor Company 1
2 volkswagen Volkswagen 2
*/
Max sequence length: 2048
After filtering the dataset, there are 7665 rows remaining
Prompt:
```
"""# <|system|>
You are a chatbot tasked with responding to questions about an SQLite database.
Your responses must always consist of valid SQL code, and only that.
If you are unable to generate SQL for a question, respond with 'I do not know'.{sql_example}
# <|ddl|>
The query will run on a database with the following schema:
{ddl}
# <|user|>
{question}
# <|assistant|>
[SQL]"""
```
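A minimal sketch of how the template above might be filled in and used for inference. The optional `{sql_example}` block is omitted for brevity, and the schema and question strings below are illustrative placeholders, not values from the training set:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "lleticiasilvaa/TinyLlama1B-spider-v3-25500steps"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Illustrative schema and question; these are not taken from the training set.
ddl = 'CREATE TABLE singer ("Id" INTEGER, "Name" TEXT, "Age" INTEGER, PRIMARY KEY ("Id"))'
question = "How many singers are older than 30?"

# Fill the prompt template described above (without the optional sql_example block).
prompt = (
    "# <|system|>\n"
    "You are a chatbot tasked with responding to questions about an SQLite database.\n"
    "Your responses must always consist of valid SQL code, and only that.\n"
    "If you are unable to generate SQL for a question, respond with 'I do not know'.\n"
    "# <|ddl|>\n"
    "The query will run on a database with the following schema:\n"
    f"{ddl}\n"
    "# <|user|>\n"
    f"{question}\n"
    "# <|assistant|>\n"
    "[SQL]"
)

inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128)
# Decode only the newly generated tokens.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```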
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a π€ transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
lleticiasilvaa/TinyLlama1B-spider-v1-15epochs | lleticiasilvaa | 2024-05-20T23:38:41Z | 134 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-13T12:56:50Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
Fine-tuned on Spider (train_spider.json), excluding examples with multiple SELECTs or multiple JOINs, so only 'Easy' and 'Medium' complexity queries remain.
Max sequence length: 1024
After filtering the dataset, there are 3940 rows remaining:
* Train: 3546 rows
* Eval: 394 rows (Evaluated every 0.5 epoch => 3546/2 = 1773 steps)
Prompt:
```
# <|system|>
You are a chatbot tasked with responding to questions about an SQL database. Your responses must always consist of valid SQL code, and only that.
If you are unable to generate SQL for a question, respond with 'I do not know'.
# <|database schema|>
{schema}
# <|user|>
{question}
# <|assistant|>
[SQL]
```
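A minimal usage sketch with the prompt above; the schema and question are illustrative placeholders, and generation settings may need tuning:

```python
from transformers import pipeline

generator = pipeline("text-generation", model="lleticiasilvaa/TinyLlama1B-spider-v1-15epochs")

# Illustrative schema and question, not taken from the training set.
schema = 'CREATE TABLE head ("age" INTEGER, "name" TEXT)'
question = "How many heads of the departments are older than 56?"

# Fill the prompt template described above.
prompt = (
    "# <|system|>\n"
    "You are a chatbot tasked with responding to questions about an SQL database. "
    "Your responses must always consist of valid SQL code, and only that.\n"
    "If you are unable to generate SQL for a question, respond with 'I do not know'.\n"
    "# <|database schema|>\n"
    f"{schema}\n"
    "# <|user|>\n"
    f"{question}\n"
    "# <|assistant|>\n"
    "[SQL]"
)

# Return only the generated continuation, not the prompt.
result = generator(prompt, max_new_tokens=64, return_full_text=False)
print(result[0]["generated_text"])
```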
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a π€ transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
qwp4w3hyb/Yi-1.5-9B-Chat-16K-iMat-GGUF | qwp4w3hyb | 2024-05-20T23:37:11Z | 49 | 0 | null | [
"gguf",
"yi",
"01-ai",
"instruct",
"finetune",
"chatml",
"imatrix",
"importance matrix",
"text-generation",
"arxiv:2403.04652",
"base_model:01-ai/Yi-1.5-9B-Chat-16K",
"base_model:quantized:01-ai/Yi-1.5-9B-Chat-16K",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2024-05-20T21:02:44Z | ---
license: apache-2.0
pipeline_tag: text-generation
base_model: 01-ai/Yi-1.5-9B-Chat-16K
tags:
- yi
- 01-ai
- instruct
- finetune
- chatml
- gguf
- imatrix
- importance matrix
model-index:
- name: 01-ai/Yi-1.5-9B-Chat-16K-iMat-GGUF
results: []
---
# Quant Infos
- Quants were done with an importance matrix to reduce quantization loss
- GGUFs & imatrix were generated from bf16 to keep accuracy loss minimal
- Wide coverage of different gguf quant types from Q\_8\_0 down to IQ1\_S
- Quantized with [llama.cpp](https://github.com/ggerganov/llama.cpp) commit [fabf30b4c4fca32e116009527180c252919ca922](https://github.com/ggerganov/llama.cpp/commit/fabf30b4c4fca32e116009527180c252919ca922) (master as of 2024-05-20)
- Imatrix generated with [this](https://github.com/ggerganov/llama.cpp/discussions/5263#discussioncomment-8395384) multi-purpose dataset.
```
./imatrix -c 512 -m $model_name-f16.gguf -f $llama_cpp_path/groups_merged.txt -o $out_path/imat-f16-gmerged.dat
```
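The resulting imatrix file is then passed to llama.cpp's quantizer. As a sketch, reusing the variables from the command above (the binary name depends on your llama.cpp build, and the Q4_K_M output type is just an example):
```
./quantize --imatrix $out_path/imat-f16-gmerged.dat $model_name-f16.gguf $model_name-Q4_K_M.gguf Q4_K_M
```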
# Original Model Card:
<div align="center">
<picture>
<img src="https://raw.githubusercontent.com/01-ai/Yi/main/assets/img/Yi_logo_icon_light.svg" width="150px">
</picture>
</div>
<p align="center">
<a href="https://github.com/01-ai">π GitHub</a> β’
<a href="https://discord.gg/hYUwWddeAu">πΎ Discord</a> β’
<a href="https://twitter.com/01ai_yi">π€ Twitter</a> β’
<a href="https://github.com/01-ai/Yi-1.5/issues/2">π¬ WeChat</a>
<br/>
<a href="https://arxiv.org/abs/2403.04652">π Paper</a> β’
<a href="https://github.com/01-ai/Yi/tree/main?tab=readme-ov-file#faq">π FAQ</a> β’
<a href="https://github.com/01-ai/Yi/tree/main?tab=readme-ov-file#learning-hub">π Learning Hub</a>
</p>
# Intro
Yi-1.5 is an upgraded version of Yi. It is continuously pre-trained on Yi with a high-quality corpus of 500B tokens and fine-tuned on 3M diverse fine-tuning samples.
Compared with Yi, Yi-1.5 delivers stronger performance in coding, math, reasoning, and instruction-following capability, while still maintaining excellent capabilities in language understanding, commonsense reasoning, and reading comprehension.
<div align="center">
| Model | Context Length | Pre-trained Tokens |
| :------------: | :------------: | :------------: |
| Yi-1.5 | 4K | 3.6T |
</div>
# Models
- Chat models
<div align="center">
| Name | Download |
| --------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| Yi-1.5-34B-Chat | β’ [π€ Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) β’ [π€ ModelScope](https://www.modelscope.cn/organization/01ai) |
| Yi-1.5-9B-Chat | β’ [π€ Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) β’ [π€ ModelScope](https://www.modelscope.cn/organization/01ai) |
| Yi-1.5-6B-Chat | β’ [π€ Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) β’ [π€ ModelScope](https://www.modelscope.cn/organization/01ai) |
</div>
- Base models
<div align="center">
| Name | Download |
| ---------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| Yi-1.5-34B | β’ [π€ Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) β’ [π€ ModelScope](https://www.modelscope.cn/organization/01ai) |
| Yi-1.5-9B | β’ [π€ Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) β’ [π€ ModelScope](https://www.modelscope.cn/organization/01ai) |
| Yi-1.5-6B | β’ [π€ Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) β’ [π€ ModelScope](https://www.modelscope.cn/organization/01ai) |
</div>
# Benchmarks
- Chat models
Yi-1.5-34B-Chat is on par with or excels beyond larger models in most benchmarks.

Yi-1.5-9B-Chat is the top performer among similarly sized open-source models.

- Base models
Yi-1.5-34B is on par with or excels beyond larger models in some benchmarks.

Yi-1.5-9B is the top performer among similarly sized open-source models.

# Quick Start
For getting up and running with Yi-1.5 models quickly, see [README](https://github.com/01-ai/Yi-1.5).
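The GGUF quants in this repo can also be run directly with llama.cpp. A minimal sketch (the quant filename is illustrative, and Yi-1.5 chat models expect the ChatML prompt format):
```
./main -m Yi-1.5-9B-Chat-16K.Q4_K_M.gguf -c 16384 -e \
  -p "<|im_start|>user\nWhat is the capital of France?<|im_end|>\n<|im_start|>assistant\n"
```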
|
DUAL-GPO-2/phi-2-irepo-chatml-v13-i1 | DUAL-GPO-2 | 2024-05-20T23:36:47Z | 1 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"phi",
"alignment-handbook",
"generated_from_trainer",
"trl",
"dpo",
"custom_code",
"dataset:HuggingFaceH4/ultrafeedback_binarized",
"base_model:DUAL-GPO/phi-2-irepo-chatml-merged-i0",
"base_model:adapter:DUAL-GPO/phi-2-irepo-chatml-merged-i0",
"region:us"
] | null | 2024-05-20T21:25:10Z | ---
library_name: peft
tags:
- alignment-handbook
- generated_from_trainer
- trl
- dpo
- generated_from_trainer
base_model: DUAL-GPO/phi-2-irepo-chatml-merged-i0
datasets:
- HuggingFaceH4/ultrafeedback_binarized
model-index:
- name: phi-2-irepo-chatml-v13-i1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# phi-2-irepo-chatml-v13-i1
This model is a fine-tuned version of [DUAL-GPO/phi-2-irepo-chatml-merged-i0](https://huggingface.co/DUAL-GPO/phi-2-irepo-chatml-merged-i0) on the HuggingFaceH4/ultrafeedback_binarized dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- distributed_type: multi-GPU
- num_devices: 2
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- total_eval_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
### Framework versions
- PEFT 0.7.1
- Transformers 4.36.2
- Pytorch 2.1.2+cu121
- Datasets 2.14.6
- Tokenizers 0.15.2 |
baek26/all_9929_bart-all_rl | baek26 | 2024-05-20T23:29:57Z | 50 | 0 | transformers | [
"transformers",
"safetensors",
"bart",
"text2text-generation",
"trl",
"ppo",
"reinforcement-learning",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | reinforcement-learning | 2024-05-20T23:29:16Z | ---
license: apache-2.0
tags:
- trl
- ppo
- transformers
- reinforcement-learning
---
# TRL Model
This is a [TRL language model](https://github.com/huggingface/trl) that has been fine-tuned with reinforcement learning to
guide the model outputs according to a value, function, or human feedback. The model can be used for text generation.
## Usage
To use this model for inference, first install the TRL library:
```bash
python -m pip install trl
```
You can then generate text as follows:
```python
from transformers import pipeline
generator = pipeline("text-generation", model="baek26/all_9929_bart-all_rl")
outputs = generator("Hello, my llama is cute")
```
If you want to use the model for training or to obtain the outputs from the value head, load the model as follows:
```python
from transformers import AutoTokenizer
from trl import AutoModelForCausalLMWithValueHead
tokenizer = AutoTokenizer.from_pretrained("baek26/all_9929_bart-all_rl")
model = AutoModelForCausalLMWithValueHead.from_pretrained("baek26/all_9929_bart-all_rl")
inputs = tokenizer("Hello, my llama is cute", return_tensors="pt")
outputs = model(**inputs, labels=inputs["input_ids"])
```
|
kadirnar/lora-trained-xl | kadirnar | 2024-05-20T23:27:25Z | 30 | 0 | diffusers | [
"diffusers",
"tensorboard",
"safetensors",
"text-to-image",
"dreambooth",
"diffusers-training",
"stable-diffusion",
"stable-diffusion-diffusers",
"base_model:SG161222/Realistic_Vision_V6.0_B1_noVAE",
"base_model:finetune:SG161222/Realistic_Vision_V6.0_B1_noVAE",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2024-05-20T21:25:51Z | ---
license: creativeml-openrail-m
library_name: diffusers
tags:
- text-to-image
- dreambooth
- diffusers-training
- stable-diffusion
- stable-diffusion-diffusers
base_model: SG161222/Realistic_Vision_V6.0_B1_noVAE
inference: true
instance_prompt: a photo of try_on a model wearing
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# DreamBooth - kadirnar/lora-trained-xl
This is a DreamBooth model derived from SG161222/Realistic_Vision_V6.0_B1_noVAE. The weights were trained on the instance prompt "a photo of try_on a model wearing" using [DreamBooth](https://dreambooth.github.io/).
You can find some example images below.
DreamBooth for the text encoder was enabled: False.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
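A minimal sketch of running this pipeline with diffusers; the prompt continuation after the instance token and the step count are illustrative:

```python
import torch
from diffusers import DiffusionPipeline

# Load the DreamBooth-tuned checkpoint from this repo.
pipe = DiffusionPipeline.from_pretrained(
    "kadirnar/lora-trained-xl", torch_dtype=torch.float16
)
pipe.to("cuda")

# The instance prompt used during training, with an illustrative continuation.
prompt = "a photo of try_on a model wearing a red summer dress"
image = pipe(prompt, num_inference_steps=30).images[0]
image.save("try_on.png")
```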
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model] |
damgomz/ft_bs16_lr6_mlm | damgomz | 2024-05-20T23:24:45Z | 106 | 0 | transformers | [
"transformers",
"safetensors",
"albert",
"text-classification",
"fill-mask",
"en",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2024-05-20T20:26:35Z | ---
language: en
tags:
- fill-mask
kwargs:
timestamp: '2024-05-21T01:24:30'
project_name: ft_bs16_lr6_mlm_emissions_tracker
run_id: 0ddfd27b-4fbf-416a-aefd-5bc661eddbd4
duration: 13016.97026515007
emissions: 0.0078767721168084
emissions_rate: 6.051156264754384e-07
cpu_power: 42.5
gpu_power: 0.0
ram_power: 3.75
cpu_energy: 0.1536722855205337
gpu_energy: 0
ram_energy: 0.0135592407062649
energy_consumed: 0.1672315262267985
country_name: Switzerland
country_iso_code: CHE
region: .nan
cloud_provider: .nan
cloud_region: .nan
os: Linux-5.14.0-70.30.1.el9_0.x86_64-x86_64-with-glibc2.34
python_version: 3.10.4
codecarbon_version: 2.3.4
cpu_count: 2
cpu_model: Intel(R) Xeon(R) Platinum 8360Y CPU @ 2.40GHz
gpu_count: .nan
gpu_model: .nan
longitude: .nan
latitude: .nan
ram_total_size: 10
tracking_mode: machine
on_cloud: N
pue: 1.0
---
## Environmental Impact (CODE CARBON DEFAULT)
| Metric | Value |
|--------------------------|---------------------------------|
| Duration (in seconds) | 13016.97026515007 |
| Emissions (Co2eq in kg) | 0.0078767721168084 |
| CPU power (W) | 42.5 |
| GPU power (W) | [No GPU] |
| RAM power (W) | 3.75 |
| CPU energy (kWh) | 0.1536722855205337 |
| GPU energy (kWh) | [No GPU] |
| RAM energy (kWh) | 0.0135592407062649 |
| Consumed energy (kWh) | 0.1672315262267985 |
| Country name | Switzerland |
| Cloud provider | nan |
| Cloud region | nan |
| CPU count | 2 |
| CPU model | Intel(R) Xeon(R) Platinum 8360Y CPU @ 2.40GHz |
| GPU count | nan |
| GPU model | nan |
## Environmental Impact (for one core)
| Metric | Value |
|--------------------------|---------------------------------|
| CPU energy (kWh) | 0.025057667760413883 |
| Emissions (Co2eq in kg) | 0.005098313353850444 |
## Note
20 May 2024
## My Config
| Config | Value |
|--------------------------|-----------------|
| checkpoint | damgomz/ThunBERT_bs16_lr5_MLM |
| model_name | ft_bs16_lr6_mlm |
| sequence_length | 400 |
| num_epoch | 6 |
| learning_rate | 5e-06 |
| batch_size | 16 |
| weight_decay | 0.0 |
| warm_up_prop | 0.0 |
| drop_out_prob | 0.1 |
| packing_length | 100 |
| train_test_split | 0.2 |
| num_steps | 32580 |
## Training and Testing steps
| Epoch | Train Loss | Test Loss | Accuracy | Recall |
|---|---|---|---|---|
| 0 | 0.441067 | 0.342969 | 0.849779 | 0.877301 |
| 1 | 0.324801 | 0.346633 | 0.851252 | 0.918712 |
| 2 | 0.278107 | 0.331949 | 0.857143 | 0.871166 |
| 3 | 0.216568 | 0.365292 | 0.849779 | 0.878834 |
| 4 | 0.141970 | 0.435924 | 0.844624 | 0.915644 |
| 5 | 0.062279 | 0.516091 | 0.843152 | 0.889571 |
|
beam-searchers/curry-dpo-easy-llama-lora-model | beam-searchers | 2024-05-20T23:24:42Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-05-20T14:29:22Z | ---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a π€ transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Anish13/junk | Anish13 | 2024-05-20T23:24:20Z | 35 | 0 | transformers | [
"transformers",
"safetensors",
"transformer",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] | null | 2024-05-20T22:52:33Z | ---
tags:
- generated_from_trainer
model-index:
- name: junk
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# junk
This model is a fine-tuned version of [](https://huggingface.co/) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 8.1252
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 30
- num_epochs: 100
- mixed_precision_training: Native AMP
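For reference, the hyperparameters above map directly onto π€ `TrainingArguments`; a minimal sketch (the `output_dir` is an assumption, and the listed Adam betas/epsilon are already the library defaults):

```python
from transformers import TrainingArguments

# Sketch only: the card's hyperparameters expressed as standard arguments.
args = TrainingArguments(
    output_dir="junk",             # assumption: repo name used as output dir
    learning_rate=1e-4,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=30,
    num_train_epochs=100,
    fp16=True,                     # "Native AMP" mixed precision (assumed fp16)
)
```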
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 10.42 | 1.25 | 5 | 10.1940 |
| 10.1087 | 2.5 | 10 | 9.7539 |
| 9.7572 | 3.75 | 15 | 9.4707 |
| 9.5321 | 5.0 | 20 | 9.2852 |
| 9.13 | 6.25 | 25 | 9.1155 |
| 8.9989 | 7.5 | 30 | 8.9138 |
| 8.7422 | 8.75 | 35 | 8.7181 |
| 8.5133 | 10.0 | 40 | 8.5220 |
| 8.0836 | 11.25 | 45 | 8.3687 |
| 7.8212 | 12.5 | 50 | 8.2344 |
| 7.6616 | 13.75 | 55 | 8.1437 |
| 7.4743 | 15.0 | 60 | 8.0750 |
| 7.1668 | 16.25 | 65 | 8.0275 |
| 7.0485 | 17.5 | 70 | 7.9937 |
| 6.9619 | 18.75 | 75 | 7.9525 |
| 6.8705 | 20.0 | 80 | 7.9584 |
| 6.6232 | 21.25 | 85 | 7.9238 |
| 6.6423 | 22.5 | 90 | 7.9155 |
| 6.5876 | 23.75 | 95 | 7.9088 |
| 6.5075 | 25.0 | 100 | 7.9154 |
| 6.4218 | 26.25 | 105 | 7.8957 |
| 6.2857 | 27.5 | 110 | 7.9040 |
| 6.1833 | 28.75 | 115 | 7.9092 |
| 6.1263 | 30.0 | 120 | 7.9198 |
| 6.0123 | 31.25 | 125 | 7.9103 |
| 5.9111 | 32.5 | 130 | 7.9150 |
| 5.9157 | 33.75 | 135 | 7.9178 |
| 5.8237 | 35.0 | 140 | 7.9479 |
| 5.6626 | 36.25 | 145 | 7.9358 |
| 5.657 | 37.5 | 150 | 7.9548 |
| 5.5894 | 38.75 | 155 | 7.9572 |
| 5.5157 | 40.0 | 160 | 7.9800 |
| 5.4606 | 41.25 | 165 | 7.9481 |
| 5.2962 | 42.5 | 170 | 7.9568 |
| 5.2877 | 43.75 | 175 | 7.9720 |
| 5.2395 | 45.0 | 180 | 7.9709 |
| 5.1394 | 46.25 | 185 | 7.9900 |
| 5.0096 | 47.5 | 190 | 8.0010 |
| 4.9646 | 48.75 | 195 | 8.0105 |
| 4.973 | 50.0 | 200 | 8.0182 |
| 4.866 | 51.25 | 205 | 8.0310 |
| 4.8044 | 52.5 | 210 | 8.0372 |
| 4.7804 | 53.75 | 215 | 8.0387 |
| 4.7187 | 55.0 | 220 | 8.0166 |
| 4.6399 | 56.25 | 225 | 8.0598 |
| 4.6644 | 57.5 | 230 | 8.0465 |
| 4.5318 | 58.75 | 235 | 8.0482 |
| 4.4451 | 60.0 | 240 | 8.0538 |
| 4.4442 | 61.25 | 245 | 8.0473 |
| 4.3778 | 62.5 | 250 | 8.0517 |
| 4.4453 | 63.75 | 255 | 8.0740 |
| 4.3813 | 65.0 | 260 | 8.0658 |
| 4.2654 | 66.25 | 265 | 8.0764 |
| 4.2278 | 67.5 | 270 | 8.0737 |
| 4.2212 | 68.75 | 275 | 8.0952 |
| 4.1481 | 70.0 | 280 | 8.0877 |
| 4.162 | 71.25 | 285 | 8.0882 |
| 4.077 | 72.5 | 290 | 8.0813 |
| 4.0134 | 73.75 | 295 | 8.0862 |
| 3.9975 | 75.0 | 300 | 8.0980 |
| 3.9174 | 76.25 | 305 | 8.0989 |
| 3.9748 | 77.5 | 310 | 8.0903 |
| 3.9362 | 78.75 | 315 | 8.1109 |
| 3.8585 | 80.0 | 320 | 8.1049 |
| 3.8832 | 81.25 | 325 | 8.1076 |
| 3.8799 | 82.5 | 330 | 8.1078 |
| 3.8354 | 83.75 | 335 | 8.1073 |
| 3.8073 | 85.0 | 340 | 8.1182 |
| 3.8701 | 86.25 | 345 | 8.1179 |
| 3.7696 | 87.5 | 350 | 8.1204 |
| 3.7907 | 88.75 | 355 | 8.1187 |
| 3.7428 | 90.0 | 360 | 8.1172 |
| 3.7048 | 91.25 | 365 | 8.1201 |
| 3.724 | 92.5 | 370 | 8.1205 |
| 3.7308 | 93.75 | 375 | 8.1191 |
| 3.7665 | 95.0 | 380 | 8.1211 |
| 3.6804 | 96.25 | 385 | 8.1244 |
| 3.6001 | 97.5 | 390 | 8.1220 |
| 3.6411 | 98.75 | 395 | 8.1245 |
| 3.6321 | 100.0 | 400 | 8.1252 |
### Framework versions
- Transformers 4.40.2
- Pytorch 2.3.0
- Datasets 2.19.1
- Tokenizers 0.19.1
|
MaziyarPanahi/Experiment26Mergerix-7B-GGUF | MaziyarPanahi | 2024-05-20T23:17:30Z | 44 | 0 | transformers | [
"transformers",
"gguf",
"mistral",
"quantized",
"2-bit",
"3-bit",
"4-bit",
"5-bit",
"6-bit",
"8-bit",
"GGUF",
"safetensors",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"automerger",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us",
"base_model:automerger/Experiment26Mergerix-7B",
"base_model:quantized:automerger/Experiment26Mergerix-7B"
] | text-generation | 2024-05-20T22:47:40Z | ---
tags:
- quantized
- 2-bit
- 3-bit
- 4-bit
- 5-bit
- 6-bit
- 8-bit
- GGUF
- transformers
- safetensors
- mistral
- text-generation
- merge
- mergekit
- lazymergekit
- automerger
- license:apache-2.0
- autotrain_compatible
- endpoints_compatible
- text-generation-inference
- region:us
- text-generation
model_name: Experiment26Mergerix-7B-GGUF
base_model: automerger/Experiment26Mergerix-7B
inference: false
model_creator: automerger
pipeline_tag: text-generation
quantized_by: MaziyarPanahi
---
# [MaziyarPanahi/Experiment26Mergerix-7B-GGUF](https://huggingface.co/MaziyarPanahi/Experiment26Mergerix-7B-GGUF)
- Model creator: [automerger](https://huggingface.co/automerger)
- Original model: [automerger/Experiment26Mergerix-7B](https://huggingface.co/automerger/Experiment26Mergerix-7B)
## Description
[MaziyarPanahi/Experiment26Mergerix-7B-GGUF](https://huggingface.co/MaziyarPanahi/Experiment26Mergerix-7B-GGUF) contains GGUF format model files for [automerger/Experiment26Mergerix-7B](https://huggingface.co/automerger/Experiment26Mergerix-7B).
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server. Note: as of the time of writing (November 27th, 2023), ctransformers has not been updated in a long time and does not support many recent models.
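Of the scriptable clients above, llama-cpp-python is the quickest way to try these files from Python; a minimal, untested sketch (the quant file name and context size are assumptions):

```python
from llama_cpp import Llama

# Load one of the GGUF quants from this repository (file name is an assumption).
llm = Llama(model_path="Experiment26Mergerix-7B.Q4_K_M.gguf", n_ctx=4096)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Say hello in one sentence."}],
    max_tokens=64,
)
print(out["choices"][0]["message"]["content"])
```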
## Special thanks
π Special thanks to [Georgi Gerganov](https://github.com/ggerganov) and the whole team working on [llama.cpp](https://github.com/ggerganov/llama.cpp/) for making all of this possible. |
ZInnatRia/Model_Deploy | ZInnatRia | 2024-05-20T23:14:13Z | 182 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-05-20T23:13:55Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a π€ transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
damgomz/ft_bs16_lr6 | damgomz | 2024-05-20T23:12:27Z | 118 | 0 | transformers | [
"transformers",
"safetensors",
"albert",
"text-classification",
"fill-mask",
"en",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2024-05-20T20:17:33Z | ---
language: en
tags:
- fill-mask
kwargs:
timestamp: '2024-05-21T01:11:56'
project_name: ft_bs16_lr6_emissions_tracker
run_id: de899d70-200b-4886-a631-f7b1208dff83
duration: 12794.296389102936
emissions: 0.0077420295479005
emissions_rate: 6.0511569471648e-07
cpu_power: 42.5
gpu_power: 0.0
ram_power: 3.75
cpu_energy: 0.1510435175374149
gpu_energy: 0
ram_energy: 0.0133272930165131
energy_consumed: 0.1643708105539282
country_name: Switzerland
country_iso_code: CHE
region: .nan
cloud_provider: .nan
cloud_region: .nan
os: Linux-5.14.0-70.30.1.el9_0.x86_64-x86_64-with-glibc2.34
python_version: 3.10.4
codecarbon_version: 2.3.4
cpu_count: 2
cpu_model: Intel(R) Xeon(R) Platinum 8360Y CPU @ 2.40GHz
gpu_count: .nan
gpu_model: .nan
longitude: .nan
latitude: .nan
ram_total_size: 10.0
tracking_mode: machine
on_cloud: N
pue: 1.0
---
## Environmental Impact (CODE CARBON DEFAULT)
| Metric | Value |
|--------------------------|---------------------------------|
| Duration (in seconds) | 12794.296389102936 |
| Emissions (CO2eq in kg) | 0.0077420295479005 |
| CPU power (W) | 42.5 |
| GPU power (W) | [No GPU] |
| RAM power (W) | 3.75 |
| CPU energy (kWh) | 0.1510435175374149 |
| GPU energy (kWh) | [No GPU] |
| RAM energy (kWh) | 0.0133272930165131 |
| Consumed energy (kWh) | 0.1643708105539282 |
| Country name | Switzerland |
| Cloud provider | nan |
| Cloud region | nan |
| CPU count | 2 |
| CPU model | Intel(R) Xeon(R) Platinum 8360Y CPU @ 2.40GHz |
| GPU count | nan |
| GPU model | nan |
## Environmental Impact (for one core)
| Metric | Value |
|--------------------------|---------------------------------|
| CPU energy (kWh) | 0.024629020549023148 |
| Emissions (CO2eq in kg) | 0.005011099419065316 |
## Note
20 May 2024
## My Config
| Config | Value |
|--------------------------|-----------------|
| checkpoint | damgomz/ThunBERT_bs32_lr5 |
| model_name | ft_bs16_lr6 |
| sequence_length | 400 |
| num_epoch | 6 |
| learning_rate | 5e-06 |
| batch_size | 16 |
| weight_decay | 0.0 |
| warm_up_prop | 0.0 |
| drop_out_prob | 0.1 |
| packing_length | 100 |
| train_test_split | 0.2 |
| num_steps | 32580 |
## Training and Testing steps
| Epoch | Train Loss | Test Loss | Accuracy | Recall |
|---|---|---|---|---|
| 0 | 0.470165 | 0.347914 | 0.852725 | 0.901840 |
| 1 | 0.327644 | 0.400890 | 0.831370 | 0.950920 |
| 2 | 0.270167 | 0.349906 | 0.852725 | 0.877301 |
| 3 | 0.212724 | 0.386609 | 0.831370 | 0.837423 |
| 4 | 0.132359 | 0.448329 | 0.832106 | 0.898773 |
| 5 | 0.074659 | 0.475184 | 0.844624 | 0.822086 |
|
BecarIA/Longformer-SQuAD-becas-2 | BecarIA | 2024-05-20T23:07:46Z | 121 | 0 | transformers | [
"transformers",
"safetensors",
"roberta",
"question-answering",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | question-answering | 2024-05-20T22:15:53Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a π€ transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
aimeeli/dummy | aimeeli | 2024-05-20T23:04:58Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2024-05-20T23:04:57Z | ---
license: apache-2.0
---
|
PHVK1611/swin-tiny-patch4-window7-224-finetuned-eurosat | PHVK1611 | 2024-05-20T23:03:59Z | 219 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"swin",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:microsoft/swin-tiny-patch4-window7-224",
"base_model:finetune:microsoft/swin-tiny-patch4-window7-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2024-04-09T09:59:40Z | ---
license: apache-2.0
base_model: microsoft/swin-tiny-patch4-window7-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: swin-tiny-patch4-window7-224-finetuned-eurosat
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.959752766997269
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swin-tiny-patch4-window7-224-finetuned-eurosat
This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1659
- Accuracy: 0.9598
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.6782 | 1.0 | 31306 | 0.3116 | 0.9094 |
| 0.2155 | 2.0 | 62612 | 0.2301 | 0.9399 |
| 0.1997 | 3.0 | 93918 | 0.1659 | 0.9598 |
### Framework versions
- Transformers 4.39.0
- Pytorch 2.2.1+cpu
- Datasets 2.18.0
- Tokenizers 0.15.2
|
bartowski/Yi-1.5-34B-Chat-16K-GGUF | bartowski | 2024-05-20T22:57:42Z | 247 | 6 | null | [
"gguf",
"text-generation",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | text-generation | 2024-05-20T21:08:32Z | ---
license: apache-2.0
quantized_by: bartowski
pipeline_tag: text-generation
---
## Llamacpp imatrix Quantizations of Yi-1.5-34B-Chat-16K
Using <a href="https://github.com/ggerganov/llama.cpp/">llama.cpp</a> release <a href="https://github.com/ggerganov/llama.cpp/releases/tag/b2940">b2940</a> for quantization.
Original model: https://huggingface.co/01-ai/Yi-1.5-34B-Chat-16K
All quants made using imatrix option with dataset from [here](https://gist.github.com/bartowski1182/b6ac44691e994344625687afe3263b3a)
## Prompt format
```
{system_prompt}<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
<|im_end|>
```
## Download a file (not the whole branch) from below:
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [Yi-1.5-34B-Chat-16K-Q8_0.gguf](https://huggingface.co/bartowski/Yi-1.5-34B-Chat-16K-GGUF/blob/main/Yi-1.5-34B-Chat-16K-Q8_0.gguf) | Q8_0 | 36.54GB | Extremely high quality, generally unneeded but max available quant. |
| [Yi-1.5-34B-Chat-16K-Q6_K.gguf](https://huggingface.co/bartowski/Yi-1.5-34B-Chat-16K-GGUF/blob/main/Yi-1.5-34B-Chat-16K-Q6_K.gguf) | Q6_K | 28.21GB | Very high quality, near perfect, *recommended*. |
| [Yi-1.5-34B-Chat-16K-Q5_K_M.gguf](https://huggingface.co/bartowski/Yi-1.5-34B-Chat-16K-GGUF/blob/main/Yi-1.5-34B-Chat-16K-Q5_K_M.gguf) | Q5_K_M | 24.32GB | High quality, *recommended*. |
| [Yi-1.5-34B-Chat-16K-Q5_K_S.gguf](https://huggingface.co/bartowski/Yi-1.5-34B-Chat-16K-GGUF/blob/main/Yi-1.5-34B-Chat-16K-Q5_K_S.gguf) | Q5_K_S | 23.70GB | High quality, *recommended*. |
| [Yi-1.5-34B-Chat-16K-Q4_K_M.gguf](https://huggingface.co/bartowski/Yi-1.5-34B-Chat-16K-GGUF/blob/main/Yi-1.5-34B-Chat-16K-Q4_K_M.gguf) | Q4_K_M | 20.65GB | Good quality, uses about 4.83 bits per weight, *recommended*. |
| [Yi-1.5-34B-Chat-16K-Q4_K_S.gguf](https://huggingface.co/bartowski/Yi-1.5-34B-Chat-16K-GGUF/blob/main/Yi-1.5-34B-Chat-16K-Q4_K_S.gguf) | Q4_K_S | 19.59GB | Slightly lower quality with more space savings, *recommended*. |
| [Yi-1.5-34B-Chat-16K-IQ4_NL.gguf](https://huggingface.co/bartowski/Yi-1.5-34B-Chat-16K-GGUF/blob/main/Yi-1.5-34B-Chat-16K-IQ4_NL.gguf) | IQ4_NL | 19.52GB | Decent quality, slightly smaller than Q4_K_S with similar performance, *recommended*. |
| [Yi-1.5-34B-Chat-16K-IQ4_XS.gguf](https://huggingface.co/bartowski/Yi-1.5-34B-Chat-16K-GGUF/blob/main/Yi-1.5-34B-Chat-16K-IQ4_XS.gguf) | IQ4_XS | 18.47GB | Decent quality, smaller than Q4_K_S with similar performance, *recommended*. |
| [Yi-1.5-34B-Chat-16K-Q3_K_L.gguf](https://huggingface.co/bartowski/Yi-1.5-34B-Chat-16K-GGUF/blob/main/Yi-1.5-34B-Chat-16K-Q3_K_L.gguf) | Q3_K_L | 18.13GB | Lower quality but usable, good for low RAM availability. |
| [Yi-1.5-34B-Chat-16K-Q3_K_M.gguf](https://huggingface.co/bartowski/Yi-1.5-34B-Chat-16K-GGUF/blob/main/Yi-1.5-34B-Chat-16K-Q3_K_M.gguf) | Q3_K_M | 16.65GB | Even lower quality. |
| [Yi-1.5-34B-Chat-16K-IQ3_M.gguf](https://huggingface.co/bartowski/Yi-1.5-34B-Chat-16K-GGUF/blob/main/Yi-1.5-34B-Chat-16K-IQ3_M.gguf) | IQ3_M | 15.56GB | Medium-low quality, new method with decent performance comparable to Q3_K_M. |
| [Yi-1.5-34B-Chat-16K-IQ3_S.gguf](https://huggingface.co/bartowski/Yi-1.5-34B-Chat-16K-GGUF/blob/main/Yi-1.5-34B-Chat-16K-IQ3_S.gguf) | IQ3_S | 15.01GB | Lower quality, new method with decent performance, recommended over Q3_K_S quant, same size with better performance. |
| [Yi-1.5-34B-Chat-16K-Q3_K_S.gguf](https://huggingface.co/bartowski/Yi-1.5-34B-Chat-16K-GGUF/blob/main/Yi-1.5-34B-Chat-16K-Q3_K_S.gguf) | Q3_K_S | 14.96GB | Low quality, not recommended. |
| [Yi-1.5-34B-Chat-16K-IQ3_XS.gguf](https://huggingface.co/bartowski/Yi-1.5-34B-Chat-16K-GGUF/blob/main/Yi-1.5-34B-Chat-16K-IQ3_XS.gguf) | IQ3_XS | 14.23GB | Lower quality, new method with decent performance, slightly better than Q3_K_S. |
| [Yi-1.5-34B-Chat-16K-IQ3_XXS.gguf](https://huggingface.co/bartowski/Yi-1.5-34B-Chat-16K-GGUF/blob/main/Yi-1.5-34B-Chat-16K-IQ3_XXS.gguf) | IQ3_XXS | 13.33GB | Lower quality, new method with decent performance, comparable to Q3 quants. |
| [Yi-1.5-34B-Chat-16K-Q2_K.gguf](https://huggingface.co/bartowski/Yi-1.5-34B-Chat-16K-GGUF/blob/main/Yi-1.5-34B-Chat-16K-Q2_K.gguf) | Q2_K | 12.82GB | Very low quality but surprisingly usable. |
| [Yi-1.5-34B-Chat-16K-IQ2_M.gguf](https://huggingface.co/bartowski/Yi-1.5-34B-Chat-16K-GGUF/blob/main/Yi-1.5-34B-Chat-16K-IQ2_M.gguf) | IQ2_M | 11.79GB | Very low quality, uses SOTA techniques to also be surprisingly usable. |
| [Yi-1.5-34B-Chat-16K-IQ2_S.gguf](https://huggingface.co/bartowski/Yi-1.5-34B-Chat-16K-GGUF/blob/main/Yi-1.5-34B-Chat-16K-IQ2_S.gguf) | IQ2_S | 10.89GB | Very low quality, uses SOTA techniques to be usable. |
| [Yi-1.5-34B-Chat-16K-IQ2_XS.gguf](https://huggingface.co/bartowski/Yi-1.5-34B-Chat-16K-GGUF/blob/main/Yi-1.5-34B-Chat-16K-IQ2_XS.gguf) | IQ2_XS | 10.30GB | Very low quality, uses SOTA techniques to be usable. |
| [Yi-1.5-34B-Chat-16K-IQ2_XXS.gguf](https://huggingface.co/bartowski/Yi-1.5-34B-Chat-16K-GGUF/blob/main/Yi-1.5-34B-Chat-16K-IQ2_XXS.gguf) | IQ2_XXS | 9.30GB | Lower quality, uses SOTA techniques to be usable. |
| [Yi-1.5-34B-Chat-16K-IQ1_M.gguf](https://huggingface.co/bartowski/Yi-1.5-34B-Chat-16K-GGUF/blob/main/Yi-1.5-34B-Chat-16K-IQ1_M.gguf) | IQ1_M | 8.17GB | Extremely low quality, *not* recommended. |
| [Yi-1.5-34B-Chat-16K-IQ1_S.gguf](https://huggingface.co/bartowski/Yi-1.5-34B-Chat-16K-GGUF/blob/main/Yi-1.5-34B-Chat-16K-IQ1_S.gguf) | IQ1_S | 7.49GB | Extremely low quality, *not* recommended. |
## Downloading using huggingface-cli
First, make sure you have huggingface-cli installed:
```
pip install -U "huggingface_hub[cli]"
```
Then, you can target the specific file you want:
```
huggingface-cli download bartowski/Yi-1.5-34B-Chat-16K-GGUF --include "Yi-1.5-34B-Chat-16K-Q4_K_M.gguf" --local-dir ./ --local-dir-use-symlinks False
```
If the model is bigger than 50GB, it will have been split into multiple files. In order to download them all to a local folder, run:
```
huggingface-cli download bartowski/Yi-1.5-34B-Chat-16K-GGUF --include "Yi-1.5-34B-Chat-16K-Q8_0.gguf/*" --local-dir Yi-1.5-34B-Chat-16K-Q8_0 --local-dir-use-symlinks False
```
You can either specify a new local-dir (Yi-1.5-34B-Chat-16K-Q8_0) or download them all in place (./).
## Which file should I choose?
A great write up with charts showing various performances is provided by Artefact2 [here](https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9)
The first thing to figure out is how big a model you can run. To do this, you'll need to figure out how much RAM and/or VRAM you have.
If you want your model running as FAST as possible, you'll want to fit the whole thing on your GPU's VRAM. Aim for a quant with a file size 1-2GB smaller than your GPU's total VRAM.
If you want the absolute maximum quality, add both your system RAM and your GPU's VRAM together, then similarly grab a quant with a file size 1-2GB smaller than that total.
Next, you'll need to decide if you want to use an 'I-quant' or a 'K-quant'.
If you don't want to think too much, grab one of the K-quants. These are in format 'QX_K_X', like Q5_K_M.
If you want to get more into the weeds, you can check out this extremely useful feature chart:
[llama.cpp feature matrix](https://github.com/ggerganov/llama.cpp/wiki/Feature-matrix)
But basically, if you're aiming for below Q4, and you're running cuBLAS (Nvidia) or rocBLAS (AMD), you should look towards the I-quants. These are in format IQX_X, like IQ3_M. These are newer and offer better performance for their size.
These I-quants can also be used on CPU and Apple Metal, but will be slower than their K-quant equivalent, so speed vs performance is a tradeoff you'll have to decide.
The I-quants are *not* compatible with Vulkan, which also targets AMD, so if you have an AMD card double check if you're using the rocBLAS build or the Vulkan build. At the time of writing this, LM Studio has a preview with ROCm support, and other inference engines have specific builds for ROCm.
Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski
|
stanrom/ShareGPT4V-7B | stanrom | 2024-05-20T22:57:24Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"InternLMXComposer",
"feature-extraction",
"image-text-to-text",
"custom_code",
"arxiv:2311.12793",
"region:us"
] | image-text-to-text | 2024-05-20T20:25:52Z | ---
inference: false
pipeline_tag: image-text-to-text
---
<br>
<br>
# ShareGPT4V-7B Model Card
## Model details
**Model type:**
ShareGPT4V-7B is an open-source chatbot trained by fine-tuning the CLIP vision tower and LLaMA/Vicuna on GPT4-Vision-assisted [ShareGPT4V](https://huggingface.co/datasets/Lin-Chen/ShareGPT4V) data and LLaVA instruction-tuning data.
**Model date:**
ShareGPT4V-7B was trained in Nov 2023.
**Paper or resources for more information:**
[[Project](https://ShareGPT4V.github.io/)] [[Paper](https://huggingface.co/papers/2311.12793)] [[Code](https://github.com/InternLM/InternLM-XComposer/tree/main/projects/ShareGPT4V)]
## Usage
You can use this model directly as provided in our [[repository](https://github.com/InternLM/InternLM-XComposer/tree/main/projects/ShareGPT4V)]. Alternatively, modify the architecture name from "Share4VLlamaForCausalLM" to "LLaVALlamaForCausalLM" and the model_type keyword from "share4v" to "llava" in our config file, and the model loads seamlessly in the [[LLaVA repository](https://github.com/haotian-liu/LLaVA)].
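A minimal sketch of that config edit (the local path is an assumption; the field names follow the standard π€ `config.json` layout):

```python
import json

# Assumed local checkout of this repository; adjust the path as needed.
cfg_path = "ShareGPT4V-7B/config.json"

with open(cfg_path) as f:
    config = json.load(f)

# Rename the architecture and model_type so the LLaVA repository can load the model.
config["architectures"] = ["LLaVALlamaForCausalLM"]
config["model_type"] = "llava"

with open(cfg_path, "w") as f:
    json.dump(config, f, indent=2)
```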
## License
Llama 2 is licensed under the LLAMA 2 Community License,
Copyright (c) Meta Platforms, Inc. All Rights Reserved.
## Intended use
**Primary intended uses:**
The primary use of ShareGPT4V-7B is research on large multimodal models and chatbots.
**Primary intended users:**
The primary intended users of the model are researchers and hobbyists in computer vision, natural language processing, machine learning, and artificial intelligence.
## Training dataset
- 1.2M high-quality image-text pairs, i.e., ShareGPT4V-PT data
- 100K GPT4-Vision-generated image-text pairs
- LLaVA instruction-tuning data
## Evaluation dataset
A collection of 11 benchmarks |
mradermacher/Smaug-Llama-3-70B-Instruct-i1-GGUF | mradermacher | 2024-05-20T22:56:18Z | 158 | 8 | transformers | [
"transformers",
"gguf",
"en",
"dataset:aqua_rat",
"dataset:microsoft/orca-math-word-problems-200k",
"dataset:m-a-p/CodeFeedback-Filtered-Instruction",
"base_model:abacusai/Smaug-Llama-3-70B-Instruct",
"base_model:quantized:abacusai/Smaug-Llama-3-70B-Instruct",
"license:llama3",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2024-05-19T14:29:22Z | ---
base_model: abacusai/Smaug-Llama-3-70B-Instruct
datasets:
- aqua_rat
- microsoft/orca-math-word-problems-200k
- m-a-p/CodeFeedback-Filtered-Instruction
language:
- en
library_name: transformers
license: llama3
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hfhfix -->
<!-- ### vocab_type: -->
weighted/imatrix quants of https://huggingface.co/abacusai/Smaug-Llama-3-70B-Instruct
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Smaug-Llama-3-70B-Instruct-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
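These multi-part files are plain byte-splits, so the pieces concatenate back into a single GGUF (see TheBloke's READMEs above). A minimal sketch, using the Q6_K file names from the table below:

```python
import shutil

# Appending the .partXofY pieces in order reconstructs the original GGUF file.
parts = [
    "Smaug-Llama-3-70B-Instruct.i1-Q6_K.gguf.part1of2",
    "Smaug-Llama-3-70B-Instruct.i1-Q6_K.gguf.part2of2",
]
with open("Smaug-Llama-3-70B-Instruct.i1-Q6_K.gguf", "wb") as out:
    for part in parts:
        with open(part, "rb") as src:
            shutil.copyfileobj(src, out)
```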
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Smaug-Llama-3-70B-Instruct-i1-GGUF/resolve/main/Smaug-Llama-3-70B-Instruct.i1-IQ1_S.gguf) | i1-IQ1_S | 15.4 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Smaug-Llama-3-70B-Instruct-i1-GGUF/resolve/main/Smaug-Llama-3-70B-Instruct.i1-IQ1_M.gguf) | i1-IQ1_M | 16.9 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Smaug-Llama-3-70B-Instruct-i1-GGUF/resolve/main/Smaug-Llama-3-70B-Instruct.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 19.2 | |
| [GGUF](https://huggingface.co/mradermacher/Smaug-Llama-3-70B-Instruct-i1-GGUF/resolve/main/Smaug-Llama-3-70B-Instruct.i1-IQ2_XS.gguf) | i1-IQ2_XS | 21.2 | |
| [GGUF](https://huggingface.co/mradermacher/Smaug-Llama-3-70B-Instruct-i1-GGUF/resolve/main/Smaug-Llama-3-70B-Instruct.i1-IQ2_S.gguf) | i1-IQ2_S | 22.3 | |
| [GGUF](https://huggingface.co/mradermacher/Smaug-Llama-3-70B-Instruct-i1-GGUF/resolve/main/Smaug-Llama-3-70B-Instruct.i1-IQ2_M.gguf) | i1-IQ2_M | 24.2 | |
| [GGUF](https://huggingface.co/mradermacher/Smaug-Llama-3-70B-Instruct-i1-GGUF/resolve/main/Smaug-Llama-3-70B-Instruct.i1-Q2_K.gguf) | i1-Q2_K | 26.5 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Smaug-Llama-3-70B-Instruct-i1-GGUF/resolve/main/Smaug-Llama-3-70B-Instruct.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 27.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Smaug-Llama-3-70B-Instruct-i1-GGUF/resolve/main/Smaug-Llama-3-70B-Instruct.i1-IQ3_XS.gguf) | i1-IQ3_XS | 29.4 | |
| [GGUF](https://huggingface.co/mradermacher/Smaug-Llama-3-70B-Instruct-i1-GGUF/resolve/main/Smaug-Llama-3-70B-Instruct.i1-IQ3_S.gguf) | i1-IQ3_S | 31.0 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Smaug-Llama-3-70B-Instruct-i1-GGUF/resolve/main/Smaug-Llama-3-70B-Instruct.i1-Q3_K_S.gguf) | i1-Q3_K_S | 31.0 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Smaug-Llama-3-70B-Instruct-i1-GGUF/resolve/main/Smaug-Llama-3-70B-Instruct.i1-IQ3_M.gguf) | i1-IQ3_M | 32.0 | |
| [GGUF](https://huggingface.co/mradermacher/Smaug-Llama-3-70B-Instruct-i1-GGUF/resolve/main/Smaug-Llama-3-70B-Instruct.i1-Q3_K_M.gguf) | i1-Q3_K_M | 34.4 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Smaug-Llama-3-70B-Instruct-i1-GGUF/resolve/main/Smaug-Llama-3-70B-Instruct.i1-Q3_K_L.gguf) | i1-Q3_K_L | 37.2 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Smaug-Llama-3-70B-Instruct-i1-GGUF/resolve/main/Smaug-Llama-3-70B-Instruct.i1-IQ4_XS.gguf) | i1-IQ4_XS | 38.0 | |
| [GGUF](https://huggingface.co/mradermacher/Smaug-Llama-3-70B-Instruct-i1-GGUF/resolve/main/Smaug-Llama-3-70B-Instruct.i1-Q4_0.gguf) | i1-Q4_0 | 40.2 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Smaug-Llama-3-70B-Instruct-i1-GGUF/resolve/main/Smaug-Llama-3-70B-Instruct.i1-Q4_K_S.gguf) | i1-Q4_K_S | 40.4 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Smaug-Llama-3-70B-Instruct-i1-GGUF/resolve/main/Smaug-Llama-3-70B-Instruct.i1-Q4_K_M.gguf) | i1-Q4_K_M | 42.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Smaug-Llama-3-70B-Instruct-i1-GGUF/resolve/main/Smaug-Llama-3-70B-Instruct.i1-Q5_K_S.gguf) | i1-Q5_K_S | 48.8 | |
| [GGUF](https://huggingface.co/mradermacher/Smaug-Llama-3-70B-Instruct-i1-GGUF/resolve/main/Smaug-Llama-3-70B-Instruct.i1-Q5_K_M.gguf) | i1-Q5_K_M | 50.0 | |
| [PART 1](https://huggingface.co/mradermacher/Smaug-Llama-3-70B-Instruct-i1-GGUF/resolve/main/Smaug-Llama-3-70B-Instruct.i1-Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Smaug-Llama-3-70B-Instruct-i1-GGUF/resolve/main/Smaug-Llama-3-70B-Instruct.i1-Q6_K.gguf.part2of2) | i1-Q6_K | 58.0 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/Deep-Miqu-103B-GGUF | mradermacher | 2024-05-20T22:55:44Z | 21 | 1 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"license:other",
"endpoints_compatible",
"region:us"
] | null | 2024-05-20T08:05:00Z | ---
base_model: jukofyork/Deep-Miqu-103B
language:
- en
library_name: transformers
license: other
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/jukofyork/Deep-Miqu-103B
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Deep-Miqu-103B-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Deep-Miqu-103B-GGUF/resolve/main/Deep-Miqu-103B.Q2_K.gguf) | Q2_K | 38.0 | |
| [GGUF](https://huggingface.co/mradermacher/Deep-Miqu-103B-GGUF/resolve/main/Deep-Miqu-103B.IQ3_XS.gguf) | IQ3_XS | 42.3 | |
| [GGUF](https://huggingface.co/mradermacher/Deep-Miqu-103B-GGUF/resolve/main/Deep-Miqu-103B.Q3_K_S.gguf) | Q3_K_S | 44.6 | |
| [GGUF](https://huggingface.co/mradermacher/Deep-Miqu-103B-GGUF/resolve/main/Deep-Miqu-103B.IQ3_S.gguf) | IQ3_S | 44.7 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Deep-Miqu-103B-GGUF/resolve/main/Deep-Miqu-103B.IQ3_M.gguf) | IQ3_M | 46.2 | |
| [GGUF](https://huggingface.co/mradermacher/Deep-Miqu-103B-GGUF/resolve/main/Deep-Miqu-103B.Q3_K_M.gguf) | Q3_K_M | 49.7 | lower quality |
| [PART 1](https://huggingface.co/mradermacher/Deep-Miqu-103B-GGUF/resolve/main/Deep-Miqu-103B.Q3_K_L.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Deep-Miqu-103B-GGUF/resolve/main/Deep-Miqu-103B.Q3_K_L.gguf.part2of2) | Q3_K_L | 54.2 | |
| [PART 1](https://huggingface.co/mradermacher/Deep-Miqu-103B-GGUF/resolve/main/Deep-Miqu-103B.IQ4_XS.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Deep-Miqu-103B-GGUF/resolve/main/Deep-Miqu-103B.IQ4_XS.gguf.part2of2) | IQ4_XS | 55.7 | |
| [PART 1](https://huggingface.co/mradermacher/Deep-Miqu-103B-GGUF/resolve/main/Deep-Miqu-103B.Q4_K_S.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Deep-Miqu-103B-GGUF/resolve/main/Deep-Miqu-103B.Q4_K_S.gguf.part2of2) | Q4_K_S | 58.7 | fast, recommended |
| [PART 1](https://huggingface.co/mradermacher/Deep-Miqu-103B-GGUF/resolve/main/Deep-Miqu-103B.Q4_K_M.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Deep-Miqu-103B-GGUF/resolve/main/Deep-Miqu-103B.Q4_K_M.gguf.part2of2) | Q4_K_M | 62.0 | fast, recommended |
| [PART 1](https://huggingface.co/mradermacher/Deep-Miqu-103B-GGUF/resolve/main/Deep-Miqu-103B.Q5_K_S.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Deep-Miqu-103B-GGUF/resolve/main/Deep-Miqu-103B.Q5_K_S.gguf.part2of2) | Q5_K_S | 71.1 | |
| [PART 1](https://huggingface.co/mradermacher/Deep-Miqu-103B-GGUF/resolve/main/Deep-Miqu-103B.Q5_K_M.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Deep-Miqu-103B-GGUF/resolve/main/Deep-Miqu-103B.Q5_K_M.gguf.part2of2) | Q5_K_M | 73.0 | |
| [PART 1](https://huggingface.co/mradermacher/Deep-Miqu-103B-GGUF/resolve/main/Deep-Miqu-103B.Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Deep-Miqu-103B-GGUF/resolve/main/Deep-Miqu-103B.Q6_K.gguf.part2of2) | Q6_K | 84.8 | very good quality |
| [PART 1](https://huggingface.co/mradermacher/Deep-Miqu-103B-GGUF/resolve/main/Deep-Miqu-103B.Q8_0.gguf.part1of3) [PART 2](https://huggingface.co/mradermacher/Deep-Miqu-103B-GGUF/resolve/main/Deep-Miqu-103B.Q8_0.gguf.part2of3) [PART 3](https://huggingface.co/mradermacher/Deep-Miqu-103B-GGUF/resolve/main/Deep-Miqu-103B.Q8_0.gguf.part3of3) | Q8_0 | 109.8 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mtarasovic/rel-rent-sk-spacy | mtarasovic | 2024-05-20T22:54:01Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2024-05-20T22:52:39Z | ---
license: apache-2.0
---
|
mtarasovic/ner-rel-rent-sk-spacy | mtarasovic | 2024-05-20T22:52:18Z | 0 | 0 | spacy | [
"spacy",
"legal",
"sk",
"license:apache-2.0",
"region:us"
] | null | 2024-05-20T22:35:29Z | ---
license: apache-2.0
language:
- sk
library_name: spacy
tags:
- legal
--- |
maneln/tinyllama-qa | maneln | 2024-05-20T22:51:36Z | 135 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-20T22:43:50Z | ---
license: apache-2.0
---
|
matthieuzone/VACHERINbis | matthieuzone | 2024-05-20T22:49:07Z | 1 | 0 | diffusers | [
"diffusers",
"text-to-image",
"diffusers-training",
"lora",
"template:sd-lora",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] | text-to-image | 2024-05-20T22:40:59Z | ---
license: openrail++
library_name: diffusers
tags:
- text-to-image
- text-to-image
- diffusers-training
- diffusers
- lora
- template:sd-lora
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: a photo of sks cheese
widget: []
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# SDXL LoRA DreamBooth - matthieuzone/VACHERINbis
<Gallery />
## Model description
These are matthieuzone/VACHERINbis LoRA adaption weights for stabilityai/stable-diffusion-xl-base-1.0.
The weights were trained using [DreamBooth](https://dreambooth.github.io/).
LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
## Trigger words
You should use a photo of sks cheese to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](matthieuzone/VACHERINbis/tree/main) them in the Files & versions tab.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
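In the meantime, a minimal, untested sketch using the standard π€ diffusers LoRA-loading API (the base model, repository id, and trigger phrase come from this card; everything else is an assumption):

```python
import torch
from diffusers import DiffusionPipeline

# Load the SDXL base model this LoRA was trained against (see card metadata).
pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")

# Attach the DreamBooth LoRA adapter weights from this repository.
pipe.load_lora_weights("matthieuzone/VACHERINbis")

# Use the trigger phrase from the card to activate the learned concept.
image = pipe("a photo of sks cheese").images[0]
image.save("vacherin.png")
```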
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model] |
fzzhang/mistralv1_spectral_r8_4e5_e05_merged | fzzhang | 2024-05-20T22:47:19Z | 4 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-20T22:44:15Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a π€ transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
fzzhang/mistralv1_spectral_r8_35e5_e05 | fzzhang | 2024-05-20T22:41:48Z | 1 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"generated_from_trainer",
"base_model:mistralai/Mistral-7B-v0.1",
"base_model:adapter:mistralai/Mistral-7B-v0.1",
"license:apache-2.0",
"region:us"
] | null | 2024-05-20T22:41:45Z | ---
license: apache-2.0
library_name: peft
tags:
- generated_from_trainer
base_model: mistralai/Mistral-7B-v0.1
model-index:
- name: mistralv1_spectral_r8_35e5_e05
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mistralv1_spectral_r8_35e5_e05
This model is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) on the None dataset.
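A minimal usage sketch (an editorial addition, not from the original card; it assumes the adapter attaches to the base model named above):

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the base model, then attach this PEFT adapter on top of it.
base = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-v0.1")
model = PeftModel.from_pretrained(base, "fzzhang/mistralv1_spectral_r8_35e5_e05")
tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-v0.1")
```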
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3.5e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 0
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.1
- Datasets 2.18.0
- Tokenizers 0.15.2 |
matthieuzone/TETE_DE_MOINESbis | matthieuzone | 2024-05-20T22:40:44Z | 2 | 0 | diffusers | [
"diffusers",
"text-to-image",
"diffusers-training",
"lora",
"template:sd-lora",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] | text-to-image | 2024-05-20T22:32:34Z | ---
license: openrail++
library_name: diffusers
tags:
- text-to-image
- text-to-image
- diffusers-training
- diffusers
- lora
- template:sd-lora
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: a photo of sks cheese
widget: []
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# SDXL LoRA DreamBooth - matthieuzone/TETE_DE_MOINESbis
<Gallery />
## Model description
These are matthieuzone/TETE_DE_MOINESbis LoRA adaption weights for stabilityai/stable-diffusion-xl-base-1.0.
The weights were trained using [DreamBooth](https://dreambooth.github.io/).
LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
## Trigger words
You should use a photo of sks cheese to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](matthieuzone/TETE_DE_MOINESbis/tree/main) them in the Files & versions tab.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
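# Editorial sketch, not from the card (assumes standard SDXL + LoRA usage with
# the base model, special VAE, and trigger prompt documented below):
import torch
from diffusers import AutoencoderKL, DiffusionPipeline

vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16)
pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", vae=vae, torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("matthieuzone/TETE_DE_MOINESbis")
image = pipe("a photo of sks cheese").images[0]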
```
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model] |
EthanRhys/18-Volt-Current | EthanRhys | 2024-05-20T22:36:12Z | 0 | 0 | null | [
"license:openrail++",
"region:us"
] | null | 2024-05-20T22:34:21Z | ---
license: openrail++
---
|
fzzhang/mistralv1_spectral_r8_3e5_e05_merged | fzzhang | 2024-05-20T22:34:23Z | 4 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-20T22:31:21Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
jinyuan22/RFamLlama-large | jinyuan22 | 2024-05-20T22:34:13Z | 166 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"biology",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-20T15:44:17Z | ---
license: cc-by-nc-4.0
library_name: transformers
tags:
- biology
pipeline_tag: text-generation
widget:
- text: <|bos|> <|tag_start|> 00050 <|tag_end|> <|5|>
---
# **RFamLlama**
The ability to efficiently generate specific RNA sequences on demand has significant implications for both scientific research and therapeutic applications. In this context, we introduce RFamLlama, a conditional language model that is specifically optimized for generating RNA sequences across diverse families. This model was trained on RNA sequences representing over 4,000 distinct families, each augmented with control tags to denote the specific family. We have shown that the inclusion of family-specific tags substantially enhances the capabilities of our model in zero-shot fitness prediction of RNA molecules. Additionally, this model supports a conditional generation approach, allowing for the generation of RNA sequences by using Rfam IDs as input prompts, thereby eliminating the need for further functional-specific fine-tuning. Consequently, RFamLlama is poised to be an effective and widely applicable tool for the zero-shot fitness prediction and generation of RNA sequences, potentially pushing the boundaries of what can be achieved beyond natural evolutionary processes.
## Use RFamLlama-base
```python
# generation
from transformers import LlamaForCausalLM, AutoTokenizer, pipeline
import torch
model_url = "jinyuan22/RFamLlama-large"
model = LlamaForCausalLM.from_pretrained(model_url, torch_dtype=torch.float16)
tokenizer = AutoTokenizer.from_pretrained(model_url)
device = torch.device("cuda:0") if torch.cuda.is_available() else torch.device("cpu")
pipe = pipeline("text-generation", model=model, device=device, tokenizer=tokenizer)
tag = "RF00005"
txt = f"<|bos|> <|tag_start|> {tag[2:]} <|tag_end|> <|5|> "
outputs = pipe(txt, num_return_sequences=10, max_new_tokens=300, repetition_penalty=1, top_p=1, temperature=1, do_sample=True)
for i, output in enumerate(outputs):
seq = output["generated_text"]
seq = seq.split("<|5|>")[1].split("<|3|>")[0]
print(f">{i}\n{seq}")
``` |
bgspaditya/malurl-roberta-5000-95 | bgspaditya | 2024-05-20T22:33:01Z | 108 | 0 | transformers | [
"transformers",
"safetensors",
"roberta",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-05-20T22:28:36Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
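In the meantime, here is a minimal sketch (an editorial addition, not from the card: it assumes a standard sequence-classification head, and the input URL is made up):

```python
from transformers import pipeline

# Repo id from this card's metadata; the input URL is purely illustrative.
clf = pipeline("text-classification", model="bgspaditya/malurl-roberta-5000-95")
print(clf("http://example.com/login-update/verify"))
```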
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
damgomz/ft_bs64_lr6_mlm | damgomz | 2024-05-20T22:29:55Z | 106 | 0 | transformers | [
"transformers",
"safetensors",
"albert",
"text-classification",
"fill-mask",
"en",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2024-05-20T20:07:43Z | ---
language: en
tags:
- fill-mask
kwargs:
timestamp: '2024-05-21T00:23:19'
project_name: ft_bs64_lr6_mlm_emissions_tracker
run_id: 744170a9-7abe-403e-97c9-b1d9cb1ff01e
duration: 9463.653764724731
emissions: 0.006190923917876
emissions_rate: 6.541790382222564e-07
cpu_power: 42.5
gpu_power: 0.0
ram_power: 7.5
cpu_energy: 0.1117235191464424
gpu_energy: 0
ram_energy: 0.019715811608235
energy_consumed: 0.1314393307546774
country_name: Switzerland
country_iso_code: CHE
region: .nan
cloud_provider: .nan
cloud_region: .nan
os: Linux-5.14.0-70.30.1.el9_0.x86_64-x86_64-with-glibc2.34
python_version: 3.10.4
codecarbon_version: 2.3.4
cpu_count: 3
cpu_model: Intel(R) Xeon(R) Platinum 8360Y CPU @ 2.40GHz
gpu_count: .nan
gpu_model: .nan
longitude: .nan
latitude: .nan
ram_total_size: 20
tracking_mode: machine
on_cloud: N
pue: 1.0
---
## Environmental Impact (CODE CARBON DEFAULT)
| Metric | Value |
|--------------------------|---------------------------------|
| Duration (in seconds) | 9463.653764724731 |
| Emissions (Co2eq in kg) | 0.006190923917876 |
| CPU power (W) | 42.5 |
| GPU power (W) | [No GPU] |
| RAM power (W) | 7.5 |
| CPU energy (kWh) | 0.1117235191464424 |
| GPU energy (kWh) | [No GPU] |
| RAM energy (kWh) | 0.019715811608235 |
| Consumed energy (kWh) | 0.1314393307546774 |
| Country name | Switzerland |
| Cloud provider | nan |
| Cloud region | nan |
| CPU count | 3 |
| CPU model | Intel(R) Xeon(R) Platinum 8360Y CPU @ 2.40GHz |
| GPU count | nan |
| GPU model | nan |
## Environmental Impact (for one core)
| Metric | Value |
|--------------------------|---------------------------------|
| CPU energy (kWh) | 0.01821753349709511 |
| Emissions (Co2eq in kg) | 0.003706597724517186 |
## Note
20 May 2024
## My Config
| Config | Value |
|--------------------------|-----------------|
| checkpoint | damgomz/ThunBERT_bs16_lr5_MLM |
| model_name | ft_bs64_lr6_mlm |
| sequence_length | 400 |
| num_epoch | 6 |
| learning_rate | 5e-06 |
| batch_size | 64 |
| weight_decay | 0.0 |
| warm_up_prop | 0.0 |
| drop_out_prob | 0.1 |
| packing_length | 100 |
| train_test_split | 0.2 |
| num_steps | 32580 |
## Training and Testing steps
| Epoch | Train Loss | Test Loss | Accuracy | Recall |
|---|---|---|---|---|
| 0 | 0.540673 | 0.431631 | 0.799705 | 0.845092 |
| 1 | 0.384934 | 0.352200 | 0.850515 | 0.861963 |
| 2 | 0.322558 | 0.338670 | 0.850515 | 0.868098 |
| 3 | 0.295680 | 0.333823 | 0.849779 | 0.863497 |
| 4 | 0.262446 | 0.344478 | 0.848306 | 0.858896 |
| 5 | 0.221483 | 0.376880 | 0.849779 | 0.895706 |
|
macundin/entrega3 | macundin | 2024-05-20T22:26:21Z | 0 | 0 | fastai | [
"fastai",
"region:us"
] | null | 2024-05-20T22:26:14Z | ---
tags:
- fastai
---
# Amazing!
🥳 Congratulations on hosting your fastai model on the Hugging Face Hub!
# Some next steps
1. Fill out this model card with more information (see the template below and the [documentation here](https://huggingface.co/docs/hub/model-repos))!
2. Create a demo in Gradio or Streamlit using π€ Spaces ([documentation here](https://huggingface.co/docs/hub/spaces)).
3. Join the fastai community on the [Fastai Discord](https://discord.com/invite/YKrxeNn)!
Greetings fellow fastlearner 🤝! Don't forget to delete this content from your model card.
---
# Model card
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
|
ssmits/Falcon2-5.5B-Polish-GGUF | ssmits | 2024-05-20T22:26:13Z | 7 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"lazymergekit",
"tiiuae/falcon-11B",
"llama-cpp",
"gguf-my-repo",
"pl",
"base_model:tiiuae/falcon-11B",
"base_model:quantized:tiiuae/falcon-11B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-05-20T22:07:26Z | ---
language:
- pl
license: apache-2.0
library_name: transformers
tags:
- mergekit
- merge
- lazymergekit
- tiiuae/falcon-11B
- llama-cpp
- gguf-my-repo
base_model:
- tiiuae/falcon-11B
---
# ssmits/Falcon2-5.5B-Polish-Q5_K_M-GGUF
This model was converted to GGUF format from [`ssmits/Falcon2-5.5B-Polish`](https://huggingface.co/ssmits/Falcon2-5.5B-Polish) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/ssmits/Falcon2-5.5B-Polish) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew.
```bash
brew install ggerganov/ggerganov/llama.cpp
```
Invoke the llama.cpp server or the CLI.
CLI:
```bash
llama-cli --hf-repo ssmits/Falcon2-5.5B-Polish-Q5_K_M-GGUF --model falcon2-5.5b-polish.Q5_K_M.gguf -p "The meaning to life and the universe is"
```
Server:
```bash
llama-server --hf-repo ssmits/Falcon2-5.5B-Polish-Q5_K_M-GGUF --model falcon2-5.5b-polish.Q5_K_M.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
```
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m falcon2-5.5b-polish.Q5_K_M.gguf -n 128
```
|
tali1/LLMNIDS-t5base-1 | tali1 | 2024-05-20T22:26:10Z | 106 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"autotrain",
"dataset:LLMNIDS-t5base-1/autotrain-data",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2024-05-20T22:25:18Z |
---
tags:
- autotrain
- text2text-generation
widget:
- text: "I love AutoTrain"
datasets:
- LLMNIDS-t5base-1/autotrain-data
---
# Model Trained Using AutoTrain
- Problem type: Seq2Seq
## Validation Metrics
- loss: 0.020906077697873116
- rouge1: 98.3836
- rouge2: 44.4266
- rougeL: 98.3836
- rougeLsum: 98.3909
- gen_len: 4.5541
- runtime: 55.669
- samples_per_second: 122.797
- steps_per_second: 7.688
: 3.0
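A minimal usage sketch (an editorial addition; it assumes the AutoTrain checkpoint works with the standard seq2seq pipeline, and the input is the widget example from the metadata):

```python
from transformers import pipeline

seq2seq = pipeline("text2text-generation", model="tali1/LLMNIDS-t5base-1")
print(seq2seq("I love AutoTrain"))
```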
|
fzzhang/mistralv1_spectral_r8_3e5_e05 | fzzhang | 2024-05-20T22:24:20Z | 0 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"generated_from_trainer",
"base_model:mistralai/Mistral-7B-v0.1",
"base_model:adapter:mistralai/Mistral-7B-v0.1",
"license:apache-2.0",
"region:us"
] | null | 2024-05-20T22:24:18Z | ---
license: apache-2.0
library_name: peft
tags:
- generated_from_trainer
base_model: mistralai/Mistral-7B-v0.1
model-index:
- name: mistralv1_spectral_r8_3e5_e05
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mistralv1_spectral_r8_3e5_e05
This model is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 0
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.1
- Datasets 2.18.0
- Tokenizers 0.15.2 |
ClaudioItaly/Evolver-Q5_K_M-GGUF | ClaudioItaly | 2024-05-20T22:24:16Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"llama-cpp",
"gguf-my-repo",
"base_model:SanjiWatsuki/Kunoichi-DPO-v2-7B",
"base_model:merge:SanjiWatsuki/Kunoichi-DPO-v2-7B",
"base_model:mergekit-community/mergekit-slerp-ebgdloh",
"base_model:merge:mergekit-community/mergekit-slerp-ebgdloh",
"endpoints_compatible",
"region:us"
] | null | 2024-05-20T22:24:02Z | ---
library_name: transformers
tags:
- mergekit
- merge
- llama-cpp
- gguf-my-repo
base_model:
- SanjiWatsuki/Kunoichi-DPO-v2-7B
- mergekit-community/mergekit-slerp-ebgdloh
---
# ClaudioItaly/Evolver-Q5_K_M-GGUF
This model was converted to GGUF format from [`mergekit-community/Evolver`](https://huggingface.co/mergekit-community/Evolver) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/mergekit-community/Evolver) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew.
```bash
brew install ggerganov/ggerganov/llama.cpp
```
Invoke the llama.cpp server or the CLI.
CLI:
```bash
llama-cli --hf-repo ClaudioItaly/Evolver-Q5_K_M-GGUF --model evolver.Q5_K_M.gguf -p "The meaning to life and the universe is"
```
Server:
```bash
llama-server --hf-repo ClaudioItaly/Evolver-Q5_K_M-GGUF --model evolver.Q5_K_M.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
```
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m evolver.Q5_K_M.gguf -n 128
```
|
rafaelsandroni/LoRa-RAG-pt-br-gemma-7b | rafaelsandroni | 2024-05-20T22:23:12Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"gemma",
"trl",
"en",
"base_model:unsloth/gemma-7b-bnb-4bit",
"base_model:finetune:unsloth/gemma-7b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-05-20T22:22:53Z | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- gemma
- trl
base_model: unsloth/gemma-7b-bnb-4bit
---
# Uploaded model
- **Developed by:** rafaelsandroni
- **License:** apache-2.0
- **Finetuned from model :** unsloth/gemma-7b-bnb-4bit
This gemma model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
MaziyarPanahi/Experiment26Percival_01-7B-GGUF | MaziyarPanahi | 2024-05-20T22:22:27Z | 84 | 0 | transformers | [
"transformers",
"gguf",
"mistral",
"quantized",
"2-bit",
"3-bit",
"4-bit",
"5-bit",
"6-bit",
"8-bit",
"GGUF",
"safetensors",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"automerger",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us",
"base_model:automerger/Experiment26Percival_01-7B",
"base_model:quantized:automerger/Experiment26Percival_01-7B"
] | text-generation | 2024-05-20T21:52:14Z | ---
tags:
- quantized
- 2-bit
- 3-bit
- 4-bit
- 5-bit
- 6-bit
- 8-bit
- GGUF
- transformers
- safetensors
- mistral
- text-generation
- merge
- mergekit
- lazymergekit
- automerger
- license:apache-2.0
- autotrain_compatible
- endpoints_compatible
- text-generation-inference
- region:us
- text-generation
model_name: Experiment26Percival_01-7B-GGUF
base_model: automerger/Experiment26Percival_01-7B
inference: false
model_creator: automerger
pipeline_tag: text-generation
quantized_by: MaziyarPanahi
---
# [MaziyarPanahi/Experiment26Percival_01-7B-GGUF](https://huggingface.co/MaziyarPanahi/Experiment26Percival_01-7B-GGUF)
- Model creator: [automerger](https://huggingface.co/automerger)
- Original model: [automerger/Experiment26Percival_01-7B](https://huggingface.co/automerger/Experiment26Percival_01-7B)
## Description
[MaziyarPanahi/Experiment26Percival_01-7B-GGUF](https://huggingface.co/MaziyarPanahi/Experiment26Percival_01-7B-GGUF) contains GGUF format model files for [automerger/Experiment26Percival_01-7B](https://huggingface.co/automerger/Experiment26Percival_01-7B).
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF (a short usage sketch follows the list):
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. Note, as of time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models.
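As a concrete illustration, a minimal sketch using llama-cpp-python from the list above (an editorial addition; the quant filename glob is an assumption, so substitute any GGUF file from this repo):

```python
from llama_cpp import Llama

# Downloads a matching GGUF file from the repo; the filename pattern is assumed.
llm = Llama.from_pretrained(
    repo_id="MaziyarPanahi/Experiment26Percival_01-7B-GGUF",
    filename="*Q4_K_M.gguf",
)
out = llm("The meaning to life and the universe is", max_tokens=32)
print(out["choices"][0]["text"])
```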
## Special thanks
🙏 Special thanks to [Georgi Gerganov](https://github.com/ggerganov) and the whole team working on [llama.cpp](https://github.com/ggerganov/llama.cpp/) for making all of this possible. |
diegozambrana/BV_symbols_model | diegozambrana | 2024-05-20T22:19:27Z | 217 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:google/vit-base-patch16-224-in21k",
"base_model:finetune:google/vit-base-patch16-224-in21k",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2024-04-26T12:44:40Z | ---
license: apache-2.0
base_model: google/vit-base-patch16-224-in21k
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: BV_symbols_model
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9423191870890616
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# BV_symbols_model
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4636
- Accuracy: 0.9423
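A minimal inference sketch (an editorial addition; it assumes the standard image-classification pipeline, and the image path is a placeholder):

```python
from transformers import pipeline

clf = pipeline("image-classification", model="diegozambrana/BV_symbols_model")
print(clf("path/to/symbol.png"))  # placeholder image path
```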
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:------:|:----:|:---------------:|:--------:|
| 0.9683 | 0.9988 | 209 | 0.9087 | 0.9259 |
| 0.5438 | 1.9976 | 418 | 0.5415 | 0.9381 |
| 0.4768 | 2.9964 | 627 | 0.4636 | 0.9423 |
### Framework versions
- Transformers 4.40.2
- Pytorch 2.2.1+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
|
GenTrendGPT/TypeGEN | GenTrendGPT | 2024-05-20T22:13:51Z | 132 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"conversational",
"base_model:TinyLlama/TinyLlama-1.1B-Chat-v1.0",
"base_model:finetune:TinyLlama/TinyLlama-1.1B-Chat-v1.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-20T22:13:00Z | ---
base_model:
- TinyLlama/TinyLlama-1.1B-Chat-v1.0
library_name: transformers
tags:
- mergekit
- merge
---
# TypeGEN
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the passthrough merge method.
### Models Merged
The following models were included in the merge:
* [TinyLlama/TinyLlama-1.1B-Chat-v1.0](https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v1.0)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
dtype: float16
merge_method: passthrough
slices:
- sources:
- layer_range: [0, 20]
model: TinyLlama/TinyLlama-1.1B-Chat-v1.0
- sources:
- layer_range: [0, 20]
model: TinyLlama/TinyLlama-1.1B-Chat-v1.0
```
|
yifanxie/jasper-cicada-1-1 | yifanxie | 2024-05-20T22:02:57Z | 134 | 0 | transformers | [
"transformers",
"safetensors",
"gemma",
"text-generation",
"gpt",
"llm",
"large language model",
"h2o-llmstudio",
"conversational",
"en",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-05-20T22:00:30Z | ---
language:
- en
library_name: transformers
tags:
- gpt
- llm
- large language model
- h2o-llmstudio
inference: false
thumbnail: https://h2o.ai/etc.clientlibs/h2o/clientlibs/clientlib-site/resources/images/favicon.ico
---
# Model Card
## Summary
This model was trained using [H2O LLM Studio](https://github.com/h2oai/h2o-llmstudio).
- Base model: [google/gemma-1.1-2b-it](https://huggingface.co/google/gemma-1.1-2b-it)
## Usage
To use the model with the `transformers` library on a machine with GPUs, first make sure the library is installed.
```bash
pip install transformers==4.40.1
```
Also make sure to provide your Hugging Face token to the pipeline if the model lives in a private repo.
- Either leave `token=True` in the `pipeline` and log in to `huggingface_hub` by running
```python
import huggingface_hub
huggingface_hub.login(<ACCESS_TOKEN>)
```
- Or directly pass your <ACCESS_TOKEN> to `token` in the `pipeline`
```python
from transformers import pipeline
generate_text = pipeline(
model="yifanxie/jasper-cicada-1-1",
torch_dtype="auto",
trust_remote_code=True,
use_fast=True,
device_map={"": "cuda:0"},
token=True,
)
# generate configuration can be modified to your needs
# generate_text.model.generation_config.min_new_tokens = 2
# generate_text.model.generation_config.max_new_tokens = 256
# generate_text.model.generation_config.do_sample = False
# generate_text.model.generation_config.num_beams = 1
# generate_text.model.generation_config.temperature = float(0.0)
# generate_text.model.generation_config.repetition_penalty = float(1.0)
res = generate_text(
"Why is drinking water so healthy?",
renormalize_logits=True
)
print(res[0]["generated_text"])
```
You can print a sample prompt after the preprocessing step to see how it is fed to the tokenizer:
```python
print(generate_text.preprocess("Why is drinking water so healthy?")["prompt_text"])
```
```bash
<|prompt|>Why is drinking water so healthy?<eos><|answer|>
```
Alternatively, you can download [h2oai_pipeline.py](h2oai_pipeline.py), store it alongside your notebook, and construct the pipeline yourself from the loaded model and tokenizer. If the model and the tokenizer are fully supported in the `transformers` package, this will allow you to set `trust_remote_code=False`.
```python
from h2oai_pipeline import H2OTextGenerationPipeline
from transformers import AutoModelForCausalLM, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained(
"yifanxie/jasper-cicada-1-1",
use_fast=True,
padding_side="left",
trust_remote_code=True,
)
model = AutoModelForCausalLM.from_pretrained(
"yifanxie/jasper-cicada-1-1",
torch_dtype="auto",
device_map={"": "cuda:0"},
trust_remote_code=True,
)
generate_text = H2OTextGenerationPipeline(model=model, tokenizer=tokenizer)
# generate configuration can be modified to your needs
# generate_text.model.generation_config.min_new_tokens = 2
# generate_text.model.generation_config.max_new_tokens = 256
# generate_text.model.generation_config.do_sample = False
# generate_text.model.generation_config.num_beams = 1
# generate_text.model.generation_config.temperature = float(0.0)
# generate_text.model.generation_config.repetition_penalty = float(1.0)
res = generate_text(
"Why is drinking water so healthy?",
renormalize_logits=True
)
print(res[0]["generated_text"])
```
You may also construct the pipeline from the loaded model and tokenizer yourself and consider the preprocessing steps:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "yifanxie/jasper-cicada-1-1" # either local folder or huggingface model name
# Important: The prompt needs to be in the same format the model was trained with.
# You can find an example prompt in the experiment logs.
prompt = "<|prompt|>How are you?<eos><|answer|>"
tokenizer = AutoTokenizer.from_pretrained(
model_name,
use_fast=True,
trust_remote_code=True,
)
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype="auto",
device_map={"": "cuda:0"},
trust_remote_code=True,
)
model.cuda().eval()
inputs = tokenizer(prompt, return_tensors="pt", add_special_tokens=False).to("cuda")
# generate configuration can be modified to your needs
# model.generation_config.min_new_tokens = 2
# model.generation_config.max_new_tokens = 256
# model.generation_config.do_sample = False
# model.generation_config.num_beams = 1
# model.generation_config.temperature = float(0.0)
# model.generation_config.repetition_penalty = float(1.0)
tokens = model.generate(
input_ids=inputs["input_ids"],
attention_mask=inputs["attention_mask"],
renormalize_logits=True
)[0]
tokens = tokens[inputs["input_ids"].shape[1]:]
answer = tokenizer.decode(tokens, skip_special_tokens=True)
print(answer)
```
## Quantization and sharding
You can load the models using quantization by specifying ```load_in_8bit=True``` or ```load_in_4bit=True```. Also, sharding on multiple GPUs is possible by setting ```device_map=auto```.
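For example, a minimal sketch (an editorial addition; 4-bit loading requires the `bitsandbytes` package to be installed):

```python
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "yifanxie/jasper-cicada-1-1",
    load_in_4bit=True,       # quantize weights to 4-bit via bitsandbytes
    device_map="auto",       # shard layers across the available GPUs
    trust_remote_code=True,
)
```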
## Model Architecture
```
GemmaForCausalLM(
(model): GemmaModel(
(embed_tokens): Embedding(256000, 2048, padding_idx=0)
(layers): ModuleList(
(0-17): 18 x GemmaDecoderLayer(
(self_attn): GemmaSdpaAttention(
(q_proj): Linear(in_features=2048, out_features=2048, bias=False)
(k_proj): Linear(in_features=2048, out_features=256, bias=False)
(v_proj): Linear(in_features=2048, out_features=256, bias=False)
(o_proj): Linear(in_features=2048, out_features=2048, bias=False)
(rotary_emb): GemmaRotaryEmbedding()
)
(mlp): GemmaMLP(
(gate_proj): Linear(in_features=2048, out_features=16384, bias=False)
(up_proj): Linear(in_features=2048, out_features=16384, bias=False)
(down_proj): Linear(in_features=16384, out_features=2048, bias=False)
(act_fn): PytorchGELUTanh()
)
(input_layernorm): GemmaRMSNorm()
(post_attention_layernorm): GemmaRMSNorm()
)
)
(norm): GemmaRMSNorm()
)
(lm_head): Linear(in_features=2048, out_features=256000, bias=False)
)
```
## Model Configuration
This model was trained using H2O LLM Studio and with the configuration in [cfg.yaml](cfg.yaml). Visit [H2O LLM Studio](https://github.com/h2oai/h2o-llmstudio) to learn how to train your own large language models.
## Disclaimer
Please read this disclaimer carefully before using the large language model provided in this repository. Your use of the model signifies your agreement to the following terms and conditions.
- Biases and Offensiveness: The large language model is trained on a diverse range of internet text data, which may contain biased, racist, offensive, or otherwise inappropriate content. By using this model, you acknowledge and accept that the generated content may sometimes exhibit biases or produce content that is offensive or inappropriate. The developers of this repository do not endorse, support, or promote any such content or viewpoints.
- Limitations: The large language model is an AI-based tool and not a human. It may produce incorrect, nonsensical, or irrelevant responses. It is the user's responsibility to critically evaluate the generated content and use it at their discretion.
- Use at Your Own Risk: Users of this large language model must assume full responsibility for any consequences that may arise from their use of the tool. The developers and contributors of this repository shall not be held liable for any damages, losses, or harm resulting from the use or misuse of the provided model.
- Ethical Considerations: Users are encouraged to use the large language model responsibly and ethically. By using this model, you agree not to use it for purposes that promote hate speech, discrimination, harassment, or any form of illegal or harmful activities.
- Reporting Issues: If you encounter any biased, offensive, or otherwise inappropriate content generated by the large language model, please report it to the repository maintainers through the provided channels. Your feedback will help improve the model and mitigate potential issues.
- Changes to this Disclaimer: The developers of this repository reserve the right to modify or update this disclaimer at any time without prior notice. It is the user's responsibility to periodically review the disclaimer to stay informed about any changes.
By using the large language model provided in this repository, you agree to accept and comply with the terms and conditions outlined in this disclaimer. If you do not agree with any part of this disclaimer, you should refrain from using the model and any content generated by it. |
matthieuzone/SAINT-_FELICIENbis | matthieuzone | 2024-05-20T21:58:30Z | 1 | 0 | diffusers | [
"diffusers",
"text-to-image",
"diffusers-training",
"lora",
"template:sd-lora",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] | text-to-image | 2024-05-20T21:50:22Z | ---
license: openrail++
library_name: diffusers
tags:
- text-to-image
- text-to-image
- diffusers-training
- diffusers
- lora
- template:sd-lora
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: a photo of sks cheese
widget: []
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# SDXL LoRA DreamBooth - matthieuzone/SAINT-_FELICIENbis
<Gallery />
## Model description
These are matthieuzone/SAINT-_FELICIENbis LoRA adaption weights for stabilityai/stable-diffusion-xl-base-1.0.
The weights were trained using [DreamBooth](https://dreambooth.github.io/).
LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
## Trigger words
You should use a photo of sks cheese to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](matthieuzone/SAINT-_FELICIENbis/tree/main) them in the Files & versions tab.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
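# Editorial sketch, not from the card (assumes standard SDXL + LoRA usage with
# the base model, special VAE, and trigger prompt documented below):
import torch
from diffusers import AutoencoderKL, DiffusionPipeline

vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16)
pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", vae=vae, torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("matthieuzone/SAINT-_FELICIENbis")
image = pipe("a photo of sks cheese").images[0]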
```
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model] |
ghost613/phi3_on_korean_events | ghost613 | 2024-05-20T21:57:55Z | 1 | 0 | peft | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:microsoft/Phi-3-mini-4k-instruct",
"base_model:adapter:microsoft/Phi-3-mini-4k-instruct",
"license:mit",
"region:us"
] | null | 2024-05-20T21:57:39Z | ---
license: mit
library_name: peft
tags:
- generated_from_trainer
base_model: microsoft/Phi-3-mini-4k-instruct
model-index:
- name: phi3_on_korean_events
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# phi3_on_korean_events
This model is a fine-tuned version of [microsoft/Phi-3-mini-4k-instruct](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5047
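A minimal loading sketch (an editorial addition, not from the card: it relies on PEFT's auto class resolving the base model from the adapter config):

```python
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

model = AutoPeftModelForCausalLM.from_pretrained("ghost613/phi3_on_korean_events", trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained("microsoft/Phi-3-mini-4k-instruct", trust_remote_code=True)
```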
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 5
- total_train_batch_size: 10
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 50
- training_steps: 760
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.2308 | 0.26 | 20 | 1.1259 |
| 1.0422 | 0.53 | 40 | 0.9463 |
| 0.8644 | 0.79 | 60 | 0.7820 |
| 0.7426 | 1.05 | 80 | 0.7199 |
| 0.6894 | 1.32 | 100 | 0.6800 |
| 0.6671 | 1.58 | 120 | 0.6521 |
| 0.6397 | 1.84 | 140 | 0.6290 |
| 0.6018 | 2.11 | 160 | 0.6115 |
| 0.5723 | 2.37 | 180 | 0.5936 |
| 0.5555 | 2.63 | 200 | 0.5810 |
| 0.5455 | 2.89 | 220 | 0.5682 |
| 0.5289 | 3.16 | 240 | 0.5612 |
| 0.4929 | 3.42 | 260 | 0.5518 |
| 0.4966 | 3.68 | 280 | 0.5404 |
| 0.4831 | 3.95 | 300 | 0.5346 |
| 0.4598 | 4.21 | 320 | 0.5293 |
| 0.4462 | 4.47 | 340 | 0.5227 |
| 0.4404 | 4.74 | 360 | 0.5176 |
| 0.433 | 5.0 | 380 | 0.5079 |
| 0.4036 | 5.26 | 400 | 0.5146 |
| 0.3933 | 5.53 | 420 | 0.5070 |
| 0.393 | 5.79 | 440 | 0.5027 |
| 0.3912 | 6.05 | 460 | 0.5023 |
| 0.3589 | 6.32 | 480 | 0.5031 |
| 0.3714 | 6.58 | 500 | 0.5018 |
| 0.362 | 6.84 | 520 | 0.4935 |
| 0.3474 | 7.11 | 540 | 0.5052 |
| 0.3355 | 7.37 | 560 | 0.4990 |
| 0.3333 | 7.63 | 580 | 0.4992 |
| 0.3436 | 7.89 | 600 | 0.4974 |
| 0.3175 | 8.16 | 620 | 0.5024 |
| 0.321 | 8.42 | 640 | 0.5020 |
| 0.3065 | 8.68 | 660 | 0.5023 |
| 0.3139 | 8.95 | 680 | 0.5019 |
| 0.313 | 9.21 | 700 | 0.5071 |
| 0.3011 | 9.47 | 720 | 0.5045 |
| 0.2905 | 9.74 | 740 | 0.5061 |
| 0.3035 | 10.0 | 760 | 0.5047 |
### Framework versions
- PEFT 0.8.2
- Transformers 4.38.2
- Pytorch 2.1.2+cu121
- Datasets 2.17.0
- Tokenizers 0.15.0 |
quirky-lats-at-mats/base_rmu_2 | quirky-lats-at-mats | 2024-05-20T21:46:16Z | 4 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-13T22:06:57Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
WMDP evals on our tasks:
- wmdp-bio: 0.35
- wmdp-cyber: 0.33
- wmdp-chem: 0.44
- mmlu: 0.61

Steps to retrain to 50% bio accuracy on 2 samples, batch size 2 (4 samples total): 8 (interpolated).
## Model Details
Retraining on bio unlearn run path: quirky_lats_at_mats/pgd_rmu_evals/pdr9xikq
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
kalexa2/marian-finetuned-kde4-en-to-fr | kalexa2 | 2024-05-20T21:42:28Z | 115 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"marian",
"text2text-generation",
"translation",
"generated_from_trainer",
"dataset:kde4",
"base_model:Helsinki-NLP/opus-mt-en-fr",
"base_model:finetune:Helsinki-NLP/opus-mt-en-fr",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | translation | 2024-05-20T15:42:32Z | ---
license: apache-2.0
base_model: Helsinki-NLP/opus-mt-en-fr
tags:
- translation
- generated_from_trainer
datasets:
- kde4
model-index:
- name: marian-finetuned-kde4-en-to-fr
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# marian-finetuned-kde4-en-to-fr
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-fr](https://huggingface.co/Helsinki-NLP/opus-mt-en-fr) on the kde4 dataset.
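A minimal usage sketch with the standard translation pipeline (an editorial addition; the example sentence is illustrative):

```python
from transformers import pipeline

translator = pipeline("translation", model="kalexa2/marian-finetuned-kde4-en-to-fr")
print(translator("Unable to open the file.")[0]["translation_text"])
```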
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.40.2
- Pytorch 2.2.1+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
|
Mahanthesh0r/BipedalWalker-RL | Mahanthesh0r | 2024-05-20T21:32:26Z | 1 | 0 | stable-baselines3 | [
"stable-baselines3",
"BipedalWalker-v3",
"deep-reinforcement-learning",
"reinforcement-learning",
"Bipedal",
"OpenAI",
"model-index",
"region:us"
] | reinforcement-learning | 2024-05-20T20:19:51Z | ---
library_name: stable-baselines3
tags:
- BipedalWalker-v3
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
- Bipedal
- OpenAI
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: BipedalWalker-v3
type: BipedalWalker-v3
metrics:
- type: mean_reward
value: '-58.54 +/- 39.24'
name: mean_reward
verified: false
---
# **PPO** Agent playing **BipedalWalker-v3**
This is a trained model of a **PPO** agent playing **BipedalWalker-v3**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
The full training notebook used for this agent is reproduced below:
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub
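
# Editorial addition (not from the original card): a hypothetical loading
# example; the checkpoint filename is an assumption about this repo.
# checkpoint = load_from_hub(repo_id="Mahanthesh0r/BipedalWalker-RL", filename="ppo-BipedalWalker-v3.zip")
# model = PPO.load(checkpoint)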
# **1. Setup**
### **Install Packages**
"""
# Install necessary packages
!apt install swig cmake ffmpeg xvfb python3-opengl
!pip install stable-baselines3==2.0.0a5 gymnasium[box2d] huggingface_sb3 pyvirtualdisplay imageio[ffmpeg]
"""The Next Cell will force the notebook runtime to restart. This is to ensure all the new libraries installed will be used."""
import os
os.kill(os.getpid(), 9)
"""### **Start Virtual Display**"""
from pyvirtualdisplay import Display
virtual_display = Display(visible=0, size=(1400, 900))
virtual_display.start()
"""### **Setup Environment**"""
import gymnasium as gym
env = gym.make("BipedalWalker-v3", hardcore=True)
env.reset()
"""### **Observation Space**
Observation space: shape (24,), a vector of size 24 where each value carries different information about the walker:
- **Hull Angle Speed**: The speed at which the main body of the walker is rotating.
- **Angular Velocity**: The rate of change of the angular position of the walker.
- **Horizontal Speed**: The speed at which the walker is moving horizontally.
- **Vertical Speed**: The speed at which the walker is moving vertically.
- **Position of Joints**: The positions (angles) of the walker's joints. Given that the walker has 4 joints, this takes up 4 values.
- **Joints Angular Speed**: The rate of change of the angular position for each joint. Again, this is 4 values for the 4 joints.
- **Legs Contact with Ground**: Flags indicating whether each leg is in contact with the ground. With two legs, this contains 2 values.
- **10 Lidar Rangefinder Measurements**: These are distance measurements to detect obstacles or terrain features around the walker. There are 10 of these values.
"""
print("_____OBSERVATION SPACE_____ \n")
print("Observation Space Shape", env.observation_space.shape)
print("Sample observation", env.observation_space.sample()) # Get a random observation
"""### **Action Space**
Actions are motor speed values in the [-1, 1] range for each of the 4 joints at both hips and knees.
"""
print("\n _____ACTION SPACE_____ \n")
print("Action Space Shape", env.action_space.shape)
print("Action Space Sample", env.action_space.sample()) # Take a random action
"""### **Vectorized Environment**
Create a vectorized environment (a method for stacking multiple independent environments into a single one) of 16 environments to collect more diverse experience.
"""
from stable_baselines3.common.env_util import make_vec_env
env = make_vec_env('BipedalWalker-v3', n_envs=16)
"""# **2. Building the Model**"""
from stable_baselines3 import PPO
model = PPO(
policy = 'MlpPolicy',
env = env,
n_steps = 2048,
batch_size = 128,
n_epochs = 6,
gamma = 0.999,
gae_lambda = 0.98,
ent_coef = 0.01,
verbose=1)
"""# 3.**Video Generation**"""
from wasabi import Printer
import numpy as np
from stable_baselines3.common.base_class import BaseAlgorithm
from pathlib import Path
import tempfile
from stable_baselines3.common.monitor import Monitor
from stable_baselines3.common.vec_env import (
DummyVecEnv,
VecEnv,
VecVideoRecorder,
)
msg = Printer()
def generate_replay(
model: BaseAlgorithm,
eval_env: VecEnv,
video_length: int,
is_deterministic: bool,
local_path: Path,
):
"""
Generate a replay video of the agent
:param model: trained model
:param eval_env: environment used to evaluate the agent
:param video_length: length of the video (in timesteps)
:param is_deterministic: use deterministic or stochastic actions
:param local_path: path of the local repository
"""
# This is another temporary directory for video outputs
# SB3 creates -step-0-to-... meta files as well as other
# artifacts that we don't want in the repo.
with tempfile.TemporaryDirectory() as tmpdirname:
# Step 1: Create the VecVideoRecorder
env = VecVideoRecorder(
eval_env,
tmpdirname,
record_video_trigger=lambda x: x == 0,
video_length=video_length,
name_prefix="",
)
obs = env.reset()
lstm_states = None
episode_starts = np.ones((env.num_envs,), dtype=bool)
try:
for _ in range(video_length):
action, lstm_states = model.predict(
obs,
state=lstm_states,
episode_start=episode_starts,
deterministic=is_deterministic,
)
obs, _, episode_starts, _ = env.step(action)
# Save the video
env.close()
# Convert the video with x264 codec
inp = env.video_recorder.path
out = local_path
os.system(f"ffmpeg -y -i {inp} -vcodec h264 {out}".format(inp, out))
print(f"Video saved to: {out}")
except KeyboardInterrupt:
pass
except Exception as e:
msg.fail(str(e))
# Add a message for video
msg.fail(
"We are unable to generate a replay of your agent"
)
"""# **4. Training, Saving and Record the Videos**"""
import os
# Create a directory to save the videos
video_dir = "/content/videos"
if not os.path.exists(video_dir):
os.makedirs(video_dir)
env_id = "BipedalWalker-v3"
# Train and generate video at every 100000 steps, adjust the timesteps to your liking
for i in range(0, 2000000, 100000):
model.learn(total_timesteps=100000)
# Save the model
model_name = "ppo-BipedalWalker-v3"
model.save(model_name)
video_name = f"replay_{i + 100000}.mp4"
generate_replay(
model=model,
eval_env=DummyVecEnv([lambda: Monitor(gym.make(env_id, hardcore=True, render_mode="rgb_array"))]),
video_length=1000,
is_deterministic=True,
local_path=os.path.join(video_dir, video_name)
)
model_name = "ppo-BipedalWalker-v3"
model.save(model_name)
with open(os.path.join(video_dir, "filelist.txt"), "w") as f:
for i in range(0, 2000000, 100000):
video_name = f"replay_{i + 100000}.mp4"
f.write(f"file '{os.path.join(video_dir, video_name)}'\n")
# Concatenate all the videos into one
os.system(f"ffmpeg -f concat -safe 0 -i {os.path.join(video_dir, 'filelist.txt')} -c copy {os.path.join(video_dir, 'replay_all.mp4')}")
"""# **5. Visualize Final Video**"""
from IPython.display import HTML
from base64 import b64encode
mp4 = open('videos/replay_all.mp4','rb').read()
data_url = "data:video/mp4;base64," + b64encode(mp4).decode()
HTML("""
<video width=600 controls>
<source src="%s" type="video/mp4">
</video>
""" % data_url)
"""# **6. Evaluate the Model**"""
from stable_baselines3.common.evaluation import evaluate_policy
eval_env = Monitor(gym.make("BipedalWalker-v3"))
mean_reward, std_reward = evaluate_policy(model, eval_env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward}")
"""# **7. Upload to HuggingFace**"""
from huggingface_sb3 import load_from_hub, package_to_hub
from huggingface_hub import notebook_login # To log to our Hugging Face account to be able to upload models to the Hub.
notebook_login()
!git config --global credential.helper store
env_id = "BipedalWalker-v3"
model_name = "ppo-BipedalWalker-v3"
model_architecture = "PPO"
repo_id = "Mahanthesh0r/BipedalWalker-RL" # Change with your repo id
## Define the commit message
commit_message = "Upload PPO BipedalWalker-v3 trained agent"
# Create the evaluation env and set the render_mode="rgb_array"
eval_env = DummyVecEnv([lambda: gym.make(env_id, hardcore=True, render_mode="rgb_array")])
package_to_hub(model=model, # trained model
model_name=model_name, # The name of our trained model
model_architecture=model_architecture, # The model architecture we used: in our case PPO
env_id=env_id, # Name of the environment
eval_env=eval_env,
repo_id=repo_id,
commit_message=commit_message)
"""# **8. Load Models from HuggingFace (Optional)**"""
from huggingface_sb3 import load_from_hub
repo_id = "Mahanthesh0r/BipedalWalker-RL" # The repo_id
filename = "ppo-BipedalWalker-v3.zip" # The model filename.zip
checkpoint = load_from_hub(repo_id, filename)
model = PPO.load(checkpoint, print_system_info=True)
eval_env = Monitor(gym.make("BipedalWalker-v3", hardcore=True))
mean_reward, std_reward = evaluate_policy(model, eval_env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward}")
...
``` |
BilalMuftuoglu/beit-base-patch16-224-85-fold5 | BilalMuftuoglu | 2024-05-20T21:30:20Z | 27 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"beit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:microsoft/beit-base-patch16-224",
"base_model:finetune:microsoft/beit-base-patch16-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2024-05-20T21:09:50Z | ---
license: apache-2.0
base_model: microsoft/beit-base-patch16-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: beit-base-patch16-224-85-fold5
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9318181818181818
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# beit-base-patch16-224-85-fold5
This model is a fine-tuned version of [microsoft/beit-base-patch16-224](https://huggingface.co/microsoft/beit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2499
- Accuracy: 0.9318
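
A minimal inference sketch using the image-classification pipeline (the image path is a placeholder):

```python
from transformers import pipeline

classifier = pipeline(
    "image-classification",
    model="BilalMuftuoglu/beit-base-patch16-224-85-fold5",
)

# "sample.jpg" is a placeholder; any PIL-readable image works.
print(classifier("sample.jpg"))
```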
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 2 | 1.0896 | 0.2955 |
| No log | 2.0 | 4 | 0.6456 | 0.7273 |
| No log | 3.0 | 6 | 1.0355 | 0.7045 |
| No log | 4.0 | 8 | 0.9124 | 0.7045 |
| 0.7607 | 5.0 | 10 | 0.5809 | 0.7955 |
| 0.7607 | 6.0 | 12 | 0.6812 | 0.75 |
| 0.7607 | 7.0 | 14 | 0.6529 | 0.75 |
| 0.7607 | 8.0 | 16 | 0.7174 | 0.7273 |
| 0.7607 | 9.0 | 18 | 0.6619 | 0.6136 |
| 0.4221 | 10.0 | 20 | 0.8063 | 0.75 |
| 0.4221 | 11.0 | 22 | 0.6372 | 0.75 |
| 0.4221 | 12.0 | 24 | 0.5886 | 0.75 |
| 0.4221 | 13.0 | 26 | 0.6359 | 0.6364 |
| 0.4221 | 14.0 | 28 | 0.5585 | 0.75 |
| 0.3287 | 15.0 | 30 | 0.4541 | 0.7273 |
| 0.3287 | 16.0 | 32 | 0.7624 | 0.5682 |
| 0.3287 | 17.0 | 34 | 0.6806 | 0.75 |
| 0.3287 | 18.0 | 36 | 0.7708 | 0.7273 |
| 0.3287 | 19.0 | 38 | 0.4170 | 0.7273 |
| 0.3663 | 20.0 | 40 | 0.4282 | 0.7727 |
| 0.3663 | 21.0 | 42 | 0.5613 | 0.75 |
| 0.3663 | 22.0 | 44 | 0.4025 | 0.8409 |
| 0.3663 | 23.0 | 46 | 0.4109 | 0.7955 |
| 0.3663 | 24.0 | 48 | 0.4373 | 0.8409 |
| 0.2344 | 25.0 | 50 | 0.3211 | 0.8864 |
| 0.2344 | 26.0 | 52 | 0.5561 | 0.75 |
| 0.2344 | 27.0 | 54 | 0.3149 | 0.8636 |
| 0.2344 | 28.0 | 56 | 0.3166 | 0.7955 |
| 0.2344 | 29.0 | 58 | 0.4164 | 0.8636 |
| 0.2051 | 30.0 | 60 | 0.4345 | 0.8636 |
| 0.2051 | 31.0 | 62 | 0.3180 | 0.8636 |
| 0.2051 | 32.0 | 64 | 0.3673 | 0.8409 |
| 0.2051 | 33.0 | 66 | 0.4313 | 0.8409 |
| 0.2051 | 34.0 | 68 | 0.4359 | 0.8409 |
| 0.1694 | 35.0 | 70 | 0.3700 | 0.8182 |
| 0.1694 | 36.0 | 72 | 0.5843 | 0.7955 |
| 0.1694 | 37.0 | 74 | 0.4064 | 0.8636 |
| 0.1694 | 38.0 | 76 | 0.3992 | 0.8182 |
| 0.1694 | 39.0 | 78 | 0.3153 | 0.8636 |
| 0.1566 | 40.0 | 80 | 0.5581 | 0.8182 |
| 0.1566 | 41.0 | 82 | 0.2921 | 0.8636 |
| 0.1566 | 42.0 | 84 | 0.3217 | 0.8864 |
| 0.1566 | 43.0 | 86 | 0.3255 | 0.8864 |
| 0.1566 | 44.0 | 88 | 0.7238 | 0.75 |
| 0.1389 | 45.0 | 90 | 0.4053 | 0.8864 |
| 0.1389 | 46.0 | 92 | 0.2499 | 0.9318 |
| 0.1389 | 47.0 | 94 | 0.2584 | 0.8864 |
| 0.1389 | 48.0 | 96 | 0.4432 | 0.8409 |
| 0.1389 | 49.0 | 98 | 0.6965 | 0.7955 |
| 0.1311 | 50.0 | 100 | 0.3910 | 0.8409 |
| 0.1311 | 51.0 | 102 | 0.3017 | 0.8636 |
| 0.1311 | 52.0 | 104 | 0.3050 | 0.8636 |
| 0.1311 | 53.0 | 106 | 0.2193 | 0.8636 |
| 0.1311 | 54.0 | 108 | 0.2369 | 0.8409 |
| 0.1386 | 55.0 | 110 | 0.3143 | 0.8864 |
| 0.1386 | 56.0 | 112 | 0.2932 | 0.8864 |
| 0.1386 | 57.0 | 114 | 0.2725 | 0.9091 |
| 0.1386 | 58.0 | 116 | 0.5664 | 0.8409 |
| 0.1386 | 59.0 | 118 | 0.5875 | 0.8182 |
| 0.1194 | 60.0 | 120 | 0.4623 | 0.8636 |
| 0.1194 | 61.0 | 122 | 0.4716 | 0.8182 |
| 0.1194 | 62.0 | 124 | 0.5028 | 0.8182 |
| 0.1194 | 63.0 | 126 | 0.4558 | 0.8182 |
| 0.1194 | 64.0 | 128 | 0.4798 | 0.8182 |
| 0.1122 | 65.0 | 130 | 0.3827 | 0.8409 |
| 0.1122 | 66.0 | 132 | 0.3653 | 0.8409 |
| 0.1122 | 67.0 | 134 | 0.3972 | 0.8409 |
| 0.1122 | 68.0 | 136 | 0.5705 | 0.7727 |
| 0.1122 | 69.0 | 138 | 0.5935 | 0.7727 |
| 0.1041 | 70.0 | 140 | 0.3905 | 0.8636 |
| 0.1041 | 71.0 | 142 | 0.2791 | 0.8409 |
| 0.1041 | 72.0 | 144 | 0.2845 | 0.9091 |
| 0.1041 | 73.0 | 146 | 0.2401 | 0.8636 |
| 0.1041 | 74.0 | 148 | 0.2260 | 0.8864 |
| 0.0982 | 75.0 | 150 | 0.2454 | 0.8864 |
| 0.0982 | 76.0 | 152 | 0.3773 | 0.8864 |
| 0.0982 | 77.0 | 154 | 0.6185 | 0.8182 |
| 0.0982 | 78.0 | 156 | 0.7238 | 0.7727 |
| 0.0982 | 79.0 | 158 | 0.5469 | 0.8409 |
| 0.1065 | 80.0 | 160 | 0.4318 | 0.8636 |
| 0.1065 | 81.0 | 162 | 0.3348 | 0.8864 |
| 0.1065 | 82.0 | 164 | 0.3041 | 0.8636 |
| 0.1065 | 83.0 | 166 | 0.3350 | 0.8864 |
| 0.1065 | 84.0 | 168 | 0.3464 | 0.8864 |
| 0.0829 | 85.0 | 170 | 0.3375 | 0.8864 |
| 0.0829 | 86.0 | 172 | 0.3309 | 0.8864 |
| 0.0829 | 87.0 | 174 | 0.3325 | 0.8864 |
| 0.0829 | 88.0 | 176 | 0.3441 | 0.8864 |
| 0.0829 | 89.0 | 178 | 0.3456 | 0.8636 |
| 0.0902 | 90.0 | 180 | 0.3244 | 0.8636 |
| 0.0902 | 91.0 | 182 | 0.3126 | 0.8636 |
| 0.0902 | 92.0 | 184 | 0.3117 | 0.8636 |
| 0.0902 | 93.0 | 186 | 0.2877 | 0.8636 |
| 0.0902 | 94.0 | 188 | 0.2643 | 0.8636 |
| 0.0838 | 95.0 | 190 | 0.2525 | 0.8864 |
| 0.0838 | 96.0 | 192 | 0.2462 | 0.9091 |
| 0.0838 | 97.0 | 194 | 0.2417 | 0.9091 |
| 0.0838 | 98.0 | 196 | 0.2402 | 0.9091 |
| 0.0838 | 99.0 | 198 | 0.2409 | 0.9091 |
| 0.0747 | 100.0 | 200 | 0.2426 | 0.9091 |
### Framework versions
- Transformers 4.40.2
- Pytorch 2.2.1+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
|
eventdata-utd/ConfliBERT-cont-uncased-BBC_News | eventdata-utd | 2024-05-20T21:27:27Z | 0 | 0 | null | [
"en",
"region:us"
] | null | 2024-04-04T20:05:30Z | ---
language:
- en
---
Political news dataset from BBC for relevance classification. |
eventdata-utd/ConfliBERT-cont-cased-BBC_News | eventdata-utd | 2024-05-20T21:26:26Z | 0 | 0 | null | [
"en",
"region:us"
] | null | 2024-04-04T20:05:47Z | ---
language:
- en
---
Political news dataset from BBC for relevance classification. |
sxg2520/tiny-chatbot-dpo | sxg2520 | 2024-05-20T21:25:59Z | 2 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"trl",
"dpo",
"generated_from_trainer",
"base_model:TinyLlama/TinyLlama-1.1B-Chat-v1.0",
"base_model:adapter:TinyLlama/TinyLlama-1.1B-Chat-v1.0",
"license:apache-2.0",
"region:us"
] | null | 2024-05-20T21:23:56Z | ---
license: apache-2.0
library_name: peft
tags:
- trl
- dpo
- generated_from_trainer
base_model: TinyLlama/TinyLlama-1.1B-Chat-v1.0
model-index:
- name: tiny-chatbot-dpo
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# tiny-chatbot-dpo
This model is a fine-tuned version of [TinyLlama/TinyLlama-1.1B-Chat-v1.0](https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v1.0) on the None dataset.
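This repo contains a PEFT (LoRA) adapter rather than full weights, so one way to use it is to attach the adapter to the base model — a minimal sketch:

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained("TinyLlama/TinyLlama-1.1B-Chat-v1.0")
tokenizer = AutoTokenizer.from_pretrained("TinyLlama/TinyLlama-1.1B-Chat-v1.0")

# Attach the DPO-trained adapter from this repo.
model = PeftModel.from_pretrained(base, "sxg2520/tiny-chatbot-dpo")
```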
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- training_steps: 250
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- PEFT 0.11.1
- Transformers 4.40.2
- Pytorch 2.2.1+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1 |
beetlespinner/bloomer | beetlespinner | 2024-05-20T21:22:05Z | 0 | 0 | null | [
"license:other",
"region:us"
] | null | 2024-05-20T21:22:05Z | ---
license: other
license_name: beetlespinner
license_link: LICENSE
---
|
0xBeaverT/organize-breakfast-2 | 0xBeaverT | 2024-05-20T21:15:18Z | 134 | 0 | transformers | [
"transformers",
"safetensors",
"gemma",
"text-generation",
"llama-factory",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-20T21:11:46Z | ---
library_name: transformers
tags:
- llama-factory
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a π€ transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
sxg2520/sft-tiny-chatbot | sxg2520 | 2024-05-20T21:15:17Z | 0 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:TinyLlama/TinyLlama-1.1B-Chat-v1.0",
"base_model:adapter:TinyLlama/TinyLlama-1.1B-Chat-v1.0",
"license:apache-2.0",
"region:us"
] | null | 2024-05-20T21:13:57Z | ---
license: apache-2.0
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: TinyLlama/TinyLlama-1.1B-Chat-v1.0
model-index:
- name: sft-tiny-chatbot
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sft-tiny-chatbot
This model is a fine-tuned version of [TinyLlama/TinyLlama-1.1B-Chat-v1.0](https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v1.0) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- training_steps: 250
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- PEFT 0.11.1
- Transformers 4.40.2
- Pytorch 2.2.1+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1 |
fine-tuned/arguana-c-256-24-gpt-4o-2024-05-13-994439 | fine-tuned | 2024-05-20T21:11:20Z | 7 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"bert",
"feature-extraction",
"sentence-similarity",
"mteb",
"Argumentation",
"Corpus",
"Research",
"Dataset",
"Quality",
"custom_code",
"en",
"dataset:fine-tuned/arguana-c-256-24-gpt-4o-2024-05-13-994439",
"dataset:allenai/c4",
"license:apache-2.0",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | feature-extraction | 2024-05-20T21:11:05Z | ---
license: apache-2.0
datasets:
- fine-tuned/arguana-c-256-24-gpt-4o-2024-05-13-994439
- allenai/c4
language:
- en
pipeline_tag: feature-extraction
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- mteb
- Argumentation
- Corpus
- Research
- Dataset
- Quality
---
This model is a fine-tuned version of [**jinaai/jina-embeddings-v2-base-en**](https://huggingface.co/jinaai/jina-embeddings-v2-base-en) designed for the following use case:
academic research data retrieval
## How to Use
This model produces sentence embeddings for retrieval and semantic-similarity tasks, and can be integrated into your NLP pipeline wherever dense text representations are needed. Here's a simple example to get you started:
```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.util import cos_sim
model = SentenceTransformer(
'fine-tuned/arguana-c-256-24-gpt-4o-2024-05-13-994439',
trust_remote_code=True
)
embeddings = model.encode([
'first text to embed',
'second text to embed'
])
print(cos_sim(embeddings[0], embeddings[1]))
```
|
BilalMuftuoglu/beit-base-patch16-224-85-fold4 | BilalMuftuoglu | 2024-05-20T21:09:42Z | 7 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"beit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:microsoft/beit-base-patch16-224",
"base_model:finetune:microsoft/beit-base-patch16-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2024-05-20T20:49:03Z | ---
license: apache-2.0
base_model: microsoft/beit-base-patch16-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: beit-base-patch16-224-85-fold4
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9545454545454546
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# beit-base-patch16-224-85-fold4
This model is a fine-tuned version of [microsoft/beit-base-patch16-224](https://huggingface.co/microsoft/beit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1725
- Accuracy: 0.9545
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 2 | 0.6874 | 0.6818 |
| No log | 2.0 | 4 | 0.7083 | 0.7045 |
| No log | 3.0 | 6 | 0.8871 | 0.7045 |
| No log | 4.0 | 8 | 0.7305 | 0.7045 |
| 0.6246 | 5.0 | 10 | 0.5791 | 0.7045 |
| 0.6246 | 6.0 | 12 | 0.5888 | 0.7045 |
| 0.6246 | 7.0 | 14 | 0.6050 | 0.7045 |
| 0.6246 | 8.0 | 16 | 0.5468 | 0.7045 |
| 0.6246 | 9.0 | 18 | 0.5351 | 0.7045 |
| 0.4453 | 10.0 | 20 | 0.4155 | 0.8409 |
| 0.4453 | 11.0 | 22 | 0.8266 | 0.7045 |
| 0.4453 | 12.0 | 24 | 0.3905 | 0.8409 |
| 0.4453 | 13.0 | 26 | 0.3942 | 0.8409 |
| 0.4453 | 14.0 | 28 | 0.4015 | 0.8409 |
| 0.3613 | 15.0 | 30 | 0.3474 | 0.8182 |
| 0.3613 | 16.0 | 32 | 0.4763 | 0.8182 |
| 0.3613 | 17.0 | 34 | 0.3894 | 0.7955 |
| 0.3613 | 18.0 | 36 | 0.4290 | 0.7955 |
| 0.3613 | 19.0 | 38 | 0.3525 | 0.8636 |
| 0.2928 | 20.0 | 40 | 0.3426 | 0.8864 |
| 0.2928 | 21.0 | 42 | 0.4060 | 0.8182 |
| 0.2928 | 22.0 | 44 | 0.6962 | 0.75 |
| 0.2928 | 23.0 | 46 | 0.3514 | 0.8636 |
| 0.2928 | 24.0 | 48 | 0.5302 | 0.8409 |
| 0.2256 | 25.0 | 50 | 0.3094 | 0.8636 |
| 0.2256 | 26.0 | 52 | 0.2977 | 0.8636 |
| 0.2256 | 27.0 | 54 | 0.4883 | 0.8182 |
| 0.2256 | 28.0 | 56 | 0.3008 | 0.8409 |
| 0.2256 | 29.0 | 58 | 0.3226 | 0.8409 |
| 0.2231 | 30.0 | 60 | 0.4101 | 0.8409 |
| 0.2231 | 31.0 | 62 | 0.3197 | 0.8409 |
| 0.2231 | 32.0 | 64 | 0.4133 | 0.7727 |
| 0.2231 | 33.0 | 66 | 0.2923 | 0.8636 |
| 0.2231 | 34.0 | 68 | 0.4391 | 0.8636 |
| 0.1756 | 35.0 | 70 | 0.3016 | 0.8636 |
| 0.1756 | 36.0 | 72 | 0.2749 | 0.9091 |
| 0.1756 | 37.0 | 74 | 0.3146 | 0.8864 |
| 0.1756 | 38.0 | 76 | 0.3095 | 0.8409 |
| 0.1756 | 39.0 | 78 | 0.3017 | 0.8864 |
| 0.1592 | 40.0 | 80 | 0.2762 | 0.8864 |
| 0.1592 | 41.0 | 82 | 0.4054 | 0.8409 |
| 0.1592 | 42.0 | 84 | 0.2787 | 0.8864 |
| 0.1592 | 43.0 | 86 | 0.3193 | 0.8636 |
| 0.1592 | 44.0 | 88 | 0.2783 | 0.9091 |
| 0.1857 | 45.0 | 90 | 0.2934 | 0.9091 |
| 0.1857 | 46.0 | 92 | 0.3579 | 0.8636 |
| 0.1857 | 47.0 | 94 | 0.3501 | 0.8864 |
| 0.1857 | 48.0 | 96 | 0.3358 | 0.8864 |
| 0.1857 | 49.0 | 98 | 0.2981 | 0.9091 |
| 0.1179 | 50.0 | 100 | 0.3364 | 0.8636 |
| 0.1179 | 51.0 | 102 | 0.3324 | 0.8636 |
| 0.1179 | 52.0 | 104 | 0.1725 | 0.9545 |
| 0.1179 | 53.0 | 106 | 0.1222 | 0.9545 |
| 0.1179 | 54.0 | 108 | 0.1500 | 0.9091 |
| 0.1448 | 55.0 | 110 | 0.2358 | 0.9091 |
| 0.1448 | 56.0 | 112 | 0.2224 | 0.9091 |
| 0.1448 | 57.0 | 114 | 0.1457 | 0.9318 |
| 0.1448 | 58.0 | 116 | 0.1745 | 0.9318 |
| 0.1448 | 59.0 | 118 | 0.1990 | 0.9091 |
| 0.1343 | 60.0 | 120 | 0.2905 | 0.8864 |
| 0.1343 | 61.0 | 122 | 0.3842 | 0.8864 |
| 0.1343 | 62.0 | 124 | 0.3031 | 0.8864 |
| 0.1343 | 63.0 | 126 | 0.2642 | 0.8864 |
| 0.1343 | 64.0 | 128 | 0.2412 | 0.9091 |
| 0.1109 | 65.0 | 130 | 0.3347 | 0.8864 |
| 0.1109 | 66.0 | 132 | 0.4005 | 0.8864 |
| 0.1109 | 67.0 | 134 | 0.2905 | 0.8864 |
| 0.1109 | 68.0 | 136 | 0.3168 | 0.9318 |
| 0.1109 | 69.0 | 138 | 0.3845 | 0.8864 |
| 0.1221 | 70.0 | 140 | 0.3178 | 0.9318 |
| 0.1221 | 71.0 | 142 | 0.2690 | 0.9318 |
| 0.1221 | 72.0 | 144 | 0.2516 | 0.8864 |
| 0.1221 | 73.0 | 146 | 0.2347 | 0.9091 |
| 0.1221 | 74.0 | 148 | 0.2376 | 0.9318 |
| 0.1191 | 75.0 | 150 | 0.2480 | 0.9318 |
| 0.1191 | 76.0 | 152 | 0.2597 | 0.9091 |
| 0.1191 | 77.0 | 154 | 0.3071 | 0.9091 |
| 0.1191 | 78.0 | 156 | 0.3354 | 0.9091 |
| 0.1191 | 79.0 | 158 | 0.2988 | 0.8864 |
| 0.1133 | 80.0 | 160 | 0.2760 | 0.9091 |
| 0.1133 | 81.0 | 162 | 0.2832 | 0.9318 |
| 0.1133 | 82.0 | 164 | 0.2793 | 0.9318 |
| 0.1133 | 83.0 | 166 | 0.2779 | 0.9091 |
| 0.1133 | 84.0 | 168 | 0.3004 | 0.8864 |
| 0.098 | 85.0 | 170 | 0.3275 | 0.8864 |
| 0.098 | 86.0 | 172 | 0.3394 | 0.8864 |
| 0.098 | 87.0 | 174 | 0.3257 | 0.8864 |
| 0.098 | 88.0 | 176 | 0.3172 | 0.8864 |
| 0.098 | 89.0 | 178 | 0.3122 | 0.9318 |
| 0.0917 | 90.0 | 180 | 0.3208 | 0.9318 |
| 0.0917 | 91.0 | 182 | 0.3236 | 0.9318 |
| 0.0917 | 92.0 | 184 | 0.3274 | 0.9318 |
| 0.0917 | 93.0 | 186 | 0.3331 | 0.8864 |
| 0.0917 | 94.0 | 188 | 0.3379 | 0.8864 |
| 0.0989 | 95.0 | 190 | 0.3404 | 0.8864 |
| 0.0989 | 96.0 | 192 | 0.3452 | 0.8864 |
| 0.0989 | 97.0 | 194 | 0.3491 | 0.8864 |
| 0.0989 | 98.0 | 196 | 0.3489 | 0.8864 |
| 0.0989 | 99.0 | 198 | 0.3482 | 0.8864 |
| 0.0916 | 100.0 | 200 | 0.3467 | 0.8864 |
### Framework versions
- Transformers 4.40.2
- Pytorch 2.2.1+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
|
ClaudioItaly/TopEvolution-Q5_K_M-GGUF | ClaudioItaly | 2024-05-20T21:07:43Z | 2 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"llama-cpp",
"gguf-my-repo",
"base_model:NousResearch/Hermes-2-Pro-Mistral-7B",
"base_model:merge:NousResearch/Hermes-2-Pro-Mistral-7B",
"base_model:mergekit-community/mergekit-slerp-ebgdloh",
"base_model:merge:mergekit-community/mergekit-slerp-ebgdloh",
"endpoints_compatible",
"region:us"
] | null | 2024-05-20T21:07:29Z | ---
library_name: transformers
tags:
- mergekit
- merge
- llama-cpp
- gguf-my-repo
base_model:
- NousResearch/Hermes-2-Pro-Mistral-7B
- mergekit-community/mergekit-slerp-ebgdloh
---
# ClaudioItaly/TopEvolution-Q5_K_M-GGUF
This model was converted to GGUF format from [`mergekit-community/TopEvolution`](https://huggingface.co/mergekit-community/TopEvolution) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/mergekit-community/TopEvolution) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew.
```bash
brew install ggerganov/ggerganov/llama.cpp
```
Invoke the llama.cpp server or the CLI.
CLI:
```bash
llama-cli --hf-repo ClaudioItaly/TopEvolution-Q5_K_M-GGUF --model topevolution.Q5_K_M.gguf -p "The meaning to life and the universe is"
```
Server:
```bash
llama-server --hf-repo ClaudioItaly/TopEvolution-Q5_K_M-GGUF --model topevolution.Q5_K_M.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
```
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m topevolution.Q5_K_M.gguf -n 128
```
|
matthieuzone/PARMESANbis | matthieuzone | 2024-05-20T21:07:23Z | 3 | 0 | diffusers | [
"diffusers",
"text-to-image",
"diffusers-training",
"lora",
"template:sd-lora",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] | text-to-image | 2024-05-20T20:59:12Z | ---
license: openrail++
library_name: diffusers
tags:
- text-to-image
- text-to-image
- diffusers-training
- diffusers
- lora
- template:sd-lora
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: a photo of sks cheese
widget: []
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# SDXL LoRA DreamBooth - matthieuzone/PARMESANbis
<Gallery />
## Model description
These are matthieuzone/PARMESANbis LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0.
The weights were trained using [DreamBooth](https://dreambooth.github.io/).
LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
## Trigger words
You should use `a photo of sks cheese` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](matthieuzone/PARMESANbis/tree/main) them in the Files & versions tab.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
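Until the snippet above is filled in, a plausible sketch for loading SDXL with these LoRA weights (the default DreamBooth weight filename is assumed):

```python
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")

# Load the LoRA adapter from this repo (default DreamBooth output filename assumed).
pipe.load_lora_weights("matthieuzone/PARMESANbis")

image = pipe("a photo of sks cheese on a wooden board").images[0]
image.save("cheese.png")
```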
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model] |
msbayindir/mt5-small-sample | msbayindir | 2024-05-20T21:06:44Z | 108 | 0 | transformers | [
"transformers",
"safetensors",
"mt5",
"text2text-generation",
"tr",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2024-05-19T00:29:37Z | ---
library_name: transformers
language:
- tr
metrics:
- bleu
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
<p>
This model (trained for 10 epochs) was fine-tuned on the
<a href="https://github.com/okanvk/Turkish-Reading-Comprehension-Question-Answering-Dataset">Turkish-Reading-Comprehension-Question-Answering-Dataset</a> for Turkish question answering.
</p>
<h1>Usage</h1>
<h2>Install packages</h2>
<ul>
<li>!pip install transformers</li>
<li>!pip install evaluate</li>
<li>!pip install rouge</li>
</ul>
<h2>Codes</h2>
<h3>Imports</h3>
```python
import torch
import nltk
import string
import evaluate # Bleu
import pandas as pd
import numpy as np
from transformers import T5Tokenizer, MT5Model, MT5ForConditionalGeneration, MT5TokenizerFast
import warnings
```
```python
TOKENIZER = MT5TokenizerFast.from_pretrained("msbayindir/mt5-small-sample")
MODEL = MT5ForConditionalGeneration.from_pretrained("msbayindir/mt5-small-sample", return_dict=True)
DEVICE = "cuda:0"
MODEL = MODEL.to(device=DEVICE)
Q_LEN = 256 # Question Length
T_LEN = 32   # Target (answer) length
```
```python
def predict_answer(context, question, ref_answer=None):
inputs = TOKENIZER(question, context, max_length=Q_LEN, padding="max_length", truncation=True, add_special_tokens=True)
input_ids = torch.tensor(inputs["input_ids"], dtype=torch.long).to(DEVICE).unsqueeze(0)
attention_mask = torch.tensor(inputs["attention_mask"], dtype=torch.long).to(DEVICE).unsqueeze(0)
outputs = MODEL.generate(input_ids=input_ids, attention_mask=attention_mask)
predicted_answer = TOKENIZER.decode(outputs.flatten(), skip_special_tokens=True)
if ref_answer:
# Load the Bleu metric
bleu = evaluate.load("google_bleu")
score = bleu.compute(predictions=[predicted_answer],
references=[ref_answer])
print("Context: \n", context)
print("\n")
print("Question: \n", question)
return {
"Reference Answer: ": ref_answer,
"Predicted Answer: ": predicted_answer,
"BLEU Score: ": score
}
else:
return predicted_answer
```
### Samples
```python
context = """Katmandu BΓΌyΓΌkΕehir Εehri (KMC),
uluslararasΔ± iliΕkileri teΕvik etmek amacΔ±yla UluslararasΔ± Δ°liΕkiler SekreterliΔi (IRC) kurmuΕtur.
KMC'nin ilk uluslararasΔ± iliΕkisi 1975 yΔ±lΔ±nda Eugene, Oregon, Amerika BirleΕik Devletleri ile kurulmuΕtur.
Bu etkinlik, diΔer 8 Εehirle resmi iliΕkiler kurarak daha da geliΕtirilmiΕtir:
Motsumoto City of Japan, Rochester, Myanmar Yangon (eski adΔ±yla Rangoon),
Γin Halk Cumhuriyeti'nden Xi'an, Belarus Minsk ve Kore Demokratik Cumhuriyeti'nden Pyongyang. KMC'nin sΓΌrekli Γ§abasΔ±,
Katmandu iΓ§in daha iyi kentsel yΓΆnetim ve geliΕim programlarΔ± elde etmek iΓ§in SAARC ΓΌlkeleri, diΔer UluslararasΔ± ajanslar ve
dΓΌnyanΔ±n diΔer birΓ§ok bΓΌyΓΌk Εehirleri ile etkileΕimini geliΕtirmektir."""
answer = "Katmandu ilk uluslararasΔ± iliΕkisini hangi yΔ±lda yarattΔ±?"
predict_answer(context,answer)
'1975'
```
```python
context = """Yapay zeka (YZ), modern dΓΌnyada hΔ±zla geliΕen bir teknoloji alanΔ±dΔ±r. YZ, saΔlΔ±k, finans ve eΔitim gibi birΓ§ok sektΓΆrde kullanΔ±lmaktadΔ±r.
SaΔlΔ±k alanΔ±nda, YZ algoritmalarΔ± hastalΔ±k teΕhislerinde yardΔ±mcΔ± olabilir. Finans sektΓΆrΓΌnde, YZ, piyasa analizleri yaparak yatΔ±rΔ±m kararlarΔ±nΔ± destekler.
EΔitimde ise, ΓΆΔrenci performansΔ±nΔ± izler ve bireyselleΕtirilmiΕ ΓΆΔrenme programlarΔ± oluΕturur.
Ancak, YZ'nin kullanΔ±mΔ± etik ve gizlilik konularΔ±nda endiΕeler doΔurur.
Veri gΓΌvenliΔi ve algoritmik ΓΆnyargΔ±lar, dikkatle ele alΔ±nmasΔ± gereken ΓΆnemli meselelerdir."""
answer = "Yapay zeka hangi sektΓΆrlerde kullanΔ±lmaktadΔ±r?"
predict_answer(context,answer)
'saΔlΔ±k, finans ve eΔitim'
``` |
hazyresearch/M2-BERT-128-Retrieval-Encoder-V1 | hazyresearch | 2024-05-20T21:06:23Z | 126 | 1 | transformers | [
"transformers",
"pytorch",
"bert",
"fill-mask",
"custom_code",
"en",
"arxiv:2402.07440",
"license:apache-2.0",
"autotrain_compatible",
"region:us"
] | fill-mask | 2024-02-10T00:34:16Z | ---
license: apache-2.0
language:
- en
pipeline_tag: fill-mask
inference: false
---
# Monarch Mixer-BERT
The 80M checkpoint for M2-BERT-128 from the paper [Benchmarking and Building Long-Context Retrieval Models with LoCo and M2-BERT](https://arxiv.org/abs/2402.07440).
Check out our [GitHub](https://github.com/HazyResearch/m2/tree/main) for instructions on how to download and fine-tune it!
## How to use
You can load this model using Hugging Face `AutoModel`:
```python
from transformers import AutoModelForMaskedLM
model = AutoModelForMaskedLM.from_pretrained("jonsaadfalcon/M2-BERT-128-Retrieval-Encoder-V1", trust_remote_code=True)
```
This model uses the Hugging Face `bert-base-uncased` tokenizer:
```python
from transformers import BertTokenizer
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
```
## Generating embeddings
This model generates embeddings for retrieval. The embeddings have a dimensionality of 768:
```python
from transformers import AutoTokenizer, AutoModelForMaskedLM
max_seq_length = 128
testing_string = "Every morning, I make a cup of coffee to start my day."
model = AutoModelForMaskedLM.from_pretrained("jonsaadfalcon/M2-BERT-128-Retrieval-Encoder-V1", trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased", model_max_length=max_seq_length)
input_ids = tokenizer([testing_string], return_tensors="pt", padding="max_length", return_token_type_ids=False, truncation=True, max_length=max_seq_length)
outputs = model(**input_ids)
embeddings = outputs['sentence_embedding']
```
### Remote Code
This model requires `trust_remote_code=True` to be passed to the `from_pretrained` method. This is because we use custom PyTorch code (see our GitHub). You should consider passing a `revision` argument that specifies the exact git commit of the code, for example:
```python
mlm = AutoModelForMaskedLM.from_pretrained(
"jonsaadfalcon/M2-BERT-128-Retrieval-Encoder-V1",
trust_remote_code=True,
)
```
### Configuration
Note `use_flash_mm` is false by default. Using FlashMM is currently not supported. |
yifanxie/jasper-cicada-1 | yifanxie | 2024-05-20T21:03:38Z | 135 | 0 | transformers | [
"transformers",
"safetensors",
"gemma",
"text-generation",
"gpt",
"llm",
"large language model",
"h2o-llmstudio",
"conversational",
"en",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-05-20T21:01:37Z | ---
language:
- en
library_name: transformers
tags:
- gpt
- llm
- large language model
- h2o-llmstudio
inference: false
thumbnail: https://h2o.ai/etc.clientlibs/h2o/clientlibs/clientlib-site/resources/images/favicon.ico
---
# Model Card
## Summary
This model was trained using [H2O LLM Studio](https://github.com/h2oai/h2o-llmstudio).
- Base model: [google/gemma-1.1-2b-it](https://huggingface.co/google/gemma-1.1-2b-it)
## Usage
To use the model with the `transformers` library on a machine with GPUs, first make sure the library is installed.
```bash
pip install transformers==4.40.1
```
Also make sure you provide your Hugging Face token to the pipeline if the model lives in a private repo.
- Either leave `token=True` in the `pipeline` and log in to `huggingface_hub` by running
```python
import huggingface_hub
huggingface_hub.login(<ACCESS_TOKEN>)
```
- Or directly pass your <ACCESS_TOKEN> to `token` in the `pipeline`
```python
from transformers import pipeline
generate_text = pipeline(
model="yifanxie/jasper-cicada-1",
torch_dtype="auto",
trust_remote_code=True,
use_fast=True,
device_map={"": "cuda:0"},
token=True,
)
# generate configuration can be modified to your needs
# generate_text.model.generation_config.min_new_tokens = 2
# generate_text.model.generation_config.max_new_tokens = 256
# generate_text.model.generation_config.do_sample = False
# generate_text.model.generation_config.num_beams = 1
# generate_text.model.generation_config.temperature = float(0.0)
# generate_text.model.generation_config.repetition_penalty = float(1.0)
res = generate_text(
"Why is drinking water so healthy?",
renormalize_logits=True
)
print(res[0]["generated_text"])
```
You can print a sample prompt after the preprocessing step to see how it is fed to the tokenizer:
```python
print(generate_text.preprocess("Why is drinking water so healthy?")["prompt_text"])
```
```bash
<|prompt|>Why is drinking water so healthy?<eos><|answer|>
```
Alternatively, you can download [h2oai_pipeline.py](h2oai_pipeline.py), store it alongside your notebook, and construct the pipeline yourself from the loaded model and tokenizer. If the model and the tokenizer are fully supported in the `transformers` package, this will allow you to set `trust_remote_code=False`.
```python
from h2oai_pipeline import H2OTextGenerationPipeline
from transformers import AutoModelForCausalLM, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained(
"yifanxie/jasper-cicada-1",
use_fast=True,
padding_side="left",
trust_remote_code=True,
)
model = AutoModelForCausalLM.from_pretrained(
"yifanxie/jasper-cicada-1",
torch_dtype="auto",
device_map={"": "cuda:0"},
trust_remote_code=True,
)
generate_text = H2OTextGenerationPipeline(model=model, tokenizer=tokenizer)
# generate configuration can be modified to your needs
# generate_text.model.generation_config.min_new_tokens = 2
# generate_text.model.generation_config.max_new_tokens = 256
# generate_text.model.generation_config.do_sample = False
# generate_text.model.generation_config.num_beams = 1
# generate_text.model.generation_config.temperature = float(0.0)
# generate_text.model.generation_config.repetition_penalty = float(1.0)
res = generate_text(
"Why is drinking water so healthy?",
renormalize_logits=True
)
print(res[0]["generated_text"])
```
You may also construct the pipeline from the loaded model and tokenizer yourself and consider the preprocessing steps:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "yifanxie/jasper-cicada-1" # either local folder or huggingface model name
# Important: The prompt needs to be in the same format the model was trained with.
# You can find an example prompt in the experiment logs.
prompt = "<|prompt|>How are you?<eos><|answer|>"
tokenizer = AutoTokenizer.from_pretrained(
model_name,
use_fast=True,
trust_remote_code=True,
)
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype="auto",
device_map={"": "cuda:0"},
trust_remote_code=True,
)
model.cuda().eval()
inputs = tokenizer(prompt, return_tensors="pt", add_special_tokens=False).to("cuda")
# generate configuration can be modified to your needs
# model.generation_config.min_new_tokens = 2
# model.generation_config.max_new_tokens = 256
# model.generation_config.do_sample = False
# model.generation_config.num_beams = 1
# model.generation_config.temperature = float(0.0)
# model.generation_config.repetition_penalty = float(1.0)
tokens = model.generate(
input_ids=inputs["input_ids"],
attention_mask=inputs["attention_mask"],
renormalize_logits=True
)[0]
tokens = tokens[inputs["input_ids"].shape[1]:]
answer = tokenizer.decode(tokens, skip_special_tokens=True)
print(answer)
```
## Quantization and sharding
You can load the models using quantization by specifying ```load_in_8bit=True``` or ```load_in_4bit=True```. Also, sharding on multiple GPUs is possible by setting ```device_map=auto```.
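For example (a sketch; 8-bit loading additionally requires the `bitsandbytes` package):

```python
from transformers import AutoModelForCausalLM

# 8-bit quantized loading, automatically sharded across available GPUs.
model = AutoModelForCausalLM.from_pretrained(
    "yifanxie/jasper-cicada-1",
    load_in_8bit=True,
    device_map="auto",
    trust_remote_code=True,
)
```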
## Model Architecture
```
GemmaForCausalLM(
(model): GemmaModel(
(embed_tokens): Embedding(256000, 2048, padding_idx=0)
(layers): ModuleList(
(0-17): 18 x GemmaDecoderLayer(
(self_attn): GemmaSdpaAttention(
(q_proj): Linear(in_features=2048, out_features=2048, bias=False)
(k_proj): Linear(in_features=2048, out_features=256, bias=False)
(v_proj): Linear(in_features=2048, out_features=256, bias=False)
(o_proj): Linear(in_features=2048, out_features=2048, bias=False)
(rotary_emb): GemmaRotaryEmbedding()
)
(mlp): GemmaMLP(
(gate_proj): Linear(in_features=2048, out_features=16384, bias=False)
(up_proj): Linear(in_features=2048, out_features=16384, bias=False)
(down_proj): Linear(in_features=16384, out_features=2048, bias=False)
(act_fn): PytorchGELUTanh()
)
(input_layernorm): GemmaRMSNorm()
(post_attention_layernorm): GemmaRMSNorm()
)
)
(norm): GemmaRMSNorm()
)
(lm_head): Linear(in_features=2048, out_features=256000, bias=False)
)
```
## Model Configuration
This model was trained using H2O LLM Studio with the configuration in [cfg.yaml](cfg.yaml). Visit [H2O LLM Studio](https://github.com/h2oai/h2o-llmstudio) to learn how to train your own large language models.
## Disclaimer
Please read this disclaimer carefully before using the large language model provided in this repository. Your use of the model signifies your agreement to the following terms and conditions.
- Biases and Offensiveness: The large language model is trained on a diverse range of internet text data, which may contain biased, racist, offensive, or otherwise inappropriate content. By using this model, you acknowledge and accept that the generated content may sometimes exhibit biases or produce content that is offensive or inappropriate. The developers of this repository do not endorse, support, or promote any such content or viewpoints.
- Limitations: The large language model is an AI-based tool and not a human. It may produce incorrect, nonsensical, or irrelevant responses. It is the user's responsibility to critically evaluate the generated content and use it at their discretion.
- Use at Your Own Risk: Users of this large language model must assume full responsibility for any consequences that may arise from their use of the tool. The developers and contributors of this repository shall not be held liable for any damages, losses, or harm resulting from the use or misuse of the provided model.
- Ethical Considerations: Users are encouraged to use the large language model responsibly and ethically. By using this model, you agree not to use it for purposes that promote hate speech, discrimination, harassment, or any form of illegal or harmful activities.
- Reporting Issues: If you encounter any biased, offensive, or otherwise inappropriate content generated by the large language model, please report it to the repository maintainers through the provided channels. Your feedback will help improve the model and mitigate potential issues.
- Changes to this Disclaimer: The developers of this repository reserve the right to modify or update this disclaimer at any time without prior notice. It is the user's responsibility to periodically review the disclaimer to stay informed about any changes.
By using the large language model provided in this repository, you agree to accept and comply with the terms and conditions outlined in this disclaimer. If you do not agree with any part of this disclaimer, you should refrain from using the model and any content generated by it. |
mergekit-community/TopEvolution | mergekit-community | 2024-05-20T21:01:09Z | 12 | 2 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"base_model:NousResearch/Hermes-2-Pro-Mistral-7B",
"base_model:merge:NousResearch/Hermes-2-Pro-Mistral-7B",
"base_model:mergekit-community/mergekit-slerp-ebgdloh",
"base_model:merge:mergekit-community/mergekit-slerp-ebgdloh",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-20T20:53:23Z | ---
base_model:
- NousResearch/Hermes-2-Pro-Mistral-7B
- mergekit-community/mergekit-slerp-ebgdloh
library_name: transformers
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* [NousResearch/Hermes-2-Pro-Mistral-7B](https://huggingface.co/NousResearch/Hermes-2-Pro-Mistral-7B)
* [mergekit-community/mergekit-slerp-ebgdloh](https://huggingface.co/mergekit-community/mergekit-slerp-ebgdloh)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: NousResearch/Hermes-2-Pro-Mistral-7B
- model: mergekit-community/mergekit-slerp-ebgdloh
merge_method: slerp
base_model: mergekit-community/mergekit-slerp-ebgdloh
dtype: bfloat16
parameters:
t: [0, 0.5, 1, 0.5, 0] # V shaped curve: Hermes for input & output, WizardMath in the middle layers
```
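The merged checkpoint loads like any other Mistral-architecture model — a minimal sketch:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("mergekit-community/TopEvolution")
model = AutoModelForCausalLM.from_pretrained(
    "mergekit-community/TopEvolution", torch_dtype="auto", device_map="auto"
)

inputs = tokenizer("The meaning of life is", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=32)[0]))
```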
|
msbayindir/sample-mt5-small | msbayindir | 2024-05-20T21:00:44Z | 110 | 0 | transformers | [
"transformers",
"safetensors",
"mt5",
"text2text-generation",
"tr",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2024-05-16T16:01:54Z | ---
library_name: transformers
language:
- tr
metrics:
- bleu
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
<p>
This model was fine-tuned on the
<a href="https://github.com/okanvk/Turkish-Reading-Comprehension-Question-Answering-Dataset">Turkish-Reading-Comprehension-Question-Answering-Dataset</a> for Turkish question answering.
</p>
<h1>Usage</h1>
<h2>Install packages</h2>
<ul>
<li>!pip install transformers</li>
<li>!pip install evaluate</li>
<li>!pip install rouge</li>
</ul>
<h2>Codes</h2>
<h3>Imports</h3>
```python
import torch
import nltk
import string
import evaluate # Bleu
import pandas as pd
import numpy as np
from transformers import T5Tokenizer, MT5Model, MT5ForConditionalGeneration, MT5TokenizerFast
import warnings
```
```python
TOKENIZER = MT5TokenizerFast.from_pretrained("msbayindir/sample-mt5-small")
MODEL = MT5ForConditionalGeneration.from_pretrained("msbayindir/sample-mt5-small", return_dict=True)
DEVICE = "cuda:0"
MODEL = MODEL.to(device=DEVICE)
Q_LEN = 256 # Question Length
T_LEN = 32   # Target (answer) length
```
```python
def predict_answer(context, question, ref_answer=None):
inputs = TOKENIZER(question, context, max_length=Q_LEN, padding="max_length", truncation=True, add_special_tokens=True)
input_ids = torch.tensor(inputs["input_ids"], dtype=torch.long).to(DEVICE).unsqueeze(0)
attention_mask = torch.tensor(inputs["attention_mask"], dtype=torch.long).to(DEVICE).unsqueeze(0)
outputs = MODEL.generate(input_ids=input_ids, attention_mask=attention_mask)
predicted_answer = TOKENIZER.decode(outputs.flatten(), skip_special_tokens=True)
if ref_answer:
# Load the Bleu metric
bleu = evaluate.load("google_bleu")
score = bleu.compute(predictions=[predicted_answer],
references=[ref_answer])
print("Context: \n", context)
print("\n")
print("Question: \n", question)
return {
"Reference Answer: ": ref_answer,
"Predicted Answer: ": predicted_answer,
"BLEU Score: ": score
}
else:
return predicted_answer
```
### Samples
```python
context = """Katmandu BΓΌyΓΌkΕehir Εehri (KMC),
uluslararasΔ± iliΕkileri teΕvik etmek amacΔ±yla UluslararasΔ± Δ°liΕkiler SekreterliΔi (IRC) kurmuΕtur.
KMC'nin ilk uluslararasΔ± iliΕkisi 1975 yΔ±lΔ±nda Eugene, Oregon, Amerika BirleΕik Devletleri ile kurulmuΕtur.
Bu etkinlik, diΔer 8 Εehirle resmi iliΕkiler kurarak daha da geliΕtirilmiΕtir:
Motsumoto City of Japan, Rochester, Myanmar Yangon (eski adΔ±yla Rangoon),
Γin Halk Cumhuriyeti'nden Xi'an, Belarus Minsk ve Kore Demokratik Cumhuriyeti'nden Pyongyang. KMC'nin sΓΌrekli Γ§abasΔ±,
Katmandu iΓ§in daha iyi kentsel yΓΆnetim ve geliΕim programlarΔ± elde etmek iΓ§in SAARC ΓΌlkeleri, diΔer UluslararasΔ± ajanslar ve
dΓΌnyanΔ±n diΔer birΓ§ok bΓΌyΓΌk Εehirleri ile etkileΕimini geliΕtirmektir."""
answer = "Katmandu ilk uluslararasΔ± iliΕkisini hangi yΔ±lda yarattΔ±?"
predict_answer(context,answer)
'1975'
```
```python
context = """Yapay zeka (YZ), modern dΓΌnyada hΔ±zla geliΕen bir teknoloji alanΔ±dΔ±r. YZ, saΔlΔ±k, finans ve eΔitim gibi birΓ§ok sektΓΆrde kullanΔ±lmaktadΔ±r.
SaΔlΔ±k alanΔ±nda, YZ algoritmalarΔ± hastalΔ±k teΕhislerinde yardΔ±mcΔ± olabilir. Finans sektΓΆrΓΌnde, YZ, piyasa analizleri yaparak yatΔ±rΔ±m kararlarΔ±nΔ± destekler.
EΔitimde ise, ΓΆΔrenci performansΔ±nΔ± izler ve bireyselleΕtirilmiΕ ΓΆΔrenme programlarΔ± oluΕturur.
Ancak, YZ'nin kullanΔ±mΔ± etik ve gizlilik konularΔ±nda endiΕeler doΔurur.
Veri gΓΌvenliΔi ve algoritmik ΓΆnyargΔ±lar, dikkatle ele alΔ±nmasΔ± gereken ΓΆnemli meselelerdir."""
answer = "Yapay zeka hangi sektΓΆrlerde kullanΔ±lmaktadΔ±r?"
predict_answer(context,answer)
'saΔlΔ±k, finans ve eΔitim'
``` |
matthieuzone/OSSAU-_IRATYbis | matthieuzone | 2024-05-20T20:58:56Z | 4 | 0 | diffusers | [
"diffusers",
"text-to-image",
"diffusers-training",
"lora",
"template:sd-lora",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] | text-to-image | 2024-05-20T20:50:46Z | ---
license: openrail++
library_name: diffusers
tags:
- text-to-image
- text-to-image
- diffusers-training
- diffusers
- lora
- template:sd-lora
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: a photo of sks cheese
widget: []
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# SDXL LoRA DreamBooth - matthieuzone/OSSAU-_IRATYbis
<Gallery />
## Model description
These are matthieuzone/OSSAU-_IRATYbis LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0.
The weights were trained using [DreamBooth](https://dreambooth.github.io/).
LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
## Trigger words
You should use `a photo of sks cheese` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](matthieuzone/OSSAU-_IRATYbis/tree/main) them in the Files & versions tab.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model] |
maneln/tinyllamaconvo | maneln | 2024-05-20T20:54:45Z | 132 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-20T19:57:50Z | ---
license: apache-2.0
---
|
rajiv-data-chef/smaug-8b-finetuned-entity-level-sentiment-analysis | rajiv-data-chef | 2024-05-20T20:49:58Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-05-20T20:49:54Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a π€ transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
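Architecture details are not documented here; judging only from the repo name (a Smaug-8B fine-tune, i.e. a causal LM), a heavily hedged sketch might look like the following. Every identifier and the prompt format below are assumptions.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumption: the repo holds a full causal-LM checkpoint loadable via transformers.
model_id = "rajiv-data-chef/smaug-8b-finetuned-entity-level-sentiment-analysis"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Hypothetical prompt format for entity-level sentiment; the real format is undocumented.
prompt = "Review: The battery life is great but the screen is dim.\nEntity: screen\nSentiment:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=8)[0], skip_special_tokens=True))
```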
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
LoneStriker/Yi-1.5-34B-32K-8.0bpw-h8-exl2 | LoneStriker | 2024-05-20T20:49:49Z | 4 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:2403.04652",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"8-bit",
"exl2",
"region:us"
] | text-generation | 2024-05-20T20:37:34Z | ---
license: apache-2.0
---
<div align="center">
<picture>
<img src="https://raw.githubusercontent.com/01-ai/Yi/main/assets/img/Yi_logo_icon_light.svg" width="150px">
</picture>
</div>
<p align="center">
<a href="https://github.com/01-ai">π GitHub</a> β’
<a href="https://discord.gg/hYUwWddeAu">πΎ Discord</a> β’
<a href="https://twitter.com/01ai_yi">π€ Twitter</a> β’
<a href="https://github.com/01-ai/Yi-1.5/issues/2">π¬ WeChat</a>
<br/>
<a href="https://arxiv.org/abs/2403.04652">π Paper</a> β’
<a href="https://github.com/01-ai/Yi/tree/main?tab=readme-ov-file#faq">π FAQ</a> β’
<a href="https://github.com/01-ai/Yi/tree/main?tab=readme-ov-file#learning-hub">π Learning Hub</a>
</p>
# Intro
Yi-1.5 is an upgraded version of Yi. It is continuously pre-trained on Yi with a high-quality corpus of 500B tokens and fine-tuned on 3M diverse samples.
Compared with Yi, Yi-1.5 delivers stronger performance in coding, math, reasoning, and instruction following, while still maintaining excellent capabilities in language understanding, commonsense reasoning, and reading comprehension.
<div align="center">
| Model | Context Length | Pre-trained Tokens |
| :------------: | :------------: | :------------: |
| Yi-1.5 | 4K, 16K, 32K | 3.6T |
</div>
# Models
- Chat models
<div align="center">
| Name | Download |
| --------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| Yi-1.5-34B-Chat | β’ [π€ Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) β’ [π€ ModelScope](https://www.modelscope.cn/organization/01ai) |
| Yi-1.5-34B-Chat-16K | β’ [π€ Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) β’ [π€ ModelScope](https://www.modelscope.cn/organization/01ai) |
| Yi-1.5-9B-Chat | β’ [π€ Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) β’ [π€ ModelScope](https://www.modelscope.cn/organization/01ai) |
| Yi-1.5-9B-Chat-16K | β’ [π€ Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) β’ [π€ ModelScope](https://www.modelscope.cn/organization/01ai) |
| Yi-1.5-6B-Chat | β’ [π€ Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) β’ [π€ ModelScope](https://www.modelscope.cn/organization/01ai) |
</div>
- Base models
<div align="center">
| Name | Download |
| ---------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| Yi-1.5-34B | β’ [π€ Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) β’ [π€ ModelScope](https://www.modelscope.cn/organization/01ai) |
| Yi-1.5-34B-32K | β’ [π€ Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) β’ [π€ ModelScope](https://www.modelscope.cn/organization/01ai) |
| Yi-1.5-9B | β’ [π€ Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) β’ [π€ ModelScope](https://www.modelscope.cn/organization/01ai) |
| Yi-1.5-9B-32K | β’ [π€ Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) β’ [π€ ModelScope](https://www.modelscope.cn/organization/01ai) |
| Yi-1.5-6B | β’ [π€ Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) β’ [π€ ModelScope](https://www.modelscope.cn/organization/01ai) |
</div>
# Benchmarks
- Chat models
Yi-1.5-34B-Chat is on par with or excels beyond larger models in most benchmarks.

Yi-1.5-9B-Chat is the top performer among similarly sized open-source models.

- Base models
Yi-1.5-34B is on par with or excels beyond larger models in some benchmarks.

Yi-1.5-9B is the top performer among similarly sized open-source models.

# Quick Start
To get up and running with Yi-1.5 models quickly, see the [README](https://github.com/01-ai/Yi-1.5).
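In the meantime, a minimal sketch of loading the full-precision base weights with transformers (the upstream repo id `01-ai/Yi-1.5-34B-32K` is an assumption); note that this exl2 quant itself requires an exllamav2-based loader instead:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumption: the upstream full-precision repo, not this exl2 quant.
model_id = "01-ai/Yi-1.5-34B-32K"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

inputs = tokenizer("There is a place where time stands still.", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```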
|
powermove72/Truthful_DPO_TomGrc_FusionNet_7Bx2_MoE_13B-Q4_K_M-GGUF | powermove72 | 2024-05-20T20:46:52Z | 0 | 0 | null | [
"gguf",
"moe",
"DPO",
"RL-TUNED",
"llama-cpp",
"gguf-my-repo",
"license:mit",
"endpoints_compatible",
"region:us"
] | null | 2024-05-20T20:38:09Z | ---
license: mit
tags:
- moe
- DPO
- RL-TUNED
- llama-cpp
- gguf-my-repo
---
# powermove72/Truthful_DPO_TomGrc_FusionNet_7Bx2_MoE_13B-Q4_K_M-GGUF
This model was converted to GGUF format from [`yunconglong/Truthful_DPO_TomGrc_FusionNet_7Bx2_MoE_13B`](https://huggingface.co/yunconglong/Truthful_DPO_TomGrc_FusionNet_7Bx2_MoE_13B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/yunconglong/Truthful_DPO_TomGrc_FusionNet_7Bx2_MoE_13B) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew.
```bash
brew install ggerganov/ggerganov/llama.cpp
```
Invoke the llama.cpp server or the CLI.
CLI:
```bash
llama-cli --hf-repo powermove72/Truthful_DPO_TomGrc_FusionNet_7Bx2_MoE_13B-Q4_K_M-GGUF --model truthful_dpo_tomgrc_fusionnet_7bx2_moe_13b.Q4_K_M.gguf -p "The meaning to life and the universe is"
```
Server:
```bash
llama-server --hf-repo powermove72/Truthful_DPO_TomGrc_FusionNet_7Bx2_MoE_13B-Q4_K_M-GGUF --model truthful_dpo_tomgrc_fusionnet_7bx2_moe_13b.Q4_K_M.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
```bash
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m truthful_dpo_tomgrc_fusionnet_7bx2_moe_13b.Q4_K_M.gguf -n 128
```
|
ulkaa/gemma-2b-dolly-qa | ulkaa | 2024-05-20T20:46:32Z | 13 | 1 | peft | [
"peft",
"tensorboard",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:google/gemma-2b",
"base_model:adapter:google/gemma-2b",
"license:other",
"region:us"
] | null | 2024-03-16T19:44:05Z | ---
license: other
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: google/gemma-2b
model-index:
- name: gemma-2b-dolly-qa
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gemma-2b-dolly-qa
This model is a fine-tuned version of [google/gemma-2b](https://huggingface.co/google/gemma-2b) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
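Pending official instructions, here is a minimal sketch of attaching the adapter with PEFT, assuming it applies on top of the `google/gemma-2b` base model listed above (the instruction/response prompt format is also an assumption):

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained(
    "google/gemma-2b", torch_dtype=torch.bfloat16, device_map="auto"
)
model = PeftModel.from_pretrained(base, "ulkaa/gemma-2b-dolly-qa")  # load the LoRA adapter weights
tokenizer = AutoTokenizer.from_pretrained("google/gemma-2b")

prompt = "Instruction: Who wrote The Lord of the Rings?\nResponse:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0], skip_special_tokens=True))
```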
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.05
- training_steps: 296
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.1.0a0+cxx11.abi
- Datasets 2.18.0
- Tokenizers 0.15.2 |
fine-tuned/arguana-c-256-24-gpt-4o-2024-05-13-264015 | fine-tuned | 2024-05-20T20:44:59Z | 5 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"bert",
"feature-extraction",
"sentence-similarity",
"mteb",
"custom_code",
"en",
"dataset:fine-tuned/arguana-c-256-24-gpt-4o-2024-05-13-264015",
"dataset:allenai/c4",
"license:apache-2.0",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | feature-extraction | 2024-05-20T20:44:44Z | ---
license: apache-2.0
datasets:
- fine-tuned/arguana-c-256-24-gpt-4o-2024-05-13-264015
- allenai/c4
language:
- en
pipeline_tag: feature-extraction
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- mteb
---
This model is a fine-tuned version of [**jinaai/jina-embeddings-v2-base-en**](https://huggingface.co/jinaai/jina-embeddings-v2-base-en) designed for the following use case: custom.
## How to Use
This model can be easily integrated into your NLP pipeline for tasks such as text classification, sentiment analysis, entity recognition, and more. Here's a simple example to get you started:
```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.util import cos_sim
model = SentenceTransformer(
    'fine-tuned/arguana-c-256-24-gpt-4o-2024-05-13-264015',
    trust_remote_code=True
)
embeddings = model.encode([
    'first text to embed',
    'second text to embed'
])
print(cos_sim(embeddings[0], embeddings[1]))
```
|
seregadgl101/baii_v12_14ep | seregadgl101 | 2024-05-20T20:42:33Z | 8 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"xlm-roberta",
"feature-extraction",
"sentence-similarity",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | sentence-similarity | 2024-05-20T20:40:33Z | ---
library_name: sentence-transformers
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
---
# seregadgl101/baii_v12_14ep
This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('seregadgl101/baii_v12_14ep')
embeddings = model.encode(sentences)
print(embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=seregadgl101/baii_v12_14ep)
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 8192, 'do_lower_case': False}) with Transformer model: XLMRobertaModel
(1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
matthieuzone/MUNSTERbis | matthieuzone | 2024-05-20T20:42:11Z | 1 | 0 | diffusers | [
"diffusers",
"text-to-image",
"diffusers-training",
"lora",
"template:sd-lora",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] | text-to-image | 2024-05-20T20:33:59Z | ---
license: openrail++
library_name: diffusers
tags:
- text-to-image
- text-to-image
- diffusers-training
- diffusers
- lora
- template:sd-lora
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: a photo of sks cheese
widget: []
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# SDXL LoRA DreamBooth - matthieuzone/MUNSTERbis
<Gallery />
## Model description
These are matthieuzone/MUNSTERbis LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0.
The weights were trained using [DreamBooth](https://dreambooth.github.io/).
LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
## Trigger words
You should use `a photo of sks cheese` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](https://huggingface.co/matthieuzone/MUNSTERbis/tree/main) them in the Files & versions tab.
## Intended uses & limitations
#### How to use
```python
# A hedged sketch (not author-provided) of loading these LoRA weights with diffusers;
# the repo ids match the card above, other settings are assumptions.
from diffusers import AutoPipelineForText2Image
import torch

pipe = AutoPipelineForText2Image.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("matthieuzone/MUNSTERbis")
image = pipe("a photo of sks cheese").images[0]  # trigger phrase from the card
```
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model] |
NAMJOON/kisa-fine-tuned2 | NAMJOON | 2024-05-20T20:39:36Z | 76 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | 2024-05-20T20:28:06Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a π€ transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
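No snippet was supplied; the repo tags indicate a 4-bit (bitsandbytes) Llama checkpoint, so a hedged sketch would be:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "NAMJOON/kisa-fine-tuned2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
# Assumption: the quantization config stored in the repo is picked up automatically.
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer("Hello, how can I help you today?", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=32)[0], skip_special_tokens=True))
```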
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
LoneStriker/Yi-1.5-34B-32K-6.0bpw-h6-exl2 | LoneStriker | 2024-05-20T20:37:30Z | 6 | 1 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:2403.04652",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"6-bit",
"exl2",
"region:us"
] | text-generation | 2024-05-20T20:26:37Z | ---
license: apache-2.0
---
<div align="center">
<picture>
<img src="https://raw.githubusercontent.com/01-ai/Yi/main/assets/img/Yi_logo_icon_light.svg" width="150px">
</picture>
</div>
<p align="center">
<a href="https://github.com/01-ai">π GitHub</a> β’
<a href="https://discord.gg/hYUwWddeAu">πΎ Discord</a> β’
<a href="https://twitter.com/01ai_yi">π€ Twitter</a> β’
<a href="https://github.com/01-ai/Yi-1.5/issues/2">π¬ WeChat</a>
<br/>
<a href="https://arxiv.org/abs/2403.04652">π Paper</a> β’
<a href="https://github.com/01-ai/Yi/tree/main?tab=readme-ov-file#faq">π FAQ</a> β’
<a href="https://github.com/01-ai/Yi/tree/main?tab=readme-ov-file#learning-hub">π Learning Hub</a>
</p>
# Intro
Yi-1.5 is an upgraded version of Yi. It is continuously pre-trained on Yi with a high-quality corpus of 500B tokens and fine-tuned on 3M diverse samples.
Compared with Yi, Yi-1.5 delivers stronger performance in coding, math, reasoning, and instruction following, while still maintaining excellent capabilities in language understanding, commonsense reasoning, and reading comprehension.
<div align="center">
| Model | Context Length | Pre-trained Tokens |
| :------------: | :------------: | :------------: |
| Yi-1.5 | 4K, 16K, 32K | 3.6T |
</div>
# Models
- Chat models
<div align="center">
| Name | Download |
| --------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| Yi-1.5-34B-Chat | β’ [π€ Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) β’ [π€ ModelScope](https://www.modelscope.cn/organization/01ai) |
| Yi-1.5-34B-Chat-16K | β’ [π€ Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) β’ [π€ ModelScope](https://www.modelscope.cn/organization/01ai) |
| Yi-1.5-9B-Chat | β’ [π€ Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) β’ [π€ ModelScope](https://www.modelscope.cn/organization/01ai) |
| Yi-1.5-9B-Chat-16K | β’ [π€ Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) β’ [π€ ModelScope](https://www.modelscope.cn/organization/01ai) |
| Yi-1.5-6B-Chat | β’ [π€ Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) β’ [π€ ModelScope](https://www.modelscope.cn/organization/01ai) |
</div>
- Base models
<div align="center">
| Name | Download |
| ---------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| Yi-1.5-34B | β’ [π€ Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) β’ [π€ ModelScope](https://www.modelscope.cn/organization/01ai) |
| Yi-1.5-34B-32K | β’ [π€ Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) β’ [π€ ModelScope](https://www.modelscope.cn/organization/01ai) |
| Yi-1.5-9B | β’ [π€ Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) β’ [π€ ModelScope](https://www.modelscope.cn/organization/01ai) |
| Yi-1.5-9B-32K | β’ [π€ Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) β’ [π€ ModelScope](https://www.modelscope.cn/organization/01ai) |
| Yi-1.5-6B | β’ [π€ Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) β’ [π€ ModelScope](https://www.modelscope.cn/organization/01ai) |
</div>
# Benchmarks
- Chat models
Yi-1.5-34B-Chat is on par with or excels beyond larger models in most benchmarks.

Yi-1.5-9B-Chat is the top performer among similarly sized open-source models.

- Base models
Yi-1.5-34B is on par with or excels beyond larger models in some benchmarks.

Yi-1.5-9B is the top performer among similarly sized open-source models.

# Quick Start
To get up and running with Yi-1.5 models quickly, see the [README](https://github.com/01-ai/Yi-1.5).
|
OpenLLM-Ro/RoLlama2-7b-Instruct-GPTQ-4Bits-wikitext2 | OpenLLM-Ro | 2024-05-20T20:34:30Z | 88 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"gptq",
"region:us"
] | text-generation | 2024-05-20T20:26:46Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
GPTQ 4-bit quantised version of RoLlama2-7b-Instruct.
This is the model card of a π€ transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
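A minimal sketch, assuming the GPTQ weights load through transformers' GPTQ integration (requires the `optimum` and `auto-gptq` packages):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "OpenLLM-Ro/RoLlama2-7b-Instruct-GPTQ-4Bits-wikitext2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")  # GPTQ config is read from the repo

prompt = "Care este capitala Romaniei?"  # "What is the capital of Romania?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0], skip_special_tokens=True))
```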
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
pbruna/distilbert-base-uncased-finetuned-emotion | pbruna | 2024-05-20T20:30:40Z | 108 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-05-20T20:11:30Z | ---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: split
split: validation
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.9375
- name: F1
type: f1
value: 0.9376104649098359
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1518
- Accuracy: 0.9375
- F1: 0.9376
## Model description
More information needed
## Intended uses & limitations
More information needed
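As a minimal usage sketch (assuming the standard π€ text-classification pipeline applies to this checkpoint):

```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="pbruna/distilbert-base-uncased-finetuned-emotion"
)
print(classifier("I can't wait to see you again!"))  # e.g. [{'label': 'joy', 'score': ...}]
```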
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.1058 | 1.0 | 250 | 0.1741 | 0.933 | 0.9333 |
| 0.1051 | 2.0 | 500 | 0.1518 | 0.9375 | 0.9376 |
### Framework versions
- Transformers 4.40.2
- Pytorch 2.2.1+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
|