The records below come from a dataset with the following columns (observed ranges and cardinalities shown):

| Column | Type | Range / values |
|---|---|---|
| modelId | string | length 5–139 |
| author | string | length 2–42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 – 2025-06-22 06:27:16 |
| downloads | int64 | 0 – 223M |
| likes | int64 | 0 – 11.7k |
| library_name | string | 492 classes |
| tags | sequence | length 1 – 4.05k |
| pipeline_tag | string | 54 classes |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 – 2025-06-22 06:26:41 |
| card | string | length 11 – 1.01M |

Each record is one row in that column order: modelId | author | last_modified | downloads | likes | library_name | tags | pipeline_tag | createdAt | card.
T3Q-LLM/T3Q-LLM2-sft1.2 | T3Q-LLM | 2024-05-24T01:45:51Z | 41 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-24T00:16:39Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
## Evaluation
Zero-shot evaluation with lm-evaluation-harness: `hf-causal-experimental (pretrained=T3Q-LLM/T3Q-LLM2-sft1.2, use_accelerate=true, trust_remote_code=true)`; limit: None, provide_description: False, num_fewshot: 0, batch_size: 8
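A minimal sketch of reproducing a comparable run with the lm-evaluation-harness Python API; this assumes the current `simple_evaluate` interface, while the card's numbers came from the older `hf-causal-experimental` backend, so exact scores may differ. The measured results follow.

```python
# Sketch: zero-shot KoBEST evaluation (pip install lm-eval).
# Assumes lm-eval's current simple_evaluate API; the card used the older
# hf-causal-experimental backend, so results may not match exactly.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=T3Q-LLM/T3Q-LLM2-sft1.2,trust_remote_code=True",
    tasks=["kobest_boolq", "kobest_copa", "kobest_hellaswag", "kobest_sentineg"],
    num_fewshot=0,
    batch_size=8,
)
print(results["results"])
```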
| Task |Version| Metric |Value | |Stderr|
|----------------|------:|--------|-----:|---|-----:|
|kobest_boolq | 0|acc |0.9416|± |0.0063|
| | |macro_f1|0.9415|± |0.0063|
|kobest_copa | 0|acc |0.7730|± |0.0133|
| | |macro_f1|0.7725|± |0.0133|
|kobest_hellaswag| 0|acc |0.5100|± |0.0224|
| | |acc_norm|0.5740|± |0.0221|
| | |macro_f1|0.5074|± |0.0223|
|kobest_sentineg | 0|acc |0.7632|± |0.0214|
| | |macro_f1|0.7545|± |0.0220| |
Liuza1/test1111 | Liuza1 | 2024-05-24T01:45:42Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2024-05-24T01:45:42Z | ---
license: apache-2.0
---
|
Timeshift/distilbert-base-uncased-finetuned-emotion | Timeshift | 2024-05-24T01:45:25Z | 107 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-05-24T01:40:44Z | ---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
  results:
  - task:
      name: Text Classification
      type: text-classification
    dataset:
      name: emotion
      type: emotion
      config: split
      split: validation
      args: split
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.927
    - name: F1
      type: f1
      value: 0.9268799638507115
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2204
- Accuracy: 0.927
- F1: 0.9269
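A minimal inference sketch with the 🤗 `pipeline` API, assuming the checkpoint loads directly from the Hub:

```python
# Sketch: emotion classification with the fine-tuned checkpoint.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="Timeshift/distilbert-base-uncased-finetuned-emotion",
)
print(classifier("I'm thrilled this finally works!"))
# e.g. [{'label': 'joy', 'score': ...}] -- labels come from the emotion dataset
```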
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
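These settings map directly onto `transformers.TrainingArguments`; a hedged sketch of an equivalent configuration (dataset loading and `Trainer` wiring omitted):

```python
# Sketch: the hyperparameters above expressed as TrainingArguments.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="distilbert-base-uncased-finetuned-emotion",
    learning_rate=2e-5,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    seed=42,
    adam_beta1=0.9,        # Adam betas/epsilon as reported above
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=2,
)
```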
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8393 | 1.0 | 250 | 0.3184 | 0.904 | 0.9027 |
| 0.2532 | 2.0 | 500 | 0.2204 | 0.927 | 0.9269 |
### Framework versions
- Transformers 4.41.0
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
|
jspr/talosian_v3_merged | jspr | 2024-05-24T01:43:06Z | 6 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"en",
"base_model:mistralai/Mistral-7B-v0.3",
"base_model:finetune:mistralai/Mistral-7B-v0.3",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-24T01:33:50Z | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
- sft
base_model: mistralai/Mistral-7B-v0.3
---
# Uploaded model
- **Developed by:** jspr
- **License:** apache-2.0
- **Finetuned from model:** mistralai/Mistral-7B-v0.3
This Mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
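Since this is a merged full-weight checkpoint, it should load like any other `transformers` causal LM; a minimal sketch (assumes `accelerate` is installed for `device_map="auto"`):

```python
# Sketch: loading and sampling from the merged checkpoint.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "jspr/talosian_v3_merged"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer("Once upon a time", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=32)[0]))
```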
|
gaianet/Llama-3-8B-Instruct-GGUF | gaianet | 2024-05-24T01:39:13Z | 450 | 1 | null | [
"gguf",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-05-11T12:24:42Z | ---
license: apache-2.0
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://github.com/GaiaNet-AI/.github/assets/45785633/d6976adc-f97d-4f86-a648-0f2f5c8e7eee" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Llama-3-8B-Instruct-GGUF
## Original Model
[meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct)
## Run with Gaianet
**Prompt template:** `llama-3-chat`
**Context size (`chat_ctx_size`):** `4096`
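The same settings carry over to any llama.cpp-based runtime, not just GaiaNet; a sketch with `llama-cpp-python`, where the quant filename pattern is a hypothetical placeholder (pick an actual GGUF file from this repo):

```python
# Sketch: running a GGUF quant from this repo with llama-cpp-python.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="gaianet/Llama-3-8B-Instruct-GGUF",
    filename="*Q4_K_M.gguf",  # hypothetical glob; substitute a real file name
    n_ctx=4096,               # matches chat_ctx_size above
    chat_format="llama-3",    # matches the llama-3-chat prompt template
)
out = llm.create_chat_completion(messages=[{"role": "user", "content": "Hello!"}])
print(out["choices"][0]["message"]["content"])
```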
**Run with GaiaNet:**
- Quick start: https://docs.gaianet.ai/node-guide/quick-start
- Customize your node: https://docs.gaianet.ai/node-guide/customize |
AlignmentResearch/robust_llm_pythia-31m-imdb-gen-ian-nd | AlignmentResearch | 2024-05-24T01:28:36Z | 147 | 0 | transformers | [
"transformers",
"safetensors",
"gpt_neox",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-24T01:28:26Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information is needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
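The card leaves this blank; a minimal hedged sketch of loading the checkpoint as a standard GPT-NeoX causal LM:

```python
# Sketch: basic generation with the checkpoint (GPT-NeoX architecture).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "AlignmentResearch/robust_llm_pythia-31m-imdb-gen-ian-nd"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

inputs = tokenizer("This movie was", return_tensors="pt")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=20)[0]))
```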
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
AlignmentResearch/robust_llm_pythia-14m-imdb-gen-ian-nd | AlignmentResearch | 2024-05-24T01:26:36Z | 148 | 0 | transformers | [
"transformers",
"safetensors",
"gpt_neox",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-24T01:26:29Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information is needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
mradermacher/Mistral-C64Wizard-instruct-GGUF | mradermacher | 2024-05-24T01:14:39Z | 2 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:pechaut/Mistral-C64Wizard-instruct",
"base_model:quantized:pechaut/Mistral-C64Wizard-instruct",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-05-23T23:51:33Z | ---
base_model: pechaut/Mistral-C64Wizard-instruct
language:
- en
library_name: transformers
quantized_by: mradermacher
tags: []
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/pechaut/Mistral-C64Wizard-instruct
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
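For repos that do split a quant into `.partXofY` pieces, joining them is plain byte-level concatenation; a sketch in Python with hypothetical filenames (this repo's quants are single files):

```python
# Sketch: joining split GGUF parts back into one file.
# Filenames are hypothetical; this repo's quants are not split.
import shutil

parts = ["model.Q6_K.gguf.part1of2", "model.Q6_K.gguf.part2of2"]
with open("model.Q6_K.gguf", "wb") as out:
    for part in parts:
        with open(part, "rb") as src:
            shutil.copyfileobj(src, out)
```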
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Mistral-C64Wizard-instruct-GGUF/resolve/main/Mistral-C64Wizard-instruct.Q2_K.gguf) | Q2_K | 2.8 | |
| [GGUF](https://huggingface.co/mradermacher/Mistral-C64Wizard-instruct-GGUF/resolve/main/Mistral-C64Wizard-instruct.IQ3_XS.gguf) | IQ3_XS | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/Mistral-C64Wizard-instruct-GGUF/resolve/main/Mistral-C64Wizard-instruct.Q3_K_S.gguf) | Q3_K_S | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/Mistral-C64Wizard-instruct-GGUF/resolve/main/Mistral-C64Wizard-instruct.IQ3_S.gguf) | IQ3_S | 3.3 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Mistral-C64Wizard-instruct-GGUF/resolve/main/Mistral-C64Wizard-instruct.IQ3_M.gguf) | IQ3_M | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/Mistral-C64Wizard-instruct-GGUF/resolve/main/Mistral-C64Wizard-instruct.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Mistral-C64Wizard-instruct-GGUF/resolve/main/Mistral-C64Wizard-instruct.Q3_K_L.gguf) | Q3_K_L | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/Mistral-C64Wizard-instruct-GGUF/resolve/main/Mistral-C64Wizard-instruct.IQ4_XS.gguf) | IQ4_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/Mistral-C64Wizard-instruct-GGUF/resolve/main/Mistral-C64Wizard-instruct.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Mistral-C64Wizard-instruct-GGUF/resolve/main/Mistral-C64Wizard-instruct.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Mistral-C64Wizard-instruct-GGUF/resolve/main/Mistral-C64Wizard-instruct.Q5_K_S.gguf) | Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/Mistral-C64Wizard-instruct-GGUF/resolve/main/Mistral-C64Wizard-instruct.Q5_K_M.gguf) | Q5_K_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/Mistral-C64Wizard-instruct-GGUF/resolve/main/Mistral-C64Wizard-instruct.Q6_K.gguf) | Q6_K | 6.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Mistral-C64Wizard-instruct-GGUF/resolve/main/Mistral-C64Wizard-instruct.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Mistral-C64Wizard-instruct-GGUF/resolve/main/Mistral-C64Wizard-instruct.f16.gguf) | f16 | 14.6 | 16 bpw, overkill |
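A sketch of fetching one of the quants above programmatically with `huggingface_hub` (any filename from the table works):

```python
# Sketch: downloading the recommended Q4_K_M quant from this repo.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="mradermacher/Mistral-C64Wizard-instruct-GGUF",
    filename="Mistral-C64Wizard-instruct.Q4_K_M.gguf",
)
print(path)  # local cache path of the downloaded GGUF
```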
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
junaidiqbalsyed/juanid-phi3-instruct-finetuned | junaidiqbalsyed | 2024-05-24T01:10:30Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"mistral",
"trl",
"en",
"base_model:unsloth/Phi-3-mini-4k-instruct-bnb-4bit",
"base_model:finetune:unsloth/Phi-3-mini-4k-instruct-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-05-24T01:10:08Z | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
base_model: unsloth/Phi-3-mini-4k-instruct-bnb-4bit
---
# Uploaded model
- **Developed by:** junaidiqbalsyed
- **License:** apache-2.0
- **Finetuned from model:** unsloth/Phi-3-mini-4k-instruct-bnb-4bit
This Phi-3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
T3Q-LLM/T3Q-LLM2-FP-v2.0 | T3Q-LLM | 2024-05-24T01:08:46Z | 45 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-08T04:54:06Z | ---
library_name: transformers
license: apache-2.0
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
## Evaluation
Zero-shot evaluation with lm-evaluation-harness: `hf-causal-experimental (pretrained=T3Q-LLM/T3Q-LLM2-FP-v2.0, use_accelerate=true, trust_remote_code=true)`; limit: None, provide_description: False, num_fewshot: 0, batch_size: 8
| Task |Version| Metric |Value | |Stderr|
|----------------|------:|--------|-----:|---|-----:|
|kobest_boolq | 0|acc |0.5085|± |0.0133|
| | |macro_f1|0.3496|± |0.0076|
|kobest_copa | 0|acc |0.7680|± |0.0134|
| | |macro_f1|0.7677|± |0.0134|
|kobest_hellaswag| 0|acc |0.4920|± |0.0224|
| | |acc_norm|0.5740|± |0.0221|
| | |macro_f1|0.4889|± |0.0223|
|kobest_sentineg | 0|acc |0.6826|± |0.0234|
| | |macro_f1|0.6616|± |0.0244| |
mradermacher/Athena-Mistral-7b-v0.2-GGUF | mradermacher | 2024-05-24T01:07:25Z | 16 | 0 | transformers | [
"transformers",
"gguf",
"text-generation-inference",
"unsloth",
"mistral",
"trl",
"sft",
"en",
"dataset:NotAiLOL/Athena-v0.1",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-05-13T21:52:09Z | ---
base_model: NotAiLOL/Athena-Mistral-7b-v0.2
datasets:
- NotAiLOL/Athena-v0.1
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
- sft
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/NotAiLOL/Athena-Mistral-7b-v0.2
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Athena-Mistral-7b-v0.2-GGUF/resolve/main/Athena-Mistral-7b-v0.2.Q2_K.gguf) | Q2_K | 2.8 | |
| [GGUF](https://huggingface.co/mradermacher/Athena-Mistral-7b-v0.2-GGUF/resolve/main/Athena-Mistral-7b-v0.2.IQ3_XS.gguf) | IQ3_XS | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/Athena-Mistral-7b-v0.2-GGUF/resolve/main/Athena-Mistral-7b-v0.2.Q3_K_S.gguf) | Q3_K_S | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/Athena-Mistral-7b-v0.2-GGUF/resolve/main/Athena-Mistral-7b-v0.2.IQ3_S.gguf) | IQ3_S | 3.3 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Athena-Mistral-7b-v0.2-GGUF/resolve/main/Athena-Mistral-7b-v0.2.IQ3_M.gguf) | IQ3_M | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/Athena-Mistral-7b-v0.2-GGUF/resolve/main/Athena-Mistral-7b-v0.2.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Athena-Mistral-7b-v0.2-GGUF/resolve/main/Athena-Mistral-7b-v0.2.Q3_K_L.gguf) | Q3_K_L | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/Athena-Mistral-7b-v0.2-GGUF/resolve/main/Athena-Mistral-7b-v0.2.IQ4_XS.gguf) | IQ4_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/Athena-Mistral-7b-v0.2-GGUF/resolve/main/Athena-Mistral-7b-v0.2.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Athena-Mistral-7b-v0.2-GGUF/resolve/main/Athena-Mistral-7b-v0.2.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Athena-Mistral-7b-v0.2-GGUF/resolve/main/Athena-Mistral-7b-v0.2.Q5_K_S.gguf) | Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/Athena-Mistral-7b-v0.2-GGUF/resolve/main/Athena-Mistral-7b-v0.2.Q5_K_M.gguf) | Q5_K_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/Athena-Mistral-7b-v0.2-GGUF/resolve/main/Athena-Mistral-7b-v0.2.Q6_K.gguf) | Q6_K | 6.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Athena-Mistral-7b-v0.2-GGUF/resolve/main/Athena-Mistral-7b-v0.2.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Athena-Mistral-7b-v0.2-GGUF/resolve/main/Athena-Mistral-7b-v0.2.f16.gguf) | f16 | 14.6 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
hgnoi/4zorKjh2MTNaftD3 | hgnoi | 2024-05-24T01:05:26Z | 139 | 0 | transformers | [
"transformers",
"safetensors",
"stablelm",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-24T01:03:48Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information is needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Augusto777/vit-base-patch16-224-RU3-40 | Augusto777 | 2024-05-24T01:04:43Z | 218 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:google/vit-base-patch16-224",
"base_model:finetune:google/vit-base-patch16-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2024-05-24T00:36:28Z | ---
license: apache-2.0
base_model: google/vit-base-patch16-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: vit-base-patch16-224-RU3-40
  results:
  - task:
      name: Image Classification
      type: image-classification
    dataset:
      name: imagefolder
      type: imagefolder
      config: default
      split: validation
      args: default
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.8333333333333334
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-patch16-224-RU3-40
This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5667
- Accuracy: 0.8333
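A minimal inference sketch with the image-classification `pipeline`; the image path is a placeholder:

```python
# Sketch: classifying an image with the fine-tuned ViT checkpoint.
from transformers import pipeline

classifier = pipeline(
    "image-classification",
    model="Augusto777/vit-base-patch16-224-RU3-40",
)
print(classifier("example.jpg"))  # placeholder; any local image path or URL works
```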
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5.5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.05
- num_epochs: 40
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.3821 | 0.99 | 19 | 1.3119 | 0.4833 |
| 1.2698 | 1.97 | 38 | 1.0852 | 0.6167 |
| 0.9819 | 2.96 | 57 | 0.8757 | 0.7 |
| 0.6671 | 4.0 | 77 | 0.7689 | 0.7333 |
| 0.4248 | 4.99 | 96 | 0.7294 | 0.7167 |
| 0.3005 | 5.97 | 115 | 0.6518 | 0.7833 |
| 0.2035 | 6.96 | 134 | 0.5667 | 0.8333 |
| 0.2195 | 8.0 | 154 | 0.6646 | 0.8333 |
| 0.1654 | 8.99 | 173 | 0.6294 | 0.8167 |
| 0.1581 | 9.97 | 192 | 0.7211 | 0.7833 |
| 0.1338 | 10.96 | 211 | 0.8129 | 0.7833 |
| 0.1188 | 12.0 | 231 | 0.7925 | 0.8167 |
| 0.1179 | 12.99 | 250 | 0.9588 | 0.7667 |
| 0.1017 | 13.97 | 269 | 1.0875 | 0.7167 |
| 0.0845 | 14.96 | 288 | 0.9355 | 0.7 |
| 0.1109 | 16.0 | 308 | 0.9387 | 0.8167 |
| 0.0711 | 16.99 | 327 | 1.1214 | 0.7333 |
| 0.0884 | 17.97 | 346 | 0.9688 | 0.7667 |
| 0.0668 | 18.96 | 365 | 1.0306 | 0.8 |
| 0.0716 | 20.0 | 385 | 1.2653 | 0.7167 |
| 0.0643 | 20.99 | 404 | 0.9894 | 0.7833 |
| 0.0517 | 21.97 | 423 | 1.0439 | 0.7667 |
| 0.0597 | 22.96 | 442 | 1.1470 | 0.7667 |
| 0.0533 | 24.0 | 462 | 1.0848 | 0.7833 |
| 0.0529 | 24.99 | 481 | 1.1481 | 0.75 |
| 0.0524 | 25.97 | 500 | 1.1322 | 0.7333 |
| 0.0525 | 26.96 | 519 | 1.1868 | 0.7333 |
| 0.0517 | 28.0 | 539 | 1.1561 | 0.7167 |
| 0.0309 | 28.99 | 558 | 1.0562 | 0.7833 |
| 0.0403 | 29.97 | 577 | 1.2901 | 0.7333 |
| 0.0392 | 30.96 | 596 | 1.1295 | 0.7667 |
| 0.0404 | 32.0 | 616 | 1.1198 | 0.7667 |
| 0.0381 | 32.99 | 635 | 1.2986 | 0.7167 |
| 0.0262 | 33.97 | 654 | 1.1655 | 0.75 |
| 0.0354 | 34.96 | 673 | 1.1223 | 0.7833 |
| 0.0224 | 36.0 | 693 | 1.1679 | 0.7833 |
| 0.0244 | 36.99 | 712 | 1.0999 | 0.8167 |
| 0.0368 | 37.97 | 731 | 1.1213 | 0.7833 |
| 0.0199 | 38.96 | 750 | 1.1003 | 0.8 |
| 0.028 | 39.48 | 760 | 1.0989 | 0.8 |
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.2+cu118
- Datasets 2.16.1
- Tokenizers 0.15.0
|
mradermacher/MAmmoTH2-8x7B-GGUF | mradermacher | 2024-05-24T01:04:38Z | 54 | 0 | transformers | [
"transformers",
"gguf",
"en",
"dataset:TIGER-Lab/WebInstructSub",
"base_model:TIGER-Lab/MAmmoTH2-8x7B",
"base_model:quantized:TIGER-Lab/MAmmoTH2-8x7B",
"license:mit",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-05-20T20:30:53Z | ---
base_model: TIGER-Lab/MAmmoTH2-8x7B
datasets:
- TIGER-Lab/WebInstructSub
language:
- en
library_name: transformers
license: mit
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/TIGER-Lab/MAmmoTH2-8x7B
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/MAmmoTH2-8x7B-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/MAmmoTH2-8x7B-GGUF/resolve/main/MAmmoTH2-8x7B.Q2_K.gguf) | Q2_K | 17.4 | |
| [GGUF](https://huggingface.co/mradermacher/MAmmoTH2-8x7B-GGUF/resolve/main/MAmmoTH2-8x7B.IQ3_XS.gguf) | IQ3_XS | 19.5 | |
| [GGUF](https://huggingface.co/mradermacher/MAmmoTH2-8x7B-GGUF/resolve/main/MAmmoTH2-8x7B.IQ3_S.gguf) | IQ3_S | 20.5 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/MAmmoTH2-8x7B-GGUF/resolve/main/MAmmoTH2-8x7B.Q3_K_S.gguf) | Q3_K_S | 20.5 | |
| [GGUF](https://huggingface.co/mradermacher/MAmmoTH2-8x7B-GGUF/resolve/main/MAmmoTH2-8x7B.IQ3_M.gguf) | IQ3_M | 21.5 | |
| [GGUF](https://huggingface.co/mradermacher/MAmmoTH2-8x7B-GGUF/resolve/main/MAmmoTH2-8x7B.Q3_K_M.gguf) | Q3_K_M | 22.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/MAmmoTH2-8x7B-GGUF/resolve/main/MAmmoTH2-8x7B.Q3_K_L.gguf) | Q3_K_L | 24.3 | |
| [GGUF](https://huggingface.co/mradermacher/MAmmoTH2-8x7B-GGUF/resolve/main/MAmmoTH2-8x7B.IQ4_XS.gguf) | IQ4_XS | 25.5 | |
| [GGUF](https://huggingface.co/mradermacher/MAmmoTH2-8x7B-GGUF/resolve/main/MAmmoTH2-8x7B.Q4_K_S.gguf) | Q4_K_S | 26.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/MAmmoTH2-8x7B-GGUF/resolve/main/MAmmoTH2-8x7B.Q4_K_M.gguf) | Q4_K_M | 28.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/MAmmoTH2-8x7B-GGUF/resolve/main/MAmmoTH2-8x7B.Q5_K_S.gguf) | Q5_K_S | 32.3 | |
| [GGUF](https://huggingface.co/mradermacher/MAmmoTH2-8x7B-GGUF/resolve/main/MAmmoTH2-8x7B.Q5_K_M.gguf) | Q5_K_M | 33.3 | |
| [GGUF](https://huggingface.co/mradermacher/MAmmoTH2-8x7B-GGUF/resolve/main/MAmmoTH2-8x7B.Q6_K.gguf) | Q6_K | 38.5 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/MAmmoTH2-8x7B-GGUF/resolve/main/MAmmoTH2-8x7B.Q8_0.gguf) | Q8_0 | 49.7 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/Deep-Miqu-103B-i1-GGUF | mradermacher | 2024-05-24T01:04:26Z | 42 | 1 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"license:other",
"endpoints_compatible",
"region:us",
"imatrix"
] | null | 2024-05-20T21:35:00Z | ---
base_model: jukofyork/Deep-Miqu-103B
language:
- en
library_name: transformers
license: other
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
weighted/imatrix quants of https://huggingface.co/jukofyork/Deep-Miqu-103B
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Deep-Miqu-103B-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Deep-Miqu-103B-i1-GGUF/resolve/main/Deep-Miqu-103B.i1-IQ1_S.gguf) | i1-IQ1_S | 21.8 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Deep-Miqu-103B-i1-GGUF/resolve/main/Deep-Miqu-103B.i1-IQ1_M.gguf) | i1-IQ1_M | 23.9 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Deep-Miqu-103B-i1-GGUF/resolve/main/Deep-Miqu-103B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 27.4 | |
| [GGUF](https://huggingface.co/mradermacher/Deep-Miqu-103B-i1-GGUF/resolve/main/Deep-Miqu-103B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 30.5 | |
| [GGUF](https://huggingface.co/mradermacher/Deep-Miqu-103B-i1-GGUF/resolve/main/Deep-Miqu-103B.i1-IQ2_S.gguf) | i1-IQ2_S | 32.0 | |
| [GGUF](https://huggingface.co/mradermacher/Deep-Miqu-103B-i1-GGUF/resolve/main/Deep-Miqu-103B.i1-IQ2_M.gguf) | i1-IQ2_M | 34.8 | |
| [GGUF](https://huggingface.co/mradermacher/Deep-Miqu-103B-i1-GGUF/resolve/main/Deep-Miqu-103B.i1-Q2_K.gguf) | i1-Q2_K | 38.0 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Deep-Miqu-103B-i1-GGUF/resolve/main/Deep-Miqu-103B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 39.7 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Deep-Miqu-103B-i1-GGUF/resolve/main/Deep-Miqu-103B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 42.3 | |
| [GGUF](https://huggingface.co/mradermacher/Deep-Miqu-103B-i1-GGUF/resolve/main/Deep-Miqu-103B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 44.6 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Deep-Miqu-103B-i1-GGUF/resolve/main/Deep-Miqu-103B.i1-IQ3_S.gguf) | i1-IQ3_S | 44.7 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Deep-Miqu-103B-i1-GGUF/resolve/main/Deep-Miqu-103B.i1-IQ3_M.gguf) | i1-IQ3_M | 46.2 | |
| [GGUF](https://huggingface.co/mradermacher/Deep-Miqu-103B-i1-GGUF/resolve/main/Deep-Miqu-103B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 49.7 | IQ3_S probably better |
| [PART 1](https://huggingface.co/mradermacher/Deep-Miqu-103B-i1-GGUF/resolve/main/Deep-Miqu-103B.i1-Q3_K_L.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Deep-Miqu-103B-i1-GGUF/resolve/main/Deep-Miqu-103B.i1-Q3_K_L.gguf.part2of2) | i1-Q3_K_L | 54.2 | IQ3_M probably better |
| [PART 1](https://huggingface.co/mradermacher/Deep-Miqu-103B-i1-GGUF/resolve/main/Deep-Miqu-103B.i1-IQ4_XS.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Deep-Miqu-103B-i1-GGUF/resolve/main/Deep-Miqu-103B.i1-IQ4_XS.gguf.part2of2) | i1-IQ4_XS | 55.2 | |
| [PART 1](https://huggingface.co/mradermacher/Deep-Miqu-103B-i1-GGUF/resolve/main/Deep-Miqu-103B.i1-Q4_0.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Deep-Miqu-103B-i1-GGUF/resolve/main/Deep-Miqu-103B.i1-Q4_0.gguf.part2of2) | i1-Q4_0 | 58.4 | fast, low quality |
| [PART 1](https://huggingface.co/mradermacher/Deep-Miqu-103B-i1-GGUF/resolve/main/Deep-Miqu-103B.i1-Q4_K_S.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Deep-Miqu-103B-i1-GGUF/resolve/main/Deep-Miqu-103B.i1-Q4_K_S.gguf.part2of2) | i1-Q4_K_S | 58.7 | optimal size/speed/quality |
| [PART 1](https://huggingface.co/mradermacher/Deep-Miqu-103B-i1-GGUF/resolve/main/Deep-Miqu-103B.i1-Q4_K_M.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Deep-Miqu-103B-i1-GGUF/resolve/main/Deep-Miqu-103B.i1-Q4_K_M.gguf.part2of2) | i1-Q4_K_M | 62.0 | fast, recommended |
| [PART 1](https://huggingface.co/mradermacher/Deep-Miqu-103B-i1-GGUF/resolve/main/Deep-Miqu-103B.i1-Q5_K_S.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Deep-Miqu-103B-i1-GGUF/resolve/main/Deep-Miqu-103B.i1-Q5_K_S.gguf.part2of2) | i1-Q5_K_S | 71.1 | |
| [PART 1](https://huggingface.co/mradermacher/Deep-Miqu-103B-i1-GGUF/resolve/main/Deep-Miqu-103B.i1-Q5_K_M.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Deep-Miqu-103B-i1-GGUF/resolve/main/Deep-Miqu-103B.i1-Q5_K_M.gguf.part2of2) | i1-Q5_K_M | 73.0 | |
| [PART 1](https://huggingface.co/mradermacher/Deep-Miqu-103B-i1-GGUF/resolve/main/Deep-Miqu-103B.i1-Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Deep-Miqu-103B-i1-GGUF/resolve/main/Deep-Miqu-103B.i1-Q6_K.gguf.part2of2) | i1-Q6_K | 84.8 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/MAmmoTH2-8x7B-Plus-GGUF | mradermacher | 2024-05-24T01:04:21Z | 34 | 0 | transformers | [
"transformers",
"gguf",
"en",
"dataset:TIGER-Lab/WebInstructSub",
"base_model:TIGER-Lab/MAmmoTH2-8x7B-Plus",
"base_model:quantized:TIGER-Lab/MAmmoTH2-8x7B-Plus",
"license:mit",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-05-20T22:42:53Z | ---
base_model: TIGER-Lab/MAmmoTH2-8x7B-Plus
datasets:
- TIGER-Lab/WebInstructSub
language:
- en
library_name: transformers
license: mit
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/TIGER-Lab/MAmmoTH2-8x7B-Plus
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/MAmmoTH2-8x7B-Plus-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/MAmmoTH2-8x7B-Plus-GGUF/resolve/main/MAmmoTH2-8x7B-Plus.Q2_K.gguf) | Q2_K | 17.4 | |
| [GGUF](https://huggingface.co/mradermacher/MAmmoTH2-8x7B-Plus-GGUF/resolve/main/MAmmoTH2-8x7B-Plus.IQ3_XS.gguf) | IQ3_XS | 19.5 | |
| [GGUF](https://huggingface.co/mradermacher/MAmmoTH2-8x7B-Plus-GGUF/resolve/main/MAmmoTH2-8x7B-Plus.IQ3_S.gguf) | IQ3_S | 20.5 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/MAmmoTH2-8x7B-Plus-GGUF/resolve/main/MAmmoTH2-8x7B-Plus.Q3_K_S.gguf) | Q3_K_S | 20.5 | |
| [GGUF](https://huggingface.co/mradermacher/MAmmoTH2-8x7B-Plus-GGUF/resolve/main/MAmmoTH2-8x7B-Plus.IQ3_M.gguf) | IQ3_M | 21.5 | |
| [GGUF](https://huggingface.co/mradermacher/MAmmoTH2-8x7B-Plus-GGUF/resolve/main/MAmmoTH2-8x7B-Plus.Q3_K_M.gguf) | Q3_K_M | 22.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/MAmmoTH2-8x7B-Plus-GGUF/resolve/main/MAmmoTH2-8x7B-Plus.Q3_K_L.gguf) | Q3_K_L | 24.3 | |
| [GGUF](https://huggingface.co/mradermacher/MAmmoTH2-8x7B-Plus-GGUF/resolve/main/MAmmoTH2-8x7B-Plus.IQ4_XS.gguf) | IQ4_XS | 25.5 | |
| [GGUF](https://huggingface.co/mradermacher/MAmmoTH2-8x7B-Plus-GGUF/resolve/main/MAmmoTH2-8x7B-Plus.Q4_K_S.gguf) | Q4_K_S | 26.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/MAmmoTH2-8x7B-Plus-GGUF/resolve/main/MAmmoTH2-8x7B-Plus.Q4_K_M.gguf) | Q4_K_M | 28.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/MAmmoTH2-8x7B-Plus-GGUF/resolve/main/MAmmoTH2-8x7B-Plus.Q5_K_S.gguf) | Q5_K_S | 32.3 | |
| [GGUF](https://huggingface.co/mradermacher/MAmmoTH2-8x7B-Plus-GGUF/resolve/main/MAmmoTH2-8x7B-Plus.Q5_K_M.gguf) | Q5_K_M | 33.3 | |
| [GGUF](https://huggingface.co/mradermacher/MAmmoTH2-8x7B-Plus-GGUF/resolve/main/MAmmoTH2-8x7B-Plus.Q6_K.gguf) | Q6_K | 38.5 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/MAmmoTH2-8x7B-Plus-GGUF/resolve/main/MAmmoTH2-8x7B-Plus.Q8_0.gguf) | Q8_0 | 49.7 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/L3_SnowStorm_4x8B-i1-GGUF | mradermacher | 2024-05-24T01:04:17Z | 130 | 4 | transformers | [
"transformers",
"gguf",
"moe",
"en",
"base_model:xxx777xxxASD/L3_SnowStorm_4x8B",
"base_model:quantized:xxx777xxxASD/L3_SnowStorm_4x8B",
"license:llama3",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2024-05-20T23:25:51Z | ---
base_model: xxx777xxxASD/L3_SnowStorm_4x8B
language:
- en
library_name: transformers
license: llama3
quantized_by: mradermacher
tags:
- moe
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hfhfix -->
<!-- ### vocab_type: -->
weighted/imatrix quants of https://huggingface.co/xxx777xxxASD/L3_SnowStorm_4x8B
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/L3_SnowStorm_4x8B-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/L3_SnowStorm_4x8B-i1-GGUF/resolve/main/L3_SnowStorm_4x8B.i1-IQ1_S.gguf) | i1-IQ1_S | 5.5 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/L3_SnowStorm_4x8B-i1-GGUF/resolve/main/L3_SnowStorm_4x8B.i1-IQ1_M.gguf) | i1-IQ1_M | 6.0 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/L3_SnowStorm_4x8B-i1-GGUF/resolve/main/L3_SnowStorm_4x8B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 6.9 | |
| [GGUF](https://huggingface.co/mradermacher/L3_SnowStorm_4x8B-i1-GGUF/resolve/main/L3_SnowStorm_4x8B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 7.6 | |
| [GGUF](https://huggingface.co/mradermacher/L3_SnowStorm_4x8B-i1-GGUF/resolve/main/L3_SnowStorm_4x8B.i1-IQ2_S.gguf) | i1-IQ2_S | 7.8 | |
| [GGUF](https://huggingface.co/mradermacher/L3_SnowStorm_4x8B-i1-GGUF/resolve/main/L3_SnowStorm_4x8B.i1-IQ2_M.gguf) | i1-IQ2_M | 8.5 | |
| [GGUF](https://huggingface.co/mradermacher/L3_SnowStorm_4x8B-i1-GGUF/resolve/main/L3_SnowStorm_4x8B.i1-Q2_K.gguf) | i1-Q2_K | 9.4 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/L3_SnowStorm_4x8B-i1-GGUF/resolve/main/L3_SnowStorm_4x8B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 9.8 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/L3_SnowStorm_4x8B-i1-GGUF/resolve/main/L3_SnowStorm_4x8B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 10.5 | |
| [GGUF](https://huggingface.co/mradermacher/L3_SnowStorm_4x8B-i1-GGUF/resolve/main/L3_SnowStorm_4x8B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 11.0 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/L3_SnowStorm_4x8B-i1-GGUF/resolve/main/L3_SnowStorm_4x8B.i1-IQ3_S.gguf) | i1-IQ3_S | 11.1 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/L3_SnowStorm_4x8B-i1-GGUF/resolve/main/L3_SnowStorm_4x8B.i1-IQ3_M.gguf) | i1-IQ3_M | 11.2 | |
| [GGUF](https://huggingface.co/mradermacher/L3_SnowStorm_4x8B-i1-GGUF/resolve/main/L3_SnowStorm_4x8B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 12.2 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/L3_SnowStorm_4x8B-i1-GGUF/resolve/main/L3_SnowStorm_4x8B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 13.1 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/L3_SnowStorm_4x8B-i1-GGUF/resolve/main/L3_SnowStorm_4x8B.i1-IQ4_XS.gguf) | i1-IQ4_XS | 13.5 | |
| [GGUF](https://huggingface.co/mradermacher/L3_SnowStorm_4x8B-i1-GGUF/resolve/main/L3_SnowStorm_4x8B.i1-Q4_0.gguf) | i1-Q4_0 | 14.3 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/L3_SnowStorm_4x8B-i1-GGUF/resolve/main/L3_SnowStorm_4x8B.i1-Q4_K_S.gguf) | i1-Q4_K_S | 14.4 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/L3_SnowStorm_4x8B-i1-GGUF/resolve/main/L3_SnowStorm_4x8B.i1-Q4_K_M.gguf) | i1-Q4_K_M | 15.3 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/L3_SnowStorm_4x8B-i1-GGUF/resolve/main/L3_SnowStorm_4x8B.i1-Q5_K_S.gguf) | i1-Q5_K_S | 17.3 | |
| [GGUF](https://huggingface.co/mradermacher/L3_SnowStorm_4x8B-i1-GGUF/resolve/main/L3_SnowStorm_4x8B.i1-Q5_K_M.gguf) | i1-Q5_K_M | 17.8 | |
| [GGUF](https://huggingface.co/mradermacher/L3_SnowStorm_4x8B-i1-GGUF/resolve/main/L3_SnowStorm_4x8B.i1-Q6_K.gguf) | i1-Q6_K | 20.6 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/MAmmoTH2-8x7B-i1-GGUF | mradermacher | 2024-05-24T01:04:10Z | 38 | 0 | transformers | [
"transformers",
"gguf",
"en",
"dataset:TIGER-Lab/WebInstructSub",
"base_model:TIGER-Lab/MAmmoTH2-8x7B",
"base_model:quantized:TIGER-Lab/MAmmoTH2-8x7B",
"license:mit",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2024-05-21T09:28:35Z | ---
base_model: TIGER-Lab/MAmmoTH2-8x7B
datasets:
- TIGER-Lab/WebInstructSub
language:
- en
library_name: transformers
license: mit
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
weighted/imatrix quants of https://huggingface.co/TIGER-Lab/MAmmoTH2-8x7B
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/MAmmoTH2-8x7B-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
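For a quick start, here is a minimal sketch of loading one of the quants below with the llama-cpp-python bindings (the chosen filename, context size, and generation settings are illustrative assumptions, not part of this release):
```python
# Minimal llama-cpp-python sketch; the filename is one of the quants in the table below.
from llama_cpp import Llama

llm = Llama(
    model_path="MAmmoTH2-8x7B.i1-Q4_K_M.gguf",  # downloaded from this repo
    n_ctx=4096,                                 # context window to allocate
)
out = llm("Question: What is 12 * 7? Answer:", max_tokens=64, temperature=0.2)
print(out["choices"][0]["text"])
```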
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/MAmmoTH2-8x7B-i1-GGUF/resolve/main/MAmmoTH2-8x7B.i1-IQ1_S.gguf) | i1-IQ1_S | 9.9 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/MAmmoTH2-8x7B-i1-GGUF/resolve/main/MAmmoTH2-8x7B.i1-IQ1_M.gguf) | i1-IQ1_M | 10.9 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/MAmmoTH2-8x7B-i1-GGUF/resolve/main/MAmmoTH2-8x7B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 12.7 | |
| [GGUF](https://huggingface.co/mradermacher/MAmmoTH2-8x7B-i1-GGUF/resolve/main/MAmmoTH2-8x7B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 14.0 | |
| [GGUF](https://huggingface.co/mradermacher/MAmmoTH2-8x7B-i1-GGUF/resolve/main/MAmmoTH2-8x7B.i1-IQ2_S.gguf) | i1-IQ2_S | 14.2 | |
| [GGUF](https://huggingface.co/mradermacher/MAmmoTH2-8x7B-i1-GGUF/resolve/main/MAmmoTH2-8x7B.i1-IQ2_M.gguf) | i1-IQ2_M | 15.6 | |
| [GGUF](https://huggingface.co/mradermacher/MAmmoTH2-8x7B-i1-GGUF/resolve/main/MAmmoTH2-8x7B.i1-Q2_K.gguf) | i1-Q2_K | 17.4 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/MAmmoTH2-8x7B-i1-GGUF/resolve/main/MAmmoTH2-8x7B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 18.3 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/MAmmoTH2-8x7B-i1-GGUF/resolve/main/MAmmoTH2-8x7B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 19.5 | |
| [GGUF](https://huggingface.co/mradermacher/MAmmoTH2-8x7B-i1-GGUF/resolve/main/MAmmoTH2-8x7B.i1-IQ3_S.gguf) | i1-IQ3_S | 20.5 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/MAmmoTH2-8x7B-i1-GGUF/resolve/main/MAmmoTH2-8x7B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 20.5 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/MAmmoTH2-8x7B-i1-GGUF/resolve/main/MAmmoTH2-8x7B.i1-IQ3_M.gguf) | i1-IQ3_M | 21.5 | |
| [GGUF](https://huggingface.co/mradermacher/MAmmoTH2-8x7B-i1-GGUF/resolve/main/MAmmoTH2-8x7B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 22.6 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/MAmmoTH2-8x7B-i1-GGUF/resolve/main/MAmmoTH2-8x7B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 24.3 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/MAmmoTH2-8x7B-i1-GGUF/resolve/main/MAmmoTH2-8x7B.i1-IQ4_XS.gguf) | i1-IQ4_XS | 25.2 | |
| [GGUF](https://huggingface.co/mradermacher/MAmmoTH2-8x7B-i1-GGUF/resolve/main/MAmmoTH2-8x7B.i1-Q4_0.gguf) | i1-Q4_0 | 26.7 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/MAmmoTH2-8x7B-i1-GGUF/resolve/main/MAmmoTH2-8x7B.i1-Q4_K_S.gguf) | i1-Q4_K_S | 26.8 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/MAmmoTH2-8x7B-i1-GGUF/resolve/main/MAmmoTH2-8x7B.i1-Q4_K_M.gguf) | i1-Q4_K_M | 28.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/MAmmoTH2-8x7B-i1-GGUF/resolve/main/MAmmoTH2-8x7B.i1-Q5_K_S.gguf) | i1-Q5_K_S | 32.3 | |
| [GGUF](https://huggingface.co/mradermacher/MAmmoTH2-8x7B-i1-GGUF/resolve/main/MAmmoTH2-8x7B.i1-Q5_K_M.gguf) | i1-Q5_K_M | 33.3 | |
| [GGUF](https://huggingface.co/mradermacher/MAmmoTH2-8x7B-i1-GGUF/resolve/main/MAmmoTH2-8x7B.i1-Q6_K.gguf) | i1-Q6_K | 38.5 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/Llama-3-70Bx2-MOE-i1-GGUF | mradermacher | 2024-05-24T01:04:04Z | 17 | 1 | transformers | [
"transformers",
"gguf",
"en",
"base_model:cloudyu/Llama-3-70Bx2-MOE",
"base_model:quantized:cloudyu/Llama-3-70Bx2-MOE",
"license:llama3",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2024-05-21T13:05:51Z | ---
base_model: cloudyu/Llama-3-70Bx2-MOE
language:
- en
library_name: transformers
license: llama3
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
weighted/imatrix quants of https://huggingface.co/cloudyu/Llama-3-70Bx2-MOE
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Llama-3-70Bx2-MOE-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
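Several of the larger quants below are split into PART 1 / PART 2 files; the parts just need to be concatenated back into a single .gguf before loading. A minimal sketch, assuming the parts were downloaded into the current directory:
```python
# Stream-concatenate multi-part GGUF downloads back into one file.
import shutil
from pathlib import Path

parts = sorted(Path(".").glob("Llama-3-70Bx2-MOE.i1-Q6_K.gguf.part*"))
assert parts, "download the .part files first"
with open("Llama-3-70Bx2-MOE.i1-Q6_K.gguf", "wb") as out:
    for part in parts:
        with open(part, "rb") as src:
            shutil.copyfileobj(src, out)  # stream each part; avoids holding ~100 GB in RAM
```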
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Llama-3-70Bx2-MOE-i1-GGUF/resolve/main/Llama-3-70Bx2-MOE.i1-IQ1_S.gguf) | i1-IQ1_S | 26.8 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-70Bx2-MOE-i1-GGUF/resolve/main/Llama-3-70Bx2-MOE.i1-IQ1_M.gguf) | i1-IQ1_M | 29.4 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-70Bx2-MOE-i1-GGUF/resolve/main/Llama-3-70Bx2-MOE.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 33.9 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-70Bx2-MOE-i1-GGUF/resolve/main/Llama-3-70Bx2-MOE.i1-IQ2_XS.gguf) | i1-IQ2_XS | 37.6 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-70Bx2-MOE-i1-GGUF/resolve/main/Llama-3-70Bx2-MOE.i1-IQ2_S.gguf) | i1-IQ2_S | 39.0 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-70Bx2-MOE-i1-GGUF/resolve/main/Llama-3-70Bx2-MOE.i1-IQ2_M.gguf) | i1-IQ2_M | 42.5 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-70Bx2-MOE-i1-GGUF/resolve/main/Llama-3-70Bx2-MOE.i1-Q2_K.gguf) | i1-Q2_K | 46.9 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-70Bx2-MOE-i1-GGUF/resolve/main/Llama-3-70Bx2-MOE.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 49.2 | lower quality |
| [PART 1](https://huggingface.co/mradermacher/Llama-3-70Bx2-MOE-i1-GGUF/resolve/main/Llama-3-70Bx2-MOE.i1-IQ3_XS.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Llama-3-70Bx2-MOE-i1-GGUF/resolve/main/Llama-3-70Bx2-MOE.i1-IQ3_XS.gguf.part2of2) | i1-IQ3_XS | 52.3 | |
| [PART 1](https://huggingface.co/mradermacher/Llama-3-70Bx2-MOE-i1-GGUF/resolve/main/Llama-3-70Bx2-MOE.i1-IQ3_S.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Llama-3-70Bx2-MOE-i1-GGUF/resolve/main/Llama-3-70Bx2-MOE.i1-IQ3_S.gguf.part2of2) | i1-IQ3_S | 55.2 | beats Q3_K* |
| [PART 1](https://huggingface.co/mradermacher/Llama-3-70Bx2-MOE-i1-GGUF/resolve/main/Llama-3-70Bx2-MOE.i1-Q3_K_S.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Llama-3-70Bx2-MOE-i1-GGUF/resolve/main/Llama-3-70Bx2-MOE.i1-Q3_K_S.gguf.part2of2) | i1-Q3_K_S | 55.2 | IQ3_XS probably better |
| [PART 1](https://huggingface.co/mradermacher/Llama-3-70Bx2-MOE-i1-GGUF/resolve/main/Llama-3-70Bx2-MOE.i1-IQ3_M.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Llama-3-70Bx2-MOE-i1-GGUF/resolve/main/Llama-3-70Bx2-MOE.i1-IQ3_M.gguf.part2of2) | i1-IQ3_M | 56.6 | |
| [PART 1](https://huggingface.co/mradermacher/Llama-3-70Bx2-MOE-i1-GGUF/resolve/main/Llama-3-70Bx2-MOE.i1-Q3_K_M.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Llama-3-70Bx2-MOE-i1-GGUF/resolve/main/Llama-3-70Bx2-MOE.i1-Q3_K_M.gguf.part2of2) | i1-Q3_K_M | 61.2 | IQ3_S probably better |
| [PART 1](https://huggingface.co/mradermacher/Llama-3-70Bx2-MOE-i1-GGUF/resolve/main/Llama-3-70Bx2-MOE.i1-Q3_K_L.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Llama-3-70Bx2-MOE-i1-GGUF/resolve/main/Llama-3-70Bx2-MOE.i1-Q3_K_L.gguf.part2of2) | i1-Q3_K_L | 66.3 | IQ3_M probably better |
| [PART 1](https://huggingface.co/mradermacher/Llama-3-70Bx2-MOE-i1-GGUF/resolve/main/Llama-3-70Bx2-MOE.i1-IQ4_XS.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Llama-3-70Bx2-MOE-i1-GGUF/resolve/main/Llama-3-70Bx2-MOE.i1-IQ4_XS.gguf.part2of2) | i1-IQ4_XS | 68.0 | |
| [PART 1](https://huggingface.co/mradermacher/Llama-3-70Bx2-MOE-i1-GGUF/resolve/main/Llama-3-70Bx2-MOE.i1-Q4_0.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Llama-3-70Bx2-MOE-i1-GGUF/resolve/main/Llama-3-70Bx2-MOE.i1-Q4_0.gguf.part2of2) | i1-Q4_0 | 72.1 | fast, low quality |
| [PART 1](https://huggingface.co/mradermacher/Llama-3-70Bx2-MOE-i1-GGUF/resolve/main/Llama-3-70Bx2-MOE.i1-Q4_K_S.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Llama-3-70Bx2-MOE-i1-GGUF/resolve/main/Llama-3-70Bx2-MOE.i1-Q4_K_S.gguf.part2of2) | i1-Q4_K_S | 72.5 | optimal size/speed/quality |
| [PART 1](https://huggingface.co/mradermacher/Llama-3-70Bx2-MOE-i1-GGUF/resolve/main/Llama-3-70Bx2-MOE.i1-Q4_K_M.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Llama-3-70Bx2-MOE-i1-GGUF/resolve/main/Llama-3-70Bx2-MOE.i1-Q4_K_M.gguf.part2of2) | i1-Q4_K_M | 76.8 | fast, recommended |
| [PART 1](https://huggingface.co/mradermacher/Llama-3-70Bx2-MOE-i1-GGUF/resolve/main/Llama-3-70Bx2-MOE.i1-Q5_K_S.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Llama-3-70Bx2-MOE-i1-GGUF/resolve/main/Llama-3-70Bx2-MOE.i1-Q5_K_S.gguf.part2of2) | i1-Q5_K_S | 87.5 | |
| [PART 1](https://huggingface.co/mradermacher/Llama-3-70Bx2-MOE-i1-GGUF/resolve/main/Llama-3-70Bx2-MOE.i1-Q5_K_M.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Llama-3-70Bx2-MOE-i1-GGUF/resolve/main/Llama-3-70Bx2-MOE.i1-Q5_K_M.gguf.part2of2) | i1-Q5_K_M | 90.1 | |
| [PART 1](https://huggingface.co/mradermacher/Llama-3-70Bx2-MOE-i1-GGUF/resolve/main/Llama-3-70Bx2-MOE.i1-Q6_K.gguf.part1of3) [PART 2](https://huggingface.co/mradermacher/Llama-3-70Bx2-MOE-i1-GGUF/resolve/main/Llama-3-70Bx2-MOE.i1-Q6_K.gguf.part2of3) [PART 3](https://huggingface.co/mradermacher/Llama-3-70Bx2-MOE-i1-GGUF/resolve/main/Llama-3-70Bx2-MOE.i1-Q6_K.gguf.part3of3) | i1-Q6_K | 104.2 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/Smaug-Llama-3-70B-Instruct-abliterated-v3-i1-GGUF | mradermacher | 2024-05-24T01:03:40Z | 136 | 5 | transformers | [
"transformers",
"gguf",
"en",
"base_model:failspy/Smaug-Llama-3-70B-Instruct-abliterated-v3",
"base_model:quantized:failspy/Smaug-Llama-3-70B-Instruct-abliterated-v3",
"license:llama3",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2024-05-22T17:20:38Z | ---
base_model: failspy/Smaug-Llama-3-70B-Instruct-abliterated-v3
language:
- en
library_name: transformers
license: llama3
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hfhfix -->
<!-- ### vocab_type: -->
weighted/imatrix quants of https://huggingface.co/failspy/Smaug-Llama-3-70B-Instruct-abliterated-v3
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Smaug-Llama-3-70B-Instruct-abliterated-v3-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Smaug-Llama-3-70B-Instruct-abliterated-v3-i1-GGUF/resolve/main/Smaug-Llama-3-70B-Instruct-abliterated-v3.i1-IQ1_S.gguf) | i1-IQ1_S | 15.4 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Smaug-Llama-3-70B-Instruct-abliterated-v3-i1-GGUF/resolve/main/Smaug-Llama-3-70B-Instruct-abliterated-v3.i1-IQ1_M.gguf) | i1-IQ1_M | 16.9 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Smaug-Llama-3-70B-Instruct-abliterated-v3-i1-GGUF/resolve/main/Smaug-Llama-3-70B-Instruct-abliterated-v3.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 19.2 | |
| [GGUF](https://huggingface.co/mradermacher/Smaug-Llama-3-70B-Instruct-abliterated-v3-i1-GGUF/resolve/main/Smaug-Llama-3-70B-Instruct-abliterated-v3.i1-IQ2_XS.gguf) | i1-IQ2_XS | 21.2 | |
| [GGUF](https://huggingface.co/mradermacher/Smaug-Llama-3-70B-Instruct-abliterated-v3-i1-GGUF/resolve/main/Smaug-Llama-3-70B-Instruct-abliterated-v3.i1-IQ2_S.gguf) | i1-IQ2_S | 22.3 | |
| [GGUF](https://huggingface.co/mradermacher/Smaug-Llama-3-70B-Instruct-abliterated-v3-i1-GGUF/resolve/main/Smaug-Llama-3-70B-Instruct-abliterated-v3.i1-IQ2_M.gguf) | i1-IQ2_M | 24.2 | |
| [GGUF](https://huggingface.co/mradermacher/Smaug-Llama-3-70B-Instruct-abliterated-v3-i1-GGUF/resolve/main/Smaug-Llama-3-70B-Instruct-abliterated-v3.i1-Q2_K.gguf) | i1-Q2_K | 26.5 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Smaug-Llama-3-70B-Instruct-abliterated-v3-i1-GGUF/resolve/main/Smaug-Llama-3-70B-Instruct-abliterated-v3.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 27.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Smaug-Llama-3-70B-Instruct-abliterated-v3-i1-GGUF/resolve/main/Smaug-Llama-3-70B-Instruct-abliterated-v3.i1-IQ3_XS.gguf) | i1-IQ3_XS | 29.4 | |
| [GGUF](https://huggingface.co/mradermacher/Smaug-Llama-3-70B-Instruct-abliterated-v3-i1-GGUF/resolve/main/Smaug-Llama-3-70B-Instruct-abliterated-v3.i1-IQ3_S.gguf) | i1-IQ3_S | 31.0 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Smaug-Llama-3-70B-Instruct-abliterated-v3-i1-GGUF/resolve/main/Smaug-Llama-3-70B-Instruct-abliterated-v3.i1-Q3_K_S.gguf) | i1-Q3_K_S | 31.0 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Smaug-Llama-3-70B-Instruct-abliterated-v3-i1-GGUF/resolve/main/Smaug-Llama-3-70B-Instruct-abliterated-v3.i1-IQ3_M.gguf) | i1-IQ3_M | 32.0 | |
| [GGUF](https://huggingface.co/mradermacher/Smaug-Llama-3-70B-Instruct-abliterated-v3-i1-GGUF/resolve/main/Smaug-Llama-3-70B-Instruct-abliterated-v3.i1-Q3_K_M.gguf) | i1-Q3_K_M | 34.4 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Smaug-Llama-3-70B-Instruct-abliterated-v3-i1-GGUF/resolve/main/Smaug-Llama-3-70B-Instruct-abliterated-v3.i1-Q3_K_L.gguf) | i1-Q3_K_L | 37.2 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Smaug-Llama-3-70B-Instruct-abliterated-v3-i1-GGUF/resolve/main/Smaug-Llama-3-70B-Instruct-abliterated-v3.i1-IQ4_XS.gguf) | i1-IQ4_XS | 38.0 | |
| [GGUF](https://huggingface.co/mradermacher/Smaug-Llama-3-70B-Instruct-abliterated-v3-i1-GGUF/resolve/main/Smaug-Llama-3-70B-Instruct-abliterated-v3.i1-Q4_0.gguf) | i1-Q4_0 | 40.2 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Smaug-Llama-3-70B-Instruct-abliterated-v3-i1-GGUF/resolve/main/Smaug-Llama-3-70B-Instruct-abliterated-v3.i1-Q4_K_S.gguf) | i1-Q4_K_S | 40.4 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Smaug-Llama-3-70B-Instruct-abliterated-v3-i1-GGUF/resolve/main/Smaug-Llama-3-70B-Instruct-abliterated-v3.i1-Q4_K_M.gguf) | i1-Q4_K_M | 42.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Smaug-Llama-3-70B-Instruct-abliterated-v3-i1-GGUF/resolve/main/Smaug-Llama-3-70B-Instruct-abliterated-v3.i1-Q5_K_S.gguf) | i1-Q5_K_S | 48.8 | |
| [GGUF](https://huggingface.co/mradermacher/Smaug-Llama-3-70B-Instruct-abliterated-v3-i1-GGUF/resolve/main/Smaug-Llama-3-70B-Instruct-abliterated-v3.i1-Q5_K_M.gguf) | i1-Q5_K_M | 50.0 | |
| [PART 1](https://huggingface.co/mradermacher/Smaug-Llama-3-70B-Instruct-abliterated-v3-i1-GGUF/resolve/main/Smaug-Llama-3-70B-Instruct-abliterated-v3.i1-Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Smaug-Llama-3-70B-Instruct-abliterated-v3-i1-GGUF/resolve/main/Smaug-Llama-3-70B-Instruct-abliterated-v3.i1-Q6_K.gguf.part2of2) | i1-Q6_K | 58.0 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
RichardErkhov/beberik_-_Nyxene-v1-11B-gguf | RichardErkhov | 2024-05-24T01:02:20Z | 5 | 0 | null | [
"gguf",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-05-23T20:58:33Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Nyxene-v1-11B - GGUF
- Model creator: https://huggingface.co/beberik/
- Original model: https://huggingface.co/beberik/Nyxene-v1-11B/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [Nyxene-v1-11B.Q2_K.gguf](https://huggingface.co/RichardErkhov/beberik_-_Nyxene-v1-11B-gguf/blob/main/Nyxene-v1-11B.Q2_K.gguf) | Q2_K | 3.73GB |
| [Nyxene-v1-11B.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/beberik_-_Nyxene-v1-11B-gguf/blob/main/Nyxene-v1-11B.IQ3_XS.gguf) | IQ3_XS | 4.14GB |
| [Nyxene-v1-11B.IQ3_S.gguf](https://huggingface.co/RichardErkhov/beberik_-_Nyxene-v1-11B-gguf/blob/main/Nyxene-v1-11B.IQ3_S.gguf) | IQ3_S | 4.37GB |
| [Nyxene-v1-11B.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/beberik_-_Nyxene-v1-11B-gguf/blob/main/Nyxene-v1-11B.Q3_K_S.gguf) | Q3_K_S | 4.34GB |
| [Nyxene-v1-11B.IQ3_M.gguf](https://huggingface.co/RichardErkhov/beberik_-_Nyxene-v1-11B-gguf/blob/main/Nyxene-v1-11B.IQ3_M.gguf) | IQ3_M | 4.51GB |
| [Nyxene-v1-11B.Q3_K.gguf](https://huggingface.co/RichardErkhov/beberik_-_Nyxene-v1-11B-gguf/blob/main/Nyxene-v1-11B.Q3_K.gguf) | Q3_K | 4.84GB |
| [Nyxene-v1-11B.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/beberik_-_Nyxene-v1-11B-gguf/blob/main/Nyxene-v1-11B.Q3_K_M.gguf) | Q3_K_M | 4.84GB |
| [Nyxene-v1-11B.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/beberik_-_Nyxene-v1-11B-gguf/blob/main/Nyxene-v1-11B.Q3_K_L.gguf) | Q3_K_L | 5.26GB |
| [Nyxene-v1-11B.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/beberik_-_Nyxene-v1-11B-gguf/blob/main/Nyxene-v1-11B.IQ4_XS.gguf) | IQ4_XS | 5.43GB |
| [Nyxene-v1-11B.Q4_0.gguf](https://huggingface.co/RichardErkhov/beberik_-_Nyxene-v1-11B-gguf/blob/main/Nyxene-v1-11B.Q4_0.gguf) | Q4_0 | 5.66GB |
| [Nyxene-v1-11B.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/beberik_-_Nyxene-v1-11B-gguf/blob/main/Nyxene-v1-11B.IQ4_NL.gguf) | IQ4_NL | 5.72GB |
| [Nyxene-v1-11B.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/beberik_-_Nyxene-v1-11B-gguf/blob/main/Nyxene-v1-11B.Q4_K_S.gguf) | Q4_K_S | 5.7GB |
| [Nyxene-v1-11B.Q4_K.gguf](https://huggingface.co/RichardErkhov/beberik_-_Nyxene-v1-11B-gguf/blob/main/Nyxene-v1-11B.Q4_K.gguf) | Q4_K | 6.02GB |
| [Nyxene-v1-11B.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/beberik_-_Nyxene-v1-11B-gguf/blob/main/Nyxene-v1-11B.Q4_K_M.gguf) | Q4_K_M | 6.02GB |
| [Nyxene-v1-11B.Q4_1.gguf](https://huggingface.co/RichardErkhov/beberik_-_Nyxene-v1-11B-gguf/blob/main/Nyxene-v1-11B.Q4_1.gguf) | Q4_1 | 6.27GB |
| [Nyxene-v1-11B.Q5_0.gguf](https://huggingface.co/RichardErkhov/beberik_-_Nyxene-v1-11B-gguf/blob/main/Nyxene-v1-11B.Q5_0.gguf) | Q5_0 | 6.89GB |
| [Nyxene-v1-11B.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/beberik_-_Nyxene-v1-11B-gguf/blob/main/Nyxene-v1-11B.Q5_K_S.gguf) | Q5_K_S | 6.89GB |
| [Nyxene-v1-11B.Q5_K.gguf](https://huggingface.co/RichardErkhov/beberik_-_Nyxene-v1-11B-gguf/blob/main/Nyxene-v1-11B.Q5_K.gguf) | Q5_K | 7.08GB |
| [Nyxene-v1-11B.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/beberik_-_Nyxene-v1-11B-gguf/blob/main/Nyxene-v1-11B.Q5_K_M.gguf) | Q5_K_M | 7.08GB |
| [Nyxene-v1-11B.Q5_1.gguf](https://huggingface.co/RichardErkhov/beberik_-_Nyxene-v1-11B-gguf/blob/main/Nyxene-v1-11B.Q5_1.gguf) | Q5_1 | 7.51GB |
| [Nyxene-v1-11B.Q6_K.gguf](https://huggingface.co/RichardErkhov/beberik_-_Nyxene-v1-11B-gguf/blob/main/Nyxene-v1-11B.Q6_K.gguf) | Q6_K | 8.2GB |
| [Nyxene-v1-11B.Q8_0.gguf](https://huggingface.co/RichardErkhov/beberik_-_Nyxene-v1-11B-gguf/blob/main/Nyxene-v1-11B.Q8_0.gguf) | Q8_0 | 10.62GB |
Original model description:
---
license: cc-by-nc-4.0
tags:
- merge
model-index:
- name: Nyxene-v1-11B
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 67.49
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=beberik/Nyxene-v1-11B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 84.52
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=beberik/Nyxene-v1-11B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 65.12
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=beberik/Nyxene-v1-11B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 57.28
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=beberik/Nyxene-v1-11B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 79.01
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=beberik/Nyxene-v1-11B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 52.08
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=beberik/Nyxene-v1-11B
name: Open LLM Leaderboard
---
## Description
This repo contains bf16 files of Nyxene-v1-11B. It is the same as the [previous version](https://huggingface.co/beberik/Nyxene-11B), but built from newer models, repeating the experiments I originally ran with the older ones.
## Model used
- [berkeley-nest/Starling-LM-7B-alpha](https://huggingface.co/berkeley-nest/Starling-LM-7B-alpha)
- [openaccess-ai-collective/DPOpenHermes-7B](https://huggingface.co/openaccess-ai-collective/DPOpenHermes-7B)
- [fblgit/juanako-7b-UNA](https://huggingface.co/fblgit/juanako-7b-UNA)
- [chargoddard/loyal-piano-m7](https://huggingface.co/chargoddard/loyal-piano-m7)
- [argilla/notus-7b-v1](https://huggingface.co/argilla/notus-7b-v1)
I added a new model because, after the same procedure but using zephyr and dolphin, the resulting model turned out to be more creative.
## Prompt template
The best one after further testing is this one:
```
<|system|>
Below is an instruction that describes a task. Write a response that appropriately completes the request.
<|user|>
{prompt}
<|assistant|>
```
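In code, filling that template is straightforward; a small sketch (the helper name and example prompt are illustrative):
```python
# Build a prompt string following the template above.
def build_prompt(user_prompt: str) -> str:
    return (
        "<|system|>\n"
        "Below is an instruction that describes a task. "
        "Write a response that appropriately completes the request.\n"
        "<|user|>\n"
        f"{user_prompt}\n"
        "<|assistant|>\n"
    )

print(build_prompt("Summarize the plot of Hamlet in two sentences."))
```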
## The secret sauce
loyal-piano with 1% of notus:
```
slices:
- sources:
- model: chargoddard/loyal-piano-m7
layer_range: [0, 48]
- model: argilla/notus-7b-v1
layer_range: [0, 48]
merge_method: slerp
base_model: argilla/notus-7b-v1
parameters:
t:
- filter: lm_head
value: [0.75]
- filter: embed_tokens
value: [0.75]
- filter: self_attn
value: [0.75, 0.25]
- filter: mlp
value: [0.25, 0.75]
- filter: layernorm
value: [0.5, 0.5]
- filter: modelnorm
value: [0.75]
- value: 0.99 # fallback for rest of tensors
dtype: bfloat16
```
loyal-piano-juanako-11B:
```
slices:
- sources:
- model: fblgit/juanako-7b-UNA
layer_range: [0, 24]
- sources:
- model: chargoddard/loyal-piano-m7
layer_range: [8, 32]
merge_method: passthrough
dtype: bfloat16
```
Starling-DPOHermes-11B:
```
slices:
- sources:
- model: berkeley-nest/Starling-LM-7B-alpha
layer_range: [0, 24]
- sources:
- model: openaccess-ai-collective/DPOpenHermes-7B
layer_range: [8, 32]
merge_method: passthrough
dtype: bfloat16
```
Nyxene-v1-11B:
```
slices:
- sources:
- model: loyal-piano-juanako-11B
layer_range: [0, 48]
- model: Starling-DPOHermes-11B
layer_range: [0, 48]
merge_method: slerp
base_model: loyal-piano-juanako-11B
parameters:
t:
- filter: lm_head
value: [0.75]
- filter: embed_tokens
value: [0.75]
- filter: self_attn
value: [0.75, 0.25]
- filter: mlp
value: [0.25, 0.75]
- filter: layernorm
value: [0.5, 0.5]
- filter: modelnorm
value: [0.75]
- value: 0.5 # fallback for rest of tensors
dtype: bfloat16
```
I use [mergekit](https://github.com/cg123/mergekit) for all the merges described here.
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_beberik__Nyxene-v1-11B)
| Metric |Value|
|---------------------------------|----:|
|Avg. |67.58|
|AI2 Reasoning Challenge (25-Shot)|67.49|
|HellaSwag (10-Shot) |84.52|
|MMLU (5-Shot) |65.12|
|TruthfulQA (0-shot) |57.28|
|Winogrande (5-shot) |79.01|
|GSM8k (5-shot) |52.08|
|
TwT-6/cr-model-v1 | TwT-6 | 2024-05-24T01:02:20Z | 44 | 2 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"license:cc-by-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-23T07:41:31Z | ---
license: cc-by-4.0
---
My model is a state-of-the-art language processing AI designed to understand and generate human-like text. It leverages deep learning algorithms to engage in a wide range of language tasks, providing users with information, recommendations, and even casual conversation. With a broad knowledge base and nuanced understanding of context, my capabilities enable me to assist with various inquiries and perform complex language-based tasks effectively. |
Klarly/multilingual-MT_FR-ES-IT-PT-RO_CAS-NLP | Klarly | 2024-05-24T00:48:23Z | 31 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"marian",
"text2text-generation",
"generated_from_trainer",
"base_model:Helsinki-NLP/opus-mt-en-roa",
"base_model:finetune:Helsinki-NLP/opus-mt-en-roa",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2024-05-23T16:45:00Z | ---
license: apache-2.0
base_model: Helsinki-NLP/opus-mt-en-roa
tags:
- generated_from_trainer
model-index:
- name: multilingual-MT_FR-ES-IT-PT-RO_CAS-NLP
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# multilingual-MT_FR-ES-IT-PT-RO_CAS-NLP
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-roa](https://huggingface.co/Helsinki-NLP/opus-mt-en-roa) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.41.1
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
|
hgnoi/PNtvstHsvaxTN2DP | hgnoi | 2024-05-24T00:42:03Z | 133 | 0 | transformers | [
"transformers",
"safetensors",
"stablelm",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-24T00:38:12Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Zoyd/CohereForAI_aya-23-35B-8_0bpw_exl2 | Zoyd | 2024-05-24T00:39:00Z | 5 | 0 | transformers | [
"transformers",
"safetensors",
"cohere",
"text-generation",
"conversational",
"en",
"fr",
"de",
"es",
"it",
"pt",
"ja",
"ko",
"zh",
"ar",
"el",
"fa",
"pl",
"id",
"cs",
"he",
"hi",
"nl",
"ro",
"ru",
"tr",
"uk",
"vi",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"8-bit",
"exl2",
"region:us"
] | text-generation | 2024-05-24T00:21:28Z | ---
library_name: transformers
language:
- en
- fr
- de
- es
- it
- pt
- ja
- ko
- zh
- ar
- el
- fa
- pl
- id
- cs
- he
- hi
- nl
- ro
- ru
- tr
- uk
- vi
license: cc-by-nc-4.0
---
**Exllamav2** quant (**exl2** / **8.0 bpw**) made with ExLlamaV2 v0.0.21
Other EXL2 quants:
| **Quant** | **Model Size** | **lm_head** |
| ----- | ---------- | ------- |
|<center>**[2.2](https://huggingface.co/Zoyd/CohereForAI_aya-23-35B-2_2bpw_exl2)**</center> | <center>14760 MB</center> | <center>6</center> |
|<center>**[2.5](https://huggingface.co/Zoyd/CohereForAI_aya-23-35B-2_5bpw_exl2)**</center> | <center>16011 MB</center> | <center>6</center> |
|<center>**[3.0](https://huggingface.co/Zoyd/CohereForAI_aya-23-35B-3_0bpw_exl2)**</center> | <center>18096 MB</center> | <center>6</center> |
|<center>**[3.5](https://huggingface.co/Zoyd/CohereForAI_aya-23-35B-3_5bpw_exl2)**</center> | <center>20178 MB</center> | <center>6</center> |
|<center>**[3.75](https://huggingface.co/Zoyd/CohereForAI_aya-23-35B-3_75bpw_exl2)**</center> | <center>21213 MB</center> | <center>6</center> |
|<center>**[4.0](https://huggingface.co/Zoyd/CohereForAI_aya-23-35B-4_0bpw_exl2)**</center> | <center>22266 MB</center> | <center>6</center> |
|<center>**[4.25](https://huggingface.co/Zoyd/CohereForAI_aya-23-35B-4_25bpw_exl2)**</center> | <center>23307 MB</center> | <center>6</center> |
|<center>**[5.0](https://huggingface.co/Zoyd/CohereForAI_aya-23-35B-5_0bpw_exl2)**</center> | <center>26431 MB</center> | <center>6</center> |
|<center>**[6.0](https://huggingface.co/Zoyd/CohereForAI_aya-23-35B-6_0bpw_exl2)**</center> | <center>31014 MB</center> | <center>8</center> |
|<center>**[6.5](https://huggingface.co/Zoyd/CohereForAI_aya-23-35B-6_5bpw_exl2)**</center> | <center>33112 MB</center> | <center>8</center> |
|<center>**[8.0](https://huggingface.co/Zoyd/CohereForAI_aya-23-35B-8_0bpw_exl2)**</center> | <center>37423 MB</center> | <center>8</center> |
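A minimal loading sketch for these EXL2 files with the exllamav2 Python API of this vintage (the local directory name and sampler settings are illustrative; check the exllamav2 README for the exact API of your installed version):
```python
# Sketch of running an EXL2 quant with exllamav2 (API as of ~v0.0.21).
from exllamav2 import ExLlamaV2, ExLlamaV2Config, ExLlamaV2Cache, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2BaseGenerator, ExLlamaV2Sampler

config = ExLlamaV2Config()
config.model_dir = "aya-23-35B-8_0bpw_exl2"  # local download of this repo
config.prepare()

model = ExLlamaV2(config)
cache = ExLlamaV2Cache(model, lazy=True)
model.load_autosplit(cache)                  # split layers across available GPUs
tokenizer = ExLlamaV2Tokenizer(config)

generator = ExLlamaV2BaseGenerator(model, cache, tokenizer)
settings = ExLlamaV2Sampler.Settings()
settings.temperature = 0.3

print(generator.generate_simple("Hello, Aya!", settings, 100))
```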
# Model Card for Aya-23-35B
## Model Summary
Aya 23 is an open weights research release of an instruction fine-tuned model with highly advanced multilingual capabilities. Aya 23 focuses on pairing a highly performant pre-trained [Command family](https://huggingface.co/CohereForAI/c4ai-command-r-plus) of models with the recently released [Aya Collection](https://huggingface.co/datasets/CohereForAI/aya_collection). The result is a powerful multilingual large language model serving 23 languages.
This model card corresponds to the 35-billion version of the Aya 23 model. We also released an 8-billion version which you can find [here](https://huggingface.co/CohereForAI/aya-23-8B).
We cover 23 languages: Arabic, Chinese (simplified & traditional), Czech, Dutch, English, French, German, Greek, Hebrew, Hindi, Indonesian, Italian, Japanese, Korean, Persian, Polish, Portuguese, Romanian, Russian, Spanish, Turkish, Ukrainian, and Vietnamese
Developed by: [Cohere For AI](https://cohere.for.ai) and [Cohere](https://cohere.com/)
- Point of Contact: Cohere For AI: [cohere.for.ai](https://cohere.for.ai/)
- License: [CC-BY-NC](https://cohere.com/c4ai-cc-by-nc-license), requires also adhering to [C4AI's Acceptable Use Policy](https://docs.cohere.com/docs/c4ai-acceptable-use-policy)
- Model: aya-23-35B
- Model Size: 35 billion parameters
**Try Aya 23**
You can try out Aya 23 (35B) before downloading the weights in our hosted Hugging Face Space [here](https://huggingface.co/spaces/CohereForAI/aya-23).
### Usage
Please install transformers from the source repository that includes the necessary changes for this model
```python
# pip install 'git+https://github.com/huggingface/transformers.git'
from transformers import AutoTokenizer, AutoModelForCausalLM
model_id = "CohereForAI/aya-23-35B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)
# Format message with the command-r-plus chat template
messages = [{"role": "user", "content": "Anneme onu ne kadar sevdiğimi anlatan bir mektup yaz"}]
input_ids = tokenizer.apply_chat_template(messages, tokenize=True, add_generation_prompt=True, return_tensors="pt")
## <BOS_TOKEN><|START_OF_TURN_TOKEN|><|USER_TOKEN|>Anneme onu ne kadar sevdiğimi anlatan bir mektup yaz<|END_OF_TURN_TOKEN|><|START_OF_TURN_TOKEN|><|CHATBOT_TOKEN|>
gen_tokens = model.generate(
input_ids,
max_new_tokens=100,
do_sample=True,
temperature=0.3,
)
gen_text = tokenizer.decode(gen_tokens[0])
print(gen_text)
```
### Example Notebook
[This notebook](https://huggingface.co/CohereForAI/aya-23-35B/blob/main/Aya_23_notebook.ipynb) showcases a detailed use of Aya 23 (8B) including inference and fine-tuning with [QLoRA](https://huggingface.co/blog/4bit-transformers-bitsandbytes).
## Model Details
**Input**: Models input text only.
**Output**: Models generate text only.
**Model Architecture**: Aya-23-35B is an auto-regressive language model that uses an optimized transformer architecture. After pretraining, this model is fine-tuned (IFT) to follow human instructions.
**Languages covered**: The model is particularly optimized for multilinguality and supports the following languages: Arabic, Chinese (simplified & traditional), Czech, Dutch, English, French, German, Greek, Hebrew, Hindi, Indonesian, Italian, Japanese, Korean, Persian, Polish, Portuguese, Romanian, Russian, Spanish, Turkish, Ukrainian, and Vietnamese
**Context length**: 8192
### Evaluation
<img src="benchmarks.png" alt="multilingual benchmarks" width="650" style="margin-left:'auto' margin-right:'auto' display:'block'"/>
<img src="winrates.png" alt="average win rates" width="650" style="margin-left:'auto' margin-right:'auto' display:'block'"/>
Please refer to the [Aya 23 technical report](https://cohere.com/research/papers/aya-command-23-8b-and-35b-technical-report-2024-05-23) for further details about the base model, data, instruction tuning, and evaluation.
### Model Card Contact
For errors or additional questions about details in this model card, contact [email protected].
### Terms of Use
We hope that the release of this model will make community-based research efforts more accessible, by releasing the weights of a highly performant multilingual model to researchers all over the world. This model is governed by a [CC-BY-NC](https://cohere.com/c4ai-cc-by-nc-license) License with an acceptable use addendum, and also requires adhering to [C4AI's Acceptable Use Policy](https://docs.cohere.com/docs/c4ai-acceptable-use-policy).
### Try the model today
You can try Aya 23 in the Cohere [playground](https://dashboard.cohere.com/playground/chat). You can also use it in our dedicated Hugging Face Space [here](https://huggingface.co/spaces/CohereForAI/aya-23).
### Citation info
```bibtex
@misc{aya23technicalreport,
  title={Aya 23: Open Weight Releases to Further Multilingual Progress},
  author={Viraat Aryabumi and John Dang and Dwarak Talupuru and Saurabh Dash and David Cairuz and Hangyu Lin and Bharat Venkitesh and Madeline Smith and Kelly Marchisio and Sebastian Ruder and Acyr Locatelli and Julia Kreutzer and Nick Frosst and Phil Blunsom and Marzieh Fadaee and Ahmet Üstün and Sara Hooker},
  url={https://cohere.com/research/papers/aya-command-23-8b-and-35b-technical-report-2024-05-23},
  year={2024}
}
```
|
Zoyd/CohereForAI_aya-23-35B-5_0bpw_exl2 | Zoyd | 2024-05-24T00:37:55Z | 7 | 1 | transformers | [
"transformers",
"safetensors",
"cohere",
"text-generation",
"conversational",
"en",
"fr",
"de",
"es",
"it",
"pt",
"ja",
"ko",
"zh",
"ar",
"el",
"fa",
"pl",
"id",
"cs",
"he",
"hi",
"nl",
"ro",
"ru",
"tr",
"uk",
"vi",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"5-bit",
"exl2",
"region:us"
] | text-generation | 2024-05-23T22:27:56Z | ---
library_name: transformers
language:
- en
- fr
- de
- es
- it
- pt
- ja
- ko
- zh
- ar
- el
- fa
- pl
- id
- cs
- he
- hi
- nl
- ro
- ru
- tr
- uk
- vi
license: cc-by-nc-4.0
---
**Exllamav2** quant (**exl2** / **5.0 bpw**) made with ExLlamaV2 v0.0.21
Other EXL2 quants:
| **Quant** | **Model Size** | **lm_head** |
| ----- | ---------- | ------- |
|<center>**[2.2](https://huggingface.co/Zoyd/CohereForAI_aya-23-35B-2_2bpw_exl2)**</center> | <center>14760 MB</center> | <center>6</center> |
|<center>**[2.5](https://huggingface.co/Zoyd/CohereForAI_aya-23-35B-2_5bpw_exl2)**</center> | <center>16011 MB</center> | <center>6</center> |
|<center>**[3.0](https://huggingface.co/Zoyd/CohereForAI_aya-23-35B-3_0bpw_exl2)**</center> | <center>18096 MB</center> | <center>6</center> |
|<center>**[3.5](https://huggingface.co/Zoyd/CohereForAI_aya-23-35B-3_5bpw_exl2)**</center> | <center>20178 MB</center> | <center>6</center> |
|<center>**[3.75](https://huggingface.co/Zoyd/CohereForAI_aya-23-35B-3_75bpw_exl2)**</center> | <center>21213 MB</center> | <center>6</center> |
|<center>**[4.0](https://huggingface.co/Zoyd/CohereForAI_aya-23-35B-4_0bpw_exl2)**</center> | <center>22266 MB</center> | <center>6</center> |
|<center>**[4.25](https://huggingface.co/Zoyd/CohereForAI_aya-23-35B-4_25bpw_exl2)**</center> | <center>23307 MB</center> | <center>6</center> |
|<center>**[5.0](https://huggingface.co/Zoyd/CohereForAI_aya-23-35B-5_0bpw_exl2)**</center> | <center>26431 MB</center> | <center>6</center> |
|<center>**[6.0](https://huggingface.co/Zoyd/CohereForAI_aya-23-35B-6_0bpw_exl2)**</center> | <center>31014 MB</center> | <center>8</center> |
|<center>**[6.5](https://huggingface.co/Zoyd/CohereForAI_aya-23-35B-6_5bpw_exl2)**</center> | <center>33112 MB</center> | <center>8</center> |
|<center>**[8.0](https://huggingface.co/Zoyd/CohereForAI_aya-23-35B-8_0bpw_exl2)**</center> | <center>37423 MB</center> | <center>8</center> |
# Model Card for Aya-23-35B
## Model Summary
Aya 23 is an open weights research release of an instruction fine-tuned model with highly advanced multilingual capabilities. Aya 23 focuses on pairing a highly performant pre-trained [Command family](https://huggingface.co/CohereForAI/c4ai-command-r-plus) of models with the recently released [Aya Collection](https://huggingface.co/datasets/CohereForAI/aya_collection). The result is a powerful multilingual large language model serving 23 languages.
This model card corresponds to the 35-billion version of the Aya 23 model. We also released an 8-billion version which you can find [here](https://huggingface.co/CohereForAI/aya-23-8B).
We cover 23 languages: Arabic, Chinese (simplified & traditional), Czech, Dutch, English, French, German, Greek, Hebrew, Hindi, Indonesian, Italian, Japanese, Korean, Persian, Polish, Portuguese, Romanian, Russian, Spanish, Turkish, Ukrainian, and Vietnamese
Developed by: [Cohere For AI](https://cohere.for.ai) and [Cohere](https://cohere.com/)
- Point of Contact: Cohere For AI: [cohere.for.ai](https://cohere.for.ai/)
- License: [CC-BY-NC](https://cohere.com/c4ai-cc-by-nc-license), requires also adhering to [C4AI's Acceptable Use Policy](https://docs.cohere.com/docs/c4ai-acceptable-use-policy)
- Model: aya-23-35B
- Model Size: 35 billion parameters
**Try Aya 23**
You can try out Aya 23 (35B) before downloading the weights in our hosted Hugging Face Space [here](https://huggingface.co/spaces/CohereForAI/aya-23).
### Usage
Please install transformers from the source repository that includes the necessary changes for this model
```python
# pip install 'git+https://github.com/huggingface/transformers.git'
from transformers import AutoTokenizer, AutoModelForCausalLM
model_id = "CohereForAI/aya-23-35B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)
# Format message with the command-r-plus chat template
messages = [{"role": "user", "content": "Anneme onu ne kadar sevdiğimi anlatan bir mektup yaz"}]
input_ids = tokenizer.apply_chat_template(messages, tokenize=True, add_generation_prompt=True, return_tensors="pt")
## <BOS_TOKEN><|START_OF_TURN_TOKEN|><|USER_TOKEN|>Anneme onu ne kadar sevdiğimi anlatan bir mektup yaz<|END_OF_TURN_TOKEN|><|START_OF_TURN_TOKEN|><|CHATBOT_TOKEN|>
gen_tokens = model.generate(
input_ids,
max_new_tokens=100,
do_sample=True,
temperature=0.3,
)
gen_text = tokenizer.decode(gen_tokens[0])
print(gen_text)
```
### Example Notebook
[This notebook](https://huggingface.co/CohereForAI/aya-23-35B/blob/main/Aya_23_notebook.ipynb) showcases a detailed use of Aya 23 (8B) including inference and fine-tuning with [QLoRA](https://huggingface.co/blog/4bit-transformers-bitsandbytes).
## Model Details
**Input**: Models input text only.
**Output**: Models generate text only.
**Model Architecture**: Aya-23-35B is an auto-regressive language model that uses an optimized transformer architecture. After pretraining, this model is fine-tuned (IFT) to follow human instructions.
**Languages covered**: The model is particularly optimized for multilinguality and supports the following languages: Arabic, Chinese (simplified & traditional), Czech, Dutch, English, French, German, Greek, Hebrew, Hindi, Indonesian, Italian, Japanese, Korean, Persian, Polish, Portuguese, Romanian, Russian, Spanish, Turkish, Ukrainian, and Vietnamese
**Context length**: 8192
### Evaluation
<img src="benchmarks.png" alt="multilingual benchmarks" width="650" style="margin-left:'auto' margin-right:'auto' display:'block'"/>
<img src="winrates.png" alt="average win rates" width="650" style="margin-left:'auto' margin-right:'auto' display:'block'"/>
Please refer to the [Aya 23 technical report](https://cohere.com/research/papers/aya-command-23-8b-and-35b-technical-report-2024-05-23) for further details about the base model, data, instruction tuning, and evaluation.
### Model Card Contact
For errors or additional questions about details in this model card, contact [email protected].
### Terms of Use
We hope that the release of this model will make community-based research efforts more accessible, by releasing the weights of a highly performant multilingual model to researchers all over the world. This model is governed by a [CC-BY-NC](https://cohere.com/c4ai-cc-by-nc-license) License with an acceptable use addendum, and also requires adhering to [C4AI's Acceptable Use Policy](https://docs.cohere.com/docs/c4ai-acceptable-use-policy).
### Try the model today
You can try Aya 23 in the Cohere [playground](https://dashboard.cohere.com/playground/chat). You can also use it in our dedicated Hugging Face Space [here](https://huggingface.co/spaces/CohereForAI/aya-23).
### Citation info
```bibtex
@misc{aya23technicalreport,
  title={Aya 23: Open Weight Releases to Further Multilingual Progress},
  author={Viraat Aryabumi and John Dang and Dwarak Talupuru and Saurabh Dash and David Cairuz and Hangyu Lin and Bharat Venkitesh and Madeline Smith and Kelly Marchisio and Sebastian Ruder and Acyr Locatelli and Julia Kreutzer and Nick Frosst and Phil Blunsom and Marzieh Fadaee and Ahmet Üstün and Sara Hooker},
  url={https://cohere.com/research/papers/aya-command-23-8b-and-35b-technical-report-2024-05-23},
  year={2024}
}
```
|
GTsuya/magic_moonarts_pony | GTsuya | 2024-05-24T00:34:49Z | 1 | 1 | diffusers | [
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"template:sd-lora",
"base_model:GraydientPlatformAPI/autism-pony",
"base_model:adapter:GraydientPlatformAPI/autism-pony",
"license:mit",
"region:us"
] | text-to-image | 2024-05-24T00:33:27Z | ---
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
- template:sd-lora
widget:
- text: >-
cartoon, score_9, score_8_up, score_7_up, mature_female, Bustier,
from_above , cowboy_shot, rating_safe, <lora:magic_moonarts_pony:1>
parameters:
negative_prompt: >-
score_6, score_5, score_4, ugly face, ugly eyes, realistic, monochrome,
white and black
output:
url: images/00005-2600470274.png
- text: >-
cartoon, score_9, score_8_up, score_7_up, mature_female, Shorts, from_side
, cowboy_shot, rating_safe, <lora:magic_moonarts_pony:1>
parameters:
negative_prompt: >-
score_6, score_5, score_4, ugly face, ugly eyes, realistic, monochrome,
white and black
output:
url: images/00007-2393379456.png
- text: >-
cartoon, score_9, score_8_up, score_7_up, mature_female,
dark-skinned_female, Bodysuit, sideways , lower_body, rating_safe,
<lora:magic_moonarts_pony:1>
parameters:
negative_prompt: >-
score_6, score_5, score_4, ugly face, ugly eyes, realistic, monochrome,
white and black
output:
url: images/00017-2795844222.png
- text: >-
cartoon, score_9, score_8_up, score_7_up, mature_female, very_dark_skin,
Culottes, from_side , upper_body, rating_questionable,
<lora:magic_moonarts_pony:1>
parameters:
negative_prompt: >-
score_6, score_5, score_4, ugly face, ugly eyes, realistic, monochrome,
white and black
output:
url: images/00084-3581396895.png
- text: >-
cartoon, score_9, score_8_up, score_7_up, mature_female, very_dark_skin,
Halter Top, from_above , feet_out_of_frame, rating_questionable,
<lora:magic_moonarts_pony:1>
parameters:
negative_prompt: >-
score_6, score_5, score_4, ugly face, ugly eyes, realistic, monochrome,
white and black
output:
url: images/00090-1426675186.png
- text: >-
cartoon, score_9, score_8_up, score_7_up, mature_female, very_dark_skin,
Scarves, dutch_angle , cowboy_shot, rating_explicit,
<lora:magic_moonarts_pony:1>
parameters:
negative_prompt: >-
score_6, score_5, score_4, ugly face, ugly eyes, realistic, monochrome,
white and black
output:
url: images/00164-1578228206.png
base_model: GraydientPlatformAPI/autism-pony
instance_prompt: null
license: mit
---
# magic_moonarts_pony
<Gallery />
## Model description
This LoRA model has been trained with Kohya SS using Magic Moon Arts's artworks on the Autism Mix SDXL checkpoint. The resulting images can be very close to the original art style. This LoRA model can be used for cartoon representations of sexy women.
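A minimal sketch of applying this LoRA with diffusers (the base checkpoint is the one listed in this card; prompt and settings are illustrative):
```python
# Load the base Pony/SDXL checkpoint and apply this LoRA on top.
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "GraydientPlatformAPI/autism-pony", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("GTsuya/magic_moonarts_pony")

image = pipe(
    "cartoon, score_9, score_8_up, score_7_up, mature_female, rating_safe",
    negative_prompt="score_6, score_5, score_4, ugly face, ugly eyes, realistic",
).images[0]
image.save("sample.png")
```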
## Download model
Weights for this model are available in Safetensors format.
[Download](/GTsuya/magic_moonarts_pony/tree/main) them in the Files & versions tab.
|
Augusto777/vit-base-patch16-224-RU3-10 | Augusto777 | 2024-05-24T00:33:34Z | 218 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:google/vit-base-patch16-224",
"base_model:finetune:google/vit-base-patch16-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2024-05-24T00:24:24Z | ---
license: apache-2.0
base_model: google/vit-base-patch16-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: vit-base-patch16-224-RU3-10
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: validation
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.7833333333333333
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-patch16-224-RU3-10
This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6241
- Accuracy: 0.7833
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5.5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.05
- num_epochs: 10
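Expressed as transformers TrainingArguments, those settings correspond roughly to the sketch below (the actual training script is not included in this card):
```python
# Sketch of TrainingArguments matching the hyperparameters listed above.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="vit-base-patch16-224-RU3-10",
    learning_rate=5.5e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    gradient_accumulation_steps=4,   # 32 x 4 = 128 effective train batch size
    num_train_epochs=10,
    lr_scheduler_type="linear",
    warmup_ratio=0.05,
    seed=42,
)
```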
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.3698 | 0.99 | 19 | 1.1845 | 0.65 |
| 1.1232 | 1.97 | 38 | 0.9393 | 0.65 |
| 0.8168 | 2.96 | 57 | 0.9117 | 0.6333 |
| 0.5992 | 4.0 | 77 | 0.8330 | 0.7333 |
| 0.4258 | 4.99 | 96 | 0.7471 | 0.7 |
| 0.3283 | 5.97 | 115 | 0.6241 | 0.7833 |
| 0.2543 | 6.96 | 134 | 0.5916 | 0.7833 |
| 0.2345 | 8.0 | 154 | 0.6783 | 0.7833 |
| 0.2027 | 8.99 | 173 | 0.6577 | 0.7833 |
| 0.1733 | 9.87 | 190 | 0.6589 | 0.7833 |
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.2+cu118
- Datasets 2.16.1
- Tokenizers 0.15.0
|
mayabedge/wav2vec2-large-xls-r-300m-amharic-maya2 | mayabedge | 2024-05-24T00:31:16Z | 0 | 0 | transformers | [
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-05-24T00:18:47Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
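In the absence of an official snippet, a speculative sketch follows. The repo name suggests a wav2vec2 XLS-R fine-tune for Amharic speech recognition, so this assumes a CTC checkpoint with its processor pushed to the same repo:
```python
from transformers import pipeline

# Assumption: this is a wav2vec2 CTC speech-recognition checkpoint with its
# processor/tokenizer available in the same repo.
asr = pipeline(
    "automatic-speech-recognition",
    model="mayabedge/wav2vec2-large-xls-r-300m-amharic-maya2",
)

# XLS-R models expect 16 kHz mono audio.
result = asr("sample_amharic.wav")
print(result["text"])
```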
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
leafspark/SFR-Iterative-DPO-LLaMA-3-8B-R-lora | leafspark | 2024-05-24T00:28:22Z | 0 | 2 | transformers | [
"transformers",
"safetensors",
"mergekit",
"peft",
"license:llama3",
"endpoints_compatible",
"region:us"
] | null | 2024-05-24T00:27:04Z | ---
base_model: []
library_name: transformers
tags:
- mergekit
- peft
license: llama3
---
# SFR-Iterative-DPO-LLaMA-3-8B-R LoRA Model
This is a LoRA extracted from a language model. It was extracted using [mergekit](https://github.com/arcee-ai/mergekit).
## LoRA Details
This LoRA adapter was extracted from SFR-Iterative-DPO-LLaMA-3-8B-R and uses Meta-Llama-3-8B as a base.
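To apply the adapter at inference time, one option is to load it on top of the base model with `peft`. A sketch, assuming the files in this repo follow the standard `peft` adapter layout:
```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumes this repo contains a standard peft adapter (adapter_config.json + weights).
base = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Meta-Llama-3-8B", torch_dtype=torch.bfloat16, device_map="auto"
)
model = PeftModel.from_pretrained(base, "leafspark/SFR-Iterative-DPO-LLaMA-3-8B-R-lora")

tokenizer = AutoTokenizer.from_pretrained("meta-llama/Meta-Llama-3-8B")
inputs = tokenizer("The capital of France is", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=20)[0], skip_special_tokens=True))
```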
### Parameters
The following command was used to extract this LoRA adapter:
```sh
mergekit-extract-lora Meta-Llama-3-8B SFR-Iterative-DPO-LLaMA-3-8B-R OUTPUT_PATH --rank=32
``` |
Zoyd/CohereForAI_aya-23-35B-6_5bpw_exl2 | Zoyd | 2024-05-24T00:26:04Z | 7 | 0 | transformers | [
"transformers",
"safetensors",
"cohere",
"text-generation",
"conversational",
"en",
"fr",
"de",
"es",
"it",
"pt",
"ja",
"ko",
"zh",
"ar",
"el",
"fa",
"pl",
"id",
"cs",
"he",
"hi",
"nl",
"ro",
"ru",
"tr",
"uk",
"vi",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"exl2",
"region:us"
] | text-generation | 2024-05-23T23:42:12Z | ---
library_name: transformers
language:
- en
- fr
- de
- es
- it
- pt
- ja
- ko
- zh
- ar
- el
- fa
- pl
- id
- cs
- he
- hi
- nl
- ro
- ru
- tr
- uk
- vi
license: cc-by-nc-4.0
---
**Exllamav2** quant (**exl2** / **6.5 bpw**) made with ExLlamaV2 v0.0.21
Other EXL2 quants:
| **Quant** | **Model Size** | **lm_head** |
| ----- | ---------- | ------- |
|<center>**[2.2](https://huggingface.co/Zoyd/CohereForAI_aya-23-35B-2_2bpw_exl2)**</center> | <center>14760 MB</center> | <center>6</center> |
|<center>**[2.5](https://huggingface.co/Zoyd/CohereForAI_aya-23-35B-2_5bpw_exl2)**</center> | <center>16011 MB</center> | <center>6</center> |
|<center>**[3.0](https://huggingface.co/Zoyd/CohereForAI_aya-23-35B-3_0bpw_exl2)**</center> | <center>18096 MB</center> | <center>6</center> |
|<center>**[3.5](https://huggingface.co/Zoyd/CohereForAI_aya-23-35B-3_5bpw_exl2)**</center> | <center>20178 MB</center> | <center>6</center> |
|<center>**[3.75](https://huggingface.co/Zoyd/CohereForAI_aya-23-35B-3_75bpw_exl2)**</center> | <center>21213 MB</center> | <center>6</center> |
|<center>**[4.0](https://huggingface.co/Zoyd/CohereForAI_aya-23-35B-4_0bpw_exl2)**</center> | <center>22266 MB</center> | <center>6</center> |
|<center>**[4.25](https://huggingface.co/Zoyd/CohereForAI_aya-23-35B-4_25bpw_exl2)**</center> | <center>23307 MB</center> | <center>6</center> |
|<center>**[5.0](https://huggingface.co/Zoyd/CohereForAI_aya-23-35B-5_0bpw_exl2)**</center> | <center>26431 MB</center> | <center>6</center> |
|<center>**[6.0](https://huggingface.co/Zoyd/CohereForAI_aya-23-35B-6_0bpw_exl2)**</center> | <center>31014 MB</center> | <center>8</center> |
|<center>**[6.5](https://huggingface.co/Zoyd/CohereForAI_aya-23-35B-6_5bpw_exl2)**</center> | <center>33112 MB</center> | <center>8</center> |
|<center>**[8.0](https://huggingface.co/Zoyd/CohereForAI_aya-23-35B-8_0bpw_exl2)**</center> | <center>37423 MB</center> | <center>8</center> |
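Note that the `transformers` snippet further down loads the original full-precision weights. Running this EXL2 quant itself requires [exllamav2](https://github.com/turboderp/exllamav2); the sketch below follows its bundled example scripts, and since the generator API has shifted between releases, treat the exact names as assumptions rather than a tested recipe:
```python
from exllamav2 import ExLlamaV2, ExLlamaV2Cache, ExLlamaV2Config, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2BaseGenerator, ExLlamaV2Sampler

# Local directory containing this quant's files (download the repo first).
config = ExLlamaV2Config()
config.model_dir = "CohereForAI_aya-23-35B-6_5bpw_exl2"
config.prepare()

model = ExLlamaV2(config)
cache = ExLlamaV2Cache(model, lazy=True)
model.load_autosplit(cache)  # splits layers across available GPUs

tokenizer = ExLlamaV2Tokenizer(config)
generator = ExLlamaV2BaseGenerator(model, cache, tokenizer)

settings = ExLlamaV2Sampler.Settings()
settings.temperature = 0.3
settings.top_p = 0.95

print(generator.generate_simple("Hello, my name is", settings, 100))
```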
# Model Card for Aya-23-35B
## Model Summary
Aya 23 is an open weights research release of an instruction fine-tuned model with highly advanced multilingual capabilities. Aya 23 focuses on pairing a highly performant pre-trained [Command family](https://huggingface.co/CohereForAI/c4ai-command-r-plus) of models with the recently released [Aya Collection](https://huggingface.co/datasets/CohereForAI/aya_collection). The result is a powerful multilingual large language model serving 23 languages.
This model card corresponds to the 35-billion version of the Aya 23 model. We also released an 8-billion version which you can find [here](https://huggingface.co/CohereForAI/aya-23-8B).
We cover 23 languages: Arabic, Chinese (simplified & traditional), Czech, Dutch, English, French, German, Greek, Hebrew, Hindi, Indonesian, Italian, Japanese, Korean, Persian, Polish, Portuguese, Romanian, Russian, Spanish, Turkish, Ukrainian, and Vietnamese
Developed by: [Cohere For AI](https://cohere.for.ai) and [Cohere](https://cohere.com/)
- Point of Contact: Cohere For AI: [cohere.for.ai](https://cohere.for.ai/)
- License: [CC-BY-NC](https://cohere.com/c4ai-cc-by-nc-license), requires also adhering to [C4AI's Acceptable Use Policy](https://docs.cohere.com/docs/c4ai-acceptable-use-policy)
- Model: aya-23-35B
- Model Size: 35 billion parameters
**Try Aya 23**
You can try out Aya 23 (35B) before downloading the weights in our hosted Hugging Face Space [here](https://huggingface.co/spaces/CohereForAI/aya-23).
### Usage
Please install transformers from the source repository that includes the necessary changes for this model.
```python
# pip install 'git+https://github.com/huggingface/transformers.git'
from transformers import AutoTokenizer, AutoModelForCausalLM
model_id = "CohereForAI/aya-23-35B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)
# Format message with the command-r-plus chat template
messages = [{"role": "user", "content": "Anneme onu ne kadar sevdiğimi anlatan bir mektup yaz"}]
input_ids = tokenizer.apply_chat_template(messages, tokenize=True, add_generation_prompt=True, return_tensors="pt")
## <BOS_TOKEN><|START_OF_TURN_TOKEN|><|USER_TOKEN|>Anneme onu ne kadar sevdiğimi anlatan bir mektup yaz<|END_OF_TURN_TOKEN|><|START_OF_TURN_TOKEN|><|CHATBOT_TOKEN|>
gen_tokens = model.generate(
input_ids,
max_new_tokens=100,
do_sample=True,
temperature=0.3,
)
gen_text = tokenizer.decode(gen_tokens[0])
print(gen_text)
```
### Example Notebook
[This notebook](https://huggingface.co/CohereForAI/aya-23-35B/blob/main/Aya_23_notebook.ipynb) showcases a detailed use of Aya 23 (8B) including inference and fine-tuning with [QLoRA](https://huggingface.co/blog/4bit-transformers-bitsandbytes).
## Model Details
**Input**: The model takes text as input only.
**Output**: The model generates text only.
**Model Architecture**: Aya-23-35B is an auto-regressive language model that uses an optimized transformer architecture. After pretraining, this model is fine-tuned (IFT) to follow human instructions.
**Languages covered**: The model is particularly optimized for multilinguality and supports the following languages: Arabic, Chinese (simplified & traditional), Czech, Dutch, English, French, German, Greek, Hebrew, Hindi, Indonesian, Italian, Japanese, Korean, Persian, Polish, Portuguese, Romanian, Russian, Spanish, Turkish, Ukrainian, and Vietnamese
**Context length**: 8192
### Evaluation
<img src="benchmarks.png" alt="multilingual benchmarks" width="650" style="margin-left:'auto' margin-right:'auto' display:'block'"/>
<img src="winrates.png" alt="average win rates" width="650" style="margin-left:'auto' margin-right:'auto' display:'block'"/>
Please refer to the [Aya 23 technical report](https://cohere.com/research/papers/aya-command-23-8b-and-35b-technical-report-2024-05-23) for further details about the base model, data, instruction tuning, and evaluation.
### Model Card Contact
For errors or additional questions about details in this model card, contact [email protected].
### Terms of Use
We hope that the release of this model will make community-based research efforts more accessible, by releasing the weights of a highly performant multilingual model to researchers all over the world. This model is governed by a [CC-BY-NC](https://cohere.com/c4ai-cc-by-nc-license) License with an acceptable use addendum, and also requires adhering to [C4AI's Acceptable Use Policy](https://docs.cohere.com/docs/c4ai-acceptable-use-policy).
### Try the model today
You can try Aya 23 in the Cohere [playground](https://dashboard.cohere.com/playground/chat). You can also use it in our dedicated Hugging Face Space [here](https://huggingface.co/spaces/CohereForAI/aya-23).
### Citation info
```bibtex
@misc{aya23technicalreport,
title={Aya 23: Open Weight Releases to Further Multilingual Progress},
author={Viraat Aryabumi, John Dang, Dwarak Talupuru, Saurabh Dash, David Cairuz, Hangyu Lin, Bharat Venkitesh, Madeline Smith, Kelly Marchisio, Sebastian Ruder, Acyr Locatelli, Julia Kreutzer, Nick Frosst, Phil Blunsom, Marzieh Fadaee, Ahmet Üstün, and Sara Hooker},
url={https://cohere.com/research/papers/aya-command-23-8b-and-35b-technical-report-2024-05-23},
year={2024}
}
``` |
Zoyd/CohereForAI_aya-23-35B-6_0bpw_exl2 | Zoyd | 2024-05-24T00:26:01Z | 5 | 0 | transformers | [
"transformers",
"safetensors",
"cohere",
"text-generation",
"conversational",
"en",
"fr",
"de",
"es",
"it",
"pt",
"ja",
"ko",
"zh",
"ar",
"el",
"fa",
"pl",
"id",
"cs",
"he",
"hi",
"nl",
"ro",
"ru",
"tr",
"uk",
"vi",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"6-bit",
"exl2",
"region:us"
] | text-generation | 2024-05-23T23:05:09Z | ---
library_name: transformers
language:
- en
- fr
- de
- es
- it
- pt
- ja
- ko
- zh
- ar
- el
- fa
- pl
- id
- cs
- he
- hi
- nl
- ro
- ru
- tr
- uk
- vi
license: cc-by-nc-4.0
---
**Exllamav2** quant (**exl2** / **6.0 bpw**) made with ExLlamaV2 v0.0.21
Other EXL2 quants:
| **Quant** | **Model Size** | **lm_head** |
| ----- | ---------- | ------- |
|<center>**[2.2](https://huggingface.co/Zoyd/CohereForAI_aya-23-35B-2_2bpw_exl2)**</center> | <center>14760 MB</center> | <center>6</center> |
|<center>**[2.5](https://huggingface.co/Zoyd/CohereForAI_aya-23-35B-2_5bpw_exl2)**</center> | <center>16011 MB</center> | <center>6</center> |
|<center>**[3.0](https://huggingface.co/Zoyd/CohereForAI_aya-23-35B-3_0bpw_exl2)**</center> | <center>18096 MB</center> | <center>6</center> |
|<center>**[3.5](https://huggingface.co/Zoyd/CohereForAI_aya-23-35B-3_5bpw_exl2)**</center> | <center>20178 MB</center> | <center>6</center> |
|<center>**[3.75](https://huggingface.co/Zoyd/CohereForAI_aya-23-35B-3_75bpw_exl2)**</center> | <center>21213 MB</center> | <center>6</center> |
|<center>**[4.0](https://huggingface.co/Zoyd/CohereForAI_aya-23-35B-4_0bpw_exl2)**</center> | <center>22266 MB</center> | <center>6</center> |
|<center>**[4.25](https://huggingface.co/Zoyd/CohereForAI_aya-23-35B-4_25bpw_exl2)**</center> | <center>23307 MB</center> | <center>6</center> |
|<center>**[5.0](https://huggingface.co/Zoyd/CohereForAI_aya-23-35B-5_0bpw_exl2)**</center> | <center>26431 MB</center> | <center>6</center> |
|<center>**[6.0](https://huggingface.co/Zoyd/CohereForAI_aya-23-35B-6_0bpw_exl2)**</center> | <center>31014 MB</center> | <center>8</center> |
|<center>**[6.5](https://huggingface.co/Zoyd/CohereForAI_aya-23-35B-6_5bpw_exl2)**</center> | <center>33112 MB</center> | <center>8</center> |
|<center>**[8.0](https://huggingface.co/Zoyd/CohereForAI_aya-23-35B-8_0bpw_exl2)**</center> | <center>37423 MB</center> | <center>8</center> |
# Model Card for Aya-23-35B
## Model Summary
Aya 23 is an open weights research release of an instruction fine-tuned model with highly advanced multilingual capabilities. Aya 23 focuses on pairing a highly performant pre-trained [Command family](https://huggingface.co/CohereForAI/c4ai-command-r-plus) of models with the recently released [Aya Collection](https://huggingface.co/datasets/CohereForAI/aya_collection). The result is a powerful multilingual large language model serving 23 languages.
This model card corresponds to the 35-billion version of the Aya 23 model. We also released an 8-billion version which you can find [here](https://huggingface.co/CohereForAI/aya-23-8B).
We cover 23 languages: Arabic, Chinese (simplified & traditional), Czech, Dutch, English, French, German, Greek, Hebrew, Hindi, Indonesian, Italian, Japanese, Korean, Persian, Polish, Portuguese, Romanian, Russian, Spanish, Turkish, Ukrainian, and Vietnamese
Developed by: [Cohere For AI](https://cohere.for.ai) and [Cohere](https://cohere.com/)
- Point of Contact: Cohere For AI: [cohere.for.ai](https://cohere.for.ai/)
- License: [CC-BY-NC](https://cohere.com/c4ai-cc-by-nc-license), requires also adhering to [C4AI's Acceptable Use Policy](https://docs.cohere.com/docs/c4ai-acceptable-use-policy)
- Model: aya-23-35B
- Model Size: 35 billion parameters
**Try Aya 23**
You can try out Aya 23 (35B) before downloading the weights in our hosted Hugging Face Space [here](https://huggingface.co/spaces/CohereForAI/aya-23).
### Usage
Please install transformers from the source repository that includes the necessary changes for this model.
```python
# pip install 'git+https://github.com/huggingface/transformers.git'
from transformers import AutoTokenizer, AutoModelForCausalLM
model_id = "CohereForAI/aya-23-35B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)
# Format message with the command-r-plus chat template
messages = [{"role": "user", "content": "Anneme onu ne kadar sevdiğimi anlatan bir mektup yaz"}]
input_ids = tokenizer.apply_chat_template(messages, tokenize=True, add_generation_prompt=True, return_tensors="pt")
## <BOS_TOKEN><|START_OF_TURN_TOKEN|><|USER_TOKEN|>Anneme onu ne kadar sevdiğimi anlatan bir mektup yaz<|END_OF_TURN_TOKEN|><|START_OF_TURN_TOKEN|><|CHATBOT_TOKEN|>
gen_tokens = model.generate(
input_ids,
max_new_tokens=100,
do_sample=True,
temperature=0.3,
)
gen_text = tokenizer.decode(gen_tokens[0])
print(gen_text)
```
### Example Notebook
[This notebook](https://huggingface.co/CohereForAI/aya-23-35B/blob/main/Aya_23_notebook.ipynb) showcases a detailed use of Aya 23 (8B) including inference and fine-tuning with [QLoRA](https://huggingface.co/blog/4bit-transformers-bitsandbytes).
## Model Details
**Input**: The model takes text as input only.
**Output**: The model generates text only.
**Model Architecture**: Aya-23-35B is an auto-regressive language model that uses an optimized transformer architecture. After pretraining, this model is fine-tuned (IFT) to follow human instructions.
**Languages covered**: The model is particularly optimized for multilinguality and supports the following languages: Arabic, Chinese (simplified & traditional), Czech, Dutch, English, French, German, Greek, Hebrew, Hindi, Indonesian, Italian, Japanese, Korean, Persian, Polish, Portuguese, Romanian, Russian, Spanish, Turkish, Ukrainian, and Vietnamese
**Context length**: 8192
### Evaluation
<img src="benchmarks.png" alt="multilingual benchmarks" width="650" style="margin-left:'auto' margin-right:'auto' display:'block'"/>
<img src="winrates.png" alt="average win rates" width="650" style="margin-left:'auto' margin-right:'auto' display:'block'"/>
Please refer to the [Aya 23 technical report](https://cohere.com/research/papers/aya-command-23-8b-and-35b-technical-report-2024-05-23) for further details about the base model, data, instruction tuning, and evaluation.
### Model Card Contact
For errors or additional questions about details in this model card, contact [email protected].
### Terms of Use
We hope that the release of this model will make community-based research efforts more accessible, by releasing the weights of a highly performant multilingual model to researchers all over the world. This model is governed by a [CC-BY-NC](https://cohere.com/c4ai-cc-by-nc-license) License with an acceptable use addendum, and also requires adhering to [C4AI's Acceptable Use Policy](https://docs.cohere.com/docs/c4ai-acceptable-use-policy).
### Try the model today
You can try Aya 23 in the Cohere [playground](https://dashboard.cohere.com/playground/chat). You can also use it in our dedicated Hugging Face Space [here](https://huggingface.co/spaces/CohereForAI/aya-23).
### Citation info
```bibtex
@misc{aya23technicalreport,
title={Aya 23: Open Weight Releases to Further Multilingual Progress},
author={Viraat Aryabumi, John Dang, Dwarak Talupuru, Saurabh Dash, David Cairuz, Hangyu Lin, Bharat Venkitesh, Madeline Smith, Kelly Marchisio, Sebastian Ruder, Acyr Locatelli, Julia Kreutzer, Nick Frosst, Phil Blunsom, Marzieh Fadaee, Ahmet Üstün, and Sara Hooker},
url={https://cohere.com/research/papers/aya-command-23-8b-and-35b-technical-report-2024-05-23},
year={2024}
}
``` |
Zoyd/CohereForAI_aya-23-35B-2_5bpw_exl2 | Zoyd | 2024-05-24T00:25:05Z | 5 | 0 | transformers | [
"transformers",
"safetensors",
"cohere",
"text-generation",
"conversational",
"en",
"fr",
"de",
"es",
"it",
"pt",
"ja",
"ko",
"zh",
"ar",
"el",
"fa",
"pl",
"id",
"cs",
"he",
"hi",
"nl",
"ro",
"ru",
"tr",
"uk",
"vi",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"exl2",
"region:us"
] | text-generation | 2024-05-23T19:00:37Z | ---
library_name: transformers
language:
- en
- fr
- de
- es
- it
- pt
- ja
- ko
- zh
- ar
- el
- fa
- pl
- id
- cs
- he
- hi
- nl
- ro
- ru
- tr
- uk
- vi
license: cc-by-nc-4.0
---
**Exllamav2** quant (**exl2** / **2.5 bpw**) made with ExLlamaV2 v0.0.21
Other EXL2 quants:
| **Quant** | **Model Size** | **lm_head** |
| ----- | ---------- | ------- |
|<center>**[2.2](https://huggingface.co/Zoyd/CohereForAI_aya-23-35B-2_2bpw_exl2)**</center> | <center>14760 MB</center> | <center>6</center> |
|<center>**[2.5](https://huggingface.co/Zoyd/CohereForAI_aya-23-35B-2_5bpw_exl2)**</center> | <center>16011 MB</center> | <center>6</center> |
|<center>**[3.0](https://huggingface.co/Zoyd/CohereForAI_aya-23-35B-3_0bpw_exl2)**</center> | <center>18096 MB</center> | <center>6</center> |
|<center>**[3.5](https://huggingface.co/Zoyd/CohereForAI_aya-23-35B-3_5bpw_exl2)**</center> | <center>20178 MB</center> | <center>6</center> |
|<center>**[3.75](https://huggingface.co/Zoyd/CohereForAI_aya-23-35B-3_75bpw_exl2)**</center> | <center>21213 MB</center> | <center>6</center> |
|<center>**[4.0](https://huggingface.co/Zoyd/CohereForAI_aya-23-35B-4_0bpw_exl2)**</center> | <center>22266 MB</center> | <center>6</center> |
|<center>**[4.25](https://huggingface.co/Zoyd/CohereForAI_aya-23-35B-4_25bpw_exl2)**</center> | <center>23307 MB</center> | <center>6</center> |
|<center>**[5.0](https://huggingface.co/Zoyd/CohereForAI_aya-23-35B-5_0bpw_exl2)**</center> | <center>26431 MB</center> | <center>6</center> |
|<center>**[6.0](https://huggingface.co/Zoyd/CohereForAI_aya-23-35B-6_0bpw_exl2)**</center> | <center>31014 MB</center> | <center>8</center> |
|<center>**[6.5](https://huggingface.co/Zoyd/CohereForAI_aya-23-35B-6_5bpw_exl2)**</center> | <center>33112 MB</center> | <center>8</center> |
|<center>**[8.0](https://huggingface.co/Zoyd/CohereForAI_aya-23-35B-8_0bpw_exl2)**</center> | <center>37423 MB</center> | <center>8</center> |
# Model Card for Aya-23-35B
## Model Summary
Aya 23 is an open weights research release of an instruction fine-tuned model with highly advanced multilingual capabilities. Aya 23 focuses on pairing a highly performant pre-trained [Command family](https://huggingface.co/CohereForAI/c4ai-command-r-plus) of models with the recently released [Aya Collection](https://huggingface.co/datasets/CohereForAI/aya_collection). The result is a powerful multilingual large language model serving 23 languages.
This model card corresponds to the 35-billion version of the Aya 23 model. We also released an 8-billion version which you can find [here](https://huggingface.co/CohereForAI/aya-23-8B).
We cover 23 languages: Arabic, Chinese (simplified & traditional), Czech, Dutch, English, French, German, Greek, Hebrew, Hindi, Indonesian, Italian, Japanese, Korean, Persian, Polish, Portuguese, Romanian, Russian, Spanish, Turkish, Ukrainian, and Vietnamese
Developed by: [Cohere For AI](https://cohere.for.ai) and [Cohere](https://cohere.com/)
- Point of Contact: Cohere For AI: [cohere.for.ai](https://cohere.for.ai/)
- License: [CC-BY-NC](https://cohere.com/c4ai-cc-by-nc-license), requires also adhering to [C4AI's Acceptable Use Policy](https://docs.cohere.com/docs/c4ai-acceptable-use-policy)
- Model: aya-23-35B
- Model Size: 35 billion parameters
**Try Aya 23**
You can try out Aya 23 (35B) before downloading the weights in our hosted Hugging Face Space [here](https://huggingface.co/spaces/CohereForAI/aya-23).
### Usage
Please install transformers from the source repository that includes the necessary changes for this model.
```python
# pip install 'git+https://github.com/huggingface/transformers.git'
from transformers import AutoTokenizer, AutoModelForCausalLM
model_id = "CohereForAI/aya-23-35B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)
# Format message with the command-r-plus chat template
messages = [{"role": "user", "content": "Anneme onu ne kadar sevdiğimi anlatan bir mektup yaz"}]
input_ids = tokenizer.apply_chat_template(messages, tokenize=True, add_generation_prompt=True, return_tensors="pt")
## <BOS_TOKEN><|START_OF_TURN_TOKEN|><|USER_TOKEN|>Anneme onu ne kadar sevdiğimi anlatan bir mektup yaz<|END_OF_TURN_TOKEN|><|START_OF_TURN_TOKEN|><|CHATBOT_TOKEN|>
gen_tokens = model.generate(
input_ids,
max_new_tokens=100,
do_sample=True,
temperature=0.3,
)
gen_text = tokenizer.decode(gen_tokens[0])
print(gen_text)
```
### Example Notebook
[This notebook](https://huggingface.co/CohereForAI/aya-23-35B/blob/main/Aya_23_notebook.ipynb) showcases a detailed use of Aya 23 (8B) including inference and fine-tuning with [QLoRA](https://huggingface.co/blog/4bit-transformers-bitsandbytes).
## Model Details
**Input**: The model takes text as input only.
**Output**: The model generates text only.
**Model Architecture**: Aya-23-35B is an auto-regressive language model that uses an optimized transformer architecture. After pretraining, this model is fine-tuned (IFT) to follow human instructions.
**Languages covered**: The model is particularly optimized for multilinguality and supports the following languages: Arabic, Chinese (simplified & traditional), Czech, Dutch, English, French, German, Greek, Hebrew, Hindi, Indonesian, Italian, Japanese, Korean, Persian, Polish, Portuguese, Romanian, Russian, Spanish, Turkish, Ukrainian, and Vietnamese
**Context length**: 8192
### Evaluation
<img src="benchmarks.png" alt="multilingual benchmarks" width="650" style="margin-left:'auto' margin-right:'auto' display:'block'"/>
<img src="winrates.png" alt="average win rates" width="650" style="margin-left:'auto' margin-right:'auto' display:'block'"/>
Please refer to the [Aya 23 technical report](https://cohere.com/research/papers/aya-command-23-8b-and-35b-technical-report-2024-05-23) for further details about the base model, data, instruction tuning, and evaluation.
### Model Card Contact
For errors or additional questions about details in this model card, contact [email protected].
### Terms of Use
We hope that the release of this model will make community-based research efforts more accessible, by releasing the weights of a highly performant multilingual model to researchers all over the world. This model is governed by a [CC-BY-NC](https://cohere.com/c4ai-cc-by-nc-license) License with an acceptable use addendum, and also requires adhering to [C4AI's Acceptable Use Policy](https://docs.cohere.com/docs/c4ai-acceptable-use-policy).
### Try the model today
You can try Aya 23 in the Cohere [playground](https://dashboard.cohere.com/playground/chat). You can also use it in our dedicated Hugging Face Space [here](https://huggingface.co/spaces/CohereForAI/aya-23).
### Citation info
```bibtex
@misc{aya23technicalreport,
title={Aya 23: Open Weight Releases to Further Multilingual Progress},
author={Viraat Aryabumi, John Dang, Dwarak Talupuru, Saurabh Dash, David Cairuz, Hangyu Lin, Bharat Venkitesh, Madeline Smith, Kelly Marchisio, Sebastian Ruder, Acyr Locatelli, Julia Kreutzer, Nick Frosst, Phil Blunsom, Marzieh Fadaee, Ahmet Üstün, and Sara Hooker},
url={https://cohere.com/research/papers/aya-command-23-8b-and-35b-technical-report-2024-05-23},
year={2024}
}
``` |
Zoyd/CohereForAI_aya-23-35B-2_2bpw_exl2 | Zoyd | 2024-05-24T00:24:56Z | 6 | 0 | transformers | [
"transformers",
"safetensors",
"cohere",
"text-generation",
"conversational",
"en",
"fr",
"de",
"es",
"it",
"pt",
"ja",
"ko",
"zh",
"ar",
"el",
"fa",
"pl",
"id",
"cs",
"he",
"hi",
"nl",
"ro",
"ru",
"tr",
"uk",
"vi",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"exl2",
"region:us"
] | text-generation | 2024-05-23T18:28:13Z | ---
library_name: transformers
language:
- en
- fr
- de
- es
- it
- pt
- ja
- ko
- zh
- ar
- el
- fa
- pl
- id
- cs
- he
- hi
- nl
- ro
- ru
- tr
- uk
- vi
license: cc-by-nc-4.0
---
**Exllamav2** quant (**exl2** / **2.2 bpw**) made with ExLlamaV2 v0.0.21
Other EXL2 quants:
| **Quant** | **Model Size** | **lm_head** |
| ----- | ---------- | ------- |
|<center>**[2.2](https://huggingface.co/Zoyd/CohereForAI_aya-23-35B-2_2bpw_exl2)**</center> | <center>14760 MB</center> | <center>6</center> |
|<center>**[2.5](https://huggingface.co/Zoyd/CohereForAI_aya-23-35B-2_5bpw_exl2)**</center> | <center>16011 MB</center> | <center>6</center> |
|<center>**[3.0](https://huggingface.co/Zoyd/CohereForAI_aya-23-35B-3_0bpw_exl2)**</center> | <center>18096 MB</center> | <center>6</center> |
|<center>**[3.5](https://huggingface.co/Zoyd/CohereForAI_aya-23-35B-3_5bpw_exl2)**</center> | <center>20178 MB</center> | <center>6</center> |
|<center>**[3.75](https://huggingface.co/Zoyd/CohereForAI_aya-23-35B-3_75bpw_exl2)**</center> | <center>21213 MB</center> | <center>6</center> |
|<center>**[4.0](https://huggingface.co/Zoyd/CohereForAI_aya-23-35B-4_0bpw_exl2)**</center> | <center>22266 MB</center> | <center>6</center> |
|<center>**[4.25](https://huggingface.co/Zoyd/CohereForAI_aya-23-35B-4_25bpw_exl2)**</center> | <center>23307 MB</center> | <center>6</center> |
|<center>**[5.0](https://huggingface.co/Zoyd/CohereForAI_aya-23-35B-5_0bpw_exl2)**</center> | <center>26431 MB</center> | <center>6</center> |
|<center>**[6.0](https://huggingface.co/Zoyd/CohereForAI_aya-23-35B-6_0bpw_exl2)**</center> | <center>31014 MB</center> | <center>8</center> |
|<center>**[6.5](https://huggingface.co/Zoyd/CohereForAI_aya-23-35B-6_5bpw_exl2)**</center> | <center>33112 MB</center> | <center>8</center> |
|<center>**[8.0](https://huggingface.co/Zoyd/CohereForAI_aya-23-35B-8_0bpw_exl2)**</center> | <center>37423 MB</center> | <center>8</center> |
# Model Card for Aya-23-35B
## Model Summary
Aya 23 is an open weights research release of an instruction fine-tuned model with highly advanced multilingual capabilities. Aya 23 focuses on pairing a highly performant pre-trained [Command family](https://huggingface.co/CohereForAI/c4ai-command-r-plus) of models with the recently released [Aya Collection](https://huggingface.co/datasets/CohereForAI/aya_collection). The result is a powerful multilingual large language model serving 23 languages.
This model card corresponds to the 35-billion version of the Aya 23 model. We also released an 8-billion version which you can find [here](https://huggingface.co/CohereForAI/aya-23-8B).
We cover 23 languages: Arabic, Chinese (simplified & traditional), Czech, Dutch, English, French, German, Greek, Hebrew, Hindi, Indonesian, Italian, Japanese, Korean, Persian, Polish, Portuguese, Romanian, Russian, Spanish, Turkish, Ukrainian, and Vietnamese
Developed by: [Cohere For AI](https://cohere.for.ai) and [Cohere](https://cohere.com/)
- Point of Contact: Cohere For AI: [cohere.for.ai](https://cohere.for.ai/)
- License: [CC-BY-NC](https://cohere.com/c4ai-cc-by-nc-license), requires also adhering to [C4AI's Acceptable Use Policy](https://docs.cohere.com/docs/c4ai-acceptable-use-policy)
- Model: aya-23-35B
- Model Size: 35 billion parameters
**Try Aya 23**
You can try out Aya 23 (35B) before downloading the weights in our hosted Hugging Face Space [here](https://huggingface.co/spaces/CohereForAI/aya-23).
### Usage
Please install transformers from the source repository that includes the necessary changes for this model.
```python
# pip install 'git+https://github.com/huggingface/transformers.git'
from transformers import AutoTokenizer, AutoModelForCausalLM
model_id = "CohereForAI/aya-23-35B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)
# Format message with the command-r-plus chat template
messages = [{"role": "user", "content": "Anneme onu ne kadar sevdiğimi anlatan bir mektup yaz"}]
input_ids = tokenizer.apply_chat_template(messages, tokenize=True, add_generation_prompt=True, return_tensors="pt")
## <BOS_TOKEN><|START_OF_TURN_TOKEN|><|USER_TOKEN|>Anneme onu ne kadar sevdiğimi anlatan bir mektup yaz<|END_OF_TURN_TOKEN|><|START_OF_TURN_TOKEN|><|CHATBOT_TOKEN|>
gen_tokens = model.generate(
input_ids,
max_new_tokens=100,
do_sample=True,
temperature=0.3,
)
gen_text = tokenizer.decode(gen_tokens[0])
print(gen_text)
```
### Example Notebook
[This notebook](https://huggingface.co/CohereForAI/aya-23-35B/blob/main/Aya_23_notebook.ipynb) showcases a detailed use of Aya 23 (8B) including inference and fine-tuning with [QLoRA](https://huggingface.co/blog/4bit-transformers-bitsandbytes).
## Model Details
**Input**: The model takes text as input only.
**Output**: The model generates text only.
**Model Architecture**: Aya-23-35B is an auto-regressive language model that uses an optimized transformer architecture. After pretraining, this model is fine-tuned (IFT) to follow human instructions.
**Languages covered**: The model is particularly optimized for multilinguality and supports the following languages: Arabic, Chinese (simplified & traditional), Czech, Dutch, English, French, German, Greek, Hebrew, Hindi, Indonesian, Italian, Japanese, Korean, Persian, Polish, Portuguese, Romanian, Russian, Spanish, Turkish, Ukrainian, and Vietnamese
**Context length**: 8192
### Evaluation
<img src="benchmarks.png" alt="multilingual benchmarks" width="650" style="margin-left:'auto' margin-right:'auto' display:'block'"/>
<img src="winrates.png" alt="average win rates" width="650" style="margin-left:'auto' margin-right:'auto' display:'block'"/>
Please refer to the [Aya 23 technical report](https://cohere.com/research/papers/aya-command-23-8b-and-35b-technical-report-2024-05-23) for further details about the base model, data, instruction tuning, and evaluation.
### Model Card Contact
For errors or additional questions about details in this model card, contact [email protected].
### Terms of Use
We hope that the release of this model will make community-based research efforts more accessible, by releasing the weights of a highly performant multilingual model to researchers all over the world. This model is governed by a [CC-BY-NC](https://cohere.com/c4ai-cc-by-nc-license) License with an acceptable use addendum, and also requires adhering to [C4AI's Acceptable Use Policy](https://docs.cohere.com/docs/c4ai-acceptable-use-policy).
### Try the model today
You can try Aya 23 in the Cohere [playground](https://dashboard.cohere.com/playground/chat). You can also use it in our dedicated Hugging Face Space [here](https://huggingface.co/spaces/CohereForAI/aya-23).
### Citation info
```bibtex
@misc{aya23technicalreport,
title={Aya 23: Open Weight Releases to Further Multilingual Progress},
author={Viraat Aryabumi, John Dang, Dwarak Talupuru, Saurabh Dash, David Cairuz, Hangyu Lin, Bharat Venkitesh, Madeline Smith, Kelly Marchisio, Sebastian Ruder, Acyr Locatelli, Julia Kreutzer, Nick Frosst, Phil Blunsom, Marzieh Fadaee, Ahmet Üstün, and Sara Hooker},
url={https://cohere.com/research/papers/aya-command-23-8b-and-35b-technical-report-2024-05-23},
year={2024}
}
``` |
T3Q-LLM/T3Q-LLM2-FP-v1.0 | T3Q-LLM | 2024-05-24T00:23:29Z | 45 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-07T23:58:37Z | ---
library_name: transformers
license: apache-2.0
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
hf-causal-experimental (pretrained=T3Q-LLM/T3Q-LLM2-FP-v1.0,use_accelerate=true,trust_remote_code=true), limit: None, provide_description: False, num_fewshot: 0, batch_size: 8
| Task |Version| Metric |Value | |Stderr|
|----------------|------:|--------|-----:|---|-----:|
|kobest_boolq | 0|acc |0.5976|± |0.0131|
| | |macro_f1|0.5224|± |0.0136|
|kobest_copa | 0|acc |0.8190|± |0.0122|
| | |macro_f1|0.8189|± |0.0122|
|kobest_hellaswag| 0|acc |0.5240|± |0.0224|
| | |acc_norm|0.5740|± |0.0221|
| | |macro_f1|0.5214|± |0.0224|
|kobest_sentineg | 0|acc |0.7809|± |0.0208|
| | |macro_f1|0.7786|± |0.0211|
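For reference, the configuration line above corresponds roughly to this lm-evaluation-harness invocation (a reconstruction from the stated parameters; the entry point varies across harness versions):
```sh
python main.py \
  --model hf-causal-experimental \
  --model_args pretrained=T3Q-LLM/T3Q-LLM2-FP-v1.0,use_accelerate=true,trust_remote_code=true \
  --tasks kobest_boolq,kobest_copa,kobest_hellaswag,kobest_sentineg \
  --num_fewshot 0 \
  --batch_size 8
```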
|
crazydevlegend/Qwen-Qwen1.5-0.5B-1716509910 | crazydevlegend | 2024-05-24T00:19:09Z | 147 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-24T00:18:30Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
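In the absence of an official snippet, a minimal sketch (assuming the repo holds a standard Qwen1.5 causal-LM checkpoint together with its tokenizer):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "crazydevlegend/Qwen-Qwen1.5-0.5B-1716509910"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

inputs = tokenizer("Hello, how are you?", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```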
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
hgnoi/svb7DgXPcZi51zzj | hgnoi | 2024-05-24T00:16:25Z | 133 | 0 | transformers | [
"transformers",
"safetensors",
"stablelm",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-24T00:14:43Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
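In the absence of an official snippet, a minimal sketch (assuming the repo holds a standard StableLM-style causal-LM checkpoint and tokenizer):
```python
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="hgnoi/svb7DgXPcZi51zzj",
    trust_remote_code=True,  # assumption: some stablelm checkpoints ship custom code
)
print(generator("Once upon a time", max_new_tokens=50)[0]["generated_text"])
```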
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
RichardErkhov/tokyotech-llm_-_Swallow-MS-7b-v0.1-gguf | RichardErkhov | 2024-05-24T00:12:52Z | 9 | 0 | null | [
"gguf",
"arxiv:2404.17733",
"endpoints_compatible",
"region:us"
] | null | 2024-05-23T21:51:16Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Swallow-MS-7b-v0.1 - GGUF
- Model creator: https://huggingface.co/tokyotech-llm/
- Original model: https://huggingface.co/tokyotech-llm/Swallow-MS-7b-v0.1/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [Swallow-MS-7b-v0.1.Q2_K.gguf](https://huggingface.co/RichardErkhov/tokyotech-llm_-_Swallow-MS-7b-v0.1-gguf/blob/main/Swallow-MS-7b-v0.1.Q2_K.gguf) | Q2_K | 2.58GB |
| [Swallow-MS-7b-v0.1.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/tokyotech-llm_-_Swallow-MS-7b-v0.1-gguf/blob/main/Swallow-MS-7b-v0.1.IQ3_XS.gguf) | IQ3_XS | 2.86GB |
| [Swallow-MS-7b-v0.1.IQ3_S.gguf](https://huggingface.co/RichardErkhov/tokyotech-llm_-_Swallow-MS-7b-v0.1-gguf/blob/main/Swallow-MS-7b-v0.1.IQ3_S.gguf) | IQ3_S | 3.02GB |
| [Swallow-MS-7b-v0.1.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/tokyotech-llm_-_Swallow-MS-7b-v0.1-gguf/blob/main/Swallow-MS-7b-v0.1.Q3_K_S.gguf) | Q3_K_S | 3.0GB |
| [Swallow-MS-7b-v0.1.IQ3_M.gguf](https://huggingface.co/RichardErkhov/tokyotech-llm_-_Swallow-MS-7b-v0.1-gguf/blob/main/Swallow-MS-7b-v0.1.IQ3_M.gguf) | IQ3_M | 3.11GB |
| [Swallow-MS-7b-v0.1.Q3_K.gguf](https://huggingface.co/RichardErkhov/tokyotech-llm_-_Swallow-MS-7b-v0.1-gguf/blob/main/Swallow-MS-7b-v0.1.Q3_K.gguf) | Q3_K | 3.33GB |
| [Swallow-MS-7b-v0.1.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/tokyotech-llm_-_Swallow-MS-7b-v0.1-gguf/blob/main/Swallow-MS-7b-v0.1.Q3_K_M.gguf) | Q3_K_M | 3.33GB |
| [Swallow-MS-7b-v0.1.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/tokyotech-llm_-_Swallow-MS-7b-v0.1-gguf/blob/main/Swallow-MS-7b-v0.1.Q3_K_L.gguf) | Q3_K_L | 3.61GB |
| [Swallow-MS-7b-v0.1.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/tokyotech-llm_-_Swallow-MS-7b-v0.1-gguf/blob/main/Swallow-MS-7b-v0.1.IQ4_XS.gguf) | IQ4_XS | 3.73GB |
| [Swallow-MS-7b-v0.1.Q4_0.gguf](https://huggingface.co/RichardErkhov/tokyotech-llm_-_Swallow-MS-7b-v0.1-gguf/blob/main/Swallow-MS-7b-v0.1.Q4_0.gguf) | Q4_0 | 3.88GB |
| [Swallow-MS-7b-v0.1.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/tokyotech-llm_-_Swallow-MS-7b-v0.1-gguf/blob/main/Swallow-MS-7b-v0.1.IQ4_NL.gguf) | IQ4_NL | 3.93GB |
| [Swallow-MS-7b-v0.1.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/tokyotech-llm_-_Swallow-MS-7b-v0.1-gguf/blob/main/Swallow-MS-7b-v0.1.Q4_K_S.gguf) | Q4_K_S | 3.91GB |
| [Swallow-MS-7b-v0.1.Q4_K.gguf](https://huggingface.co/RichardErkhov/tokyotech-llm_-_Swallow-MS-7b-v0.1-gguf/blob/main/Swallow-MS-7b-v0.1.Q4_K.gguf) | Q4_K | 4.13GB |
| [Swallow-MS-7b-v0.1.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/tokyotech-llm_-_Swallow-MS-7b-v0.1-gguf/blob/main/Swallow-MS-7b-v0.1.Q4_K_M.gguf) | Q4_K_M | 4.13GB |
| [Swallow-MS-7b-v0.1.Q4_1.gguf](https://huggingface.co/RichardErkhov/tokyotech-llm_-_Swallow-MS-7b-v0.1-gguf/blob/main/Swallow-MS-7b-v0.1.Q4_1.gguf) | Q4_1 | 4.3GB |
| [Swallow-MS-7b-v0.1.Q5_0.gguf](https://huggingface.co/RichardErkhov/tokyotech-llm_-_Swallow-MS-7b-v0.1-gguf/blob/main/Swallow-MS-7b-v0.1.Q5_0.gguf) | Q5_0 | 4.72GB |
| [Swallow-MS-7b-v0.1.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/tokyotech-llm_-_Swallow-MS-7b-v0.1-gguf/blob/main/Swallow-MS-7b-v0.1.Q5_K_S.gguf) | Q5_K_S | 4.72GB |
| [Swallow-MS-7b-v0.1.Q5_K.gguf](https://huggingface.co/RichardErkhov/tokyotech-llm_-_Swallow-MS-7b-v0.1-gguf/blob/main/Swallow-MS-7b-v0.1.Q5_K.gguf) | Q5_K | 4.84GB |
| [Swallow-MS-7b-v0.1.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/tokyotech-llm_-_Swallow-MS-7b-v0.1-gguf/blob/main/Swallow-MS-7b-v0.1.Q5_K_M.gguf) | Q5_K_M | 4.84GB |
| [Swallow-MS-7b-v0.1.Q5_1.gguf](https://huggingface.co/RichardErkhov/tokyotech-llm_-_Swallow-MS-7b-v0.1-gguf/blob/main/Swallow-MS-7b-v0.1.Q5_1.gguf) | Q5_1 | 5.13GB |
| [Swallow-MS-7b-v0.1.Q6_K.gguf](https://huggingface.co/RichardErkhov/tokyotech-llm_-_Swallow-MS-7b-v0.1-gguf/blob/main/Swallow-MS-7b-v0.1.Q6_K.gguf) | Q6_K | 5.6GB |
| [Swallow-MS-7b-v0.1.Q8_0.gguf](https://huggingface.co/RichardErkhov/tokyotech-llm_-_Swallow-MS-7b-v0.1-gguf/blob/main/Swallow-MS-7b-v0.1.Q8_0.gguf) | Q8_0 | 7.26GB |
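A typical way to run one of these files (a sketch assuming a llama.cpp build; binary and flag names occasionally change between releases):
```sh
# Fetch a single quant from this repo.
huggingface-cli download RichardErkhov/tokyotech-llm_-_Swallow-MS-7b-v0.1-gguf \
  Swallow-MS-7b-v0.1.Q4_K_M.gguf --local-dir .

# Run it with llama.cpp.
./main -m Swallow-MS-7b-v0.1.Q4_K_M.gguf -p "東京工業大学の主なキャンパスは、" -n 128
```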
Original model description:
---
language:
- en
- ja
library_name: transformers
pipeline_tag: text-generation
model_type: mistral
license: apache-2.0
---
# Swallow-MS-7b-v0.1
Our Swallow-MS-7b-v0.1 model has undergone continual pre-training from Mistral-7B-v0.1, primarily with the addition of Japanese language data.
# Model Release Updates
We are excited to share the release schedule for our latest models:
- **April 26, 2024**: Released the [Swallow-MS-7b-instruct-v0.1](https://huggingface.co/tokyotech-llm/Swallow-MS-7b-instruct-v0.1)
- **March 11, 2024**: Released the [Swallow-MS-7b-v0.1](https://huggingface.co/tokyotech-llm/Swallow-MS-7b-v0.1)

This repository provides large language models developed by [TokyoTech-LLM](https://tokyotech-llm.github.io/).
## Model Details
* **Model type**: Please refer to Mistral technical report for details on the model architecture.
* **Language(s)**: Japanese and English
* **Tokenizer**: This model employs a tokenizer that features a broadened vocabulary based on Japanese data. This allows for a more efficient representation of text using fewer tokens, leading to a notably faster inference process.
* **Contact**: swallow[at]nlp.c.titech.ac.jp
## Base Model Performance
### Japanese tasks
|Model|Size|JCommonsenseQA|JEMHopQA|NIILC|JSQuAD|XL-Sum|MGSM|WMT20-en-ja|WMT20-ja-en|Average|
|---------------------------|-------|---------|-------|-------|-------|------|------------|------------|------|-----|
| | |4-shot|4-shot|4-shot|4-shot|1-shot|4-shot|4-shot|4-shot||
| CyberAgentLM2-7B |7B| 0.2198 | 0.5047 | 0.5066 | 0.7799 | 0.0233 | 0.0600 | 0.2345 | 0.1499 | 0.3098 |
| Llama 2 |7B| 0.3852 | 0.4240 | 0.3410 | 0.7917 | 0.1905 | 0.0760 | 0.1783 | 0.1738 | 0.3201 |
| japanese-stablelm-base-beta-7b|7B| 0.3610 | 0.4478 | 0.4432 | 0.8318 | 0.2195 | 0.0720 | 0.1946 | 0.1226 | 0.3366 |
| japanese-stablelm-base-ja_vocab-beta-7b|7B| 0.2172 | 0.4482 | 0.4309 | 0.8202 | 0.0757 | 0.0520 | 0.1601 | 0.1453 | 0.2937 |
| ELYZA-japanese-Llama-2-7b|7B| 0.5791 | 0.4703 | 0.4019 | 0.8226 | 0.1312 | 0.0600 | 0.1795 | 0.1289 | 0.3467 |
| ELYZA-japanese-Llama-2-7b-fast|7B| 0.5308 | 0.4330 | 0.3898 | 0.8131 | 0.1289 | 0.0720 | 0.1678 | 0.1143 | 0.3312 |
| youri-7b (base) |7B| 0.4620 | 0.4776 | 0.4999 | 0.8506 | 0.1957 | 0.0640 | 0.2671 | **0.1971** | 0.3768 |
| Swallow-7b |7B| 0.4808 | 0.5078 | 0.5968 | 0.8573 | 0.1830 | 0.1240 | 0.2510 | 0.1511 | 0.3940 |
| Swallow-7b-plus |7B| 0.5478 | **0.5493** | **0.6030** | 0.8544 | 0.1806 | 0.1360 | 0.2568 | 0.1441 | 0.4090 |
| Qwen-7B |7B| 0.7712 | 0.4234 | 0.2376 | 0.8594 | 0.1371 | 0.2160 | 0.1689 | 0.1801 | 0.3742 |
| nekomata-7b |7B| 0.7417 | 0.4928 | 0.5022 | 0.8707 | 0.1676 | 0.1240 | **0.2673** | 0.1815 | 0.4185 |
| Mistral-7B-v0.1 |7B| 0.7301 | 0.4245 | 0.2722 | 0.8563 | 0.2006 | 0.1760 | 0.1405 | 0.1733 | 0.3717 |
| japanese-stablelm-base-gamma-7b|7B| 0.7364 | 0.4643 | 0.5568 | **0.8910** | **0.2293** | 0.1680 | 0.2390 | 0.1561 | 0.4301 |
| Swallow-MS-7b-v0.1 |7B| **0.8570** | 0.4915 | 0.5519 | 0.8802 | 0.1988 | **0.2240** | 0.2494 | 0.1667 | **0.4524** |
### English tasks
|Model|Size|OpenBookQA|TriviaQA|HellaSwag|SQuAD2.0|XWINO|GSM8K|Average|
|---|---|---|---|---|---|---|---|---|
| | |8-shot|8-shot|8-shot|8-shot|8-shot|8-shot||
| CyberAgentLM2-7B |7B| 0.2860 | 0.3496 | 0.5003 | 0.3510 | 0.8581 | 0.0705 | 0.4026 |
| Llama 2 |7B| 0.3580 | 0.6265 | 0.5860 | 0.3207 | 0.9049 | 0.1410 | 0.4895 |
| japanese-stablelm-base-beta-7b|7B| 0.3620 | 0.5903 | 0.5707 | 0.2992 | 0.8994 | 0.1198 | 0.4736 |
| japanese-stablelm-base-ja_vocab-beta-7b|7B| 0.3520 | 0.5549 | 0.5644 | 0.3079 | 0.8942 | 0.0538 | 0.4545 |
| ELYZA-japanese-Llama-2-7b|7B| 0.3400 | 0.5875 | 0.5595 | 0.2721 | 0.8989 | 0.1638 | 0.4703 |
| ELYZA-japanese-Llama-2-7b-fast|7B| 0.3280 | 0.5817 | 0.5530 | 0.2605 | 0.8989 | 0.1425 | 0.4608 |
| youri-7b (base) |7B| 0.3400 | 0.5257 | 0.5540 | 0.3297 | 0.8938 | 0.0963 | 0.4566 |
| Swallow-7b |7B| 0.3180 | 0.4836 | 0.5308 | 0.3125 | 0.8817 | 0.1130 | 0.4399 |
| Swallow-7b-plus |7B| 0.3280 | 0.4558 | 0.5259 | 0.3134 | 0.8929 | 0.1061 | 0.4370 |
| Qwen-7B |7B| 0.3640 | 0.5695 | 0.5787 | **0.3799** | 0.8933 | **0.4617** | 0.5412 |
| nekomata-7b |7B| 0.3340 | 0.4371 | 0.5340 | 0.2933 | 0.8766 | 0.1531 | 0.4380 |
| Mistral-7B-v0.1 |7B| **0.3660** | **0.7050** | **0.6264** | **0.3799** | **0.9157** | 0.3533 | **0.5577** |
| japanese-stablelm-base-gamma-7b|7B| 0.3240 | 0.5745 | 0.5739 | 0.3546 | 0.8976 | 0.1911 | 0.4860 |
| Swallow-MS-7b-v0.1 |7B| 0.3440 | 0.5976 | 0.5810 | 0.3364 | 0.9037 | 0.2623 | 0.5042 |
### Code generation tasks
|Model|Size|JHumanEval|HumanEval|
|---|---|---|---|
| | |pass@1|pass@1|
| CyberAgentLM2-7B |7B|0.0634|0.0756|
| Llama 2 |7B|0.1152|0.1378|
| japanese-stablelm-base-beta-7b|7B|0.1018|0.1280|
| japanese-stablelm-base-ja_vocab-beta-7b|7B|0.0896|0.1122|
| ELYZA-japanese-Llama-2-7b|7B|0.0287|0.0427|
| ELYZA-japanese-Llama-2-7b-fast|7B| 0.0000 |0.0037|
| youri-7b (base) |7B|0.0829|0.0982|
| Swallow-7b |7B|0.0183|0.0183|
| Swallow-7b-plus |7B| 0.0061|0.0037|
| Qwen-7B |7B|0.1701|0.1805|
| nekomata-7b |7B|0.0988|0.1402|
| Mistral-7B-v0.1 |7B|**0.2555**|**0.2933**|
| japanese-stablelm-base-gamma-7b|7B|0.1823|0.1915|
| Swallow-MS-7b-v0.1 |7B|0.2305|0.2768|
## Evaluation Benchmarks
### Japanese evaluation benchmarks
We used llm-jp-eval (v1.0.0) and the JP Language Model Evaluation Harness (commit #9b42d41). The details are as follows:
- Multiple-choice question answering (JCommonsenseQA [Kurihara+, 2022])
- Open-ended question answering (JEMHopQA [Ishii+, 2023])
- Open-ended question answering (NIILC [Sekine, 2003])
- Machine reading comprehension (JSQuAD [Kurihara+, 2022])
- Automatic summarization (XL-Sum [Hasan+, 2021])
- Machine translation (WMT2020 ja-en [Barrault+, 2020])
- Machine translation (WMT2020 en-ja [Barrault+, 2020])
- Mathematical reasoning (MGSM [Shi+, 2023])
### English evaluation benchmarks
We used the Language Model Evaluation Harness (v0.3.0). The details are as follows:
- Multiple-choice question answering (OpenBookQA [Mihaylov+, 2018])
- Open-ended question answering (TriviaQA [Joshi+, 2017])
- Machine reading comprehension (SQuAD 2.0 [Rajpurkar+, 2018])
- Commonsense reasoning (XWINO [Tikhonov & Ryabinin, 2021])
- Natural language inference (HellaSwag [Zellers+, 2019])
- Mathematical reasoning (GSM8k [Cobbe+, 2021])
### Code evaluation benchmarks
We utilized the Code Generation LM Evaluation Harness [Allal+, 2022] (commit #0261c52). The details are as follows:
- Code generation (HumanEval [Chen+, 2021])
- Code generation in Japanese (JHumanEval [Satoh+, 2024])
## Usage
First install additional dependencies in [requirements.txt](./requirements.txt):
```sh
pip install -r requirements.txt
```
### Use the base model
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
model_name = "tokyotech-llm/Swallow-MS-7b-v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.bfloat16, device_map="auto")
prompt = "東京工業大学の主なキャンパスは、"
input_ids = tokenizer.encode(
prompt,
add_special_tokens=False,
return_tensors="pt"
)
tokens = model.generate(
input_ids.to(device=model.device),
max_new_tokens=128,
temperature=0.99,
top_p=0.95,
do_sample=True,
)
out = tokenizer.decode(tokens[0], skip_special_tokens=True)
print(out)
```
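As a quick check of the tokenizer note in the model details, one can compare token counts against the base Mistral tokenizer (a minimal sketch; exact counts vary by text, and the model IDs are the ones referenced in this card):

```python
from transformers import AutoTokenizer

# Compare how many tokens each tokenizer needs for the same Japanese text.
swallow_tok = AutoTokenizer.from_pretrained("tokyotech-llm/Swallow-MS-7b-v0.1")
base_tok = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-v0.1")

text = "東京工業大学の主なキャンパスは、大岡山にあります。"
print("Swallow tokens:", len(swallow_tok.encode(text)))
print("Mistral tokens:", len(base_tok.encode(text)))
```

If the broadened Japanese vocabulary is doing its job, the first count should typically be noticeably smaller.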
## Training Datasets
### Continual Pre-Training
The following datasets were used for continual pre-training.
- [Algebraic Stack](https://huggingface.co/datasets/EleutherAI/proof-pile-2)
- [Japanese Wikipedia](https://dumps.wikimedia.org/other/cirrussearch)
- [RefinedWeb](https://huggingface.co/datasets/tiiuae/falcon-refinedweb)
- [Swallow Corpus](https://arxiv.org/abs/2404.17733)
- [The Pile](https://huggingface.co/datasets/EleutherAI/pile)
## Risks and Limitations
The models released here are still in the early stages of our research and development and have not been tuned to ensure outputs align with human intent and safety considerations.
## Acknowledgements
We thank Mistral AI for releasing Mistral 7B v0.1 under an open license for others to build on.
Our project is supported by the [ABCI Large-scale Language Model Building Support Program](https://abci.ai/en/link/llm_support_program.html) of the National Institute of Advanced Industrial Science and Technology.
## License
apache-2.0
## Authors
Here are the team members:
- From [Okazaki Laboratory](https://www.nlp.c.titech.ac.jp/index.en.html), the following members:
- [Naoaki Okazaki](https://www.chokkan.org/index.ja.html)
- [Sakae Mizuki](https://s-mizuki-nlp.github.io/)
- [Hiroki Iida](https://meshidenn.github.io/)
- [Mengsay Loem](https://loem-ms.github.io/)
- [Shota Hirai](https://huggingface.co/Kotemo428)
- [Kakeru Hattori](https://aya-se.vercel.app/)
- [Masanari Ohi](https://twitter.com/stjohn2007)
- From [YOKOTA Laboratory](https://www.rio.gsic.titech.ac.jp/en/index.html), the following members:
- [Rio Yokota](https://twitter.com/rioyokota)
- [Kazuki Fujii](https://twitter.com/okoge_kaz)
- [Taishi Nakamura](https://twitter.com/Setuna7777_2)
|
basmazouaoui/finetuned_camembert_for_job_offers | basmazouaoui | 2024-05-24T00:12:34Z | 108 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"camembert",
"token-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2024-05-23T20:30:18Z | ---
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: Finetuned-camem-ner
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Finetuned-camem-ner
This model was trained from scratch on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1080
- Precision: 0.8445
- Recall: 0.8740
- F1: 0.8590
- Accuracy: 0.9793
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 1.2864 | 0.09 | 100 | 1.2891 | 0.0295 | 0.1206 | 0.0474 | 0.8750 |
| 0.8284 | 0.17 | 200 | 0.5688 | 0.0376 | 0.1252 | 0.0579 | 0.8888 |
| 0.374 | 0.26 | 300 | 0.2753 | 0.1477 | 0.2320 | 0.1805 | 0.9366 |
| 0.2215 | 0.35 | 400 | 0.1742 | 0.3205 | 0.3816 | 0.3484 | 0.9584 |
| 0.1447 | 0.43 | 500 | 0.1271 | 0.6077 | 0.7105 | 0.6551 | 0.9735 |
| 0.1183 | 0.52 | 600 | 0.1067 | 0.7066 | 0.7857 | 0.7440 | 0.9773 |
| 0.108 | 0.61 | 700 | 0.0983 | 0.7236 | 0.8071 | 0.7631 | 0.9779 |
| 0.0978 | 0.69 | 800 | 0.0880 | 0.7678 | 0.8224 | 0.7942 | 0.9789 |
| 0.0897 | 0.78 | 900 | 0.0908 | 0.7970 | 0.8432 | 0.8195 | 0.9797 |
| 0.0799 | 0.87 | 1000 | 0.0883 | 0.8052 | 0.8587 | 0.8311 | 0.9799 |
| 0.0868 | 0.95 | 1100 | 0.0832 | 0.8073 | 0.8622 | 0.8338 | 0.9801 |
| 0.0749 | 1.04 | 1200 | 0.0832 | 0.8138 | 0.8651 | 0.8387 | 0.9800 |
| 0.0765 | 1.13 | 1300 | 0.0844 | 0.8139 | 0.8689 | 0.8405 | 0.9800 |
| 0.0712 | 1.21 | 1400 | 0.0835 | 0.8262 | 0.8636 | 0.8445 | 0.9800 |
| 0.0678 | 1.3 | 1500 | 0.0838 | 0.8228 | 0.8687 | 0.8451 | 0.9801 |
| 0.0699 | 1.39 | 1600 | 0.0850 | 0.8212 | 0.8714 | 0.8455 | 0.9800 |
| 0.0731 | 1.47 | 1700 | 0.0809 | 0.8272 | 0.8709 | 0.8485 | 0.9800 |
| 0.0704 | 1.56 | 1800 | 0.0818 | 0.8400 | 0.8697 | 0.8546 | 0.9803 |
| 0.0749 | 1.65 | 1900 | 0.0820 | 0.8330 | 0.8726 | 0.8523 | 0.9802 |
| 0.0723 | 1.73 | 2000 | 0.0814 | 0.8423 | 0.8709 | 0.8563 | 0.9802 |
| 0.0737 | 1.82 | 2100 | 0.0814 | 0.8312 | 0.8737 | 0.8519 | 0.9801 |
| 0.073 | 1.91 | 2200 | 0.0821 | 0.8347 | 0.8769 | 0.8553 | 0.9799 |
| 0.0617 | 1.99 | 2300 | 0.0830 | 0.8375 | 0.8760 | 0.8563 | 0.9801 |
| 0.0607 | 2.08 | 2400 | 0.0863 | 0.8295 | 0.8803 | 0.8541 | 0.9803 |
| 0.0578 | 2.17 | 2500 | 0.0849 | 0.8365 | 0.8797 | 0.8575 | 0.9803 |
| 0.0546 | 2.25 | 2600 | 0.0854 | 0.8376 | 0.8785 | 0.8576 | 0.9802 |
| 0.0634 | 2.34 | 2700 | 0.0832 | 0.8375 | 0.8764 | 0.8565 | 0.9801 |
| 0.058 | 2.43 | 2800 | 0.0852 | 0.8405 | 0.8748 | 0.8573 | 0.9802 |
| 0.0616 | 2.51 | 2900 | 0.0851 | 0.8378 | 0.8796 | 0.8582 | 0.9800 |
| 0.0585 | 2.6 | 3000 | 0.0845 | 0.8434 | 0.8785 | 0.8606 | 0.9800 |
| 0.0542 | 2.69 | 3100 | 0.0847 | 0.8471 | 0.8773 | 0.8619 | 0.9801 |
| 0.0617 | 2.77 | 3200 | 0.0869 | 0.8396 | 0.8765 | 0.8577 | 0.9799 |
| 0.0634 | 2.86 | 3300 | 0.0828 | 0.8338 | 0.8773 | 0.8550 | 0.9796 |
| 0.0593 | 2.95 | 3400 | 0.0855 | 0.8360 | 0.8789 | 0.8569 | 0.9798 |
| 0.0486 | 3.03 | 3500 | 0.0888 | 0.8439 | 0.8781 | 0.8606 | 0.9801 |
| 0.0549 | 3.12 | 3600 | 0.0886 | 0.8444 | 0.8793 | 0.8615 | 0.9798 |
| 0.0499 | 3.21 | 3700 | 0.0925 | 0.8462 | 0.8771 | 0.8613 | 0.9800 |
| 0.0484 | 3.29 | 3800 | 0.0913 | 0.8449 | 0.8773 | 0.8608 | 0.9798 |
| 0.049 | 3.38 | 3900 | 0.0927 | 0.8409 | 0.8774 | 0.8588 | 0.9796 |
| 0.05 | 3.47 | 4000 | 0.0900 | 0.8468 | 0.8780 | 0.8621 | 0.9800 |
| 0.0456 | 3.55 | 4100 | 0.0904 | 0.8464 | 0.8787 | 0.8623 | 0.9801 |
| 0.051 | 3.64 | 4200 | 0.0911 | 0.8411 | 0.8778 | 0.8591 | 0.9798 |
| 0.0507 | 3.73 | 4300 | 0.0921 | 0.8457 | 0.8768 | 0.8610 | 0.9797 |
| 0.0526 | 3.81 | 4400 | 0.0888 | 0.8453 | 0.8774 | 0.8610 | 0.9801 |
| 0.0494 | 3.9 | 4500 | 0.0892 | 0.8440 | 0.8785 | 0.8609 | 0.9800 |
| 0.0513 | 3.99 | 4600 | 0.0901 | 0.8392 | 0.8811 | 0.8597 | 0.9796 |
| 0.0479 | 4.07 | 4700 | 0.0914 | 0.8461 | 0.8781 | 0.8618 | 0.9798 |
| 0.0408 | 4.16 | 4800 | 0.0938 | 0.8518 | 0.8724 | 0.8620 | 0.9797 |
| 0.0446 | 4.25 | 4900 | 0.0926 | 0.8475 | 0.8766 | 0.8618 | 0.9797 |
| 0.0425 | 4.33 | 5000 | 0.0927 | 0.8434 | 0.8762 | 0.8595 | 0.9795 |
| 0.0428 | 4.42 | 5100 | 0.0966 | 0.8473 | 0.8788 | 0.8628 | 0.9799 |
| 0.045 | 4.51 | 5200 | 0.0941 | 0.8428 | 0.8787 | 0.8604 | 0.9795 |
| 0.0472 | 4.59 | 5300 | 0.0894 | 0.8436 | 0.8757 | 0.8593 | 0.9794 |
| 0.0436 | 4.68 | 5400 | 0.0961 | 0.8464 | 0.8755 | 0.8607 | 0.9800 |
| 0.0466 | 4.77 | 5500 | 0.0947 | 0.8451 | 0.8767 | 0.8606 | 0.9797 |
| 0.0438 | 4.85 | 5600 | 0.0951 | 0.8398 | 0.8779 | 0.8584 | 0.9795 |
| 0.0444 | 4.94 | 5700 | 0.0965 | 0.8431 | 0.8767 | 0.8596 | 0.9797 |
| 0.0444 | 5.03 | 5800 | 0.0929 | 0.8421 | 0.8780 | 0.8597 | 0.9798 |
| 0.0382 | 5.11 | 5900 | 0.0983 | 0.8460 | 0.8772 | 0.8613 | 0.9796 |
| 0.0388 | 5.2 | 6000 | 0.0979 | 0.8406 | 0.8806 | 0.8601 | 0.9797 |
| 0.0434 | 5.29 | 6100 | 0.0963 | 0.8463 | 0.8783 | 0.8620 | 0.9795 |
| 0.038 | 5.37 | 6200 | 0.0977 | 0.8457 | 0.8774 | 0.8612 | 0.9795 |
| 0.0406 | 5.46 | 6300 | 0.0970 | 0.8454 | 0.8780 | 0.8614 | 0.9796 |
| 0.0415 | 5.55 | 6400 | 0.0971 | 0.8442 | 0.8769 | 0.8602 | 0.9795 |
| 0.037 | 5.63 | 6500 | 0.1001 | 0.8448 | 0.8771 | 0.8607 | 0.9794 |
| 0.0375 | 5.72 | 6600 | 0.1000 | 0.8448 | 0.8744 | 0.8593 | 0.9794 |
| 0.0414 | 5.81 | 6700 | 0.0955 | 0.8478 | 0.8745 | 0.8609 | 0.9794 |
| 0.0422 | 5.89 | 6800 | 0.0966 | 0.8482 | 0.8746 | 0.8612 | 0.9794 |
| 0.04 | 5.98 | 6900 | 0.0995 | 0.8410 | 0.8776 | 0.8589 | 0.9795 |
| 0.0367 | 6.07 | 7000 | 0.1008 | 0.8460 | 0.8757 | 0.8606 | 0.9795 |
| 0.0385 | 6.15 | 7100 | 0.1025 | 0.8428 | 0.8766 | 0.8593 | 0.9793 |
| 0.039 | 6.24 | 7200 | 0.1003 | 0.8424 | 0.8766 | 0.8592 | 0.9794 |
| 0.0344 | 6.33 | 7300 | 0.1047 | 0.8421 | 0.8784 | 0.8599 | 0.9794 |
| 0.0346 | 6.41 | 7400 | 0.1022 | 0.8419 | 0.8780 | 0.8596 | 0.9793 |
| 0.0379 | 6.5 | 7500 | 0.0978 | 0.8467 | 0.8772 | 0.8617 | 0.9797 |
| 0.0358 | 6.59 | 7600 | 0.1018 | 0.8446 | 0.8767 | 0.8603 | 0.9792 |
| 0.0363 | 6.67 | 7700 | 0.1001 | 0.8432 | 0.8768 | 0.8597 | 0.9792 |
| 0.0378 | 6.76 | 7800 | 0.1030 | 0.8456 | 0.8767 | 0.8609 | 0.9794 |
| 0.0403 | 6.85 | 7900 | 0.0971 | 0.8418 | 0.8761 | 0.8586 | 0.9793 |
| 0.0352 | 6.93 | 8000 | 0.1035 | 0.8456 | 0.8757 | 0.8604 | 0.9793 |
| 0.0332 | 7.02 | 8100 | 0.1021 | 0.8450 | 0.8755 | 0.8600 | 0.9792 |
| 0.0371 | 7.11 | 8200 | 0.1032 | 0.8478 | 0.8746 | 0.8610 | 0.9794 |
| 0.034 | 7.19 | 8300 | 0.1037 | 0.8467 | 0.8738 | 0.8600 | 0.9794 |
| 0.033 | 7.28 | 8400 | 0.1037 | 0.8457 | 0.8747 | 0.8599 | 0.9793 |
| 0.0329 | 7.37 | 8500 | 0.1048 | 0.8459 | 0.8751 | 0.8602 | 0.9791 |
| 0.0317 | 7.45 | 8600 | 0.1074 | 0.8441 | 0.8757 | 0.8596 | 0.9792 |
| 0.0319 | 7.54 | 8700 | 0.1056 | 0.8437 | 0.8753 | 0.8592 | 0.9792 |
| 0.0335 | 7.63 | 8800 | 0.1034 | 0.8446 | 0.8736 | 0.8589 | 0.9793 |
| 0.0346 | 7.71 | 8900 | 0.1069 | 0.8461 | 0.8735 | 0.8596 | 0.9792 |
| 0.0342 | 7.8 | 9000 | 0.1031 | 0.8427 | 0.8757 | 0.8589 | 0.9793 |
| 0.0371 | 7.89 | 9100 | 0.1024 | 0.8438 | 0.8747 | 0.8590 | 0.9793 |
| 0.0384 | 7.97 | 9200 | 0.1032 | 0.8472 | 0.8746 | 0.8607 | 0.9795 |
| 0.0308 | 8.06 | 9300 | 0.1070 | 0.8449 | 0.8753 | 0.8598 | 0.9793 |
| 0.0318 | 8.15 | 9400 | 0.1070 | 0.8459 | 0.8738 | 0.8596 | 0.9794 |
| 0.0285 | 8.23 | 9500 | 0.1077 | 0.8474 | 0.8751 | 0.8610 | 0.9794 |
| 0.0334 | 8.32 | 9600 | 0.1066 | 0.8443 | 0.8757 | 0.8598 | 0.9793 |
| 0.0332 | 8.41 | 9700 | 0.1055 | 0.8462 | 0.8747 | 0.8602 | 0.9793 |
| 0.0341 | 8.49 | 9800 | 0.1056 | 0.8442 | 0.8749 | 0.8593 | 0.9793 |
| 0.0304 | 8.58 | 9900 | 0.1066 | 0.8447 | 0.8729 | 0.8586 | 0.9792 |
| 0.0353 | 8.67 | 10000 | 0.1057 | 0.8446 | 0.8741 | 0.8591 | 0.9792 |
| 0.0348 | 8.75 | 10100 | 0.1051 | 0.8443 | 0.8736 | 0.8587 | 0.9792 |
| 0.0326 | 8.84 | 10200 | 0.1047 | 0.8443 | 0.8757 | 0.8597 | 0.9793 |
| 0.0332 | 8.93 | 10300 | 0.1044 | 0.8461 | 0.8732 | 0.8594 | 0.9793 |
| 0.0328 | 9.01 | 10400 | 0.1053 | 0.8438 | 0.8744 | 0.8588 | 0.9792 |
| 0.0318 | 9.1 | 10500 | 0.1072 | 0.8415 | 0.8746 | 0.8577 | 0.9793 |
| 0.0296 | 9.19 | 10600 | 0.1084 | 0.8431 | 0.8743 | 0.8584 | 0.9793 |
| 0.0324 | 9.27 | 10700 | 0.1074 | 0.8448 | 0.8746 | 0.8594 | 0.9794 |
| 0.0326 | 9.36 | 10800 | 0.1080 | 0.8439 | 0.8752 | 0.8593 | 0.9793 |
| 0.0288 | 9.45 | 10900 | 0.1084 | 0.8451 | 0.8739 | 0.8593 | 0.9794 |
| 0.0314 | 9.53 | 11000 | 0.1082 | 0.8450 | 0.8746 | 0.8596 | 0.9794 |
| 0.0292 | 9.62 | 11100 | 0.1084 | 0.8446 | 0.8740 | 0.8590 | 0.9794 |
| 0.0328 | 9.71 | 11200 | 0.1080 | 0.8447 | 0.8741 | 0.8591 | 0.9794 |
| 0.0313 | 9.79 | 11300 | 0.1080 | 0.8439 | 0.8747 | 0.8590 | 0.9794 |
| 0.0295 | 9.88 | 11400 | 0.1080 | 0.8445 | 0.8739 | 0.8589 | 0.9793 |
| 0.0316 | 9.97 | 11500 | 0.1080 | 0.8445 | 0.8740 | 0.8590 | 0.9793 |
### Framework versions
- Transformers 4.39.3
- Pytorch 2.1.2
- Datasets 2.18.0
- Tokenizers 0.15.2
|
Augusto777/vit-base-patch16-224-RU2-40 | Augusto777 | 2024-05-24T00:12:24Z | 219 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:google/vit-base-patch16-224",
"base_model:finetune:google/vit-base-patch16-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2024-05-23T23:21:01Z | ---
license: apache-2.0
base_model: google/vit-base-patch16-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: vit-base-patch16-224-RU2-40
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: validation
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.8333333333333334
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-patch16-224-RU2-40
This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2003
- Accuracy: 0.8333
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5.5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.05
- num_epochs: 40
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.3226 | 0.99 | 38 | 1.2293 | 0.6 |
| 0.9048 | 2.0 | 77 | 0.7969 | 0.7 |
| 0.4039 | 2.99 | 115 | 0.6800 | 0.7167 |
| 0.281 | 4.0 | 154 | 0.8892 | 0.7667 |
| 0.1755 | 4.99 | 192 | 0.9072 | 0.7333 |
| 0.1035 | 6.0 | 231 | 0.8036 | 0.8167 |
| 0.1275 | 6.99 | 269 | 0.8627 | 0.8 |
| 0.107 | 8.0 | 308 | 0.8713 | 0.8 |
| 0.0984 | 8.99 | 346 | 0.9660 | 0.8 |
| 0.0823 | 10.0 | 385 | 1.0704 | 0.7833 |
| 0.0771 | 10.99 | 423 | 0.9409 | 0.7667 |
| 0.0527 | 12.0 | 462 | 1.0052 | 0.7833 |
| 0.0708 | 12.99 | 500 | 0.9578 | 0.8 |
| 0.0562 | 14.0 | 539 | 1.0712 | 0.8167 |
| 0.0467 | 14.99 | 577 | 1.0586 | 0.8167 |
| 0.0445 | 16.0 | 616 | 1.2066 | 0.7667 |
| 0.0474 | 16.99 | 654 | 1.1863 | 0.75 |
| 0.0263 | 18.0 | 693 | 1.1207 | 0.8167 |
| 0.0307 | 18.99 | 731 | 1.1813 | 0.8167 |
| 0.0393 | 20.0 | 770 | 1.3761 | 0.75 |
| 0.0475 | 20.99 | 808 | 1.3008 | 0.7667 |
| 0.0215 | 22.0 | 847 | 1.2625 | 0.7333 |
| 0.0311 | 22.99 | 885 | 1.1508 | 0.8 |
| 0.027 | 24.0 | 924 | 1.3035 | 0.7667 |
| 0.0251 | 24.99 | 962 | 1.2270 | 0.7667 |
| 0.0161 | 26.0 | 1001 | 1.1470 | 0.8167 |
| 0.0258 | 26.99 | 1039 | 1.1473 | 0.8167 |
| 0.0142 | 28.0 | 1078 | 1.2326 | 0.7667 |
| 0.0151 | 28.99 | 1116 | 1.3978 | 0.7667 |
| 0.021 | 30.0 | 1155 | 1.2003 | 0.8333 |
| 0.0158 | 30.99 | 1193 | 1.2488 | 0.7667 |
| 0.0163 | 32.0 | 1232 | 1.3232 | 0.75 |
| 0.0143 | 32.99 | 1270 | 1.2467 | 0.8 |
| 0.02 | 34.0 | 1309 | 1.3176 | 0.7833 |
| 0.0128 | 34.99 | 1347 | 1.3083 | 0.7667 |
| 0.0144 | 36.0 | 1386 | 1.3080 | 0.7667 |
| 0.0109 | 36.99 | 1424 | 1.2999 | 0.8 |
| 0.0082 | 38.0 | 1463 | 1.2718 | 0.8 |
| 0.0064 | 38.99 | 1501 | 1.2588 | 0.7667 |
| 0.0097 | 39.48 | 1520 | 1.2597 | 0.7667 |
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.2+cu118
- Datasets 2.16.1
- Tokenizers 0.15.0
|
fine-tuned/SciFact-256-24-gpt-4o-2024-05-13-499715 | fine-tuned | 2024-05-24T00:07:29Z | 6 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"xlm-roberta",
"feature-extraction",
"sentence-similarity",
"mteb",
"Science",
"Research",
"Verification",
"Dataset",
"Claims",
"en",
"dataset:fine-tuned/SciFact-256-24-gpt-4o-2024-05-13-499715",
"dataset:allenai/c4",
"license:apache-2.0",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | feature-extraction | 2024-05-24T00:06:33Z | ---
license: apache-2.0
datasets:
- fine-tuned/SciFact-256-24-gpt-4o-2024-05-13-499715
- allenai/c4
language:
- en
pipeline_tag: feature-extraction
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- mteb
- Science
- Research
- Verification
- Dataset
- Claims
---
This model is a fine-tuned version of [**BAAI/bge-m3**](https://huggingface.co/BAAI/bge-m3) designed for the following use case:
scientific claim verification
## How to Use
This model can be easily integrated into your NLP pipeline for tasks such as text classification, sentiment analysis, entity recognition, and more. Here's a simple example to get you started:
```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.util import cos_sim
model = SentenceTransformer(
'fine-tuned/SciFact-256-24-gpt-4o-2024-05-13-499715',
trust_remote_code=True
)
embeddings = model.encode([
'first text to embed',
'second text to embed'
])
print(cos_sim(embeddings[0], embeddings[1]))
```
|
gonzalu/JuggerCon_v1 | gonzalu | 2024-05-24T00:03:17Z | 0 | 0 | null | [
"cosXL",
"Juggernaut",
"High Contrast",
"Experimental",
"region:us"
] | null | 2024-05-23T15:36:18Z | ---
tags:
- cosXL
- Juggernaut
- High Contrast
- Experimental
---
___
# JuggerCon v1
<a href="JuggerCon_v1_Poster.png"><img src="JuggerCon_v1_Poster.png" alt="poster" width="600"/></a>
## Merge of [Juggernaut-X](https://huggingface.co/RunDiffusion/Juggernaut-X-v10) and [cosXL](https://huggingface.co/stabilityai/cosxl)
This model is intended to produce a very specific high-contrast look. If you're into that sort of thing, this is a good model to use.
### Tested with the following settings:
- **Sampler:** *DPM++ 2M*
- **Scheduler:** *Karras*
- **Steps:** *30*
- **CFG:** *7*
- **Resolution:** *832x1216*
Feel free to experiment.
***
### Examples below have embedded workflow












|
ZaneHorrible/vitLarge-16-384-2e-4-batch_16_epoch_4_classes_24 | ZaneHorrible | 2024-05-24T00:02:33Z | 197 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:google/vit-large-patch16-384",
"base_model:finetune:google/vit-large-patch16-384",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2024-05-23T17:42:53Z | ---
license: apache-2.0
base_model: google/vit-large-patch16-384
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: vitLarge-16-384-2e-4-batch_16_epoch_4_classes_24
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9712643678160919
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vitLarge-16-384-2e-4-batch_16_epoch_4_classes_24
This model is a fine-tuned version of [google/vit-large-patch16-384](https://huggingface.co/google/vit-large-patch16-384) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1519
- Accuracy: 0.9713
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.7944 | 0.03 | 100 | 0.8483 | 0.7428 |
| 1.1261 | 0.07 | 200 | 1.0595 | 0.6911 |
| 0.7575 | 0.1 | 300 | 0.5007 | 0.8534 |
| 0.3567 | 0.14 | 400 | 0.5404 | 0.8391 |
| 0.4062 | 0.17 | 500 | 0.7795 | 0.7974 |
| 0.4227 | 0.21 | 600 | 0.3598 | 0.8851 |
| 0.3436 | 0.24 | 700 | 0.4550 | 0.8693 |
| 0.7695 | 0.28 | 800 | 0.5748 | 0.8247 |
| 0.2864 | 0.31 | 900 | 0.4017 | 0.8793 |
| 0.3718 | 0.35 | 1000 | 0.5384 | 0.8420 |
| 0.2764 | 0.38 | 1100 | 0.4682 | 0.8764 |
| 0.3438 | 0.42 | 1200 | 0.4194 | 0.8807 |
| 0.4031 | 0.45 | 1300 | 0.4105 | 0.8922 |
| 0.449 | 0.49 | 1400 | 0.4499 | 0.8678 |
| 0.2249 | 0.52 | 1500 | 0.2701 | 0.9066 |
| 0.2398 | 0.56 | 1600 | 0.4124 | 0.8807 |
| 0.5759 | 0.59 | 1700 | 0.8378 | 0.7960 |
| 0.1315 | 0.63 | 1800 | 0.4757 | 0.8779 |
| 0.4481 | 0.66 | 1900 | 0.3463 | 0.9037 |
| 0.2183 | 0.7 | 2000 | 0.4291 | 0.8779 |
| 0.2101 | 0.73 | 2100 | 0.3318 | 0.9109 |
| 1.0071 | 0.77 | 2200 | 2.9399 | 0.2098 |
| 0.3426 | 0.8 | 2300 | 0.4231 | 0.9023 |
| 0.1126 | 0.84 | 2400 | 0.3609 | 0.9124 |
| 0.3954 | 0.87 | 2500 | 0.4471 | 0.8994 |
| 0.2099 | 0.91 | 2600 | 0.3465 | 0.9052 |
| 0.1982 | 0.94 | 2700 | 0.4135 | 0.8994 |
| 0.1931 | 0.98 | 2800 | 0.3306 | 0.9095 |
| 0.1721 | 1.01 | 2900 | 0.3470 | 0.9195 |
| 0.1864 | 1.04 | 3000 | 0.3814 | 0.9124 |
| 0.0652 | 1.08 | 3100 | 0.2534 | 0.9296 |
| 0.1176 | 1.11 | 3200 | 0.2744 | 0.9210 |
| 0.0988 | 1.15 | 3300 | 0.2966 | 0.9325 |
| 0.0289 | 1.18 | 3400 | 0.2021 | 0.9555 |
| 0.1465 | 1.22 | 3500 | 0.1566 | 0.9583 |
| 0.2023 | 1.25 | 3600 | 0.2803 | 0.9353 |
| 0.1042 | 1.29 | 3700 | 0.2893 | 0.9282 |
| 0.1403 | 1.32 | 3800 | 0.3145 | 0.9239 |
| 0.0786 | 1.36 | 3900 | 0.3188 | 0.9267 |
| 0.2427 | 1.39 | 4000 | 0.6615 | 0.8693 |
| 0.3187 | 1.43 | 4100 | 0.3598 | 0.9195 |
| 0.0897 | 1.46 | 4200 | 0.2778 | 0.9425 |
| 0.068 | 1.5 | 4300 | 0.3445 | 0.9124 |
| 0.2165 | 1.53 | 4400 | 0.2351 | 0.9468 |
| 0.0807 | 1.57 | 4500 | 0.3111 | 0.9310 |
| 0.007 | 1.6 | 4600 | 0.2208 | 0.9483 |
| 0.0017 | 1.64 | 4700 | 0.1943 | 0.9411 |
| 0.081 | 1.67 | 4800 | 0.3503 | 0.9239 |
| 0.0285 | 1.71 | 4900 | 0.3109 | 0.9239 |
| 0.0495 | 1.74 | 5000 | 0.1233 | 0.9641 |
| 0.0201 | 1.78 | 5100 | 0.2508 | 0.9483 |
| 0.1186 | 1.81 | 5200 | 0.3854 | 0.9210 |
| 0.0283 | 1.85 | 5300 | 0.2336 | 0.9425 |
| 0.0569 | 1.88 | 5400 | 0.2872 | 0.9425 |
| 0.0498 | 1.92 | 5500 | 0.2462 | 0.9569 |
| 0.0101 | 1.95 | 5600 | 0.2256 | 0.9511 |
| 0.0474 | 1.99 | 5700 | 0.2201 | 0.9569 |
| 0.0008 | 2.02 | 5800 | 0.2079 | 0.9526 |
| 0.0 | 2.06 | 5900 | 0.1951 | 0.9583 |
| 0.0007 | 2.09 | 6000 | 0.1449 | 0.9626 |
| 0.003 | 2.12 | 6100 | 0.1411 | 0.9670 |
| 0.0028 | 2.16 | 6200 | 0.1889 | 0.9598 |
| 0.0018 | 2.19 | 6300 | 0.2356 | 0.9511 |
| 0.0087 | 2.23 | 6400 | 0.2185 | 0.9569 |
| 0.0169 | 2.26 | 6500 | 0.1898 | 0.9583 |
| 0.0003 | 2.3 | 6600 | 0.1879 | 0.9655 |
| 0.0008 | 2.33 | 6700 | 0.1331 | 0.9713 |
| 0.0001 | 2.37 | 6800 | 0.1537 | 0.9655 |
| 0.0002 | 2.4 | 6900 | 0.2148 | 0.9598 |
| 0.0079 | 2.44 | 7000 | 0.1258 | 0.9698 |
| 0.0004 | 2.47 | 7100 | 0.1557 | 0.9698 |
| 0.0 | 2.51 | 7200 | 0.1376 | 0.9698 |
| 0.0007 | 2.54 | 7300 | 0.1238 | 0.9713 |
| 0.0 | 2.58 | 7400 | 0.1433 | 0.9670 |
| 0.0023 | 2.61 | 7500 | 0.1537 | 0.9684 |
| 0.0004 | 2.65 | 7600 | 0.1302 | 0.9727 |
| 0.0002 | 2.68 | 7700 | 0.1557 | 0.9698 |
| 0.0013 | 2.72 | 7800 | 0.1614 | 0.9698 |
| 0.0 | 2.75 | 7900 | 0.1713 | 0.9670 |
| 0.0 | 2.79 | 8000 | 0.1458 | 0.9698 |
| 0.0 | 2.82 | 8100 | 0.1453 | 0.9698 |
| 0.0 | 2.86 | 8200 | 0.1527 | 0.9670 |
| 0.0 | 2.89 | 8300 | 0.1508 | 0.9698 |
| 0.0001 | 2.93 | 8400 | 0.1544 | 0.9713 |
| 0.0 | 2.96 | 8500 | 0.1506 | 0.9713 |
| 0.0 | 3.0 | 8600 | 0.1519 | 0.9713 |
### Framework versions
- Transformers 4.39.3
- Pytorch 2.1.2
- Datasets 2.18.0
- Tokenizers 0.15.2
|
sottoflavio/my-awesome-model | sottoflavio | 2024-05-24T00:01:19Z | 50 | 0 | transformers | [
"transformers",
"safetensors",
"pytorch_model_hub_mixin",
"model_hub_mixin",
"endpoints_compatible",
"region:us"
] | null | 2024-05-24T00:01:17Z | ---
tags:
- pytorch_model_hub_mixin
- model_hub_mixin
---
This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration:
- Library: [More Information Needed]
- Docs: [More Information Needed] |
TroyDoesAI/Mermaid-Contextual-Obedient-RAG-Phi-3-medium-128k-instruct-18B | TroyDoesAI | 2024-05-23T23:53:05Z | 10 | 2 | transformers | [
"transformers",
"safetensors",
"phi3",
"text-generation",
"conversational",
"custom_code",
"license:cc-by-nd-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-22T17:10:11Z | ---
license: cc-by-nd-4.0
---
|
hgnoi/nlZJHIjeip4xwIW1 | hgnoi | 2024-05-23T23:53:00Z | 133 | 0 | transformers | [
"transformers",
"safetensors",
"stablelm",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-23T23:51:24Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
wwe180/Llama3-15B-ShenNu-lora-v0.1 | wwe180 | 2024-05-23T23:47:32Z | 0 | 0 | null | [
"safetensors",
"lora",
"Llama3",
"base_model:wwe180/Llama3-15B-lingyang-v0.1",
"base_model:adapter:wwe180/Llama3-15B-lingyang-v0.1",
"region:us"
] | null | 2024-05-23T22:04:25Z | ---
tags:
- lora
- Llama3
base_model:
- wwe180/Llama3-15B-lingyang-v0.1
---
# This model is experimental, so results cannot be guaranteed.
# Llama3-15B-ShenNu-lora-v0.1
Llama3-15B-ShenNu-lora-v0.1 is a LoRA of [Llama3-15B-lingyang-v0.1](https://huggingface.co/wwe180/Llama3-15B-lingyang-v0.1).
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "wwe180/Llama3-15B-lingyang-v0.1"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
|
ulrica/Dermatology-LLaVA-lora-7b | ulrica | 2024-05-23T23:46:33Z | 11 | 0 | transformers | [
"transformers",
"safetensors",
"llava_llama",
"text-generation",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-23T22:19:29Z | ---
license: apache-2.0
---
|
RichardErkhov/shadowml_-_BeagSake-7B-gguf | RichardErkhov | 2024-05-23T23:46:31Z | 4 | 0 | null | [
"gguf",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-05-23T21:24:58Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
BeagSake-7B - GGUF
- Model creator: https://huggingface.co/shadowml/
- Original model: https://huggingface.co/shadowml/BeagSake-7B/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [BeagSake-7B.Q2_K.gguf](https://huggingface.co/RichardErkhov/shadowml_-_BeagSake-7B-gguf/blob/main/BeagSake-7B.Q2_K.gguf) | Q2_K | 2.53GB |
| [BeagSake-7B.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/shadowml_-_BeagSake-7B-gguf/blob/main/BeagSake-7B.IQ3_XS.gguf) | IQ3_XS | 2.81GB |
| [BeagSake-7B.IQ3_S.gguf](https://huggingface.co/RichardErkhov/shadowml_-_BeagSake-7B-gguf/blob/main/BeagSake-7B.IQ3_S.gguf) | IQ3_S | 2.96GB |
| [BeagSake-7B.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/shadowml_-_BeagSake-7B-gguf/blob/main/BeagSake-7B.Q3_K_S.gguf) | Q3_K_S | 2.95GB |
| [BeagSake-7B.IQ3_M.gguf](https://huggingface.co/RichardErkhov/shadowml_-_BeagSake-7B-gguf/blob/main/BeagSake-7B.IQ3_M.gguf) | IQ3_M | 3.06GB |
| [BeagSake-7B.Q3_K.gguf](https://huggingface.co/RichardErkhov/shadowml_-_BeagSake-7B-gguf/blob/main/BeagSake-7B.Q3_K.gguf) | Q3_K | 3.28GB |
| [BeagSake-7B.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/shadowml_-_BeagSake-7B-gguf/blob/main/BeagSake-7B.Q3_K_M.gguf) | Q3_K_M | 3.28GB |
| [BeagSake-7B.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/shadowml_-_BeagSake-7B-gguf/blob/main/BeagSake-7B.Q3_K_L.gguf) | Q3_K_L | 3.56GB |
| [BeagSake-7B.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/shadowml_-_BeagSake-7B-gguf/blob/main/BeagSake-7B.IQ4_XS.gguf) | IQ4_XS | 3.67GB |
| [BeagSake-7B.Q4_0.gguf](https://huggingface.co/RichardErkhov/shadowml_-_BeagSake-7B-gguf/blob/main/BeagSake-7B.Q4_0.gguf) | Q4_0 | 3.83GB |
| [BeagSake-7B.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/shadowml_-_BeagSake-7B-gguf/blob/main/BeagSake-7B.IQ4_NL.gguf) | IQ4_NL | 3.87GB |
| [BeagSake-7B.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/shadowml_-_BeagSake-7B-gguf/blob/main/BeagSake-7B.Q4_K_S.gguf) | Q4_K_S | 3.86GB |
| [BeagSake-7B.Q4_K.gguf](https://huggingface.co/RichardErkhov/shadowml_-_BeagSake-7B-gguf/blob/main/BeagSake-7B.Q4_K.gguf) | Q4_K | 4.07GB |
| [BeagSake-7B.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/shadowml_-_BeagSake-7B-gguf/blob/main/BeagSake-7B.Q4_K_M.gguf) | Q4_K_M | 4.07GB |
| [BeagSake-7B.Q4_1.gguf](https://huggingface.co/RichardErkhov/shadowml_-_BeagSake-7B-gguf/blob/main/BeagSake-7B.Q4_1.gguf) | Q4_1 | 4.24GB |
| [BeagSake-7B.Q5_0.gguf](https://huggingface.co/RichardErkhov/shadowml_-_BeagSake-7B-gguf/blob/main/BeagSake-7B.Q5_0.gguf) | Q5_0 | 4.65GB |
| [BeagSake-7B.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/shadowml_-_BeagSake-7B-gguf/blob/main/BeagSake-7B.Q5_K_S.gguf) | Q5_K_S | 4.65GB |
| [BeagSake-7B.Q5_K.gguf](https://huggingface.co/RichardErkhov/shadowml_-_BeagSake-7B-gguf/blob/main/BeagSake-7B.Q5_K.gguf) | Q5_K | 4.78GB |
| [BeagSake-7B.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/shadowml_-_BeagSake-7B-gguf/blob/main/BeagSake-7B.Q5_K_M.gguf) | Q5_K_M | 4.78GB |
| [BeagSake-7B.Q5_1.gguf](https://huggingface.co/RichardErkhov/shadowml_-_BeagSake-7B-gguf/blob/main/BeagSake-7B.Q5_1.gguf) | Q5_1 | 5.07GB |
| [BeagSake-7B.Q6_K.gguf](https://huggingface.co/RichardErkhov/shadowml_-_BeagSake-7B-gguf/blob/main/BeagSake-7B.Q6_K.gguf) | Q6_K | 5.53GB |
| [BeagSake-7B.Q8_0.gguf](https://huggingface.co/RichardErkhov/shadowml_-_BeagSake-7B-gguf/blob/main/BeagSake-7B.Q8_0.gguf) | Q8_0 | 7.17GB |
Original model description:
---
license: cc-by-nc-4.0
tags:
- merge
- mergekit
- lazymergekit
base_model:
- shadowml/BeagleSempra-7B
- shadowml/WestBeagle-7B
model-index:
- name: BeagSake-7B
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 72.44
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=shadowml/BeagSake-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 88.39
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=shadowml/BeagSake-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 65.23
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=shadowml/BeagSake-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 72.27
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=shadowml/BeagSake-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 82.16
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=shadowml/BeagSake-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 71.8
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=shadowml/BeagSake-7B
name: Open LLM Leaderboard
---
# BeagSake-7B
BeagSake-7B is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [shadowml/BeagleSempra-7B](https://huggingface.co/shadowml/BeagleSempra-7B)
* [shadowml/WestBeagle-7B](https://huggingface.co/shadowml/WestBeagle-7B)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: shadowml/BeagleSempra-7B
layer_range: [0, 32]
- model: shadowml/WestBeagle-7B
layer_range: [0, 32]
merge_method: slerp
base_model: shadowml/BeagleSempra-7B
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
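For context on `merge_method: slerp`: spherical linear interpolation blends each pair of corresponding weight tensors along an arc rather than a straight line, with the `t` schedules above controlling the blend per filter and layer depth. A minimal sketch of the underlying operation (illustrative only; mergekit's actual implementation differs in its details):

```python
import torch

def slerp(t: float, a: torch.Tensor, b: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Spherical linear interpolation between two weight tensors of the same shape."""
    a_flat, b_flat = a.flatten(), b.flatten()
    a_n = a_flat / (a_flat.norm() + eps)
    b_n = b_flat / (b_flat.norm() + eps)
    # Angle between the two normalized weight vectors.
    omega = torch.arccos(torch.clamp(torch.dot(a_n, b_n), -1.0, 1.0))
    so = torch.sin(omega)
    if so.abs() < eps:  # nearly parallel: fall back to plain linear interpolation
        return (1.0 - t) * a + t * b
    out = (torch.sin((1.0 - t) * omega) / so) * a_flat + (torch.sin(t * omega) / so) * b_flat
    return out.reshape(a.shape)
```

With `t = 0` the result is the first model's tensor and with `t = 1` the second's; the per-filter value lists in the config vary `t` across layers.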
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "shadowml/BeagSake-7B"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_shadowml__BeagSake-7B)
| Metric |Value|
|---------------------------------|----:|
|Avg. |75.38|
|AI2 Reasoning Challenge (25-Shot)|72.44|
|HellaSwag (10-Shot) |88.39|
|MMLU (5-Shot) |65.23|
|TruthfulQA (0-shot) |72.27|
|Winogrande (5-shot) |82.16|
|GSM8k (5-shot) |71.80|
|
UtkuCicek/utku_marks | UtkuCicek | 2024-05-23T23:32:38Z | 0 | 0 | diffusers | [
"diffusers",
"tensorboard",
"safetensors",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"text-to-image",
"diffusers-training",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:finetune:stabilityai/stable-diffusion-xl-base-1.0",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] | text-to-image | 2024-05-23T19:17:14Z | ---
license: creativeml-openrail-m
library_name: diffusers
tags:
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- text-to-image
- diffusers-training
- diffusers
base_model: stabilityai/stable-diffusion-xl-base-1.0
inference: true
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# Text-to-image finetuning - UtkuCicek/utku_marks
This pipeline was finetuned from **stabilityai/stable-diffusion-xl-base-1.0** on the **UtkuCicek/new-marks-data** dataset. Below are some example images generated with the finetuned pipeline using the following prompt: italian style mini pizza with mozerrella on the side:




Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
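A minimal sketch using the standard `diffusers` SDXL pipeline (the checkpoint ID is this repository, the prompt is the card's example prompt, and the dtype/device settings are illustrative):

```python
import torch
from diffusers import StableDiffusionXLPipeline

# Load the fine-tuned pipeline from this repository.
pipe = StableDiffusionXLPipeline.from_pretrained(
    "UtkuCicek/utku_marks", torch_dtype=torch.float16
).to("cuda")

prompt = "italian style mini pizza with mozerrella on the side"
image = pipe(prompt).images[0]
image.save("pizza.png")
```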
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model] |
RichardErkhov/lucyknada_-_microsoft_WizardLM-2-7B-gguf | RichardErkhov | 2024-05-23T23:23:24Z | 170 | 1 | null | [
"gguf",
"arxiv:2304.12244",
"arxiv:2306.08568",
"arxiv:2308.09583",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-05-23T20:53:40Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
microsoft_WizardLM-2-7B - GGUF
- Model creator: https://huggingface.co/lucyknada/
- Original model: https://huggingface.co/lucyknada/microsoft_WizardLM-2-7B/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [microsoft_WizardLM-2-7B.Q2_K.gguf](https://huggingface.co/RichardErkhov/lucyknada_-_microsoft_WizardLM-2-7B-gguf/blob/main/microsoft_WizardLM-2-7B.Q2_K.gguf) | Q2_K | 2.53GB |
| [microsoft_WizardLM-2-7B.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/lucyknada_-_microsoft_WizardLM-2-7B-gguf/blob/main/microsoft_WizardLM-2-7B.IQ3_XS.gguf) | IQ3_XS | 2.81GB |
| [microsoft_WizardLM-2-7B.IQ3_S.gguf](https://huggingface.co/RichardErkhov/lucyknada_-_microsoft_WizardLM-2-7B-gguf/blob/main/microsoft_WizardLM-2-7B.IQ3_S.gguf) | IQ3_S | 2.96GB |
| [microsoft_WizardLM-2-7B.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/lucyknada_-_microsoft_WizardLM-2-7B-gguf/blob/main/microsoft_WizardLM-2-7B.Q3_K_S.gguf) | Q3_K_S | 2.95GB |
| [microsoft_WizardLM-2-7B.IQ3_M.gguf](https://huggingface.co/RichardErkhov/lucyknada_-_microsoft_WizardLM-2-7B-gguf/blob/main/microsoft_WizardLM-2-7B.IQ3_M.gguf) | IQ3_M | 3.06GB |
| [microsoft_WizardLM-2-7B.Q3_K.gguf](https://huggingface.co/RichardErkhov/lucyknada_-_microsoft_WizardLM-2-7B-gguf/blob/main/microsoft_WizardLM-2-7B.Q3_K.gguf) | Q3_K | 3.28GB |
| [microsoft_WizardLM-2-7B.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/lucyknada_-_microsoft_WizardLM-2-7B-gguf/blob/main/microsoft_WizardLM-2-7B.Q3_K_M.gguf) | Q3_K_M | 3.28GB |
| [microsoft_WizardLM-2-7B.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/lucyknada_-_microsoft_WizardLM-2-7B-gguf/blob/main/microsoft_WizardLM-2-7B.Q3_K_L.gguf) | Q3_K_L | 3.56GB |
| [microsoft_WizardLM-2-7B.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/lucyknada_-_microsoft_WizardLM-2-7B-gguf/blob/main/microsoft_WizardLM-2-7B.IQ4_XS.gguf) | IQ4_XS | 3.67GB |
| [microsoft_WizardLM-2-7B.Q4_0.gguf](https://huggingface.co/RichardErkhov/lucyknada_-_microsoft_WizardLM-2-7B-gguf/blob/main/microsoft_WizardLM-2-7B.Q4_0.gguf) | Q4_0 | 3.83GB |
| [microsoft_WizardLM-2-7B.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/lucyknada_-_microsoft_WizardLM-2-7B-gguf/blob/main/microsoft_WizardLM-2-7B.IQ4_NL.gguf) | IQ4_NL | 3.87GB |
| [microsoft_WizardLM-2-7B.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/lucyknada_-_microsoft_WizardLM-2-7B-gguf/blob/main/microsoft_WizardLM-2-7B.Q4_K_S.gguf) | Q4_K_S | 3.86GB |
| [microsoft_WizardLM-2-7B.Q4_K.gguf](https://huggingface.co/RichardErkhov/lucyknada_-_microsoft_WizardLM-2-7B-gguf/blob/main/microsoft_WizardLM-2-7B.Q4_K.gguf) | Q4_K | 4.07GB |
| [microsoft_WizardLM-2-7B.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/lucyknada_-_microsoft_WizardLM-2-7B-gguf/blob/main/microsoft_WizardLM-2-7B.Q4_K_M.gguf) | Q4_K_M | 4.07GB |
| [microsoft_WizardLM-2-7B.Q4_1.gguf](https://huggingface.co/RichardErkhov/lucyknada_-_microsoft_WizardLM-2-7B-gguf/blob/main/microsoft_WizardLM-2-7B.Q4_1.gguf) | Q4_1 | 4.24GB |
| [microsoft_WizardLM-2-7B.Q5_0.gguf](https://huggingface.co/RichardErkhov/lucyknada_-_microsoft_WizardLM-2-7B-gguf/blob/main/microsoft_WizardLM-2-7B.Q5_0.gguf) | Q5_0 | 4.65GB |
| [microsoft_WizardLM-2-7B.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/lucyknada_-_microsoft_WizardLM-2-7B-gguf/blob/main/microsoft_WizardLM-2-7B.Q5_K_S.gguf) | Q5_K_S | 4.65GB |
| [microsoft_WizardLM-2-7B.Q5_K.gguf](https://huggingface.co/RichardErkhov/lucyknada_-_microsoft_WizardLM-2-7B-gguf/blob/main/microsoft_WizardLM-2-7B.Q5_K.gguf) | Q5_K | 4.78GB |
| [microsoft_WizardLM-2-7B.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/lucyknada_-_microsoft_WizardLM-2-7B-gguf/blob/main/microsoft_WizardLM-2-7B.Q5_K_M.gguf) | Q5_K_M | 4.78GB |
| [microsoft_WizardLM-2-7B.Q5_1.gguf](https://huggingface.co/RichardErkhov/lucyknada_-_microsoft_WizardLM-2-7B-gguf/blob/main/microsoft_WizardLM-2-7B.Q5_1.gguf) | Q5_1 | 5.07GB |
| [microsoft_WizardLM-2-7B.Q6_K.gguf](https://huggingface.co/RichardErkhov/lucyknada_-_microsoft_WizardLM-2-7B-gguf/blob/main/microsoft_WizardLM-2-7B.Q6_K.gguf) | Q6_K | 5.53GB |
| [microsoft_WizardLM-2-7B.Q8_0.gguf](https://huggingface.co/RichardErkhov/lucyknada_-_microsoft_WizardLM-2-7B-gguf/blob/main/microsoft_WizardLM-2-7B.Q8_0.gguf) | Q8_0 | 7.17GB |
Original model description:
---
license: apache-2.0
---
<p style="font-size:20px;" align="center">
🏠 <a href="https://wizardlm.github.io/WizardLM2" target="_blank">WizardLM-2 Release Blog</a> </p>
<p align="center">
🤗 <a href="https://huggingface.co/collections/microsoft/wizardlm-2-661d403f71e6c8257dbd598a" target="_blank">HF Repo</a> •🐱 <a href="https://github.com/victorsungo/WizardLM/tree/main/WizardLM-2" target="_blank">Github Repo</a> • 🐦 <a href="https://twitter.com/WizardLM_AI" target="_blank">Twitter</a> • 📃 <a href="https://arxiv.org/abs/2304.12244" target="_blank">[WizardLM]</a> • 📃 <a href="https://arxiv.org/abs/2306.08568" target="_blank">[WizardCoder]</a> • 📃 <a href="https://arxiv.org/abs/2308.09583" target="_blank">[WizardMath]</a> <br>
</p>
<p align="center">
👋 Join our <a href="https://discord.gg/VZjjHtWrKs" target="_blank">Discord</a>
</p>
## News 🔥🔥🔥 [2024/04/15]
We introduce and open-source WizardLM-2, our next-generation state-of-the-art large language models,
which have improved performance on complex chat, multilingual, reasoning, and agent tasks.
The new family includes three cutting-edge models: WizardLM-2 8x22B, WizardLM-2 70B, and WizardLM-2 7B.
- WizardLM-2 8x22B is our most advanced model; it demonstrates highly competitive performance compared to leading proprietary models
and consistently outperforms all existing state-of-the-art open-source models.
- WizardLM-2 70B reaches top-tier reasoning capabilities and is the first choice at its size.
- WizardLM-2 7B is the fastest and achieves performance comparable to leading open-source models 10x its size.
For more details of WizardLM-2 please read our [release blog post](https://wizardlm.github.io/WizardLM2) and upcoming paper.
## Model Details
* **Model name**: WizardLM-2 7B
* **Developed by**: WizardLM@Microsoft AI
* **Base model**: [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1)
* **Parameters**: 7B
* **Language(s)**: Multilingual
* **Blog**: [Introducing WizardLM-2](https://wizardlm.github.io/WizardLM2)
* **Repository**: [https://github.com/nlpxucan/WizardLM](https://github.com/nlpxucan/WizardLM)
* **Paper**: WizardLM-2 (Upcoming)
* **License**: Apache2.0
## Model Capacities
**MT-Bench**
We also adopt the automatic MT-Bench evaluation framework based on GPT-4 proposed by lmsys to assess the performance of models.
The WizardLM-2 8x22B even demonstrates highly competitive performance compared to the most advanced proprietary models.
Meanwhile, WizardLM-2 7B and WizardLM-2 70B are the top-performing models among the other leading baselines at the 7B to 70B model scales.
<p align="center" width="100%">
<a ><img src="https://raw.githubusercontent.com/WizardLM/WizardLM2/main/static/images/mtbench.png" alt="MTBench" style="width: 96%; min-width: 300px; display: block; margin: auto;"></a>
</p>
**Human Preferences Evaluation**
We carefully collected a complex and challenging set of real-world instructions, which covers the main categories of human requests, such as writing, coding, math, reasoning, agent use, and multilingual tasks.
We report the win:loss rate, excluding ties:
- WizardLM-2 8x22B falls just slightly behind GPT-4-1106-preview, and is significantly stronger than Command R Plus and GPT4-0314.
- WizardLM-2 70B is better than GPT4-0613, Mistral-Large, and Qwen1.5-72B-Chat.
- WizardLM-2 7B is comparable with Qwen1.5-32B-Chat, and surpasses Qwen1.5-14B-Chat and Starling-LM-7B-beta.
<p align="center" width="100%">
<a ><img src="https://raw.githubusercontent.com/WizardLM/WizardLM2/main/static/images/winall.png" alt="Win" style="width: 96%; min-width: 300px; display: block; margin: auto;"></a>
</p>
## Method Overview
We built a **fully AI powered synthetic training system** to train WizardLM-2 models, please refer to our [blog](https://wizardlm.github.io/WizardLM2) for more details of this system.
<p align="center" width="100%">
<a ><img src="https://raw.githubusercontent.com/WizardLM/WizardLM2/main/static/images/exp_1.png" alt="Method" style="width: 96%; min-width: 300px; display: block; margin: auto;"></a>
</p>
## Usage
❗<b>Note for model system prompts usage:</b>
<b>WizardLM-2</b> adopts the prompt format from <b>Vicuna</b> and supports **multi-turn** conversation. The prompt should be as follows:
```
A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful,
detailed, and polite answers to the user's questions. USER: Hi ASSISTANT: Hello.</s>
USER: Who are you? ASSISTANT: I am WizardLM.</s>......
```
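A small helper can assemble this template programmatically (a sketch based directly on the format above; the example turns are illustrative):

```python
SYSTEM = (
    "A chat between a curious user and an artificial intelligence assistant. "
    "The assistant gives helpful, detailed, and polite answers to the user's questions."
)

def build_prompt(turns):
    """turns: list of (user_message, assistant_reply or None for the pending turn)."""
    prompt = SYSTEM
    for user_msg, assistant_msg in turns:
        prompt += f" USER: {user_msg} ASSISTANT:"
        if assistant_msg is not None:
            prompt += f" {assistant_msg}</s>"
    return prompt

# Ends with "ASSISTANT:" so the model generates the next reply.
print(build_prompt([("Hi", "Hello."), ("Who are you?", None)]))
```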
<b> Inference WizardLM-2 Demo Script</b>
We provide a WizardLM-2 inference demo [code](https://github.com/nlpxucan/WizardLM/tree/main/demo) on our github.
|
InferenceIllusionist/Llama3-ChatQA-1.5-8B-iMat-GGUF | InferenceIllusionist | 2024-05-23T23:14:01Z | 76 | 0 | null | [
"gguf",
"merge",
"llama3",
"iMat",
"license:llama3",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2024-05-21T21:50:32Z | ---
tags:
- merge
- gguf
- llama3
- iMat
license: llama3
---
<img src="https://i.imgur.com/P68dXux.png" width="400"/>
# Llama3-ChatQA-1.5-8B-iMat-GGUF
Quantized from fp16.
* Weighted quantizations were created using the fp16 GGUF and [groups_merged-enhancedV2-TurboMini.txt](https://github.com/ggerganov/llama.cpp/discussions/5263#discussioncomment-9432658) in 234 chunks with n_ctx=512
* This method of calculating the importance matrix showed improvements in some areas for Mistral 7B and Llama3 8B models; see the above post for details
* The enhancedv2-turbomini file appends snippets from turboderp's calibration data to the standard groups_merged.txt file
For a brief rundown of iMatrix quant performance, please see this [PR](https://github.com/ggerganov/llama.cpp/pull/5747)
<i>All quants are verified working prior to uploading to the repo for your safety and convenience.</i>
<b>Tip:</b> Pick a file size under your GPU's VRAM while still allowing some room for context for best speed. You may need to pad this further depending on if you are running image gen or TTS as well.
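For example, a mid-size quant can be loaded with llama-cpp-python (a sketch; the local file name is hypothetical, and `n_gpu_layers`/`n_ctx` should be tuned to your VRAM per the tip above):

```python
from llama_cpp import Llama

llm = Llama(
    model_path="Llama3-ChatQA-1.5-8B.Q4_K_M.gguf",  # hypothetical local path to a downloaded quant
    n_gpu_layers=-1,  # offload all layers to the GPU; lower this if VRAM runs out
    n_ctx=4096,       # context window; larger values need more memory
)
out = llm("What is retrieval-augmented generation?", max_tokens=128)
print(out["choices"][0]["text"])
```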
BFloat16 model card can be found [here](https://huggingface.co/nvidia/Llama3-ChatQA-1.5-8B) |
ND911/EclecticEuphoria_DPO_BMM-SDXL | ND911 | 2024-05-23T23:05:27Z | 0 | 1 | null | [
"region:us"
] | null | 2024-05-23T22:20:21Z |
# EclecticEuphoria_DPO_BMM
This is a beast combo of everything, no other way to describe it.
An All Purpose Model
* [Civitai Images](https://civitai.com/collections/1213202)
# Comfyui

# RuinedFoocus

# StableSwarm

# Super Simple Workflow

# Stable Cascade to SDXL with Prompt Enhancer Workflow
.png)
|
ZaneHorrible/ViTL_16_384_1e_4_batch_16_epoch_4_classes_24 | ZaneHorrible | 2024-05-23T23:04:27Z | 217 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:google/vit-large-patch16-384",
"base_model:finetune:google/vit-large-patch16-384",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2024-05-23T18:47:25Z | ---
license: apache-2.0
base_model: google/vit-large-patch16-384
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: Aradam_ViTL-16-384-2e-4-batch_16_epoch_4_classes_24
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9698275862068966
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Aradam_ViTL-16-384-2e-4-batch_16_epoch_4_classes_24
This model is a fine-tuned version of [google/vit-large-patch16-384](https://huggingface.co/google/vit-large-patch16-384) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1097
- Accuracy: 0.9698
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.1511 | 0.03 | 100 | 0.8900 | 0.7471 |
| 0.8497 | 0.07 | 200 | 0.8558 | 0.7687 |
| 0.6297 | 0.1 | 300 | 0.5995 | 0.8132 |
| 0.5735 | 0.14 | 400 | 0.4456 | 0.8649 |
| 0.307 | 0.17 | 500 | 0.4031 | 0.8851 |
| 0.3961 | 0.21 | 600 | 0.4865 | 0.8506 |
| 0.6511 | 0.24 | 700 | 0.5270 | 0.8491 |
| 0.4526 | 0.28 | 800 | 0.6105 | 0.8376 |
| 0.4071 | 0.31 | 900 | 0.3936 | 0.8937 |
| 0.2729 | 0.35 | 1000 | 0.3287 | 0.8994 |
| 0.4277 | 0.38 | 1100 | 0.5402 | 0.8621 |
| 0.2588 | 0.42 | 1200 | 0.3344 | 0.9023 |
| 0.3034 | 0.45 | 1300 | 0.3269 | 0.8922 |
| 0.2463 | 0.49 | 1400 | 0.4931 | 0.8563 |
| 0.1999 | 0.52 | 1500 | 0.3622 | 0.9037 |
| 0.1483 | 0.56 | 1600 | 0.3114 | 0.9066 |
| 0.1266 | 0.59 | 1700 | 0.3893 | 0.8894 |
| 0.1131 | 0.63 | 1800 | 0.2696 | 0.9267 |
| 0.4377 | 0.66 | 1900 | 0.2953 | 0.9224 |
| 0.1578 | 0.7 | 2000 | 0.3059 | 0.9109 |
| 0.1273 | 0.73 | 2100 | 0.2474 | 0.9267 |
| 0.077 | 0.77 | 2200 | 0.2231 | 0.9382 |
| 0.0855 | 0.8 | 2300 | 0.2795 | 0.9368 |
| 0.0756 | 0.84 | 2400 | 0.2858 | 0.9210 |
| 0.2635 | 0.87 | 2500 | 0.2563 | 0.9353 |
| 0.1622 | 0.91 | 2600 | 0.2727 | 0.9325 |
| 0.1941 | 0.94 | 2700 | 0.2450 | 0.9239 |
| 0.0144 | 0.98 | 2800 | 0.2113 | 0.9454 |
| 0.0617 | 1.01 | 2900 | 0.1612 | 0.9454 |
| 0.0188 | 1.04 | 3000 | 0.2029 | 0.9425 |
| 0.0731 | 1.08 | 3100 | 0.1762 | 0.9612 |
| 0.0846 | 1.11 | 3200 | 0.1612 | 0.9569 |
| 0.0586 | 1.15 | 3300 | 0.2737 | 0.9353 |
| 0.0258 | 1.18 | 3400 | 0.1310 | 0.9670 |
| 0.0665 | 1.22 | 3500 | 0.1515 | 0.9540 |
| 0.0143 | 1.25 | 3600 | 0.2254 | 0.9440 |
| 0.0842 | 1.29 | 3700 | 0.2393 | 0.9468 |
| 0.0019 | 1.32 | 3800 | 0.1660 | 0.9526 |
| 0.013 | 1.36 | 3900 | 0.1413 | 0.9684 |
| 0.0177 | 1.39 | 4000 | 0.1455 | 0.9641 |
| 0.0128 | 1.43 | 4100 | 0.1291 | 0.9641 |
| 0.0222 | 1.46 | 4200 | 0.1567 | 0.9526 |
| 0.0017 | 1.5 | 4300 | 0.1640 | 0.9569 |
| 0.0009 | 1.53 | 4400 | 0.1861 | 0.9612 |
| 0.0007 | 1.57 | 4500 | 0.1440 | 0.9713 |
| 0.0026 | 1.6 | 4600 | 0.0940 | 0.9784 |
| 0.0006 | 1.64 | 4700 | 0.1282 | 0.9655 |
| 0.0023 | 1.67 | 4800 | 0.1341 | 0.9698 |
| 0.0002 | 1.71 | 4900 | 0.1099 | 0.9727 |
| 0.0013 | 1.74 | 5000 | 0.0872 | 0.9756 |
| 0.0001 | 1.78 | 5100 | 0.0908 | 0.9784 |
| 0.0006 | 1.81 | 5200 | 0.1034 | 0.9727 |
| 0.0009 | 1.85 | 5300 | 0.0940 | 0.9727 |
| 0.0 | 1.88 | 5400 | 0.1236 | 0.9655 |
| 0.0003 | 1.92 | 5500 | 0.1180 | 0.9684 |
| 0.0001 | 1.95 | 5600 | 0.1091 | 0.9698 |
| 0.0001 | 1.99 | 5700 | 0.1097 | 0.9698 |
### Framework versions
- Transformers 4.39.3
- Pytorch 2.1.2
- Datasets 2.18.0
- Tokenizers 0.15.2
|
tsavage68/MedQA_L3_1000steps_1e7rate_03beta_CSFTDPO | tsavage68 | 2024-05-23T22:54:22Z | 6 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"trl",
"dpo",
"generated_from_trainer",
"conversational",
"base_model:tsavage68/MedQA_L3_1000steps_1e6rate_SFT",
"base_model:finetune:tsavage68/MedQA_L3_1000steps_1e6rate_SFT",
"license:llama3",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-20T07:31:23Z | ---
license: llama3
base_model: tsavage68/MedQA_L3_1000steps_1e6rate_SFT
tags:
- trl
- dpo
- generated_from_trainer
model-index:
- name: MedQA_L3_1000steps_1e7rate_03beta_CSFTDPO
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# MedQA_L3_1000steps_1e7rate_03beta_CSFTDPO
This model is a fine-tuned version of [tsavage68/MedQA_L3_1000steps_1e6rate_SFT](https://huggingface.co/tsavage68/MedQA_L3_1000steps_1e6rate_SFT) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6020
- Rewards/chosen: 0.7087
- Rewards/rejected: 0.4830
- Rewards/accuracies: 0.7341
- Rewards/margins: 0.2257
- Logps/rejected: -32.2447
- Logps/chosen: -28.9661
- Logits/rejected: -0.7358
- Logits/chosen: -0.7350
## Model description
More information needed
## Intended uses & limitations
More information needed
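For a quick test of the checkpoint, a minimal inference sketch follows; the prompt is a placeholder and should ideally be formatted with the Llama 3 chat template used during SFT/DPO training:

```python
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="tsavage68/MedQA_L3_1000steps_1e7rate_03beta_CSFTDPO",
)
print(generator("What is the first-line treatment for strep throat?", max_new_tokens=128)[0]["generated_text"])
```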
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-07
- train_batch_size: 2
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- training_steps: 1000
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen |
|:-------------:|:------:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:|
| 0.6925 | 0.0489 | 50 | 0.6930 | -0.0016 | -0.0023 | 0.5011 | 0.0007 | -33.8624 | -31.3338 | -0.7320 | -0.7314 |
| 0.6841 | 0.0977 | 100 | 0.6807 | 0.2459 | 0.2195 | 0.6549 | 0.0264 | -33.1233 | -30.5088 | -0.7330 | -0.7323 |
| 0.6562 | 0.1466 | 150 | 0.6641 | 0.3800 | 0.3137 | 0.6791 | 0.0663 | -32.8092 | -30.0619 | -0.7310 | -0.7303 |
| 0.6334 | 0.1954 | 200 | 0.6509 | 0.1334 | 0.0355 | 0.7165 | 0.0979 | -33.7366 | -30.8837 | -0.7311 | -0.7304 |
| 0.6544 | 0.2443 | 250 | 0.6415 | 0.2943 | 0.1754 | 0.7209 | 0.1189 | -33.2701 | -30.3474 | -0.7311 | -0.7303 |
| 0.6145 | 0.2931 | 300 | 0.6304 | 0.3548 | 0.2099 | 0.7385 | 0.1448 | -33.1550 | -30.1459 | -0.7317 | -0.7310 |
| 0.6171 | 0.3420 | 350 | 0.6223 | 0.4756 | 0.3093 | 0.7341 | 0.1663 | -32.8238 | -29.7432 | -0.7336 | -0.7328 |
| 0.5911 | 0.3908 | 400 | 0.6181 | 0.6387 | 0.4602 | 0.7121 | 0.1785 | -32.3208 | -29.1996 | -0.7334 | -0.7327 |
| 0.5942 | 0.4397 | 450 | 0.6129 | 0.6839 | 0.4904 | 0.7253 | 0.1935 | -32.2203 | -29.0489 | -0.7347 | -0.7339 |
| 0.6096 | 0.4885 | 500 | 0.6090 | 0.7785 | 0.5741 | 0.7297 | 0.2044 | -31.9411 | -28.7335 | -0.7351 | -0.7343 |
| 0.5671 | 0.5374 | 550 | 0.6068 | 0.7522 | 0.5395 | 0.7275 | 0.2127 | -32.0566 | -28.8212 | -0.7355 | -0.7347 |
| 0.6066 | 0.5862 | 600 | 0.6061 | 0.7215 | 0.5067 | 0.7209 | 0.2147 | -32.1657 | -28.9236 | -0.7356 | -0.7348 |
| 0.5816 | 0.6351 | 650 | 0.6046 | 0.6882 | 0.4692 | 0.7231 | 0.2191 | -32.2910 | -29.0344 | -0.7356 | -0.7348 |
| 0.5968 | 0.6839 | 700 | 0.6030 | 0.6956 | 0.4723 | 0.7451 | 0.2233 | -32.2804 | -29.0097 | -0.7352 | -0.7344 |
| 0.6132 | 0.7328 | 750 | 0.6042 | 0.7103 | 0.4891 | 0.7297 | 0.2212 | -32.2246 | -28.9608 | -0.7354 | -0.7346 |
| 0.6133 | 0.7816 | 800 | 0.6021 | 0.6956 | 0.4697 | 0.7407 | 0.2258 | -32.2890 | -29.0099 | -0.7358 | -0.7350 |
| 0.6397 | 0.8305 | 850 | 0.6029 | 0.7027 | 0.4791 | 0.7341 | 0.2236 | -32.2579 | -28.9862 | -0.7354 | -0.7346 |
| 0.6273 | 0.8793 | 900 | 0.6030 | 0.7126 | 0.4896 | 0.7341 | 0.2230 | -32.2229 | -28.9533 | -0.7356 | -0.7348 |
| 0.5996 | 0.9282 | 950 | 0.6019 | 0.7087 | 0.4830 | 0.7341 | 0.2257 | -32.2447 | -28.9661 | -0.7358 | -0.7350 |
| 0.5319 | 0.9770 | 1000 | 0.6020 | 0.7087 | 0.4830 | 0.7341 | 0.2257 | -32.2447 | -28.9661 | -0.7358 | -0.7350 |
### Framework versions
- Transformers 4.41.1
- Pytorch 2.0.0+cu117
- Datasets 2.19.1
- Tokenizers 0.19.1
|
legraphista/RoLlama2-7b-Chat-GGUF | legraphista | 2024-05-23T22:52:08Z | 27 | 1 | gguf | [
"gguf",
"quantized",
"GGUF",
"text-generation",
"ro",
"base_model:OpenLLM-Ro/RoLlama2-7b-Chat",
"base_model:quantized:OpenLLM-Ro/RoLlama2-7b-Chat",
"license:cc-by-nc-4.0",
"region:us"
] | text-generation | 2024-05-17T16:03:00Z | ---
language:
- ro
license: cc-by-nc-4.0
quantized_by: legraphista
pipeline_tag: text-generation
library_name: gguf
inference: false
base_model: OpenLLM-Ro/RoLlama2-7b-Chat
tags:
- quantized
- GGUF
---
# RoLlama2-7b-Chat-GGUF
- This is a GGUF quantized version of [OpenLLM-Ro/RoLlama2-7b-Chat](https://huggingface.co/OpenLLM-Ro/RoLlama2-7b-Chat) created using llama.cpp
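A minimal llama-cpp-python sketch for chatting with one of these quants; the GGUF filename is a placeholder for whichever file you download:

```python
from llama_cpp import Llama

llm = Llama(model_path="RoLlama2-7b-Chat.Q4_K_M.gguf", n_ctx=2048)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Salut! Ce poți face?"}]
)
print(out["choices"][0]["message"]["content"])
```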
isaaclee/duration_mistral_train_run5 | isaaclee | 2024-05-23T22:51:26Z | 0 | 0 | peft | [
"peft",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:mistralai/Mistral-7B-Instruct-v0.2",
"base_model:adapter:mistralai/Mistral-7B-Instruct-v0.2",
"license:apache-2.0",
"region:us"
] | null | 2024-05-23T18:17:05Z | ---
license: apache-2.0
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: mistralai/Mistral-7B-Instruct-v0.2
model-index:
- name: duration_mistral_train_run5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# duration_mistral_train_run5
This model is a fine-tuned version of [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) on an unspecified dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
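Since this repo holds a PEFT adapter, a minimal loading sketch on top of its base model would be:

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the base model, then attach the LoRA adapter from this repo.
base = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-Instruct-v0.2")
model = PeftModel.from_pretrained(base, "isaaclee/duration_mistral_train_run5")
tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-Instruct-v0.2")
```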
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 2
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 3
### Training results
### Framework versions
- PEFT 0.7.2.dev0
- Transformers 4.36.2
- Pytorch 2.1.2+cu121
- Datasets 2.16.1
- Tokenizers 0.15.1 |
legraphista/RoLlama2-7b-Instruct-GGUF | legraphista | 2024-05-23T22:51:20Z | 43 | 1 | gguf | [
"gguf",
"quantized",
"GGUF",
"text-generation",
"ro",
"base_model:OpenLLM-Ro/RoLlama2-7b-Instruct",
"base_model:quantized:OpenLLM-Ro/RoLlama2-7b-Instruct",
"license:cc-by-nc-4.0",
"region:us"
] | text-generation | 2024-05-17T15:29:12Z | ---
language:
- ro
license: cc-by-nc-4.0
quantized_by: legraphista
pipeline_tag: text-generation
library_name: gguf
inference: false
base_model: OpenLLM-Ro/RoLlama2-7b-Instruct
tags:
- quantized
- GGUF
---
# RoLlama2-7b-Instruct-GGUF
- This is a GGUF quantized version of [OpenLLM-Ro/RoLlama2-7b-Instruct](https://huggingface.co/OpenLLM-Ro/RoLlama2-7b-Instruct) created using llama.cpp
tsavage68/MedQA_L3_1000steps_1e8rate_03beta_CSFTDPO | tsavage68 | 2024-05-23T22:45:35Z | 6 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"trl",
"dpo",
"generated_from_trainer",
"conversational",
"base_model:tsavage68/MedQA_L3_1000steps_1e6rate_SFT",
"base_model:finetune:tsavage68/MedQA_L3_1000steps_1e6rate_SFT",
"license:llama3",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-20T04:02:02Z | ---
license: llama3
base_model: tsavage68/MedQA_L3_1000steps_1e6rate_SFT
tags:
- trl
- dpo
- generated_from_trainer
model-index:
- name: MedQA_L3_1000steps_1e8rate_03beta_CSFTDPO
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# MedQA_L3_1000steps_1e8rate_03beta_CSFTDPO
This model is a fine-tuned version of [tsavage68/MedQA_L3_1000steps_1e6rate_SFT](https://huggingface.co/tsavage68/MedQA_L3_1000steps_1e6rate_SFT) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6947
- Rewards/chosen: 0.0002
- Rewards/rejected: 0.0027
- Rewards/accuracies: 0.4615
- Rewards/margins: -0.0026
- Logps/rejected: -33.8457
- Logps/chosen: -31.3279
- Logits/rejected: -0.7320
- Logits/chosen: -0.7314
## Model description
More information needed
## Intended uses & limitations
More information needed
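A minimal inference sketch using the tokenizer's built-in Llama 3 chat template; the question is a placeholder:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "tsavage68/MedQA_L3_1000steps_1e8rate_03beta_CSFTDPO"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

messages = [{"role": "user", "content": "Name one contraindication for metformin."}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
output = model.generate(inputs, max_new_tokens=128)
# Decode only the newly generated tokens.
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```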
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-08
- train_batch_size: 2
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- training_steps: 1000
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen |
|:-------------:|:------:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:|
| 0.6937 | 0.0489 | 50 | 0.6939 | -0.0056 | -0.0047 | 0.4769 | -0.0009 | -33.8705 | -31.3473 | -0.7322 | -0.7315 |
| 0.6972 | 0.0977 | 100 | 0.6930 | -0.0029 | -0.0036 | 0.5055 | 0.0007 | -33.8668 | -31.3383 | -0.7322 | -0.7316 |
| 0.6918 | 0.1466 | 150 | 0.6933 | 0.0057 | 0.0055 | 0.4901 | 0.0002 | -33.8364 | -31.3096 | -0.7321 | -0.7314 |
| 0.6951 | 0.1954 | 200 | 0.6941 | -0.0012 | 0.0002 | 0.4769 | -0.0014 | -33.8541 | -31.3324 | -0.7320 | -0.7313 |
| 0.6926 | 0.2443 | 250 | 0.6930 | 0.0029 | 0.0022 | 0.4857 | 0.0006 | -33.8474 | -31.3190 | -0.7319 | -0.7312 |
| 0.6947 | 0.2931 | 300 | 0.6929 | -0.0006 | -0.0016 | 0.4967 | 0.0010 | -33.8603 | -31.3307 | -0.7323 | -0.7316 |
| 0.6987 | 0.3420 | 350 | 0.6939 | 0.0041 | 0.0052 | 0.5121 | -0.0010 | -33.8377 | -31.3148 | -0.7324 | -0.7317 |
| 0.695 | 0.3908 | 400 | 0.6929 | 0.0111 | 0.0101 | 0.4967 | 0.0010 | -33.8212 | -31.2917 | -0.7321 | -0.7315 |
| 0.6953 | 0.4397 | 450 | 0.6941 | 0.0051 | 0.0066 | 0.4857 | -0.0015 | -33.8330 | -31.3115 | -0.7327 | -0.7320 |
| 0.6939 | 0.4885 | 500 | 0.6947 | 0.0022 | 0.0048 | 0.4637 | -0.0027 | -33.8387 | -31.3213 | -0.7325 | -0.7318 |
| 0.6982 | 0.5374 | 550 | 0.6922 | 0.0071 | 0.0047 | 0.5121 | 0.0023 | -33.8391 | -31.3050 | -0.7325 | -0.7318 |
| 0.6835 | 0.5862 | 600 | 0.6939 | 0.0064 | 0.0074 | 0.4945 | -0.0010 | -33.8303 | -31.3073 | -0.7321 | -0.7314 |
| 0.6868 | 0.6351 | 650 | 0.6937 | -0.0034 | -0.0029 | 0.4989 | -0.0006 | -33.8644 | -31.3400 | -0.7323 | -0.7316 |
| 0.6882 | 0.6839 | 700 | 0.6939 | -0.0024 | -0.0013 | 0.4725 | -0.0011 | -33.8593 | -31.3366 | -0.7323 | -0.7317 |
| 0.6947 | 0.7328 | 750 | 0.6936 | 0.0031 | 0.0035 | 0.5077 | -0.0004 | -33.8431 | -31.3183 | -0.7321 | -0.7314 |
| 0.6968 | 0.7816 | 800 | 0.6947 | -0.0034 | -0.0007 | 0.4637 | -0.0027 | -33.8571 | -31.3399 | -0.7319 | -0.7313 |
| 0.6919 | 0.8305 | 850 | 0.6947 | 0.0001 | 0.0028 | 0.4593 | -0.0027 | -33.8456 | -31.3283 | -0.7320 | -0.7314 |
| 0.6962 | 0.8793 | 900 | 0.6947 | 0.0002 | 0.0027 | 0.4615 | -0.0026 | -33.8457 | -31.3279 | -0.7320 | -0.7314 |
| 0.6866 | 0.9282 | 950 | 0.6947 | 0.0002 | 0.0027 | 0.4615 | -0.0026 | -33.8457 | -31.3279 | -0.7320 | -0.7314 |
| 0.6919 | 0.9770 | 1000 | 0.6947 | 0.0002 | 0.0027 | 0.4615 | -0.0026 | -33.8457 | -31.3279 | -0.7320 | -0.7314 |
### Framework versions
- Transformers 4.41.1
- Pytorch 2.0.0+cu117
- Datasets 2.19.1
- Tokenizers 0.19.1
|
BotaniBrain/pubmed_tinyllama | BotaniBrain | 2024-05-23T22:44:44Z | 107 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-23T22:10:58Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
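Until the author fills this in, a minimal sketch assuming standard causal-LM usage with `transformers`; the prompt is a placeholder, since the intended prompt format is not documented here:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "BotaniBrain/pubmed_tinyllama"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

inputs = tokenizer("Recent advances in mRNA vaccines include", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```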
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
legraphista/aya-23-35B-GGUF | legraphista | 2024-05-23T22:44:21Z | 85 | 2 | null | [
"gguf",
"text-generation",
"en",
"fr",
"de",
"es",
"it",
"pt",
"ja",
"ko",
"zh",
"ar",
"el",
"fa",
"pl",
"id",
"cs",
"he",
"hi",
"nl",
"ro",
"ru",
"tr",
"uk",
"vi",
"base_model:CohereForAI/aya-23-35B",
"base_model:quantized:CohereForAI/aya-23-35B",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2024-05-23T16:46:51Z | ---
language:
- en
- fr
- de
- es
- it
- pt
- ja
- ko
- zh
- ar
- el
- fa
- pl
- id
- cs
- he
- hi
- nl
- ro
- ru
- tr
- uk
- vi
license: cc-by-nc-4.0
quantized_by: legraphista
pipeline_tag: text-generation
base_model: CohereForAI/aya-23-35B
---
# aya-23-35B-GGUF
- This is a GGUF quantized version of [CohereForAI/aya-23-35B](https://huggingface.co/CohereForAI/aya-23-35B) created using llama.cpp [74f33adf](https://github.com/ggerganov/llama.cpp/tree/74f33adf)
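A minimal llama-cpp-python sketch for running one of these quants; the GGUF filename is a placeholder for whichever file you download:

```python
from llama_cpp import Llama

llm = Llama(model_path="aya-23-35B.Q4_K_M.gguf", n_gpu_layers=-1, n_ctx=4096)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Résume l'intrigue du Petit Prince en deux phrases."}]
)
print(out["choices"][0]["message"]["content"])
```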
mlx-community/aya-23-35B-8bit | mlx-community | 2024-05-23T22:33:57Z | 63 | 2 | transformers | [
"transformers",
"safetensors",
"cohere",
"text-generation",
"mlx",
"conversational",
"en",
"fr",
"de",
"es",
"it",
"pt",
"ja",
"ko",
"zh",
"ar",
"el",
"fa",
"pl",
"id",
"cs",
"he",
"hi",
"nl",
"ro",
"ru",
"tr",
"uk",
"vi",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-23T14:46:46Z | ---
language:
- en
- fr
- de
- es
- it
- pt
- ja
- ko
- zh
- ar
- el
- fa
- pl
- id
- cs
- he
- hi
- nl
- ro
- ru
- tr
- uk
- vi
license: cc-by-nc-4.0
library_name: transformers
tags:
- mlx
---
# mlx-community/aya-23-35B-8bit
The Model [mlx-community/aya-23-35B-8bit](https://huggingface.co/mlx-community/aya-23-35B-8bit) was converted to MLX format from [CohereForAI/aya-23-35B](https://huggingface.co/CohereForAI/aya-23-35B) using mlx-lm version **0.13.1**.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate
model, tokenizer = load("mlx-community/aya-23-35B-8bit")
response = generate(model, tokenizer, prompt="hello", verbose=True)
```
|
tsavage68/MedQA_L3_1000steps_1e5rate_05beta_CSFTDPO | tsavage68 | 2024-05-23T22:27:59Z | 6 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"trl",
"dpo",
"generated_from_trainer",
"conversational",
"base_model:tsavage68/MedQA_L3_1000steps_1e6rate_SFT",
"base_model:finetune:tsavage68/MedQA_L3_1000steps_1e6rate_SFT",
"license:llama3",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-20T09:51:46Z | ---
license: llama3
base_model: tsavage68/MedQA_L3_1000steps_1e6rate_SFT
tags:
- trl
- dpo
- generated_from_trainer
model-index:
- name: MedQA_L3_1000steps_1e5rate_05beta_CSFTDPO
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# MedQA_L3_1000steps_1e5rate_05beta_CSFTDPO
This model is a fine-tuned version of [tsavage68/MedQA_L3_1000steps_1e6rate_SFT](https://huggingface.co/tsavage68/MedQA_L3_1000steps_1e6rate_SFT) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.7867
- Rewards/chosen: -10.2874
- Rewards/rejected: -9.4675
- Rewards/accuracies: 0.4330
- Rewards/margins: -0.8198
- Logps/rejected: -52.7899
- Logps/chosen: -51.9033
- Logits/rejected: -0.3129
- Logits/chosen: -0.3128
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 2
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- training_steps: 1000
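For illustration only, the hyperparameters above map roughly onto a TRL DPO setup like the sketch below. The dataset file, beta=0.5 (inferred from the model name), and the older TRL (~0.8) API where `beta` and `tokenizer` are passed directly to `DPOTrainer` are all assumptions, not taken from this card:

```python
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from trl import DPOTrainer

base = "tsavage68/MedQA_L3_1000steps_1e6rate_SFT"
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base)
ref_model = AutoModelForCausalLM.from_pretrained(base)  # frozen reference policy

# Placeholder data file; DPOTrainer expects "prompt", "chosen" and "rejected" columns.
train_dataset = load_dataset("json", data_files="medqa_pairs.json")["train"]

training_args = TrainingArguments(
    output_dir="dpo-out",
    learning_rate=1e-5,
    per_device_train_batch_size=2,
    gradient_accumulation_steps=2,
    lr_scheduler_type="cosine",
    warmup_steps=100,
    max_steps=1000,
)
trainer = DPOTrainer(
    model=model,
    ref_model=ref_model,
    args=training_args,
    beta=0.5,
    train_dataset=train_dataset,
    tokenizer=tokenizer,
)
trainer.train()
```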
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen |
|:-------------:|:------:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:|
| 0.9373 | 0.0489 | 50 | 1.5325 | 0.6891 | -0.1945 | 0.5912 | 0.8836 | -34.2439 | -29.9504 | -1.1200 | -1.1197 |
| 3.7169 | 0.0977 | 100 | 3.7845 | -9.7504 | -8.8431 | 0.4527 | -0.9074 | -51.5409 | -50.8294 | -0.6137 | -0.6138 |
| 5.2014 | 0.1466 | 150 | 5.2600 | -22.3993 | -21.8605 | 0.4681 | -0.5389 | -77.5758 | -76.1272 | -1.3215 | -1.3217 |
| 5.4743 | 0.1954 | 200 | 3.9034 | -7.1491 | -6.2277 | 0.4176 | -0.9214 | -46.3103 | -45.6268 | -0.6483 | -0.6486 |
| 3.0731 | 0.2443 | 250 | 4.1865 | -11.6364 | -10.1791 | 0.4198 | -1.4572 | -54.2131 | -54.6012 | -0.7051 | -0.7056 |
| 5.7952 | 0.2931 | 300 | 3.6683 | -9.2381 | -7.9895 | 0.4264 | -1.2486 | -49.8338 | -49.8046 | -0.4055 | -0.4058 |
| 3.8474 | 0.3420 | 350 | 3.4898 | -12.7687 | -11.9414 | 0.4132 | -0.8274 | -57.7376 | -56.8660 | -0.8625 | -0.8625 |
| 5.5721 | 0.3908 | 400 | 3.4194 | -13.5468 | -12.3658 | 0.4044 | -1.1810 | -58.5864 | -58.4221 | -0.8921 | -0.8922 |
| 6.0929 | 0.4397 | 450 | 3.4518 | -12.5599 | -11.2787 | 0.4132 | -1.2812 | -56.4122 | -56.4483 | -0.6596 | -0.6596 |
| 5.4036 | 0.4885 | 500 | 3.4349 | -13.3250 | -12.3700 | 0.4264 | -0.9550 | -58.5948 | -57.9785 | -0.4398 | -0.4397 |
| 4.2614 | 0.5374 | 550 | 3.4447 | -13.2741 | -12.0523 | 0.4132 | -1.2218 | -57.9595 | -57.8767 | -0.2318 | -0.2318 |
| 5.0683 | 0.5862 | 600 | 3.6325 | -10.9169 | -9.7136 | 0.4242 | -1.2033 | -53.2821 | -53.1624 | 0.0024 | 0.0023 |
| 2.8041 | 0.6351 | 650 | 3.3753 | -13.7510 | -12.4756 | 0.4110 | -1.2754 | -58.8060 | -58.8306 | -0.4253 | -0.4254 |
| 2.852 | 0.6839 | 700 | 3.2123 | -11.3782 | -10.1837 | 0.4132 | -1.1945 | -54.2221 | -54.0849 | -0.3353 | -0.3353 |
| 3.1506 | 0.7328 | 750 | 2.9861 | -10.9246 | -9.9019 | 0.4198 | -1.0227 | -53.6587 | -53.1778 | -0.3577 | -0.3577 |
| 2.9206 | 0.7816 | 800 | 2.8476 | -10.3118 | -9.4465 | 0.4264 | -0.8653 | -52.7479 | -51.9522 | -0.2881 | -0.2880 |
| 3.6047 | 0.8305 | 850 | 2.8115 | -10.1979 | -9.3565 | 0.4308 | -0.8414 | -52.5679 | -51.7243 | -0.3016 | -0.3015 |
| 2.4799 | 0.8793 | 900 | 2.7874 | -10.3005 | -9.4828 | 0.4308 | -0.8177 | -52.8204 | -51.9295 | -0.3147 | -0.3146 |
| 2.8467 | 0.9282 | 950 | 2.7864 | -10.2878 | -9.4711 | 0.4330 | -0.8167 | -52.7969 | -51.9040 | -0.3132 | -0.3130 |
| 2.2638 | 0.9770 | 1000 | 2.7867 | -10.2874 | -9.4675 | 0.4330 | -0.8198 | -52.7899 | -51.9033 | -0.3129 | -0.3128 |
### Framework versions
- Transformers 4.41.1
- Pytorch 2.0.0+cu117
- Datasets 2.19.1
- Tokenizers 0.19.1
|
lmbelo/Meta-Llama-3-8B-Function-Calling | lmbelo | 2024-05-23T22:25:29Z | 7 | 0 | mlx | [
"mlx",
"safetensors",
"llama",
"facebook",
"meta",
"pytorch",
"llama-3",
"text-generation",
"conversational",
"en",
"license:llama3",
"region:us"
] | text-generation | 2024-05-23T22:02:19Z | ---
language:
- en
license: llama3
tags:
- facebook
- meta
- pytorch
- llama
- llama-3
- mlx
pipeline_tag: text-generation
extra_gated_prompt: "### META LLAMA 3 COMMUNITY LICENSE AGREEMENT\nMeta Llama 3 Version\
\ Release Date: April 18, 2024\n\"Agreement\" means the terms and conditions for\
\ use, reproduction, distribution and modification of the Llama Materials set forth\
\ herein.\n\"Documentation\" means the specifications, manuals and documentation\
\ accompanying Meta Llama 3 distributed by Meta at https://llama.meta.com/get-started/.\n\
\"Licensee\" or \"you\" means you, or your employer or any other person or entity\
\ (if you are entering into this Agreement on such person or entity’s behalf), of\
\ the age required under applicable laws, rules or regulations to provide legal\
\ consent and that has legal authority to bind your employer or such other person\
\ or entity if you are entering in this Agreement on their behalf.\n\"Meta Llama\
\ 3\" means the foundational large language models and software and algorithms,\
\ including machine-learning model code, trained model weights, inference-enabling\
\ code, training-enabling code, fine-tuning enabling code and other elements of\
\ the foregoing distributed by Meta at https://llama.meta.com/llama-downloads.\n\
\"Llama Materials\" means, collectively, Meta’s proprietary Meta Llama 3 and Documentation\
\ (and any portion thereof) made available under this Agreement.\n\"Meta\" or \"\
we\" means Meta Platforms Ireland Limited (if you are located in or, if you are\
\ an entity, your principal place of business is in the EEA or Switzerland) and\
\ Meta Platforms, Inc. (if you are located outside of the EEA or Switzerland).\n\
\ \n1. License Rights and Redistribution.\na. Grant of Rights. You are granted\
\ a non-exclusive, worldwide, non-transferable and royalty-free limited license\
\ under Meta’s intellectual property or other rights owned by Meta embodied in the\
\ Llama Materials to use, reproduce, distribute, copy, create derivative works of,\
\ and make modifications to the Llama Materials.\nb. Redistribution and Use.\ni.\
\ If you distribute or make available the Llama Materials (or any derivative works\
\ thereof), or a product or service that uses any of them, including another AI\
\ model, you shall (A) provide a copy of this Agreement with any such Llama Materials;\
\ and (B) prominently display “Built with Meta Llama 3” on a related website, user\
\ interface, blogpost, about page, or product documentation. If you use the Llama\
\ Materials to create, train, fine tune, or otherwise improve an AI model, which\
\ is distributed or made available, you shall also include “Llama 3” at the beginning\
\ of any such AI model name.\nii. If you receive Llama Materials, or any derivative\
\ works thereof, from a Licensee as part of an integrated end user product, then\
\ Section 2 of this Agreement will not apply to you.\niii. You must retain in all\
\ copies of the Llama Materials that you distribute the following attribution notice\
\ within a “Notice” text file distributed as a part of such copies: “Meta Llama\
\ 3 is licensed under the Meta Llama 3 Community License, Copyright © Meta Platforms,\
\ Inc. All Rights Reserved.”\niv. Your use of the Llama Materials must comply with\
\ applicable laws and regulations (including trade compliance laws and regulations)\
\ and adhere to the Acceptable Use Policy for the Llama Materials (available at\
\ https://llama.meta.com/llama3/use-policy), which is hereby incorporated by reference\
\ into this Agreement.\nv. You will not use the Llama Materials or any output or\
\ results of the Llama Materials to improve any other large language model (excluding\
\ Meta Llama 3 or derivative works thereof).\n2. Additional Commercial Terms. If,\
\ on the Meta Llama 3 version release date, the monthly active users of the products\
\ or services made available by or for Licensee, or Licensee’s affiliates, is greater\
\ than 700 million monthly active users in the preceding calendar month, you must\
\ request a license from Meta, which Meta may grant to you in its sole discretion,\
\ and you are not authorized to exercise any of the rights under this Agreement\
\ unless or until Meta otherwise expressly grants you such rights.\n3. Disclaimer\
\ of Warranty. UNLESS REQUIRED BY APPLICABLE LAW, THE LLAMA MATERIALS AND ANY OUTPUT\
\ AND RESULTS THEREFROM ARE PROVIDED ON AN “AS IS” BASIS, WITHOUT WARRANTIES OF\
\ ANY KIND, AND META DISCLAIMS ALL WARRANTIES OF ANY KIND, BOTH EXPRESS AND IMPLIED,\
\ INCLUDING, WITHOUT LIMITATION, ANY WARRANTIES OF TITLE, NON-INFRINGEMENT, MERCHANTABILITY,\
\ OR FITNESS FOR A PARTICULAR PURPOSE. YOU ARE SOLELY RESPONSIBLE FOR DETERMINING\
\ THE APPROPRIATENESS OF USING OR REDISTRIBUTING THE LLAMA MATERIALS AND ASSUME\
\ ANY RISKS ASSOCIATED WITH YOUR USE OF THE LLAMA MATERIALS AND ANY OUTPUT AND RESULTS.\n\
4. Limitation of Liability. IN NO EVENT WILL META OR ITS AFFILIATES BE LIABLE UNDER\
\ ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, TORT, NEGLIGENCE, PRODUCTS LIABILITY,\
\ OR OTHERWISE, ARISING OUT OF THIS AGREEMENT, FOR ANY LOST PROFITS OR ANY INDIRECT,\
\ SPECIAL, CONSEQUENTIAL, INCIDENTAL, EXEMPLARY OR PUNITIVE DAMAGES, EVEN IF META\
\ OR ITS AFFILIATES HAVE BEEN ADVISED OF THE POSSIBILITY OF ANY OF THE FOREGOING.\n\
5. Intellectual Property.\na. No trademark licenses are granted under this Agreement,\
\ and in connection with the Llama Materials, neither Meta nor Licensee may use\
\ any name or mark owned by or associated with the other or any of its affiliates,\
\ except as required for reasonable and customary use in describing and redistributing\
\ the Llama Materials or as set forth in this Section 5(a). Meta hereby grants you\
\ a license to use “Llama 3” (the “Mark”) solely as required to comply with the\
\ last sentence of Section 1.b.i. You will comply with Meta’s brand guidelines (currently\
\ accessible at https://about.meta.com/brand/resources/meta/company-brand/ ). All\
\ goodwill arising out of your use of the Mark will inure to the benefit of Meta.\n\
b. Subject to Meta’s ownership of Llama Materials and derivatives made by or for\
\ Meta, with respect to any derivative works and modifications of the Llama Materials\
\ that are made by you, as between you and Meta, you are and will be the owner of\
\ such derivative works and modifications.\nc. If you institute litigation or other\
\ proceedings against Meta or any entity (including a cross-claim or counterclaim\
\ in a lawsuit) alleging that the Llama Materials or Meta Llama 3 outputs or results,\
\ or any portion of any of the foregoing, constitutes infringement of intellectual\
\ property or other rights owned or licensable by you, then any licenses granted\
\ to you under this Agreement shall terminate as of the date such litigation or\
\ claim is filed or instituted. You will indemnify and hold harmless Meta from and\
\ against any claim by any third party arising out of or related to your use or\
\ distribution of the Llama Materials.\n6. Term and Termination. The term of this\
\ Agreement will commence upon your acceptance of this Agreement or access to the\
\ Llama Materials and will continue in full force and effect until terminated in\
\ accordance with the terms and conditions herein. Meta may terminate this Agreement\
\ if you are in breach of any term or condition of this Agreement. Upon termination\
\ of this Agreement, you shall delete and cease use of the Llama Materials. Sections\
\ 3, 4 and 7 shall survive the termination of this Agreement.\n7. Governing Law\
\ and Jurisdiction. This Agreement will be governed and construed under the laws\
\ of the State of California without regard to choice of law principles, and the\
\ UN Convention on Contracts for the International Sale of Goods does not apply\
\ to this Agreement. The courts of California shall have exclusive jurisdiction\
\ of any dispute arising out of this Agreement.\n### Meta Llama 3 Acceptable Use\
\ Policy\nMeta is committed to promoting safe and fair use of its tools and features,\
\ including Meta Llama 3. If you access or use Meta Llama 3, you agree to this Acceptable\
\ Use Policy (“Policy”). The most recent copy of this policy can be found at [https://llama.meta.com/llama3/use-policy](https://llama.meta.com/llama3/use-policy)\n\
#### Prohibited Uses\nWe want everyone to use Meta Llama 3 safely and responsibly.\
\ You agree you will not use, or allow others to use, Meta Llama 3 to: 1. Violate\
\ the law or others’ rights, including to:\n 1. Engage in, promote, generate,\
\ contribute to, encourage, plan, incite, or further illegal or unlawful activity\
\ or content, such as:\n 1. Violence or terrorism\n 2. Exploitation\
\ or harm to children, including the solicitation, creation, acquisition, or dissemination\
\ of child exploitative content or failure to report Child Sexual Abuse Material\n\
\ 3. Human trafficking, exploitation, and sexual violence\n 4. The\
\ illegal distribution of information or materials to minors, including obscene\
\ materials, or failure to employ legally required age-gating in connection with\
\ such information or materials.\n 5. Sexual solicitation\n 6. Any\
\ other criminal activity\n 2. Engage in, promote, incite, or facilitate the\
\ harassment, abuse, threatening, or bullying of individuals or groups of individuals\n\
\ 3. Engage in, promote, incite, or facilitate discrimination or other unlawful\
\ or harmful conduct in the provision of employment, employment benefits, credit,\
\ housing, other economic benefits, or other essential goods and services\n 4.\
\ Engage in the unauthorized or unlicensed practice of any profession including,\
\ but not limited to, financial, legal, medical/health, or related professional\
\ practices\n 5. Collect, process, disclose, generate, or infer health, demographic,\
\ or other sensitive personal or private information about individuals without rights\
\ and consents required by applicable laws\n 6. Engage in or facilitate any action\
\ or generate any content that infringes, misappropriates, or otherwise violates\
\ any third-party rights, including the outputs or results of any products or services\
\ using the Llama Materials\n 7. Create, generate, or facilitate the creation\
\ of malicious code, malware, computer viruses or do anything else that could disable,\
\ overburden, interfere with or impair the proper working, integrity, operation\
\ or appearance of a website or computer system\n2. Engage in, promote, incite,\
\ facilitate, or assist in the planning or development of activities that present\
\ a risk of death or bodily harm to individuals, including use of Meta Llama 3 related\
\ to the following:\n 1. Military, warfare, nuclear industries or applications,\
\ espionage, use for materials or activities that are subject to the International\
\ Traffic Arms Regulations (ITAR) maintained by the United States Department of\
\ State\n 2. Guns and illegal weapons (including weapon development)\n 3.\
\ Illegal drugs and regulated/controlled substances\n 4. Operation of critical\
\ infrastructure, transportation technologies, or heavy machinery\n 5. Self-harm\
\ or harm to others, including suicide, cutting, and eating disorders\n 6. Any\
\ content intended to incite or promote violence, abuse, or any infliction of bodily\
\ harm to an individual\n3. Intentionally deceive or mislead others, including use\
\ of Meta Llama 3 related to the following:\n 1. Generating, promoting, or furthering\
\ fraud or the creation or promotion of disinformation\n 2. Generating, promoting,\
\ or furthering defamatory content, including the creation of defamatory statements,\
\ images, or other content\n 3. Generating, promoting, or further distributing\
\ spam\n 4. Impersonating another individual without consent, authorization,\
\ or legal right\n 5. Representing that the use of Meta Llama 3 or outputs are\
\ human-generated\n 6. Generating or facilitating false online engagement, including\
\ fake reviews and other means of fake online engagement\n4. Fail to appropriately\
\ disclose to end users any known dangers of your AI system\nPlease report any violation\
\ of this Policy, software “bug,” or other problems that could lead to a violation\
\ of this Policy through one of the following means:\n * Reporting issues with\
\ the model: [https://github.com/meta-llama/llama3](https://github.com/meta-llama/llama3)\n\
\ * Reporting risky content generated by the model:\n developers.facebook.com/llama_output_feedback\n\
\ * Reporting bugs and security concerns: facebook.com/whitehat/info\n * Reporting\
\ violations of the Acceptable Use Policy or unlicensed uses of Meta Llama 3: [email protected]"
extra_gated_fields:
First Name: text
Last Name: text
Date of birth: date_picker
Country: country
Affiliation: text
geo: ip_location
? By clicking Submit below I accept the terms of the license and acknowledge that
the information I provide will be collected stored processed and shared in accordance
with the Meta Privacy Policy
: checkbox
extra_gated_description: The information you provide will be collected, stored, processed
and shared in accordance with the [Meta Privacy Policy](https://www.facebook.com/privacy/policy/).
extra_gated_button_content: Submit
widget:
- example_title: Hello
messages:
- role: user
content: Hey my name is Julien! How are you?
- example_title: Winter holidays
messages:
- role: system
content: You are a helpful and honest assistant. Please, respond concisely and
truthfully.
- role: user
content: Can you recommend a good destination for Winter holidays?
- example_title: Programming assistant
messages:
- role: system
content: You are a helpful and honest code and programming assistant. Please,
respond concisely and truthfully.
- role: user
content: Write a function that computes the nth fibonacci number.
inference:
parameters:
max_new_tokens: 300
stop:
- <|end_of_text|>
- <|eot_id|>
---
# lmbelo/Meta-Llama-3-8B-Function-Calling
The Model [lmbelo/Meta-Llama-3-8B-Function-Calling](https://huggingface.co/lmbelo/Meta-Llama-3-8B-Function-Calling) was converted to MLX format from [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) using mlx-lm version **0.13.1**.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate
model, tokenizer = load("lmbelo/Meta-Llama-3-8B-Function-Calling")
response = generate(model, tokenizer, prompt="hello", verbose=True)
```
|
Augusto777/vit-base-patch16-224-R1-40 | Augusto777 | 2024-05-23T22:25:03Z | 198 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:google/vit-base-patch16-224",
"base_model:finetune:google/vit-base-patch16-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2024-05-23T21:34:06Z | ---
license: apache-2.0
base_model: google/vit-base-patch16-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: vit-base-patch16-224-R1-40
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: validation
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.7540983606557377
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-patch16-224-R1-40
This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7212
- Accuracy: 0.7541
## Model description
More information needed
## Intended uses & limitations
More information needed
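Pending details from the author, a minimal direct-use sketch; the image path is a placeholder:

```python
from PIL import Image
from transformers import AutoImageProcessor, AutoModelForImageClassification

model_id = "Augusto777/vit-base-patch16-224-R1-40"
processor = AutoImageProcessor.from_pretrained(model_id)
model = AutoModelForImageClassification.from_pretrained(model_id)

image = Image.open("example.jpg")
inputs = processor(images=image, return_tensors="pt")
logits = model(**inputs).logits
# Map the highest-scoring logit back to its class label.
print(model.config.id2label[logits.argmax(-1).item()])
```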
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5.5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.05
- num_epochs: 40
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.3233 | 0.99 | 38 | 1.2355 | 0.5574 |
| 0.8643 | 1.99 | 76 | 0.9297 | 0.5902 |
| 0.4464 | 2.98 | 114 | 1.1190 | 0.6393 |
| 0.3092 | 4.0 | 153 | 0.9861 | 0.7049 |
| 0.1628 | 4.99 | 191 | 1.1221 | 0.6721 |
| 0.121 | 5.99 | 229 | 1.1710 | 0.6885 |
| 0.1138 | 6.98 | 267 | 1.1993 | 0.7213 |
| 0.1124 | 8.0 | 306 | 1.2636 | 0.6885 |
| 0.0748 | 8.99 | 344 | 1.3881 | 0.7049 |
| 0.0877 | 9.99 | 382 | 1.2892 | 0.7213 |
| 0.0642 | 10.98 | 420 | 1.3759 | 0.7049 |
| 0.0675 | 12.0 | 459 | 1.4283 | 0.7213 |
| 0.0694 | 12.99 | 497 | 1.3616 | 0.7213 |
| 0.0689 | 13.99 | 535 | 1.3864 | 0.7213 |
| 0.0378 | 14.98 | 573 | 1.4322 | 0.7213 |
| 0.0472 | 16.0 | 612 | 1.6004 | 0.7213 |
| 0.044 | 16.99 | 650 | 1.5810 | 0.7049 |
| 0.0386 | 17.99 | 688 | 1.6404 | 0.6885 |
| 0.0341 | 18.98 | 726 | 1.5698 | 0.7377 |
| 0.0328 | 20.0 | 765 | 1.6720 | 0.6885 |
| 0.0444 | 20.99 | 803 | 1.6269 | 0.7213 |
| 0.0342 | 21.99 | 841 | 1.6345 | 0.7377 |
| 0.0324 | 22.98 | 879 | 1.7916 | 0.7049 |
| 0.023 | 24.0 | 918 | 1.8753 | 0.6885 |
| 0.048 | 24.99 | 956 | 1.7679 | 0.7377 |
| 0.0202 | 25.99 | 994 | 1.7212 | 0.7541 |
| 0.0336 | 26.98 | 1032 | 1.7305 | 0.7377 |
| 0.0163 | 28.0 | 1071 | 1.7576 | 0.7049 |
| 0.0186 | 28.99 | 1109 | 1.7540 | 0.7377 |
| 0.0189 | 29.99 | 1147 | 1.6594 | 0.7541 |
| 0.039 | 30.98 | 1185 | 1.7423 | 0.7213 |
| 0.0194 | 32.0 | 1224 | 1.7148 | 0.7377 |
| 0.0205 | 32.99 | 1262 | 1.6965 | 0.7377 |
| 0.0186 | 33.99 | 1300 | 1.7553 | 0.7541 |
| 0.0177 | 34.98 | 1338 | 1.7476 | 0.7377 |
| 0.0132 | 36.0 | 1377 | 1.7506 | 0.7541 |
| 0.0068 | 36.99 | 1415 | 1.6917 | 0.7377 |
| 0.0121 | 37.99 | 1453 | 1.7276 | 0.7541 |
| 0.0129 | 38.98 | 1491 | 1.7218 | 0.7541 |
| 0.0067 | 39.74 | 1520 | 1.7220 | 0.7541 |
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.2+cu118
- Datasets 2.16.1
- Tokenizers 0.15.0
|
raulgdp/Distilbert-Analisis-sentimientos | raulgdp | 2024-05-23T22:21:49Z | 108 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-05-22T16:27:02Z | ---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
model-index:
- name: Distilbert-uncased
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Distilbert-uncased-AS
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on a proprietary dataset of tweets.
The squared error is low, meaning the predicted values are very close to the observed (gold) values. It achieves the following results on the evaluation set:
- Loss: 0.3510
- Rmse: 0.2543
## Model description
More information needed
## Intended uses & limitations
More information needed
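A minimal inference sketch; the example tweet is a placeholder, and the output should be interpreted per the training setup (the reported RMSE suggests a regression-style score rather than class labels):

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "raulgdp/Distilbert-Analisis-sentimientos"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

inputs = tokenizer("Me encantó el servicio, volvería sin dudarlo.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(logits)
```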
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rmse |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2091 | 1.0 | 642 | 0.1933 | 0.3052 |
| 0.1334 | 2.0 | 1284 | 0.1909 | 0.2481 |
| 0.0684 | 3.0 | 1926 | 0.2617 | 0.2466 |
| 0.0355 | 4.0 | 2568 | 0.3113 | 0.2513 |
| 0.0116 | 5.0 | 3210 | 0.3510 | 0.2543 |
### Framework versions
- Transformers 4.40.2
- Pytorch 2.0.1+cu117
- Datasets 2.18.0
- Tokenizers 0.19.1
|
xkiwilabs/lora_opComms_LLama3_v6 | xkiwilabs | 2024-05-23T22:17:16Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"base_model:finetune:unsloth/llama-3-8b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-05-23T22:16:58Z | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
base_model: unsloth/llama-3-8b-bnb-4bit
---
# Uploaded model
- **Developed by:** xkiwilabs
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
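Assuming this repo contains PEFT-format LoRA adapters (typical for Unsloth exports), a minimal loading sketch would be:

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Loading the 4-bit base requires bitsandbytes to be installed.
base = AutoModelForCausalLM.from_pretrained("unsloth/llama-3-8b-bnb-4bit", device_map="auto")
model = PeftModel.from_pretrained(base, "xkiwilabs/lora_opComms_LLama3_v6")
tokenizer = AutoTokenizer.from_pretrained("unsloth/llama-3-8b-bnb-4bit")
```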
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
colesimmons/sumerian-transliteration | colesimmons | 2024-05-23T22:16:48Z | 161 | 1 | transformers | [
"transformers",
"safetensors",
"encoder-decoder",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2024-05-23T06:22:31Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
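Until this is filled in, a minimal sketch assuming standard seq2seq usage; the sample input is a placeholder transliteration-style string, not taken from the training data:

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_id = "colesimmons/sumerian-transliteration"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

inputs = tokenizer("szu-nigin2 5(disz) ma-na ku3-babbar", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```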
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
JFernandoGRE/llama3_8b_brazil_augmenteddemocracy_dups_all4_50 | JFernandoGRE | 2024-05-23T22:11:18Z | 5 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-23T22:06:01Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
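Until the author fills this in, a minimal sketch assuming standard text-generation usage; the prompt is a placeholder:

```python
from transformers import pipeline

pipe = pipeline(
    "text-generation",
    model="JFernandoGRE/llama3_8b_brazil_augmenteddemocracy_dups_all4_50",
    device_map="auto",
)
print(pipe("Explain participatory budgeting in one paragraph.", max_new_tokens=120)[0]["generated_text"])
```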
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
juanbeto/prueba | juanbeto | 2024-05-23T22:10:43Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2024-05-23T22:10:43Z | ---
license: apache-2.0
---
|
laurasltf/YangJungwonVoice | laurasltf | 2024-05-23T21:55:30Z | 0 | 0 | diffusers | [
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"template:sd-lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail",
"region:us"
] | text-to-image | 2024-05-22T16:50:58Z | ---
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
- template:sd-lora
widget:
- text: '-'
output:
url: images/IMG_7234.jpeg
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: null
license: openrail
---
# Yang Jungwon
<Gallery />
## Download model
[Download](/laurasltf/YangJungwonVoice/tree/main) them in the Files & versions tab.
|
uf-aice-lab/BLIP_Math_Generation_Classification | uf-aice-lab | 2024-05-23T21:52:54Z | 0 | 0 | null | [
"license:mit",
"region:us"
] | null | 2024-05-23T20:17:03Z | ---
license: mit
---
# BLIPNet Model
This is the structure of the BLIPNet model. You can load the released weights into this structure, or build a bigger model on top of it for your specific task.
## Model Structure
```python
import torch
import torch.nn as nn
from transformers import BlipForConditionalGeneration

class BLIPNet(torch.nn.Module):
    def __init__(self):
        super().__init__()
        # Generation Model
        self.model = BlipForConditionalGeneration.from_pretrained("Salesforce/blip-image-captioning-base", cache_dir="model")
        # Same as https://huggingface.co/uf-aice-lab/BLIP-Math
        self.ebd_dim = 443136
        # Classification Model
        fc_dim = 64  # You can choose a higher number for better performance, for example, 1024.
        self.head = nn.Sequential(
            nn.Linear(self.ebd_dim, fc_dim),
            nn.ReLU(),
        )
        self.output1 = nn.Linear(fc_dim, 5)  # 5 classes

    def forward(self, pixel_values, input_ids):
        outputs = self.model(input_ids=input_ids, pixel_values=pixel_values, labels=input_ids)
        image_text_embeds = self.model.vision_model(pixel_values, return_dict=True).last_hidden_state
        image_text_embeds = self.head(image_text_embeds.view(-1, self.ebd_dim))
        # The classification head runs on embeddings from the generative model to leverage BLIP's powerful image-text encoding capabilities.
        logits = self.output1(image_text_embeds)
        # Returns the generated-text outputs and the classification logits.
        return outputs, logits

model = BLIPNet()
model.load_state_dict(torch.load("BLILP_Generation_Classification.bin"), strict=False)
```

Prepare your sample in the same way as shown in the example provided at https://huggingface.co/uf-aice-lab/BLIP-Math. You then get the generated text and the classification scores simultaneously.
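As a minimal inference sketch, continuing from the snippet above (the processor choice, image path, and prompt text are assumptions; follow the BLIP-Math example for the exact preprocessing):

```python
import torch
from PIL import Image
from transformers import BlipProcessor

# Hypothetical inputs; preprocess the same way as the BLIP-Math example.
processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-base")
image = Image.open("student_work.png").convert("RGB")  # placeholder path
inputs = processor(images=image, text="grade this response", return_tensors="pt")

model.eval()
with torch.no_grad():
    outputs, logits = model(inputs.pixel_values, inputs.input_ids)

predicted_class = logits.argmax(dim=-1).item()  # one of the 5 classes
```

Here `outputs` carries the generation side of the model, while `logits` holds the class scores. |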
stanrom/internlm-xcomposer2-7b-4bit | stanrom | 2024-05-23T21:52:15Z | 15 | 0 | transformers | [
"transformers",
"internlm",
"feature-extraction",
"text-generation",
"custom_code",
"arxiv:2401.16420",
"license:other",
"region:us"
] | text-generation | 2024-05-21T00:21:44Z | ---
license: other
pipeline_tag: text-generation
---
<p align="center">
<img src="logo_en.png" width="400"/>
<p>
<p align="center">
<b><font size="6">InternLM-XComposer2</font></b>
<p>
<div align="center">
[💻Github Repo](https://github.com/InternLM/InternLM-XComposer)
[Paper](https://arxiv.org/abs/2401.16420)
</div>
**InternLM-XComposer2** is a vision-language large model (VLLM) based on [InternLM2](https://github.com/InternLM/InternLM) for advanced text-image comprehension and composition.
We release InternLM-XComposer2 series in two versions:
- InternLM-XComposer2-VL: The pretrained VLLM model with InternLM2 as the initialization of the LLM, achieving strong performance on various multimodal benchmarks.
- InternLM-XComposer2: The finetuned VLLM for *Free-form Interleaved Text-Image Composition*.
This is the 4-bit version of InternLM-XComposer2. Install the latest version of [auto_gptq](https://github.com/AutoGPTQ/AutoGPTQ#quick-installation) before using it.
```python
import torch, auto_gptq
from PIL import Image
from transformers import AutoModel, AutoTokenizer
from auto_gptq.modeling import BaseGPTQForCausalLM
auto_gptq.modeling._base.SUPPORTED_MODELS = ["internlm"]
torch.set_grad_enabled(False)
class InternLMXComposer2QForCausalLM(BaseGPTQForCausalLM):
layers_block_name = "model.layers"
outside_layer_modules = [
'vit', 'vision_proj', 'model.tok_embeddings', 'model.norm', 'output',
]
inside_layer_modules = [
["attention.wqkv.linear"],
["attention.wo.linear"],
["feed_forward.w1.linear", "feed_forward.w3.linear"],
["feed_forward.w2.linear"],
]
# init model and tokenizer
model = InternLMXComposer2QForCausalLM.from_quantized(
'internlm/internlm-xcomposer2-7b-4bit', trust_remote_code=True, device="cuda:0").eval()
tokenizer = AutoTokenizer.from_pretrained(
'internlm/internlm-xcomposer2-7b-4bit', trust_remote_code=True)
img_path_list = [
'panda.jpg',
'bamboo.jpeg',
]
images = []
for img_path in img_path_list:
image = Image.open(img_path).convert("RGB")
image = model.vis_processor(image)
images.append(image)
image = torch.stack(images)
query = '<ImageHere> <ImageHere>please write an article based on the images. Title: my favorite animal.'
with torch.cuda.amp.autocast():
response, history = model.chat(tokenizer, query=query, image=image, history=[], do_sample=False)
print(response)
#My Favorite Animal: The Panda
#The panda, also known as the giant panda, is one of the most beloved animals in the world. These adorable creatures are native to China and can be found in the wild in a few select locations, but they are more commonly seen in captivity at zoos or wildlife reserves.
#Pandas have a distinct black-and-white coloration that makes them instantly recognizable. They are known for their love of bamboo, which they eat almost exclusively. In fact, pandas spend up to 14 hours a day eating, with the majority of their diet consisting of bamboo. Despite this seemingly unbalanced diet, pandas are actually quite healthy and have a low body fat percentage, thanks to their ability to digest bamboo efficiently.
#In addition to their unique eating habits, pandas are also known for their playful personalities. They are intelligent and curious creatures, often engaging in activities like playing with toys or climbing trees. However, they do not typically exhibit these behaviors in the wild, where they are solitary creatures who prefer to spend their time alone.
#One of the biggest threats to the panda's survival is habitat loss due to deforestation. As a result, many pandas now live in captivity, where they are cared for by dedicated staff and provided with enrichment opportunities to keep them engaged and stimulated. While it is important to protect these animals from extinction, it is also crucial to remember that they are still wild creatures and should be treated with respect and care.
#Overall, the panda is an amazing animal that has captured the hearts of people around the world. Whether you see them in the wild or in captivity, there is no denying the charm and allure of these gentle giants.
```
### Open Source License
The code is licensed under Apache-2.0, while model weights are fully open for academic research and also allow free commercial usage. To apply for a commercial license, please fill in the application form (English) / application form (Chinese). For other questions or collaborations, please contact [email protected].
|
ovakimyanchris/detox_Falcon_7B_PPO | ovakimyanchris | 2024-05-23T21:45:18Z | 47 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"toxic_speech",
"text-generation",
"en",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-23T21:19:13Z | ---
language:
- en
tags:
- toxic_speech
pipeline_tag: text-generation
---
Falcon-7B fine-tuned for detoxification with the PPO algorithm and a reward model on super-toxic prompts.
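For illustration, a minimal sketch of this kind of PPO loop with 🤗 TRL (the base model id, reward wiring, and hyperparameters below are assumptions, not this repo's actual training code):

```python
import torch
from transformers import AutoTokenizer
from trl import AutoModelForCausalLMWithValueHead, PPOConfig, PPOTrainer

model_name = "tiiuae/falcon-7b"  # assumed base; this repo holds the detoxified result
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLMWithValueHead.from_pretrained(model_name)
ref_model = AutoModelForCausalLMWithValueHead.from_pretrained(model_name)  # frozen reference

ppo_trainer = PPOTrainer(PPOConfig(learning_rate=1.41e-5), model, ref_model, tokenizer)

query = tokenizer("a highly toxic prompt ...", return_tensors="pt").input_ids[0]
response = ppo_trainer.generate(query, return_prompt=False, max_new_tokens=32)[0]
reward = torch.tensor(1.0)  # stand-in: the toxicity reward model would score the response
ppo_trainer.step([query], [response], [reward])
```

In the real setup the stand-in reward is replaced by reward-model scores on the toxic prompts. |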
annalittle/wav2vec2-xls-r-300m-finetune-xty | annalittle | 2024-05-23T21:45:09Z | 79 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:ml-superb-subset",
"base_model:Akashpb13/Swahili_xlsr",
"base_model:finetune:Akashpb13/Swahili_xlsr",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2024-05-23T07:43:08Z | ---
license: apache-2.0
base_model: Akashpb13/Swahili_xlsr
tags:
- generated_from_trainer
datasets:
- ml-superb-subset
metrics:
- wer
model-index:
- name: wav2vec2-xls-r-300m-finetune-xty
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: ml-superb-subset
type: ml-superb-subset
config: xty
split: test[:]
args: xty
metrics:
- name: Wer
type: wer
value: 1.0
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-xls-r-300m-finetune-xty
This model is a fine-tuned version of [Akashpb13/Swahili_xlsr](https://huggingface.co/Akashpb13/Swahili_xlsr) on the ml-superb-subset dataset.
It achieves the following results on the evaluation set:
- Loss: 3.0086
- Wer: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a minimal `TrainingArguments` sketch follows the list):
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
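A minimal sketch of how these settings map onto 🤗 Transformers `TrainingArguments` (the output directory name is a placeholder; the Adam betas/epsilon above are the library defaults):

```python
from transformers import TrainingArguments

# Sketch only: values copied from the list above; output_dir is a placeholder.
training_args = TrainingArguments(
    output_dir="wav2vec2-xls-r-300m-finetune-xty",
    learning_rate=1e-4,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=1000,
    num_train_epochs=30,
    fp16=True,  # "Native AMP" mixed precision
)
```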
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-------:|:----:|:---------------:|:---:|
| 5.4419 | 26.3158 | 500 | 3.0086 | 1.0 |
### Framework versions
- Transformers 4.41.1
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
|
Avditvs/multilingual-e5-small-distill-base-0.1 | Avditvs | 2024-05-23T21:33:10Z | 10 | 5 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"bert",
"Sentence Transformers",
"feature-extraction",
"sentence-similarity",
"multilingual",
"af",
"am",
"ar",
"as",
"az",
"be",
"bg",
"bn",
"br",
"bs",
"ca",
"cs",
"cy",
"da",
"de",
"el",
"en",
"eo",
"es",
"et",
"eu",
"fa",
"fi",
"fr",
"fy",
"ga",
"gd",
"gl",
"gu",
"ha",
"he",
"hi",
"hr",
"hu",
"hy",
"id",
"is",
"it",
"ja",
"jv",
"ka",
"kk",
"km",
"kn",
"ko",
"ku",
"ky",
"la",
"lo",
"lt",
"lv",
"mg",
"mk",
"ml",
"mn",
"mr",
"ms",
"my",
"ne",
"nl",
"no",
"om",
"or",
"pa",
"pl",
"ps",
"pt",
"ro",
"ru",
"sa",
"sd",
"si",
"sk",
"sl",
"so",
"sq",
"sr",
"su",
"sv",
"sw",
"ta",
"te",
"th",
"tl",
"tr",
"ug",
"uk",
"ur",
"uz",
"vi",
"xh",
"yi",
"zh",
"arxiv:2205.13147",
"arxiv:2402.05672",
"license:mit",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | feature-extraction | 2024-05-15T16:41:30Z | ---
tags:
- Sentence Transformers
- feature-extraction
- sentence-similarity
- sentence-transformers
language:
- multilingual
- af
- am
- ar
- as
- az
- be
- bg
- bn
- br
- bs
- ca
- cs
- cy
- da
- de
- el
- en
- eo
- es
- et
- eu
- fa
- fi
- fr
- fy
- ga
- gd
- gl
- gu
- ha
- he
- hi
- hr
- hu
- hy
- id
- is
- it
- ja
- jv
- ka
- kk
- km
- kn
- ko
- ku
- ky
- la
- lo
- lt
- lv
- mg
- mk
- ml
- mn
- mr
- ms
- my
- ne
- nl
- 'no'
- om
- or
- pa
- pl
- ps
- pt
- ro
- ru
- sa
- sd
- si
- sk
- sl
- so
- sq
- sr
- su
- sv
- sw
- ta
- te
- th
- tl
- tr
- ug
- uk
- ur
- uz
- vi
- xh
- yi
- zh
license: mit
---
## Multilingual-E5-small-distill-base
This model is an attempt to distill `intfloat/multilingual-e5-base` (teacher) into `intfloat/multilingual-e5-small` (student),
as well as to apply [Matryoshka Representation Learning](https://arxiv.org/abs/2205.13147) to it.
This was done with an L2 loss that trains the student to match the teacher's cosine similarities on text pairs.
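A minimal sketch of that objective — an L2 (MSE) loss between student and teacher cosine similarities, also evaluated at truncated (Matryoshka) sizes; the function name and the particular truncation dims are assumptions, not the repo's actual code:

```python
import torch
import torch.nn.functional as F

def similarity_distill_loss(student_a, student_b, teacher_a, teacher_b,
                            matryoshka_dims=(384, 256, 128, 64)):
    # student_* / teacher_*: embeddings for the two sides of each text pair.
    with torch.no_grad():
        teacher_sim = F.cosine_similarity(teacher_a, teacher_b)
    loss = 0.0
    for d in matryoshka_dims:  # match the teacher at several truncated sizes
        student_sim = F.cosine_similarity(student_a[:, :d], student_b[:, :d])
        loss = loss + F.mse_loss(student_sim, teacher_sim)
    return loss / len(matryoshka_dims)
```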
The distillation dataset is composed of about 700k multilingual sentence pairs sampled from the following 3 datasets:
- [PhilipMay/stsb_multi_mt](https://huggingface.co/datasets/PhilipMay/stsb_multi_mt)
- [castorini/mr-tydi](https://huggingface.co/datasets/castorini/mr-tydi)
- [quora](https://huggingface.co/datasets/quora)
For code, see [this github repository](https://github.com/Avditvs/matryoshka_factory)
[Multilingual E5 Text Embeddings: A Technical Report](https://arxiv.org/pdf/2402.05672).
Liang Wang, Nan Yang, Xiaolong Huang, Linjun Yang, Rangan Majumder, Furu Wei, arXiv 2024
This model has 12 layers and the embedding size is 384.
## Usage
Below is an example to encode queries and passages from the MS-MARCO passage ranking dataset.
```python
import torch.nn.functional as F
from torch import Tensor
from transformers import AutoTokenizer, AutoModel
def average_pool(last_hidden_states: Tensor,
attention_mask: Tensor) -> Tensor:
last_hidden = last_hidden_states.masked_fill(~attention_mask[..., None].bool(), 0.0)
return last_hidden.sum(dim=1) / attention_mask.sum(dim=1)[..., None]
# Each input text should start with "query: " or "passage: ", even for non-English texts.
# For tasks other than retrieval, you can simply use the "query: " prefix.
input_texts = ['query: how much protein should a female eat',
'query: 南瓜的家常做法',
"passage: As a general guideline, the CDC's average requirement of protein for women ages 19 to 70 is 46 grams per day. But, as you can see from this chart, you'll need to increase that if you're expecting or training for a marathon. Check out the chart below to see how much protein you should be eating each day.",
"passage: 1.清炒南瓜丝 原料:嫩南瓜半个 调料:葱、盐、白糖、鸡精 做法: 1、南瓜用刀薄薄的削去表面一层皮,用勺子刮去瓤 2、擦成细丝(没有擦菜板就用刀慢慢切成细丝) 3、锅烧热放油,入葱花煸出香味 4、入南瓜丝快速翻炒一分钟左右,放盐、一点白糖和鸡精调味出锅 2.香葱炒南瓜 原料:南瓜1只 调料:香葱、蒜末、橄榄油、盐 做法: 1、将南瓜去皮,切成片 2、油锅8成热后,将蒜末放入爆香 3、爆香后,将南瓜片放入,翻炒 4、在翻炒的同时,可以不时地往锅里加水,但不要太多 5、放入盐,炒匀 6、南瓜差不多软和绵了之后,就可以关火 7、撒入香葱,即可出锅"]
tokenizer = AutoTokenizer.from_pretrained('intfloat/multilingual-e5-small')
model = AutoModel.from_pretrained('intfloat/multilingual-e5-small')
# Tokenize the input texts
batch_dict = tokenizer(input_texts, max_length=512, padding=True, truncation=True, return_tensors='pt')
outputs = model(**batch_dict)
embeddings = average_pool(outputs.last_hidden_state, batch_dict['attention_mask'])
# normalize embeddings
embeddings = F.normalize(embeddings, p=2, dim=1)
scores = (embeddings[:2] @ embeddings[2:].T) * 100
print(scores.tolist())
```
## Supported Languages
This model is initialized from [microsoft/Multilingual-MiniLM-L12-H384](https://huggingface.co/microsoft/Multilingual-MiniLM-L12-H384)
and continually trained on a mixture of multilingual datasets.
It supports 100 languages from xlm-roberta,
but low-resource languages may see performance degradation.
## MTEB Benchmark Evaluation (Subset)
| | intfloat/multilingual-e5-base | intfloat/multilingual-e5-large | intfloat/multilingual-e5-small | avditvs/multilingual-e5-small-distill-base-0.1 |
| --------------------------- | ----------------------------- | ------------------------------ | ------------------------------ | ---------------------------------------------------- |
| STS15 | 0.876 | 0.882 | 0.864 | 0.865 |
| BIOSSES | 0.870 | 0.863 | 0.857 | 0.863 |
| STS14 | 0.789 | 0.776 | 0.788 | 0.803 |
| STS12 | 0.858 | 0.873 | 0.854 | 0.856 |
| AskUbuntuDupQuestions | 0.571 | 0.577 | 0.568 | 0.574 |
| StackOverflowDupQuestions | 0.485 | 0.486 | 0.486 | 0.485 |
| AmazonReviewsClassification | 0.476 | 0.470 | 0.452 | 0.450 |
| ArguAna | 0.442 | 0.544 | 0.391 | 0.480 |
| ImdbClassification | 0.849 | 0.887 | 0.758 | 0.757 |
| STS13 | 0.756 | 0.751 | 0.764 | 0.785 |
| STSBenchmark | 0.832 | 0.836 | 0.809 | 0.818 |
| STS17 | 0.890 | 0.896 | 0.868 | 0.871 |
| SICK-R | 0.835 | 0.838 | 0.835 | 0.850 |
| STS22 | 0.645 | 0.675 | 0.640 | 0.648 |
| STS16 | 0.814 | 0.824 | 0.822 | 0.820 |
| Banking77Classification | 0.741 | 0.749 | 0.706 | 0.706 |
| average | 0.733 | 0.745 | *0.717* | **0.727** |
## Support for Sentence Transformers
Below is an example for usage with sentence_transformers.
```python
from sentence_transformers import SentenceTransformer
model = SentenceTransformer('avditvs/multilingual-e5-small-distill-base')
input_texts = [
'query: how much protein should a female eat',
'query: 南瓜的家常做法',
"passage: As a general guideline, the CDC's average requirement of protein for women ages 19 to 70 i s 46 grams per day. But, as you can see from this chart, you'll need to increase that if you're expecting or traini ng for a marathon. Check out the chart below to see how much protein you should be eating each day.",
"passage: 1.清炒南瓜丝 原料:嫩南瓜半个 调料:葱、盐、白糖、鸡精 做法: 1、南瓜用刀薄薄的削去表面一层皮 ,用勺子刮去瓤 2、擦成细丝(没有擦菜板就用刀慢慢切成细丝) 3、锅烧热放油,入葱花煸出香味 4、入南瓜丝快速翻炒一分钟左右, 放盐、一点白糖和鸡精调味出锅 2.香葱炒南瓜 原料:南瓜1只 调料:香葱、蒜末、橄榄油、盐 做法: 1、将南瓜去皮,切成片 2、油 锅8成热后,将蒜末放入爆香 3、爆香后,将南瓜片放入,翻炒 4、在翻炒的同时,可以不时地往锅里加水,但不要太多 5、放入盐,炒匀 6、南瓜差不多软和绵了之后,就可以关火 7、撒入香葱,即可出锅"
]
embeddings = model.encode(input_texts, normalize_embeddings=True)
```
Package requirements
`pip install sentence_transformers~=2.2.2`
Contributors: [michaelfeil](https://huggingface.co/michaelfeil)
## FAQ
**1. Do I need to add the prefix "query: " and "passage: " to input texts?**
Yes, this is how the model is trained; otherwise you will see a performance degradation.
Here are some rules of thumb:
- Use "query: " and "passage: " correspondingly for asymmetric tasks such as passage retrieval in open QA, ad-hoc information retrieval.
- Use "query: " prefix for symmetric tasks such as semantic similarity, bitext mining, paraphrase retrieval.
- Use "query: " prefix if you want to use embeddings as features, such as linear probing classification, clustering.
**2. Why are my reproduced results slightly different from reported in the model card?**
Different versions of `transformers` and `pytorch` could cause negligible but non-zero performance differences.
**3. Why does the cosine similarity scores distribute around 0.7 to 1.0?**
This is a known and expected behavior as we use a low temperature 0.01 for InfoNCE contrastive loss.
For text embedding tasks like text retrieval or semantic similarity,
what matters is the relative order of the scores instead of the absolute values,
so this should not be an issue.
## Citation
If you find our paper or models helpful, please consider citing as follows:
```
@article{wang2024multilingual,
title={Multilingual E5 Text Embeddings: A Technical Report},
author={Wang, Liang and Yang, Nan and Huang, Xiaolong and Yang, Linjun and Majumder, Rangan and Wei, Furu},
journal={arXiv preprint arXiv:2402.05672},
year={2024}
}
```
## Limitations
Long texts will be truncated to at most 512 tokens.
|
waleko/TikZ-llava-1.5-7b | waleko | 2024-05-23T21:32:27Z | 9 | 3 | transformers | [
"transformers",
"safetensors",
"llava",
"image-text-to-text",
"image-to-text",
"dataset:EgorShibaev/TikZ-short-code",
"endpoints_compatible",
"region:us"
] | image-to-text | 2024-05-23T09:43:16Z | ---
library_name: transformers
datasets:
- EgorShibaev/TikZ-short-code
pipeline_tag: image-to-text
---
# Model Card for Model ID
Fine-tuned multimodal LLaVA model for TikZ diagram generation using hand-drawn sketches.
## How to Get Started with the Model
```python
from transformers import pipeline
from PIL import Image
import requests
pipe = pipeline("image-to-text", model="waleko/TikZ-llava-1.5-7b")
url = "https://waleko.github.io/data/image.jpg"
image = Image.open(requests.get(url, stream=True).raw)
prompt = "Assistant helps to write down the TikZ code for the user's image. USER: <image>\nWrite down the TikZ code to draw the diagram shown in the image. ASSISTANT: "
print(pipe(image, prompt=prompt)[0]['generated_text'])
```
## Training Details
### Training Data
Trained on synthetic [TikZ-short-code](https://huggingface.co/datasets/EgorShibaev/TikZ-short-code) dataset.
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> |
Arbi-Houssem/comondov | Arbi-Houssem | 2024-05-23T21:30:54Z | 38 | 0 | transformers | [
"transformers",
"safetensors",
"vits",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-05-03T15:59:32Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
EthanRhys/Master-Mantis | EthanRhys | 2024-05-23T21:30:00Z | 0 | 0 | null | [
"license:openrail++",
"region:us"
] | null | 2024-05-23T21:29:10Z | ---
license: openrail++
---
|
EthanRhys/Young-Cricket-Current | EthanRhys | 2024-05-23T21:27:31Z | 0 | 0 | null | [
"license:openrail++",
"region:us"
] | null | 2024-05-23T21:26:38Z | ---
license: openrail++
---
|
Augusto777/vit-base-patch16-224-R1-10 | Augusto777 | 2024-05-23T21:25:22Z | 218 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:google/vit-base-patch16-224",
"base_model:finetune:google/vit-base-patch16-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2024-05-23T21:03:59Z | ---
license: apache-2.0
base_model: google/vit-base-patch16-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: vit-base-patch16-224-R1-10
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: validation
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.7049180327868853
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-patch16-224-R1-10
This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2451
- Accuracy: 0.7049
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5.5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.05
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.1675 | 0.99 | 38 | 0.9972 | 0.6393 |
| 0.5606 | 1.99 | 76 | 0.7603 | 0.6885 |
| 0.3159 | 2.98 | 114 | 0.8954 | 0.6885 |
| 0.2253 | 4.0 | 153 | 1.0227 | 0.6885 |
| 0.17 | 4.99 | 191 | 1.1025 | 0.7213 |
| 0.1174 | 5.99 | 229 | 1.1453 | 0.7377 |
| 0.1032 | 6.98 | 267 | 1.0995 | 0.6885 |
| 0.1051 | 8.0 | 306 | 1.2167 | 0.7049 |
| 0.0853 | 8.99 | 344 | 1.2042 | 0.7377 |
| 0.0802 | 9.93 | 380 | 1.2451 | 0.7049 |
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.2+cu118
- Datasets 2.16.1
- Tokenizers 0.15.0
|
lyhourt/whisper-small-clean_6 | lyhourt | 2024-05-23T21:24:22Z | 13 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:lyhourt/clean_6",
"base_model:openai/whisper-small",
"base_model:finetune:openai/whisper-small",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2024-05-23T14:37:23Z | ---
license: apache-2.0
base_model: openai/whisper-small
tags:
- generated_from_trainer
datasets:
- lyhourt/clean_6
metrics:
- wer
model-index:
- name: whisper-small-clean_6
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: lyhourt/clean_6
type: lyhourt/clean_6
metrics:
- name: Wer
type: wer
value: 28.944246737841045
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-small-clean_6
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the lyhourt/clean_6 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3565
- Wer: 28.9442
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 64
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- training_steps: 1000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:------:|:----:|:---------------:|:-------:|
| 0.1583 | 0.2 | 200 | 1.1107 | 87.1886 |
| 0.0074 | 1.028 | 400 | 0.4016 | 37.2835 |
| 0.1778 | 1.228 | 600 | 0.3914 | 34.0214 |
| 0.0027 | 2.056 | 800 | 0.3441 | 28.5647 |
| 0.1441 | 2.2560 | 1000 | 0.3565 | 28.9442 |
### Framework versions
- Transformers 4.41.1
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
|
Hanqix/default-ties-mistral-merge | Hanqix | 2024-05-23T21:16:58Z | 0 | 0 | null | [
"merge",
"mergekit",
"lazymergekit",
"OpenPipe/mistral-ft-optimized-1218",
"mlabonne/NeuralHermes-2.5-Mistral-7B",
"base_model:OpenPipe/mistral-ft-optimized-1218",
"base_model:merge:OpenPipe/mistral-ft-optimized-1218",
"base_model:mlabonne/NeuralHermes-2.5-Mistral-7B",
"base_model:merge:mlabonne/NeuralHermes-2.5-Mistral-7B",
"region:us"
] | null | 2024-05-23T21:11:46Z | ---
tags:
- merge
- mergekit
- lazymergekit
- OpenPipe/mistral-ft-optimized-1218
- mlabonne/NeuralHermes-2.5-Mistral-7B
base_model:
- OpenPipe/mistral-ft-optimized-1218
- mlabonne/NeuralHermes-2.5-Mistral-7B
---
# default-ties-mistral-merge
default-ties-mistral-merge is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [OpenPipe/mistral-ft-optimized-1218](https://huggingface.co/OpenPipe/mistral-ft-optimized-1218)
* [mlabonne/NeuralHermes-2.5-Mistral-7B](https://huggingface.co/mlabonne/NeuralHermes-2.5-Mistral-7B)
## 🧩 Configuration
```yaml
models:
- model: mistralai/Mistral-7B-v0.1
# no parameters necessary for base model
- model: OpenPipe/mistral-ft-optimized-1218
parameters:
density: 0.5
weight: 0.5
- model: mlabonne/NeuralHermes-2.5-Mistral-7B
parameters:
density: 0.5
weight: 0.3
merge_method: ties
base_model: mistralai/Mistral-7B-v0.1
parameters:
normalize: true
dtype: float16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "Hanqix/default-ties-mistral-merge"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` |
dimensionhq/sql-whisper-validation | dimensionhq | 2024-05-23T21:16:57Z | 5 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-23T21:13:53Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
mlx-community/aya-23-35B-4bit | mlx-community | 2024-05-23T21:14:17Z | 13 | 1 | transformers | [
"transformers",
"safetensors",
"cohere",
"text-generation",
"mlx",
"conversational",
"en",
"fr",
"de",
"es",
"it",
"pt",
"ja",
"ko",
"zh",
"ar",
"el",
"fa",
"pl",
"id",
"cs",
"he",
"hi",
"nl",
"ro",
"ru",
"tr",
"uk",
"vi",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-23T14:46:37Z | ---
language:
- en
- fr
- de
- es
- it
- pt
- ja
- ko
- zh
- ar
- el
- fa
- pl
- id
- cs
- he
- hi
- nl
- ro
- ru
- tr
- uk
- vi
license: cc-by-nc-4.0
library_name: transformers
tags:
- mlx
---
# mlx-community/aya-23-35B-4bit
The Model [mlx-community/aya-23-35B-4bit](https://huggingface.co/mlx-community/aya-23-35B-4bit) was converted to MLX format from [CohereForAI/aya-23-35B](https://huggingface.co/CohereForAI/aya-23-35B) using mlx-lm version **0.13.1**.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate
model, tokenizer = load("mlx-community/aya-23-35B-4bit")
response = generate(model, tokenizer, prompt="hello", verbose=True)
```
|
LiteLLMs/Phi-3-mini-128k-instruct-GGUF | LiteLLMs | 2024-05-23T21:08:31Z | 47 | 0 | null | [
"gguf",
"nlp",
"code",
"GGUF",
"text-generation",
"en",
"license:mit",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2024-05-23T12:52:21Z |
---
language:
- en
license: mit
tags:
- nlp
- code
- GGUF
license_link: https://huggingface.co/microsoft/Phi-3-mini-128k-instruct/resolve/main/LICENSE
pipeline_tag: text-generation
widget:
- messages:
- role: user
content: Can you provide ways to eat combinations of bananas and dragonfruits?
quantized_by: andrijdavid
---
# Phi-3-mini-128k-instruct-GGUF
- Original model: [Phi-3-mini-128k-instruct](https://huggingface.co/microsoft/Phi-3-mini-128k-instruct)
<!-- description start -->
## Description
This repo contains GGUF format model files for [Phi-3-mini-128k-instruct](https://huggingface.co/microsoft/Phi-3-mini-128k-instruct).
<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). This is the source project for GGUF, providing both a Command Line Interface (CLI) and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), Known as the most widely used web UI, this project boasts numerous features and powerful extensions, and supports GPU acceleration.
* [Ollama](https://github.com/jmorganca/ollama) Ollama is a lightweight and extensible framework designed for building and running language models locally. It features a simple API for creating, managing, and executing models, along with a library of pre-built models for use in various applications
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), A comprehensive web UI offering GPU acceleration across all platforms and architectures, particularly renowned for storytelling.
* [GPT4All](https://gpt4all.io), This is a free and open source GUI that runs locally, supporting Windows, Linux, and macOS with full GPU acceleration.
* [LM Studio](https://lmstudio.ai/) An intuitive and powerful local GUI for Windows and macOS (Silicon), featuring GPU acceleration.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui). A notable web UI with a variety of unique features, including a comprehensive model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), An attractive, user-friendly character-based chat GUI for Windows and macOS (both Silicon and Intel), also offering GPU acceleration.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), A Python library equipped with GPU acceleration, LangChain support, and an OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), A Rust-based ML framework focusing on performance, including GPU support, and designed for ease of use.
* [ctransformers](https://github.com/marella/ctransformers), A Python library featuring GPU acceleration, LangChain support, and an OpenAI-compatible API server.
* [localGPT](https://github.com/PromtEngineer/localGPT) An open-source initiative enabling private conversations with documents.
<!-- README_GGUF.md-about-gguf end -->
<!-- compatibility_gguf start -->
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw.
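For a rough sense of scale: at GGML_TYPE_Q4_K's 4.5 bpw, the weights of a 7-billion-parameter model come to about 7e9 × 4.5 / 8 ≈ 3.9 GB, before metadata and any non-quantized tensors.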
</details>
<!-- compatibility_gguf end -->
<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single folder.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
* LM Studio
* LoLLMS Web UI
* Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: LiteLLMs/Phi-3-mini-128k-instruct-GGUF and below it, a specific filename to download, such as: Q4_0/Q4_0-00001-of-00009.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download LiteLLMs/Phi-3-mini-128k-instruct-GGUF Q4_0/Q4_0-00001-of-00009.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage (click to read)</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download LiteLLMs/Phi-3-mini-128k-instruct-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install huggingface_hub[hf_transfer]
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download LiteLLMs/Phi-3-mini-128k-instruct-GGUF Q4_0/Q4_0-00001-of-00009.gguf --local-dir . --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 35 -m Q4_0/Q4_0-00001-of-00009.gguf --color -c 8192 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "<PROMPT>"
```
Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 8192` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. Note that longer sequence lengths require much more resources, so you may need to reduce this value.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions can be found in the text-generation-webui documentation, here: [text-generation-webui/docs/04 ‐ Model Tab.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/04%20%E2%80%90%20Model%20Tab.md#llamacpp).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. Note that at the time of writing (Nov 27th 2023), ctransformers has not been updated for some time and is not compatible with some recent models. Therefore I recommend you use llama-cpp-python.
### How to load this model in Python code, using llama-cpp-python
For full documentation, please see: [llama-cpp-python docs](https://abetlen.github.io/llama-cpp-python/).
#### First install the package
Run one of the following commands, according to your system:
```shell
# Base llama-cpp-python with no GPU acceleration
pip install llama-cpp-python
# With NVidia CUDA acceleration
CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python
# Or with OpenBLAS acceleration
CMAKE_ARGS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS" pip install llama-cpp-python
# Or with CLBLast acceleration
CMAKE_ARGS="-DLLAMA_CLBLAST=on" pip install llama-cpp-python
# Or with AMD ROCm GPU acceleration (Linux only)
CMAKE_ARGS="-DLLAMA_HIPBLAS=on" pip install llama-cpp-python
# Or with Metal GPU acceleration for macOS systems only
CMAKE_ARGS="-DLLAMA_METAL=on" pip install llama-cpp-python
# On Windows, to set the variable CMAKE_ARGS in PowerShell, follow this format; eg for NVidia CUDA:
$env:CMAKE_ARGS = "-DLLAMA_CUBLAS=on"
pip install llama-cpp-python
```
#### Simple llama-cpp-python example code
```python
from llama_cpp import Llama
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = Llama(
model_path="./Q4_0/Q4_0-00001-of-00009.gguf", # Download the model file first
n_ctx=32768, # The max sequence length to use - note that longer sequence lengths require much more resources
n_threads=8, # The number of CPU threads to use, tailor to your system and the resulting performance
n_gpu_layers=35 # The number of layers to offload to GPU, if you have GPU acceleration available
)
# Simple inference example
output = llm(
"<PROMPT>", # Prompt
max_tokens=512, # Generate up to 512 tokens
stop=["</s>"], # Example stop token - not necessarily correct for this specific model! Please check before using.
echo=True # Whether to echo the prompt
)
# Chat Completion API
llm = Llama(model_path="./Q4_0/Q4_0-00001-of-00009.gguf", chat_format="llama-2") # Set chat_format according to the model you are using
llm.create_chat_completion(
messages = [
{"role": "system", "content": "You are a story writing assistant."},
{
"role": "user",
"content": "Write a story about llamas."
}
]
)
```
## How to use with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain, followed by a minimal sketch:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
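As a minimal illustration (assuming `langchain-community` is installed; the model path matches the llama-cpp-python example above):

```python
from langchain_community.llms import LlamaCpp

# Point LangChain at a local GGUF file via llama-cpp-python.
llm = LlamaCpp(
    model_path="./Q4_0/Q4_0-00001-of-00009.gguf",
    n_ctx=8192,       # context length, as in the llama.cpp example above
    n_gpu_layers=35,  # set to 0 if you have no GPU acceleration
)
print(llm.invoke("Name the planets in the solar system."))
```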
<!-- README_GGUF.md-how-to-run end -->
<!-- footer end -->
<!-- original-model-card start -->
# Original model card: Phi-3-mini-128k-instruct
## Model Summary
The Phi-3-Mini-128K-Instruct is a 3.8 billion-parameter, lightweight, state-of-the-art open model trained using the Phi-3 datasets.
This dataset includes both synthetic data and filtered publicly available website data, with an emphasis on high-quality and reasoning-dense properties.
The model belongs to the Phi-3 family with the Mini version in two variants [4K](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct) and [128K](https://huggingface.co/microsoft/Phi-3-mini-128k-instruct) which is the context length (in tokens) that it can support.
After initial training, the model underwent a post-training process that involved supervised fine-tuning and direct preference optimization to enhance its ability to follow instructions and adhere to safety measures.
When evaluated against benchmarks that test common sense, language understanding, mathematics, coding, long-term context, and logical reasoning, the Phi-3 Mini-128K-Instruct demonstrated robust and state-of-the-art performance among models with fewer than 13 billion parameters.
Resources and Technical Documentation:
+ [Phi-3 Microsoft Blog](https://aka.ms/Phi-3Build2024)
+ [Phi-3 Technical Report](https://aka.ms/phi3-tech-report)
+ [Phi-3 on Azure AI Studio](https://aka.ms/phi3-azure-ai)
+ [Phi-3 Cookbook](https://github.com/microsoft/Phi-3CookBook)
| | Phi-3-Mini-128K-In<br>3.8b | Phi-3-Small<br>7b (preview) | Phi-3-Medium<br>14b (preview) | Phi-2<br>2.7b | Mistral<br>7b | Gemma<br>7b | Llama-3-In<br>8b | Mixtral<br>8x7b | GPT-3.5<br>version 1106 |
| -- | -- | -- | -- | -- | -- | -- | -- | -- | -- |
| MMLU <br>5-Shot | 68.1 | 75.3 | 78.2 | 56.3 | 61.7 | 63.6 | 66.5 | 68.4 | 71.4 |
| HellaSwag <br> 5-Shot | 74.5 | 78.7 | 83.2 | 53.6 | 58.5 | 49.8 | 71.1 | 70.4 | 78.8 |
| ANLI <br> 7-Shot | 52.8 | 55.0 | 58.7 | 42.5 | 47.1 | 48.7 | 57.3 | 55.2 | 58.1 |
| GSM-8K <br> 0-Shot; CoT | 83.6 | 86.4 | 90.8 | 61.1 | 46.4 | 59.8 | 77.4 | 64.7 | 78.1 |
| MedQA <br> 2-Shot | 55.3 | 58.2 | 69.8 | 40.9 | 49.6 | 50.0 | 60.5 | 62.2 | 63.4 |
| AGIEval <br> 0-Shot | 36.9 | 45.0 | 49.7 | 29.8 | 35.1 | 42.1 | 42.0 | 45.2 | 48.4 |
| TriviaQA <br> 5-Shot | 57.1 | 59.1 | 73.3 | 45.2 | 72.3 | 75.2 | 67.7 | 82.2 | 85.8 |
| Arc-C <br> 10-Shot | 84.0 | 90.7 | 91.9 | 75.9 | 78.6 | 78.3 | 82.8 | 87.3 | 87.4 |
| Arc-E <br> 10-Shot | 95.2 | 97.1 | 98.0 | 88.5 | 90.6 | 91.4 | 93.4 | 95.6 | 96.3 |
| PIQA <br> 5-Shot | 83.6 | 87.8 | 88.2 | 60.2 | 77.7 | 78.1 | 75.7 | 86.0 | 86.6 |
| SociQA <br> 5-Shot | 76.1 | 79.0 | 79.4 | 68.3 | 74.6 | 65.5 | 73.9 | 75.9 | 68.3 |
| BigBench-Hard <br> 0-Shot | 71.5 | 75.0 | 82.5 | 59.4 | 57.3 | 59.6 | 51.5 | 69.7 | 68.32 |
| WinoGrande <br> 5-Shot | 72.5 | 82.5 | 81.2 | 54.7 | 54.2 | 55.6 | 65.0 | 62.0 | 68.8 |
| OpenBookQA <br> 10-Shot | 80.6 | 88.4 | 86.6 | 73.6 | 79.8 | 78.6 | 82.6 | 85.8 | 86.0 |
| BoolQ <br> 0-Shot | 78.7 | 82.9 | 86.5 | -- | 72.2 | 66.0 | 80.9 | 77.6 | 79.1 |
| CommonSenseQA <br> 10-Shot | 78.0 | 80.3 | 82.6 | 69.3 | 72.6 | 76.2 | 79 | 78.1 | 79.6 |
| TruthfulQA <br> 10-Shot | 63.2 | 68.1 | 74.8 | -- | 52.1 | 53.0 | 63.2 | 60.1 | 85.8 |
| HumanEval <br> 0-Shot | 57.9 | 59.1 | 54.7 | 47.0 | 28.0 | 34.1 | 60.4 | 37.8 | 62.2 |
| MBPP <br> 3-Shot | 62.5 | 71.4 | 73.7 | 60.6 | 50.8 | 51.5 | 67.7 | 60.2 | 77.8 |
## Software
* [PyTorch](https://github.com/pytorch/pytorch)
* [DeepSpeed](https://github.com/microsoft/DeepSpeed)
* [Transformers](https://github.com/huggingface/transformers)
* [Flash-Attention](https://github.com/HazyResearch/flash-attention)
## Hardware
Note that by default, the Phi-3-mini model uses flash attention, which requires certain types of GPU hardware to run. We have tested on the following GPU types:
* NVIDIA A100
* NVIDIA A6000
* NVIDIA H100
If you want to run the model on:
* NVIDIA V100 or earlier generation GPUs: call AutoModelForCausalLM.from_pretrained() with attn_implementation="eager" (see the sketch after this list)
* Optimized inference on GPU, CPU, and Mobile: use the **ONNX** models [128K](https://aka.ms/phi3-mini-128k-instruct-onnx)
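A minimal sketch of the eager-attention fallback (the dtype and device map shown are assumptions):

```python
import torch
from transformers import AutoModelForCausalLM

# Fall back to eager attention on GPUs without flash-attention support (e.g. V100).
model = AutoModelForCausalLM.from_pretrained(
    "microsoft/Phi-3-mini-128k-instruct",
    torch_dtype=torch.float16,   # assumption; pick a dtype your GPU supports
    device_map="auto",           # assumption
    trust_remote_code=True,
    attn_implementation="eager",
)
```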
## Cross Platform Support
The ONNX Runtime ecosystem now supports Phi-3 Mini models across platforms and hardware. You can find the optimized Phi-3 Mini-128K-Instruct ONNX model [here](https://aka.ms/phi3-mini-128k-instruct-onnx).
Optimized Phi-3 models are also published here in ONNX format, to run with ONNX Runtime on CPU and GPU across devices, including server platforms, Windows, Linux and Mac desktops, and mobile CPUs, with the precision best suited to each of these targets. DirectML support lets developers bring hardware acceleration to Windows devices at scale across AMD, Intel, and NVIDIA GPUs.
Along with DirectML, ONNX Runtime provides cross-platform support for Phi-3 across a range of devices: CPU, GPU, and mobile.
Here are some of the optimized configurations we have added:
1. ONNX models for int4 DML: Quantized to int4 via AWQ
2. ONNX model for fp16 CUDA
3. ONNX model for int4 CUDA: Quantized to int4 via RTN
4. ONNX model for int4 CPU and Mobile: Quantized to int4 via RTN
## License
The model is licensed under the [MIT license](https://huggingface.co/microsoft/Phi-3-mini-128k/resolve/main/LICENSE).
## Trademarks
This project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft trademarks or logos is subject to and must follow [Microsoft’s Trademark & Brand Guidelines](https://www.microsoft.com/en-us/legal/intellectualproperty/trademarks). Use of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship. Any use of third-party trademarks or logos are subject to those third-party’s policies.
<!-- original-model-card end -->
|
LiteLLMs/Mistral-7B-Instruct-v0.3-GGUF | LiteLLMs | 2024-05-23T21:08:04Z | 41 | 1 | null | [
"gguf",
"GGUF",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-05-23T12:50:34Z |
---
license: apache-2.0
tags:
- GGUF
quantized_by: andrijdavid
---
# Mistral-7B-Instruct-v0.3-GGUF
- Original model: [Mistral-7B-Instruct-v0.3](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.3)
<!-- description start -->
## Description
This repo contains GGUF format model files for [Mistral-7B-Instruct-v0.3](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.3).
<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). This is the source project for GGUF, providing both a Command Line Interface (CLI) and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), Known as the most widely used web UI, this project boasts numerous features and powerful extensions, and supports GPU acceleration.
* [Ollama](https://github.com/jmorganca/ollama) Ollama is a lightweight and extensible framework designed for building and running language models locally. It features a simple API for creating, managing, and executing models, along with a library of pre-built models for use in various applications.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), A comprehensive web UI offering GPU acceleration across all platforms and architectures, particularly renowned for storytelling.
* [GPT4All](https://gpt4all.io), This is a free and open source GUI that runs locally, supporting Windows, Linux, and macOS with full GPU acceleration.
* [LM Studio](https://lmstudio.ai/) An intuitive and powerful local GUI for Windows and macOS (Silicon), featuring GPU acceleration.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui). A notable web UI with a variety of unique features, including a comprehensive model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), An attractive, user-friendly character-based chat GUI for Windows and macOS (both Silicon and Intel), also offering GPU acceleration.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), A Python library equipped with GPU acceleration, LangChain support, and an OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), A Rust-based ML framework focusing on performance, including GPU support, and designed for ease of use.
* [ctransformers](https://github.com/marella/ctransformers), A Python library featuring GPU acceleration, LangChain support, and an OpenAI-compatible AI server.
* [localGPT](https://github.com/PromtEngineer/localGPT) An open-source initiative enabling private conversations with documents.
<!-- README_GGUF.md-about-gguf end -->
<!-- compatibility_gguf start -->
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw.
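As a rough sanity check of the Q4_K figure above (a sketch; the per-super-block fp16 scale and min are an assumption about the layout):
```python
weights = 8 * 32            # weights per Q4_K super-block (8 blocks x 32 weights)
bits = weights * 4          # 4-bit quantized weights
bits += 8 * (6 + 6)         # 6-bit scale and min for each of the 8 blocks
bits += 2 * 16              # assumed fp16 super-block scale and min
print(bits / weights)       # -> 4.5 bits per weight
```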
</details>
<!-- compatibility_gguf end -->
<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single folder.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
* LM Studio
* LoLLMS Web UI
* Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: LiteLLMs/Mistral-7B-Instruct-v0.3-GGUF and below it, a specific filename to download, such as: Q4_0/Q4_0-00001-of-00009.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download LiteLLMs/Mistral-7B-Instruct-v0.3-GGUF Q4_0/Q4_0-00001-of-00009.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage (click to read)</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download LiteLLMs/Mistral-7B-Instruct-v0.3-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install huggingface_hub[hf_transfer]
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download LiteLLMs/Mistral-7B-Instruct-v0.3-GGUF Q4_0/Q4_0-00001-of-00009.gguf --local-dir . --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 35 -m Q4_0/Q4_0-00001-of-00009.gguf --color -c 8192 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "<PROMPT>"
```
Change `-ngl 35` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 8192` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. Note that longer sequence lengths require much more resources, so you may need to reduce this value.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
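For example, the command above becomes:
```shell
./main -ngl 35 -m Q4_0/Q4_0-00001-of-00009.gguf --color -c 8192 --temp 0.7 --repeat_penalty 1.1 -i -ins
```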
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions can be found in the text-generation-webui documentation, here: [text-generation-webui/docs/04 ‐ Model Tab.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/04%20%E2%80%90%20Model%20Tab.md#llamacpp).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. Note that at the time of writing (Nov 27th 2023), ctransformers has not been updated for some time and is not compatible with some recent models. Therefore I recommend you use llama-cpp-python.
### How to load this model in Python code, using llama-cpp-python
For full documentation, please see: [llama-cpp-python docs](https://abetlen.github.io/llama-cpp-python/).
#### First install the package
Run one of the following commands, according to your system:
```shell
# Base llama-cpp-python with no GPU acceleration
pip install llama-cpp-python
# With NVidia CUDA acceleration
CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python
# Or with OpenBLAS acceleration
CMAKE_ARGS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS" pip install llama-cpp-python
# Or with CLBLast acceleration
CMAKE_ARGS="-DLLAMA_CLBLAST=on" pip install llama-cpp-python
# Or with AMD ROCm GPU acceleration (Linux only)
CMAKE_ARGS="-DLLAMA_HIPBLAS=on" pip install llama-cpp-python
# Or with Metal GPU acceleration for macOS systems only
CMAKE_ARGS="-DLLAMA_METAL=on" pip install llama-cpp-python
# On Windows, to set the CMAKE_ARGS variable in PowerShell, follow this format; eg for NVidia CUDA:
$env:CMAKE_ARGS = "-DLLAMA_CUBLAS=on"
pip install llama-cpp-python
```
#### Simple llama-cpp-python example code
```python
from llama_cpp import Llama
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = Llama(
model_path="./Q4_0/Q4_0-00001-of-00009.gguf", # Download the model file first
n_ctx=32768, # The max sequence length to use - note that longer sequence lengths require much more resources
n_threads=8, # The number of CPU threads to use, tailor to your system and the resulting performance
n_gpu_layers=35 # The number of layers to offload to GPU, if you have GPU acceleration available
)
# Simple inference example
output = llm(
"<PROMPT>", # Prompt
max_tokens=512, # Generate up to 512 tokens
stop=["</s>"], # Example stop token - not necessarily correct for this specific model! Please check before using.
echo=True # Whether to echo the prompt
)
# Chat Completion API
llm = Llama(model_path="./Q4_0/Q4_0-00001-of-00009.gguf", chat_format="llama-2") # Set chat_format according to the model you are using
llm.create_chat_completion(
messages = [
{"role": "system", "content": "You are a story writing assistant."},
{
"role": "user",
"content": "Write a story about llamas."
}
]
)
```
## How to use with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
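For instance, a minimal sketch with `langchain_community`'s `LlamaCpp` wrapper (parameter values mirror the llama.cpp example above and should be tuned to your hardware):
```python
from langchain_community.llms import LlamaCpp

llm = LlamaCpp(
    model_path="./Q4_0/Q4_0-00001-of-00009.gguf",
    n_gpu_layers=35,   # set to 0 without GPU acceleration
    n_ctx=8192,
)
print(llm.invoke("Explain GGUF in one sentence."))
```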
<!-- README_GGUF.md-how-to-run end -->
<!-- footer end -->
<!-- original-model-card start -->
# Original model card: Mistral-7B-Instruct-v0.3
# Model Card for Mistral-7B-Instruct-v0.3
The Mistral-7B-Instruct-v0.3 Large Language Model (LLM) is an instruct fine-tuned version of the Mistral-7B-v0.3.
Mistral-7B-v0.3 has the following changes compared to [Mistral-7B-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2):
- Extended vocabulary to 32768
- Supports v3 Tokenizer
- Supports function calling
## Installation
It is recommended to use `mistralai/Mistral-7B-Instruct-v0.3` with [mistral-inference](https://github.com/mistralai/mistral-inference). For HF transformers code snippets, please keep scrolling.
```
pip install mistral_inference
```
## Download
```py
from huggingface_hub import snapshot_download
from pathlib import Path
mistral_models_path = Path.home().joinpath('mistral_models', '7B-Instruct-v0.3')
mistral_models_path.mkdir(parents=True, exist_ok=True)
snapshot_download(repo_id="mistralai/Mistral-7B-Instruct-v0.3", allow_patterns=["params.json", "consolidated.safetensors", "tokenizer.model.v3"], local_dir=mistral_models_path)
```
### Chat
After installing `mistral_inference`, a `mistral-chat` CLI command should be available in your environment. You can chat with the model using
```
mistral-chat $HOME/mistral_models/7B-Instruct-v0.3 --instruct --max_tokens 256
```
### Instruct following
```py
from mistral_inference.model import Transformer
from mistral_inference.generate import generate
from mistral_common.tokens.tokenizers.mistral import MistralTokenizer
from mistral_common.protocol.instruct.messages import UserMessage
from mistral_common.protocol.instruct.request import ChatCompletionRequest
tokenizer = MistralTokenizer.from_file(f"{mistral_models_path}/tokenizer.model.v3")
model = Transformer.from_folder(mistral_models_path)
completion_request = ChatCompletionRequest(messages=[UserMessage(content="Explain Machine Learning to me in a nutshell.")])
tokens = tokenizer.encode_chat_completion(completion_request).tokens
out_tokens, _ = generate([tokens], model, max_tokens=64, temperature=0.0, eos_id=tokenizer.instruct_tokenizer.tokenizer.eos_id)
result = tokenizer.instruct_tokenizer.tokenizer.decode(out_tokens[0])
print(result)
```
### Function calling
```py
from mistral_common.protocol.instruct.tool_calls import Function, Tool
from mistral_inference.model import Transformer
from mistral_inference.generate import generate
from mistral_common.tokens.tokenizers.mistral import MistralTokenizer
from mistral_common.protocol.instruct.messages import UserMessage
from mistral_common.protocol.instruct.request import ChatCompletionRequest
tokenizer = MistralTokenizer.from_file(f"{mistral_models_path}/tokenizer.model.v3")
model = Transformer.from_folder(mistral_models_path)
completion_request = ChatCompletionRequest(
tools=[
Tool(
function=Function(
name="get_current_weather",
description="Get the current weather",
parameters={
"type": "object",
"properties": {
"location": {
"type": "string",
"description": "The city and state, e.g. San Francisco, CA",
},
"format": {
"type": "string",
"enum": ["celsius", "fahrenheit"],
"description": "The temperature unit to use. Infer this from the users location.",
},
},
"required": ["location", "format"],
},
)
)
],
messages=[
UserMessage(content="What's the weather like today in Paris?"),
],
)
tokens = tokenizer.encode_chat_completion(completion_request).tokens
out_tokens, _ = generate([tokens], model, max_tokens=64, temperature=0.0, eos_id=tokenizer.instruct_tokenizer.tokenizer.eos_id)
result = tokenizer.instruct_tokenizer.tokenizer.decode(out_tokens[0])
print(result)
```
## Generate with `transformers`
If you want to use Hugging Face `transformers` to generate text, you can do something like this.
```py
from transformers import pipeline
messages = [
{"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"},
{"role": "user", "content": "Who are you?"},
]
chatbot = pipeline("text-generation", model="mistralai/Mistral-7B-Instruct-v0.3")
chatbot(messages)
```
## Limitations
The Mistral 7B Instruct model is a quick demonstration that the base model can be easily fine-tuned to achieve compelling performance.
It does not have any moderation mechanisms. We're looking forward to engaging with the community on ways to
make the model finely respect guardrails, allowing for deployment in environments requiring moderated outputs.
## The Mistral AI Team
Albert Jiang, Alexandre Sablayrolles, Alexis Tacnet, Antoine Roux, Arthur Mensch, Audrey Herblin-Stoop, Baptiste Bout, Baudouin de Monicault, Blanche Savary, Bam4d, Caroline Feldman, Devendra Singh Chaplot, Diego de las Casas, Eleonore Arcelin, Emma Bou Hanna, Etienne Metzger, Gianna Lengyel, Guillaume Bour, Guillaume Lample, Harizo Rajaona, Jean-Malo Delignon, Jia Li, Justus Murke, Louis Martin, Louis Ternon, Lucile Saulnier, Lélio Renard Lavaud, Margaret Jennings, Marie Pellat, Marie Torelli, Marie-Anne Lachaux, Nicolas Schuhl, Patrick von Platen, Pierre Stock, Sandeep Subramanian, Sophia Yang, Szymon Antoniak, Teven Le Scao, Thibaut Lavril, Timothée Lacroix, Théophile Gervet, Thomas Wang, Valera Nemychnikova, William El Sayed, William Marshall
<!-- original-model-card end -->
|
fine-tuned/NFCorpus-8-8-gpt-4o-2024-05-13-610535 | fine-tuned | 2024-05-23T21:07:10Z | 6 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"bert",
"feature-extraction",
"sentence-similarity",
"mteb",
"Medical",
"Nutrition",
"Queries",
"Documents",
"Relevance",
"custom_code",
"en",
"dataset:fine-tuned/NFCorpus-8-8-gpt-4o-2024-05-13-610535",
"dataset:allenai/c4",
"license:apache-2.0",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | feature-extraction | 2024-05-23T21:06:57Z | ---
license: apache-2.0
datasets:
- fine-tuned/NFCorpus-8-8-gpt-4o-2024-05-13-610535
- allenai/c4
language:
- en
pipeline_tag: feature-extraction
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- mteb
- Medical
- Nutrition
- Queries
- Documents
- Relevance
---
This model is a fine-tuned version of [**jinaai/jina-embeddings-v2-base-en**](https://huggingface.co/jinaai/jina-embeddings-v2-base-en) designed for the following use case:
medical information retrieval
## How to Use
This model can be easily integrated into your NLP pipeline for tasks such as text classification, sentiment analysis, entity recognition, and more. Here's a simple example to get you started:
```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.util import cos_sim
model = SentenceTransformer(
'fine-tuned/NFCorpus-8-8-gpt-4o-2024-05-13-610535',
trust_remote_code=True
)
embeddings = model.encode([
'first text to embed',
'second text to embed'
])
print(cos_sim(embeddings[0], embeddings[1]))
```
|
UGARIT/grc-ner-xlmr | UGARIT | 2024-05-23T21:03:04Z | 121 | 1 | transformers | [
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"grc",
"base_model:UGARIT/grc-alignment",
"base_model:finetune:UGARIT/grc-alignment",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2024-03-31T21:52:11Z | ---
language:
- grc
base_model:
- UGARIT/grc-alignment
tags:
- token-classification
license: mit
inference:
parameters:
aggregation_strategy: "first"
widget:
- text: "ταῦτα εἴπας ὁ Ἀλέξανδρος παρίζει Πέρσῃ ἀνδρὶ ἄνδρα Μακεδόνα ὡς γυναῖκα τῷ λόγῳ · οἳ δέ , ἐπείτε σφέων οἱ Πέρσαι ψαύειν ἐπειρῶντο , διεργάζοντο αὐτούς ."
example_title: "Example 1"
---
# Named Entity Recognition for Ancient Greek
Pretrained NER tagging model for ancient Greek
# Data
We trained the models on available annotated corpora in Ancient Greek.
There are only two sizeable annotated datasets in Ancient Greek, which are currently under release: the first one, by Berti 2023,
consists of a fully annotated text of Athenaeus’ Deipnosophists, developed in the context of the Digital Athenaeus project.
The second one, by Foka et al. 2020, is a fully annotated text of Pausanias’ Periegesis Hellados, developed in the context of the
Digital Periegesis project. In addition, we used smaller corpora annotated by students and scholars on Recogito:
the Odyssey annotated by Kemp 2021; a mixed corpus including excerpts from the Library attributed to Apollodorus and from Strabo’s Geography,
annotated by Chiara Palladino; Book 1 of Xenophon’s Anabasis, created by Thomas Visser; and Demosthenes’ Against Neaira,
created by Rachel Milio.
### Training Dataset
| | **Person** | **Location** | **NORP** | **MISC** |
|----------------|------------------|-------------------|-------------------|-------------------|
| Odyssey | 2.469 | 698 | 0 | 0 |
| Deipnosophists | 14.921 | 2.699 | 5.110 | 3.060 |
| Pausanias | 10.205 | 8.670 | 4.972 | 0 |
| Other Datasets | 3.283 | 2.040 | 1.089 | 0 |
| **Total** | **30.878** | **14.107** | **11.171** | **3.060** |
---
### Validation Dataset
| | **Person** | **Location** | **NORP** | **MISC** |
|----------------|------------------|-------------------|-------------------|-------------------|
| Xenophon | 1.190 | 796 | 857 | 0 |
# Results
| Class | Metric | Test | Validation |
|---------|-----------|--------|--------|
| **LOC** | precision | 83.33% | 88.66% |
| | recall | 81.27% | 88.94% |
| | f1 | 82.29% | 88.80% |
| **MISC** | precision | 83.25% | 0 |
| | recall | 81.21% | 0 |
| | f1 | 82.22% | 0 |
| **NORP** | precision | 88.71% | 94.76% |
| | recall | 90.76% | 94.50% |
| | f1 | 89.73% | 94.63% |
| **PER** | precision | 91.72% | 94.22% |
| | recall | 94.42% | 96.06% |
| | f1 | 93.05% | 95.13% |
| **Overall** | precision | 88.83% | 92.91% |
| | recall | 89.99% | 93.72% |
| | f1 | 89.41% | 93.32% |
| | Accuracy | 97.50% | 98.87% |
# Usage
This [colab notebook](https://colab.research.google.com/drive/1K6ER_C8d_AxBm0Yrtr628P3weH1Rxhht?usp=sharing) contains the necessary code to use the model.
```python
from transformers import pipeline
# create pipeline for NER
ner = pipeline('ner', model="UGARIT/grc-ner-xlmr", aggregation_strategy = 'first')
ner("ταῦτα εἴπας ὁ Ἀλέξανδρος παρίζει Πέρσῃ ἀνδρὶ ἄνδρα Μακεδόνα ὡς γυναῖκα τῷ λόγῳ · οἳ δέ , ἐπείτε σφέων οἱ Πέρσαι ψαύειν ἐπειρῶντο , διεργάζοντο αὐτούς .")
```
Output
```
[{'entity_group': 'PER',
'score': 0.9999428,
'word': '',
'start': 13,
'end': 14},
{'entity_group': 'PER',
'score': 0.99994195,
'word': 'Ἀλέξανδρος',
'start': 14,
'end': 24},
{'entity_group': 'NORP',
'score': 0.9087087,
'word': 'Πέρσῃ',
'start': 32,
'end': 38},
{'entity_group': 'NORP',
'score': 0.97572577,
'word': 'Μακεδόνα',
'start': 50,
'end': 59},
{'entity_group': 'NORP',
'score': 0.9993412,
'word': 'Πέρσαι',
'start': 104,
'end': 111}]
```
# Citation:
```
@inproceedings{palladino-yousef-2024-development,
title = "Development of Robust {NER} Models and Named Entity Tagsets for {A}ncient {G}reek",
author = "Palladino, Chiara and
Yousef, Tariq",
editor = "Sprugnoli, Rachele and
Passarotti, Marco",
booktitle = "Proceedings of the Third Workshop on Language Technologies for Historical and Ancient Languages (LT4HALA) @ LREC-COLING-2024",
month = may,
year = "2024",
address = "Torino, Italia",
publisher = "ELRA and ICCL",
url = "https://aclanthology.org/2024.lt4hala-1.11",
pages = "89--97",
abstract = "This contribution presents a novel approach to the development and evaluation of transformer-based models for Named Entity Recognition and Classification in Ancient Greek texts. We trained two models with annotated datasets by consolidating potentially ambiguous entity types under a harmonized set of classes. Then, we tested their performance with out-of-domain texts, reproducing a real-world use case. Both models performed very well under these conditions, with the multilingual model being slightly superior on the monolingual one. In the conclusion, we emphasize current limitations due to the scarcity of high-quality annotated corpora and to the lack of cohesive annotation strategies for ancient languages.",
}
``` |
mrovejaxd/FNST_trad_g | mrovejaxd | 2024-05-23T21:01:53Z | 7 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:dccuchile/bert-base-spanish-wwm-cased",
"base_model:finetune:dccuchile/bert-base-spanish-wwm-cased",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-05-22T23:03:33Z | ---
base_model: dccuchile/bert-base-spanish-wwm-cased
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: FNST_trad_g
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# FNST_trad_g
This model is a fine-tuned version of [dccuchile/bert-base-spanish-wwm-cased](https://huggingface.co/dccuchile/bert-base-spanish-wwm-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8709
- Accuracy: 0.6667
- F1: 0.6510
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-06
- train_batch_size: 12
- eval_batch_size: 12
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 12
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:|
| 1.1501 | 1.0 | 1000 | 1.0940 | 0.5208 | 0.3539 |
| 0.9834 | 2.0 | 2000 | 0.9339 | 0.5883 | 0.5317 |
| 0.8949 | 3.0 | 3000 | 0.8767 | 0.6233 | 0.6071 |
| 0.8316 | 4.0 | 4000 | 0.8421 | 0.64 | 0.6185 |
| 0.7873 | 5.0 | 5000 | 0.8242 | 0.645 | 0.6211 |
| 0.7676 | 6.0 | 6000 | 0.8323 | 0.6525 | 0.6356 |
| 0.7196 | 7.0 | 7000 | 0.8204 | 0.6542 | 0.6352 |
| 0.6937 | 8.0 | 8000 | 0.8229 | 0.6608 | 0.6449 |
| 0.6493 | 9.0 | 9000 | 0.8427 | 0.6617 | 0.6450 |
| 0.6254 | 10.0 | 10000 | 0.8474 | 0.6667 | 0.6502 |
| 0.5911 | 11.0 | 11000 | 0.8611 | 0.6708 | 0.6551 |
| 0.5732 | 12.0 | 12000 | 0.8709 | 0.6667 | 0.6510 |
### Framework versions
- Transformers 4.37.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.1
|
asquevedos/services-ucacue | asquevedos | 2024-05-23T21:01:45Z | 12 | 0 | transformers | [
"transformers",
"safetensors",
"roberta",
"text-classification",
"generated_from_trainer",
"base_model:BSC-LT/roberta-base-bne",
"base_model:finetune:BSC-LT/roberta-base-bne",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-03-12T16:41:02Z | ---
license: apache-2.0
base_model: BSC-LT/roberta-base-bne
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: services-ucacue
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# services-ucacue
This model is a fine-tuned version of [BSC-LT/roberta-base-bne](https://huggingface.co/BSC-LT/roberta-base-bne) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5312
- Accuracy: 0.8260
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 40
- eval_batch_size: 48
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 160
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 20
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.3007 | 1.0 | 158 | 0.6027 | 0.7798 |
| 0.5036 | 2.0 | 316 | 0.4827 | 0.8213 |
| 0.3994 | 3.0 | 474 | 0.4975 | 0.8213 |
| 0.2731 | 4.0 | 632 | 0.4928 | 0.8181 |
| 0.2132 | 5.0 | 790 | 0.5312 | 0.8260 |
### Framework versions
- Transformers 4.39.3
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
soukainaedr1222/ChatPhi3 | soukainaedr1222 | 2024-05-23T21:00:09Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-05-23T20:37:16Z | ---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
fine-tuned/NFCorpus-8-8-gpt-4o-2024-05-13-847943 | fine-tuned | 2024-05-23T20:58:49Z | 6 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"xlm-roberta",
"feature-extraction",
"sentence-similarity",
"mteb",
"Medical",
"Nutrition",
"Information",
"Retrieval",
"Dataset",
"en",
"dataset:fine-tuned/NFCorpus-8-8-gpt-4o-2024-05-13-847943",
"dataset:allenai/c4",
"license:apache-2.0",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | feature-extraction | 2024-05-23T20:57:57Z | ---
license: apache-2.0
datasets:
- fine-tuned/NFCorpus-8-8-gpt-4o-2024-05-13-847943
- allenai/c4
language:
- en
pipeline_tag: feature-extraction
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- mteb
- Medical
- Nutrition
- Information
- Retrieval
- Dataset
---
This model is a fine-tuned version of [**BAAI/bge-m3**](https://huggingface.co/BAAI/bge-m3) designed for the following use case:
academic search for medical information retrieval
## How to Use
This model can be easily integrated into your NLP pipeline for tasks such as text classification, sentiment analysis, entity recognition, and more. Here's a simple example to get you started:
```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.util import cos_sim
model = SentenceTransformer(
'fine-tuned/NFCorpus-8-8-gpt-4o-2024-05-13-847943',
trust_remote_code=True
)
embeddings = model.encode([
'first text to embed',
'second text to embed'
])
print(cos_sim(embeddings[0], embeddings[1]))
```
|
faisalq/bert-base-arapoembert | faisalq | 2024-05-23T20:47:23Z | 129 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"fill-mask",
"Arabic BERT",
"Poetry",
"Masked Langauge Model",
"ar",
"arxiv:2403.12392",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2023-07-15T08:15:28Z | ---
license: cc-by-nc-4.0
language:
- ar
tags:
- Arabic BERT
- Poetry
- Masked Langauge Model
---
**AraPoemBERT** is the first pre-trained large language model focused exclusively on Arabic poetry. The dataset used in pretraining the model contains more than 2 million verses. The code files along with the results are available on [repo](https://github.com/FaisalQarah/araPoemBERT).
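Since the model was pretrained with a masked-language objective, a quick way to try it is the `fill-mask` pipeline. A minimal sketch (the verse and the standard BERT `[MASK]` token are illustrative assumptions):
```python
from transformers import pipeline

fill = pipeline("fill-mask", model="faisalq/bert-base-arapoembert")
# Opening verse of Imru' al-Qais' Mu'allaqa with one word masked.
for pred in fill("قفا نبك من ذكرى [MASK] ومنزل")[:3]:
    print(pred["token_str"], pred["score"])
```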
# BibTex
If you use the AraPoemBERT model in your scientific publication, or if you find the resources in this repository useful, please cite our paper as follows:
```bibtex
@article{qarah2024arapoembert,
title={AraPoemBERT: A Pretrained Language Model for Arabic Poetry Analysis},
author={Qarah, Faisal},
journal={arXiv preprint arXiv:2403.12392},
year={2024}
}
```
|
hgnoi/Q8DO3oiJKPwtihU3 | hgnoi | 2024-05-23T20:42:44Z | 133 | 0 | transformers | [
"transformers",
"safetensors",
"stablelm",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-23T20:41:04Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
johnnyf/a2c-PandaReachDense-v3 | johnnyf | 2024-05-23T20:40:18Z | 1 | 0 | stable-baselines3 | [
"stable-baselines3",
"PandaReachDense-v3",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2024-05-23T16:59:19Z | ---
library_name: stable-baselines3
tags:
- PandaReachDense-v3
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v3
type: PandaReachDense-v3
metrics:
- type: mean_reward
value: -0.21 +/- 0.14
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v3**
This is a trained model of a **A2C** agent playing **PandaReachDense-v3**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumption; check the repo's file list):
```python
from stable_baselines3 import A2C
from huggingface_sb3 import load_from_hub

# Download the checkpoint from the Hub, then restore the A2C agent.
checkpoint = load_from_hub("johnnyf/a2c-PandaReachDense-v3", "a2c-PandaReachDense-v3.zip")
model = A2C.load(checkpoint)
```
|
mlx-community/aya-23-8B-4bit | mlx-community | 2024-05-23T20:35:05Z | 101 | 2 | transformers | [
"transformers",
"safetensors",
"cohere",
"text-generation",
"mlx",
"conversational",
"en",
"fr",
"de",
"es",
"it",
"pt",
"ja",
"ko",
"zh",
"ar",
"el",
"fa",
"pl",
"id",
"cs",
"he",
"hi",
"nl",
"ro",
"ru",
"tr",
"uk",
"vi",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-23T14:45:40Z | ---
language:
- en
- fr
- de
- es
- it
- pt
- ja
- ko
- zh
- ar
- el
- fa
- pl
- id
- cs
- he
- hi
- nl
- ro
- ru
- tr
- uk
- vi
license: cc-by-nc-4.0
library_name: transformers
tags:
- mlx
---
# mlx-community/aya-23-8B-4bit
The Model [mlx-community/aya-23-8B-4bit](https://huggingface.co/mlx-community/aya-23-8B-4bit) was converted to MLX format from [CohereForAI/aya-23-8B](https://huggingface.co/CohereForAI/aya-23-8B) using mlx-lm version **0.13.1**.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate
model, tokenizer = load("mlx-community/aya-23-8B-4bit")
response = generate(model, tokenizer, prompt="hello", verbose=True)
```
|
GenVRadmin/llama38bGenZ_Vikas-Merged | GenVRadmin | 2024-05-23T20:34:26Z | 15 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-22T14:18:21Z | ---
license: mit
---
Llama 3 variant for 22 Indian languages:
1. Tamil
2. Telugu
3. Assamese
4. Kashmiri
5. Punjabi
6. Bengali
7. Sanskrit
8. Malayalam
9. Sindhi
10. Marathi
11. Gujarati
12. Kannada
13. Odia
14. Maithili
15. Urdu
16. Nepali
17. Manipuri
18. Dogri
19. English
20. Arabic
21. Santali
22. Bodo
We first pre-trained the model on more than 100 million Indic-language tokens.
Then, it was finetuned on the closed-source GenZ_Vikas datasets, consisting of 7.5 million SFT pairs, including 5.5 million Hindi SFT pairs.
Finally, it underwent DPO training to align it with human preferences.
The model has been benchmarked on the Indic LLM leaderboard, where it outperforms our AryaBhatta series on Hindi evals and the Llama 3 base model on all Indian languages.
Training happened on 2×A100 for 24 days.
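A minimal generation sketch with plain `transformers` (the bare prompt format is an assumption; the model may respond better with your own instruction template):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "GenVRadmin/llama38bGenZ_Vikas-Merged"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

# Hindi prompt: "What is the capital of India?"
inputs = tokenizer("भारत की राजधानी क्या है?", return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```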
Link: https://huggingface.co/spaces/Cognitive-Lab/indic_llm_leaderboard
Release link: https://www.linkedin.com/feed/update/urn:li:activity:7199506579828662272 |
lenatr99/loha_fine_tuned_rte_XLMroberta | lenatr99 | 2024-05-23T20:31:22Z | 4 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:adapter:FacebookAI/xlm-roberta-base",
"license:mit",
"region:us"
] | null | 2024-05-23T20:31:14Z | ---
license: mit
library_name: peft
tags:
- generated_from_trainer
base_model: xlm-roberta-base
metrics:
- accuracy
- f1
model-index:
- name: loha_fine_tuned_rte_XLMroberta
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# loha_fine_tuned_rte_XLMroberta
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4748
- Accuracy: 0.6897
- F1: 0.6873
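Since only the LoHa adapter is published here, inference means attaching it to the base model with PEFT. A minimal sketch (the two-label RTE head and the example sentence pair are assumptions):
```python
from peft import PeftModel
from transformers import AutoModelForSequenceClassification, AutoTokenizer

base = AutoModelForSequenceClassification.from_pretrained("xlm-roberta-base", num_labels=2)
model = PeftModel.from_pretrained(base, "lenatr99/loha_fine_tuned_rte_XLMroberta")
tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base")

# RTE takes a premise/hypothesis pair.
inputs = tokenizer("A man is playing a guitar.", "A person plays an instrument.", return_tensors="pt")
print(model(**inputs).logits.argmax(-1))
```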
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 400
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-------:|:----:|:---------------:|:--------:|:------:|
| 0.8438 | 1.7241 | 50 | 0.7276 | 0.4138 | 0.2422 |
| 0.7427 | 3.4483 | 100 | 0.6836 | 0.5862 | 0.4333 |
| 0.6788 | 5.1724 | 150 | 0.9009 | 0.4828 | 0.3781 |
| 0.5085 | 6.8966 | 200 | 1.6699 | 0.5172 | 0.4944 |
| 0.2264 | 8.6207 | 250 | 2.0941 | 0.6207 | 0.6179 |
| 0.094 | 10.3448 | 300 | 2.0207 | 0.6897 | 0.6687 |
| 0.0286 | 12.0690 | 350 | 2.3929 | 0.6552 | 0.6491 |
| 0.004 | 13.7931 | 400 | 2.4748 | 0.6897 | 0.6873 |
### Framework versions
- PEFT 0.11.1
- Transformers 4.41.0
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1 |
rubenamtz0/hc-mistral-alpaca | rubenamtz0 | 2024-05-23T20:28:28Z | 0 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:meta-llama/Meta-Llama-3-8B",
"base_model:adapter:meta-llama/Meta-Llama-3-8B",
"license:llama3",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2024-05-22T18:51:39Z | ---
license: llama3
library_name: peft
tags:
- axolotl
- generated_from_trainer
base_model: meta-llama/Meta-Llama-3-8B
model-index:
- name: hc-mistral-alpaca
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.0`
```yaml
base_model: meta-llama/Meta-Llama-3-8B
model_type: LlamaForCausalLM
tokenizer_type: AutoTokenizer
load_in_8bit: true
load_in_4bit: false
strict: false
datasets:
- path: _synth_data/alpaca_synth_queries_healed_tiny.jsonl
type: sharegpt
conversation: alpaca
dataset_prepared_path: last_run_prepared
val_set_size: 0.1
output_dir: ./lora-alpaca-out
hub_model_id: rubenamtz0/hc-mistral-alpaca
sequence_len: 4096
sample_packing: true
pad_to_sequence_len: true
adapter: lora
lora_model_dir:
lora_r: 32
lora_alpha: 16
lora_dropout: 0.05
lora_target_linear: true
lora_fan_in_fan_out:
wandb_project: fine-tuning
wandb_entity:
wandb_watch:
wandb_name:
wandb_log_model:
gradient_accumulation_steps: 4
micro_batch_size: 2
num_epochs: 4
optimizer: adamw_bnb_8bit
lr_scheduler: cosine
learning_rate: 0.0002
train_on_inputs: false
group_by_length: false
bf16: auto
fp16:
tf32: false
gradient_checkpointing: true
early_stopping_patience:
resume_from_checkpoint:
local_rank:
logging_steps: 1
xformers_attention:
flash_attention: true
s2_attention:
eval_sample_packing: false
warmup_steps: 10
evals_per_epoch: 4
eval_table_size:
eval_max_new_tokens: 128
saves_per_epoch: 1
debug:
deepspeed:
weight_decay: 0.0
fsdp:
fsdp_config:
special_tokens:
pad_token: <|end_of_text|>
```
</details><br>
# hc-mistral-alpaca
This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B](https://huggingface.co/meta-llama/Meta-Llama-3-8B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5970
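Because this repo holds a LoRA adapter rather than merged weights, inference requires attaching it to the Llama 3 base model. A minimal sketch (assuming access to the gated base model; the alpaca-style prompt used in training is not reproduced here):
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Meta-Llama-3-8B", torch_dtype="auto", device_map="auto"
)
model = PeftModel.from_pretrained(base, "rubenamtz0/hc-mistral-alpaca")
tokenizer = AutoTokenizer.from_pretrained("rubenamtz0/hc-mistral-alpaca")
```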
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- distributed_type: multi-GPU
- num_devices: 2
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- total_eval_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.5663 | 1.0 | 1 | 1.6992 |
| 1.5706 | 1.5 | 2 | 1.6898 |
| 0.785 | 2.0 | 3 | 1.6674 |
| 0.7713 | 2.5 | 4 | 1.5970 |
### Framework versions
- PEFT 0.10.0
- Transformers 4.40.2
- Pytorch 2.1.2+cu118
- Datasets 2.19.1
- Tokenizers 0.19.1 |