Dataset columns: `modelId` (string, length 5–139), `author` (string, length 2–42), `last_modified` (timestamp[us, tz=UTC], 2020-02-15 11:33:14 to 2025-07-16 06:27:54), `downloads` (int64, 0 to 223M), `likes` (int64, 0 to 11.7k), `library_name` (string, 522 classes), `tags` (list, length 1–4.05k), `pipeline_tag` (string, 55 classes), `createdAt` (timestamp[us, tz=UTC], 2022-03-02 23:29:04 to 2025-07-16 06:27:41), `card` (string, length 11–1.01M).

modelId | author | last_modified | downloads | likes | library_name | tags | pipeline_tag | createdAt | card
---|---|---|---|---|---|---|---|---|---
leeloolee/boosted-qwen2-7b | leeloolee | 2024-07-17T14:38:43Z | 5 | 1 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"custom_code",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2024-07-17T06:09:51Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
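The card leaves this section as a placeholder. As a hedged stand-in, here is a minimal sketch of generic usage for a Qwen2-architecture chat model; `trust_remote_code=True` follows the repo's `custom_code` tag, and the prompt is illustrative, not from the card.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "leeloolee/boosted-qwen2-7b"
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype="auto", device_map="auto", trust_remote_code=True
)

# Build a chat prompt using the tokenizer's chat template
messages = [{"role": "user", "content": "Summarize what a transformer model is in one sentence."}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)

outputs = model.generate(inputs, max_new_tokens=64)
# Decode only the newly generated tokens
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```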
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
sinequa/passage-ranker.pistachio | sinequa | 2024-07-17T14:37:27Z | 252 | 1 | transformers | [
"transformers",
"pytorch",
"bert",
"text-classification",
"de",
"en",
"es",
"fr",
"it",
"ja",
"nl",
"pt",
"zh",
"pl",
"arxiv:1901.04085",
"arxiv:1611.09268",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2024-07-11T14:02:06Z | ---
language:
- de
- en
- es
- fr
- it
- ja
- nl
- pt
- zh
- pl
---
# Model Card for `passage-ranker.pistachio`
This model is a passage ranker developed by Sinequa. It produces a relevance score given a query-passage pair and is used to order search results.
Model name: `passage-ranker.pistachio`
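The model is intended to run inside the Sinequa platform. As a rough illustration of what a MonoBERT-style ranker does, here is a hedged sketch that scores a query-passage pair with plain `transformers`, assuming the checkpoint loads as a standard sequence-classification head; the query and passage are made up.
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "sinequa/passage-ranker.pistachio"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

query = "how to renew a passport"
passage = "You can renew your passport online or by mail within six months of expiry."

# Encode the query-passage pair as a single cross-encoder input
inputs = tokenizer(query, passage, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits

# MonoBERT-style rankers expose relevance via the classification logits;
# the exact post-processing (e.g. softmax over two classes) depends on the head.
print(logits)
```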
## Supported Languages
The model was trained and tested in the following languages:
- English
- French
- German
- Spanish
- Italian
- Dutch
- Japanese
- Portuguese
- Chinese (simplified)
- Polish
Besides the languages listed above, basic support can be expected for the 93 additional languages used during the pretraining of the base model (see
[list of languages](https://github.com/google-research/bert/blob/master/multilingual.md#list-of-languages)).
## Scores
| Metric | Value |
|:----------------------------|------:|
| English Relevance (NDCG@10) | 0.474 |
| Polish Relevance (NDCG@10) | 0.380 |
Note that the relevance score is computed as an average over several retrieval datasets (see
[details below](#evaluation-metrics)).
## Inference Times
| GPU | Quantization type | Batch size 1 | Batch size 32 |
|:------------------------------------------|:------------------|---------------:|---------------:|
| NVIDIA A10 | FP16 | 2 ms | 28 ms |
| NVIDIA A10 | FP32 | 4 ms | 82 ms |
| NVIDIA T4 | FP16 | 3 ms | 65 ms |
| NVIDIA T4 | FP32 | 14 ms | 369 ms |
| NVIDIA L4 | FP16 | 3 ms | 38 ms |
| NVIDIA L4 | FP32 | 5 ms | 123 ms |
## GPU Memory Usage
| Quantization type | Memory |
|:-------------------------------------------------|-----------:|
| FP16 | 850 MiB |
| FP32 | 1200 MiB |
Note that GPU memory usage only covers how much GPU memory the actual model consumes on an NVIDIA T4 GPU with a batch
size of 32. It does not include the fixed amount of memory consumed by the ONNX Runtime upon initialization, which
can be around 0.5 to 1 GiB depending on the GPU used.
## Requirements
- Minimal Sinequa version: 11.10.0
- Minimal Sinequa version for using FP16 models and GPUs with CUDA compute capability of 8.9+ (like NVIDIA L4): 11.11.0
- [CUDA compute capability](https://developer.nvidia.com/cuda-gpus): above 5.0 (above 6.0 for FP16 use)
## Model Details
### Overview
- Number of parameters: 167 million
- Base language model: [Multilingual BERT-Base](https://huggingface.co/bert-base-multilingual-uncased)
- Insensitive to casing and accents
- Training procedure: [MonoBERT](https://arxiv.org/abs/1901.04085)
### Training Data
- MS MARCO Passage Ranking
([Paper](https://arxiv.org/abs/1611.09268),
[Official Page](https://microsoft.github.io/msmarco/),
[English & translated datasets on the HF dataset hub](https://huggingface.co/datasets/unicamp-dl/mmarco), [translated dataset in Polish on the HF dataset hub](https://huggingface.co/datasets/clarin-knext/msmarco-pl))
- Original English dataset
- Translated datasets for the other nine supported languages
### Evaluation Metrics
#### English
To determine the relevance score, we averaged the results that we obtained when evaluating on the datasets of the
[BEIR benchmark](https://github.com/beir-cellar/beir). Note that all these datasets are in English.
| Dataset | NDCG@10 |
|:------------------|--------:|
| Average | 0.474 |
| | |
| Arguana | 0.539 |
| CLIMATE-FEVER | 0.230 |
| DBPedia Entity | 0.369 |
| FEVER | 0.765 |
| FiQA-2018 | 0.329 |
| HotpotQA | 0.694 |
| MS MARCO | 0.413 |
| NFCorpus | 0.337 |
| NQ | 0.486 |
| Quora | 0.714 |
| SCIDOCS | 0.144 |
| SciFact | 0.649 |
| TREC-COVID | 0.651 |
| Webis-Touche-2020 | 0.312 |
#### Polish
This model has Polish capabilities, which are evaluated on a subset of
the [PIRB benchmark](https://github.com/sdadas/pirb) with BM25 as the first-stage retrieval.
| Dataset | NDCG@10 |
|:--------------|--------:|
| Average | 0.380 |
| | |
| arguana-pl | 0.285 |
| dbpedia-pl | 0.283 |
| fiqa-pl | 0.223 |
| hotpotqa-pl | 0.603 |
| msmarco-pl | 0.259 |
| nfcorpus-pl | 0.293 |
| nq-pl | 0.355 |
| quora-pl | 0.613 |
| scidocs-pl | 0.128 |
| scifact-pl | 0.581 |
| trec-covid-pl | 0.560 |
#### Other languages
We evaluated the model on the datasets of the [MIRACL benchmark](https://github.com/project-miracl/miracl) to test its
multilingual capabilities. Note that not all training languages are part of the benchmark, so we only report metrics
for the languages it covers.
| Language | NDCG@10 |
|:----------------------|--------:|
| French | 0.439 |
| German | 0.418 |
| Spanish | 0.487 |
| Japanese | 0.517 |
| Chinese (simplified) | 0.454 | |
Diluzx/gemma-2-9b-bnb-4bit | Diluzx | 2024-07-17T14:37:06Z | 20 | 0 | transformers | [
"transformers",
"safetensors",
"gguf",
"gemma2",
"text-generation-inference",
"unsloth",
"trl",
"en",
"base_model:unsloth/gemma-2-9b-bnb-4bit",
"base_model:quantized:unsloth/gemma-2-9b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2024-07-17T13:35:51Z | ---
base_model: unsloth/gemma-2-9b-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- gemma2
- trl
---
# Uploaded model
- **Developed by:** Diluzx
- **License:** apache-2.0
- **Finetuned from model:** unsloth/gemma-2-9b-bnb-4bit
This gemma2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
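As a hedged sketch (not from the card), the checkpoint can presumably be loaded like any Hugging Face causal LM; Unsloth's own `FastLanguageModel` loader is the path the project documents, so plain `transformers` usage here is an assumption, and the prompt is illustrative.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Diluzx/gemma-2-9b-bnb-4bit"  # repo id from this card
tokenizer = AutoTokenizer.from_pretrained(model_id)
# Loading a pre-quantized bnb-4bit checkpoint requires bitsandbytes to be installed
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer("The capital of France is", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=16)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```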
|
anasmkh/fintuned_pythia_ubuntu_commands | anasmkh | 2024-07-17T14:25:06Z | 167 | 0 | transformers | [
"transformers",
"safetensors",
"gpt_neox",
"text-generation",
"en",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2024-07-17T14:02:52Z | ---
library_name: transformers
language:
- en
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** Anas Mkh
- **Funded by [optional]:** []
- **Shared by [optional]:** Anas Mkh
- **Model type:** Question Answering
- **Language(s) (NLP):** English
- **License:** NA
- **Finetuned from model [optional]:** pythia-410m
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [anasmkh/fintuned_pythia_ubuntu_commands](https://huggingface.co/anasmkh/fintuned_pythia_ubuntu_commands)
- **Paper [optional]:** NA
- **Demo [optional]:** NA
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
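The card leaves this section as a placeholder. The following is a hedged sketch of generic causal-LM usage; the prompt format the finetune expects is not documented, so the example prompt is illustrative only.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "anasmkh/fintuned_pythia_ubuntu_commands"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

prompt = "How do I list all files, including hidden ones, in a directory?"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```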
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
4Fun2Old/Kai_v1-DiscoPhoenix | 4Fun2Old | 2024-07-17T14:22:23Z | 6 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"base_model:DRXD1000/Phoenix-7B",
"base_model:merge:DRXD1000/Phoenix-7B",
"base_model:Keynote-Technology/KAI-7B-v0.1",
"base_model:merge:Keynote-Technology/KAI-7B-v0.1",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2024-07-17T14:11:03Z | ---
base_model:
- Keynote-Technology/KAI-7B-v0.1
- DRXD1000/Phoenix
library_name: transformers
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* [Keynote-Technology/KAI-7B-v0.1](https://huggingface.co/Keynote-Technology/KAI-7B-v0.1)
* [DRXD1000/Phoenix](https://huggingface.co/DRXD1000/Phoenix)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
slices:
- sources:
- model: Keynote-Technology/KAI-7B-v0.1
layer_range: [0, 32]
- model: DRXD1000/Phoenix
layer_range: [0, 32]
merge_method: slerp
base_model: Keynote-Technology/KAI-7B-v0.1
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5 # fallback for rest of tensors
dtype: float16
```
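As a hedged usage sketch (not part of the original card), the merged checkpoint should load like any Mistral-architecture causal LM; the prompt below is illustrative.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "4Fun2Old/Kai_v1-DiscoPhoenix"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

prompt = "Explain SLERP model merging in one sentence."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```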
|
vitamin-c-sharp/multiview-diffusion-generator | vitamin-c-sharp | 2024-07-17T14:17:04Z | 16 | 0 | diffusers | [
"diffusers",
"safetensors",
"image-to-3d",
"arxiv:2312.02201",
"license:openrail",
"diffusers:MVDreamPipeline",
"region:us"
]
| image-to-3d | 2024-07-17T13:54:28Z | ---
license: openrail
pipeline_tag: image-to-3d
---
This is a copy of [ashawkey/imagedream-ipmv-diffusers](https://huggingface.co/ashawkey/imagedream-ipmv-diffusers).
It is hosted here for persistence throughout the ML for 3D course.
# MVDream-diffusers Model Card
This is a port of https://huggingface.co/Peng-Wang/ImageDream into diffusers.
For usage, please check: https://github.com/ashawkey/mvdream_diffusers
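Since the repo declares a custom `MVDreamPipeline`, a heavily hedged loading sketch with `diffusers` might look like the following; the authoritative usage lives in the linked GitHub repository, and the exact call signature here is an assumption.
```python
import torch
from diffusers import DiffusionPipeline

# The custom pipeline code is fetched from the repo, hence trust_remote_code
pipe = DiffusionPipeline.from_pretrained(
    "vitamin-c-sharp/multiview-diffusion-generator",
    torch_dtype=torch.float16,
    trust_remote_code=True,
).to("cuda")
```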
## Citation
```
@article{wang2023imagedream,
title={ImageDream: Image-Prompt Multi-view Diffusion for 3D Generation},
author={Wang, Peng and Shi, Yichun},
journal={arXiv preprint arXiv:2312.02201},
year={2023}
}
```
## Misuse, Malicious Use, and Out-of-Scope Use
The model should not be used to intentionally create or disseminate images that create hostile or alienating environments for people. This includes generating images that people would foreseeably find disturbing, distressing, or offensive; or content that propagates historical or current stereotypes.
|
Dichitha/bert_mrpc_trained | Dichitha | 2024-07-17T14:16:44Z | 108 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"bert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2024-07-17T13:46:04Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
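The card leaves this section as a placeholder. As a hedged sketch, usage is assumed from the repo name (BERT fine-tuned on MRPC, i.e. paraphrase detection over sentence pairs); the label mapping is an assumption as well.
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "Dichitha/bert_mrpc_trained"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

# Encode a sentence pair for paraphrase classification
inputs = tokenizer(
    "The company reported strong earnings.",
    "Strong earnings were reported by the company.",
    return_tensors="pt",
)
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)
print(probs)  # MRPC convention: index 1 = paraphrase (assumed)
```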
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
mahendra0203/whisper-small-hi-test | mahendra0203 | 2024-07-17T14:15:02Z | 78 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"hi",
"dataset:mozilla-foundation/common_voice_11_0",
"base_model:openai/whisper-small",
"base_model:finetune:openai/whisper-small",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2024-07-17T14:07:40Z | ---
language:
- hi
license: apache-2.0
base_model: openai/whisper-small
tags:
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_11_0
model-index:
- name: Whisper Small Hindi test
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Small Hindi test
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Common Voice 11.0 dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
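The card provides no usage snippet; here is a minimal hedged sketch using the standard `transformers` ASR pipeline, where the audio file name is a placeholder.
```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="mahendra0203/whisper-small-hi-test")
# "sample_hi.mp3" is a placeholder for your own Hindi audio file
print(asr("sample_hi.mp3")["text"])
```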
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 50
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.42.4
- Pytorch 2.2.0+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1
|
gokulsrinivasagan/gpt_132 | gokulsrinivasagan | 2024-07-17T14:15:01Z | 113 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"gpt2",
"text-generation",
"generated_from_trainer",
"dataset:gokuls/wiki_book_corpus_complete_raw_dataset",
"base_model:gokulsrinivasagan/gpt_120",
"base_model:finetune:gokulsrinivasagan/gpt_120",
"license:mit",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2024-07-17T00:39:20Z | ---
license: mit
base_model: gokulsrinivasagan/gpt_120
tags:
- generated_from_trainer
datasets:
- gokuls/wiki_book_corpus_complete_raw_dataset
metrics:
- accuracy
model-index:
- name: gpt_132
results:
- task:
name: Causal Language Modeling
type: text-generation
dataset:
name: gokuls/wiki_book_corpus_complete_raw_dataset
type: gokuls/wiki_book_corpus_complete_raw_dataset
metrics:
- name: Accuracy
type: accuracy
value: 0.38595119982890264
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/gokulsrinivasagan/huggingface/runs/bbd1d9wi)
# gpt_132
This model is a fine-tuned version of [gokulsrinivasagan/gpt_120](https://huggingface.co/gokulsrinivasagan/gpt_120) on the gokuls/wiki_book_corpus_complete_raw_dataset dataset.
It achieves the following results on the evaluation set:
- Loss: 3.0759
- Accuracy: 0.3860
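The card does not include usage code; as a hedged sketch, the checkpoint should work with the standard text-generation pipeline (the prompt is illustrative).
```python
from transformers import pipeline

generator = pipeline("text-generation", model="gokulsrinivasagan/gpt_132")
print(generator("The printing press was", max_new_tokens=40)[0]["generated_text"])
```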
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 100
### Training results
### Framework versions
- Transformers 4.42.0.dev0
- Pytorch 2.2.2+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
|
speechbrain/hifigan-wavlm-l1-3-7-12-18-23-continuous-LibriTTS | speechbrain | 2024-07-17T14:13:15Z | 33 | 0 | speechbrain | [
"speechbrain",
"Vocoder",
"HiFIGAN",
"speech-synthesis",
"en",
"dataset:LibriTTS",
"arxiv:2406.10735",
"arxiv:2406.14294",
"license:apache-2.0",
"region:us"
]
| null | 2024-05-22T08:06:05Z | ---
language: "en"
inference: false
tags:
- Vocoder
- HiFIGAN
- speech-synthesis
- speechbrain
license: "apache-2.0"
datasets:
- LibriTTS
---
# Vocoder with HiFIGAN Unit trained on LibriTTS
This repository provides all the necessary tools for using a [scalable HiFiGAN Unit](https://arxiv.org/abs/2406.10735) vocoder trained with [LibriTTS](https://www.openslr.org/141/).
The pre-trained model takes continuous self-supervised representations as input and produces a waveform as output. This is suitable for a wide range of generative tasks such as speech enhancement, separation, text-to-speech, voice cloning, etc. Please read [DASB - Discrete Audio and Speech Benchmark](https://arxiv.org/abs/2406.14294) for more information.
To generate the continuous self-supervised representations, we use `microsoft/wavlm-large`.
## Install SpeechBrain
First of all, please install transformers and SpeechBrain with the following command:
```
pip install speechbrain transformers
```
Please note that we encourage you to read our tutorials and learn more about
[SpeechBrain](https://speechbrain.github.io).
### Using the Vocoder
```python
import torch
from speechbrain.inference.vocoders import UnitHIFIGAN
hifi_gan_unit = UnitHIFIGAN.from_hparams(source="speechbrain/hifigan-wavlm-l1-3-7-12-18-23-continuous-LibriTTS", savedir="pretrained_models/vocoder")
# Dummy continuous representations with shape (time, batch, feature dim)
codes = torch.rand(100, 1, 1024)
waveform = hifi_gan_unit.decode_unit(codes)
```
### Inference on GPU
To perform inference on the GPU, add `run_opts={"device":"cuda"}` when calling the `from_hparams` method.
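For example:
```python
hifi_gan_unit = UnitHIFIGAN.from_hparams(
    source="speechbrain/hifigan-wavlm-l1-3-7-12-18-23-continuous-LibriTTS",
    savedir="pretrained_models/vocoder",
    run_opts={"device": "cuda"},  # run the model and inference on the GPU
)
```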
### Limitations
The SpeechBrain team does not provide any warranty on the performance achieved by this model when used on other datasets.
#### Referencing SpeechBrain
```
@misc{SB2021,
author = {Ravanelli, Mirco and Parcollet, Titouan and Rouhe, Aku and Plantinga, Peter and Rastorgueva, Elena and Lugosch, Loren and Dawalatabad, Nauman and Ju-Chieh, Chou and Heba, Abdel and Grondin, Francois and Aris, William and Liao, Chien-Feng and Cornell, Samuele and Yeh, Sung-Lin and Na, Hwidong and Gao, Yan and Fu, Szu-Wei and Subakan, Cem and De Mori, Renato and Bengio, Yoshua },
title = {SpeechBrain},
year = {2021},
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {\url{https://github.com/speechbrain/speechbrain}},
}
```
#### About SpeechBrain
SpeechBrain is an open-source and all-in-one speech toolkit. It is designed to be simple, extremely flexible, and user-friendly. Competitive or state-of-the-art performance is obtained in various domains.
Website: https://speechbrain.github.io/
GitHub: https://github.com/speechbrain/speechbrain |
aritrasen/bge-base-en-v1.5-ft | aritrasen | 2024-07-17T14:13:02Z | 6 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"bert",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:21",
"loss:MultipleNegativesRankingLoss",
"arxiv:1908.10084",
"arxiv:1705.00652",
"base_model:BAAI/bge-base-en-v1.5",
"base_model:finetune:BAAI/bge-base-en-v1.5",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
]
| sentence-similarity | 2024-07-17T14:12:45Z | ---
base_model: BAAI/bge-base-en-v1.5
datasets: []
language: []
library_name: sentence-transformers
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:21
- loss:MultipleNegativesRankingLoss
widget:
- source_sentence: '| Config | Model |
Epochs | Max seq length | Micro batch size | Machine | Training runtime | Cost
| Peak memory | Validation loss | Validation perplexity | Multitask score (MMLU)
|
| --------------------------------- | ---------------------- | ------ | --------------
| ---------------- | ------- | ---------------- | ---- | ----------- | ---------------
| --------------------- | --------------- |
| falcon-7b/lora.yaml | falcon-7b | 4 | 512 |
1 | 1xA10G | 24.84 min | $0.7 | 16.69 GB | 0.945 |
2.573 | 26.2% |
| falcon-7b/lora.yaml | falcon-7b | 4 | 512 |
1 | 4xA10G | 24.94 min | $2.0 | 16.69 GB | 0.945 |
2.573 | 26.4% |
| falcon-7b/qlora.yaml | falcon-7b | 4 | 512 |
1 | 1xA10G | 50.85 min | $1.5 | 9.44 GB | 0.993 |
2.699 | 26.3% |
| falcon-7b/qlora.yaml | falcon-7b | 4 | 512 |
1 | 4xA10G | 50.88 min | $4.1 | 9.44 GB | 0.993 |
2.699 | 26.3% |
| | | | | | | | | | | | |
| gemma-2b/full.yaml | gemma-2b | 1 | 512 |
1 | 4xA10G | 14.06 min | $1.1 | 17.43 GB | 1.021 |
2.777 | 32.4% |
| gemma-2b/lora.yaml | gemma-2b | 2 | 512 |
2 | 1xA10G | 9.41 min | $0.3 | 12.62 GB | 0.981 |
2.666 | 34.4% |'
sentences:
- 'What is the command to download the pretrained model weights for the Llama-2-7b-hf
model?
'
- 'What is the version of nvfuser\_cu121 used?
'
- 'What is the training runtime for the gemma-2b model with the lora configuration?
'
- source_sentence: "# Serve and Deploy LLMs\n\nThis document shows how you can serve\
\ a LitGPT for deployment. \n\n \n## Serve an LLM\n\nThis section illustrates\
\ how we can set up an inference server for a phi-2 LLM using `litgpt serve` that\
\ is minimal and highly scalable.\n\n\n \n## Step 1: Start the inference\
\ server\n\n\n```bash\n# 1) Download a pretrained model (alternatively, use your\
\ own finetuned model)\nlitgpt download --repo_id microsoft/phi-2\n\n# 2) Start\
\ the server\nlitgpt serve --checkpoint_dir checkpoints/microsoft/phi-2\n```\n\
\n> [!TIP]\n> Use `litgpt serve --help` to display additional options, including\
\ the port, devices, LLM temperature setting, and more.\n\n\n \n## Step 2:\
\ Query the inference server\n\nYou can now send requests to the inference server\
\ you started in step 2. For example, in a new Python session, we can send requests\
\ to the inference server as follows:\n\n\n```python\nimport requests, json\n\n\
response = requests.post(\n \"http://127.0.0.1:8000/predict\", \n json={\"\
prompt\": \"Fix typos in the following sentence: Exampel input\"}\n)\n\nprint(response.json()[\"\
output\"])\n```\n\nExecuting the code above prints the following output:\n\n```\n\
Instruct: Fix typos in the following sentence: Exampel input\nOutput: Example\
\ input.\n```"
sentences:
- 'What command do I use to convert the finetuned model to a HF transformer model?
'
- 'How do you merge LoRA weights into the original model''s checkpoint?
'
- 'How can I start an inference server for a phi-2 LLM using litgpt serve?
'
---
# SentenceTransformer based on BAAI/bge-base-en-v1.5
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [BAAI/bge-base-en-v1.5](https://huggingface.co/BAAI/bge-base-en-v1.5). It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [BAAI/bge-base-en-v1.5](https://huggingface.co/BAAI/bge-base-en-v1.5) <!-- at revision a5beb1e3e68b9ab74eb54cfd186867f64f240e1a -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 768 dimensions
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': True}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("aritrasen/bge-base-en-v1.5-ft")
# Run inference
sentences = [
'# Serve and Deploy LLMs\n\nThis document shows how you can serve a LitGPT for deployment. \n\n \n## Serve an LLM\n\nThis section illustrates how we can set up an inference server for a phi-2 LLM using `litgpt serve` that is minimal and highly scalable.\n\n\n \n## Step 1: Start the inference server\n\n\n```bash\n# 1) Download a pretrained model (alternatively, use your own finetuned model)\nlitgpt download --repo_id microsoft/phi-2\n\n# 2) Start the server\nlitgpt serve --checkpoint_dir checkpoints/microsoft/phi-2\n```\n\n> [!TIP]\n> Use `litgpt serve --help` to display additional options, including the port, devices, LLM temperature setting, and more.\n\n\n \n## Step 2: Query the inference server\n\nYou can now send requests to the inference server you started in step 2. For example, in a new Python session, we can send requests to the inference server as follows:\n\n\n```python\nimport requests, json\n\nresponse = requests.post(\n "http://127.0.0.1:8000/predict", \n json={"prompt": "Fix typos in the following sentence: Exampel input"}\n)\n\nprint(response.json()["output"])\n```\n\nExecuting the code above prints the following output:\n\n```\nInstruct: Fix typos in the following sentence: Exampel input\nOutput: Example input.\n```',
'How can I start an inference server for a phi-2 LLM using litgpt serve?\n',
'What command do I use to convert the finetuned model to a HF transformer model?\n',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### Unnamed Dataset
* Size: 21 training samples
* Columns: <code>anchor</code> and <code>positive</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive |
|:--------|:-------------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 51 tokens</li><li>mean: 424.62 tokens</li><li>max: 512 tokens</li></ul> | <ul><li>min: 12 tokens</li><li>mean: 17.19 tokens</li><li>max: 26 tokens</li></ul> |
* Samples:
| anchor | positive |
|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|
| <code>| 7 B | Llama 2 | bnb.nf4 | 1 | 4,194,304 | 14.14 GB | 3.68 min |<br>| 7 B | Llama 2 | bnb.nf4-dq | 1 | 4,194,304 | 13.84 GB | 3.83 min |<br>| 7 B | Llama 2 | None | 2 | 4,194,304 | 29.07 GB | 2.52 min |<br>| 7 B | Llama 2 | None | 4 | 4,194,304 | OOM | - |<br>| | | | | | | |<br>| 13 B | Llama 2 | None | 1 | 6,553,600 | 38.12 GB | 3.19 min |<br>| 13 B | Llama 2 | bnb.nf4 | 1 | 6,553,600 | 23.14 GB | 6.38 min |<br>| 13 B | Llama 2 | bnb.nf4-dq | 1 | 6,553,600 | 22.55 GB | 6.55 min |<br>| 13 B | Llama 2 | None | 2 | 6,553,600 | OOM | - |<br>| 13 B | Llama 2 | None | 4 | 6,553,600 | OOM | - |<br>| | | | | | | |<br>| 40 B | Falcon | None | 1 | 12,042,240 | OOM | - |<br>| 40 B | Falcon | bnb.nf4 | 1 | 12,042,240 | OOM | - |<br>| 40 B | Falcon | bnb.nf4-dq | 1 | 12,042,240 | OOM | - |</code> | <code>What is the memory usage of Llama 2 with 7B when using bnb.nf4-dq?<br></code> |
| <code>1. Follow the instructions above to load the model into a Hugging Face transformers model.<br><br>2. Create a `model.safetensor` file:<br><br>```python<br>model.save_pretrained("out/hf-tinyllama/converted/")<br>```<br><br>3. Copy the tokenizer files into the model-containing directory:<br><br>```bash<br>cp checkpoints/$repo_id/tokenizer* out/hf-tinyllama/converted<br>```<br><br>4. Run the evaluation harness, for example:<br><br>```bash<br>lm_eval --model hf \<br> --model_args pretrained=out/hf-tinyllama/converted \<br> --tasks "hellaswag,gsm8k,truthfulqa_mc2,mmlu,winogrande,arc_challenge" \<br> --device "cuda:0" \<br> --batch_size 4<br>```</code> | <code>What is the command to run the evaluation harness?<br></code> |
| <code>The LM Evaluation Harness requires a tokenizer to be present in the model checkpoint folder, which we can copy from the original download checkpoint:<br><br>```bash<br># Copy the tokenizer needed by the Eval Harness<br>cp checkpoints/microsoft/phi-2/tokenizer*<br>out/converted_model<br>```<br><br>Then, we can run the Evaluation Harness as follows:<br><br>```bash<br>lm_eval --model hf \<br> --model_args pretrained="out/converted_model" \<br> --tasks "hellaswag,gsm8k,truthfulqa_mc2,mmlu,winogrande,arc_challenge" \<br> --device "cuda:0" \<br> --batch_size 4<br>```<br><br> <br><br>> [!TIP]<br>> The Evaluation Harness tasks above are those used in Open LLM Leaderboard. You can find a list all supported tasks [here](https://github.com/EleutherAI/lm-evaluation-harness/blob/master/docs/task_table.md).<br><br><br><br> <br>**More information and additional resources**<br><br>- [tutorials/convert_lit_models](./convert_lit_models.md): Tutorial on converting LitGPT weights<br><br><br><br> <br><br>## Get involved!<br><br>We appreciate your feedback and contributions. If you have feature requests, questions, or want to contribute code or config files, please don't hesitate to use the [GitHub Issue](https://github.com/Lightning-AI/litgpt/issues) tracker.<br><br>We welcome all individual contributors, regardless of their level of experience or hardware. Your contributions are valuable, and we are excited to see what you can accomplish in this collaborative and supportive environment.<br><br> <br><br>> [!TIP]<br>> Unsure about contributing? Check out our [How to Contribute to LitGPT](https://lightning.ai/pages/community/tutorial/how-to-contribute-to-litgpt/) guide.<br><br> <br><br>If you have general questions about building with LitGPT, please [join our Discord](https://discord.gg/VptPCZkGNa).</code> | <code>What is the command to run the Evaluation Harness?<br></code> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
### Evaluation Dataset
#### Unnamed Dataset
* Size: 10 evaluation samples
* Columns: <code>anchor</code> and <code>positive</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive |
|:--------|:-------------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 273 tokens</li><li>mean: 460.8 tokens</li><li>max: 512 tokens</li></ul> | <ul><li>min: 10 tokens</li><li>mean: 20.1 tokens</li><li>max: 34 tokens</li></ul> |
* Samples:
| anchor | positive |
|:--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------------|
| <code>(this table was sourced from the author's [README](https://github.com/jzhang38/TinyLlama/))<br><br> <br>## Download datasets<br><br>You can download the data using git lfs:<br><br>```bash<br># Make sure you have git-lfs installed (https://git-lfs.com):<br>sudo apt install git-lfs<br>```<br><br>```bash<br>git clone https://huggingface.co/datasets/cerebras/slimpajama-627b data/slimpajama-raw<br>git clone https://huggingface.co/datasets/bigcode/starcoderdata data/starcoderdata-raw<br>```<br><br>Around 1.2 TB of disk space is required to store both datasets.<br><br> <br>## Prepare the datasets for training<br><br>In order to start pretraining litgpt on it, you need to read, tokenize, and write the data in binary chunks. This will leverage the `litdata` optimization pipeline and streaming dataset.<br><br>First, install additional dependencies for preprocessing:<br><br>```bash<br>pip install '.[all]'<br>```<br><br>You will need to have the tokenizer config available:<br><br>```bash<br>litgpt download \<br> --repo_id meta-llama/Llama-2-7b-hf \<br> --access_token your_hf_token \<br> --tokenizer_only true<br>```<br><br>Then, run the preprocessing script for each dataset and split.<br>You will require **1.1 TB** of disk space for Starcoder and **2.5** TB of space for the SlimPajama dataset.<br><br>**Starcoder:**<br><br>```bash<br>python litgpt/data/prepare_starcoder.py \<br> --input_dir data/starcoderdata-raw \<br> --output_dir data/starcoder \<br> --tokenizer_path checkpoints/meta-llama/Llama-2-7b-hf<br>```<br><br>**SlimPajama:**<br><br>```bash<br>python litgpt/data/prepare_slimpajama.py \<br> --input_dir data/slimpajama-raw/validation \<br> --output_dir data/slimpajama/val \<br> --tokenizer_path checkpoints/meta-llama/Llama-2-7b-hf<br><br>python litgpt/data/prepare_slimpajama.py \<br> --input_dir data/slimpajama-raw/test \<br> --output_dir data/slimpajama/test \<br> --tokenizer_path checkpoints/meta-llama/Llama-2-7b-hf<br><br>python litgpt/data/prepare_slimpajama.py \<br> --input_dir data/slimpajama-raw/train \<br> --output_dir data/slimpajama/train \<br> --tokenizer_path checkpoints/meta-llama/Llama-2-7b-hf<br>```</code> | <code>How much disk space is required to store the SlimPajama dataset?<br></code> |
| <code># Serve and Deploy LLMs<br><br>This document shows how you can serve a LitGPT for deployment. <br><br> <br>## Serve an LLM<br><br>This section illustrates how we can set up an inference server for a phi-2 LLM using `litgpt serve` that is minimal and highly scalable.<br><br><br> <br>## Step 1: Start the inference server<br><br><br>```bash<br># 1) Download a pretrained model (alternatively, use your own finetuned model)<br>litgpt download --repo_id microsoft/phi-2<br><br># 2) Start the server<br>litgpt serve --checkpoint_dir checkpoints/microsoft/phi-2<br>```<br><br>> [!TIP]<br>> Use `litgpt serve --help` to display additional options, including the port, devices, LLM temperature setting, and more.<br><br><br> <br>## Step 2: Query the inference server<br><br>You can now send requests to the inference server you started in step 2. For example, in a new Python session, we can send requests to the inference server as follows:<br><br><br>```python<br>import requests, json<br><br>response = requests.post(<br> "http://127.0.0.1:8000/predict", <br> json={"prompt": "Fix typos in the following sentence: Exampel input"}<br>)<br><br>print(response.json()["output"])<br>```<br><br>Executing the code above prints the following output:<br><br>```<br>Instruct: Fix typos in the following sentence: Exampel input<br>Output: Example input.<br>```</code> | <code>How can I start an inference server for a phi-2 LLM using litgpt serve?<br></code> |
| <code># TPU support<br><br>This project utilizes [`Fabric`](https://lightning.ai/docs/fabric/stable), which supports TPUs via [PyTorch XLA](https://github.com/pytorch/xla).<br><br>> [!NOTE]<br>> This guide assumes that you have already set-up your [Google Cloud environment](https://cloud.google.com/run/docs/setup).<br><br>To set up a Google Cloud instance with a TPU v4 VM, run the following commands:<br><br>```shell<br>gcloud compute tpus tpu-vm create litgpt --version=tpu-vm-v4-base --accelerator-type=v4-8 --zone=us-central2-b<br>gcloud compute tpus tpu-vm ssh litgpt --zone=us-central2-b<br>```<br><br>You can also choose a different TPU type. To do so, change the `version`, `accelerator-type`, and `zone` arguments. Find all regions and zones [here](https://cloud.google.com/tpu/docs/regions-zones).<br><br><details><br><summary>Multihost caveats</summary><br><br>TPU v4-8 uses a single host. SSH'ing into the machine and running commands manually will only work when using a single host (1 slice in the TPU pod).<br>In multi-host environments, such as larger TPU pod slices, it's necessary to launch all commands on all hosts simultaneously to avoid hangs.<br>For local development, it is advisable to upload a zip file containing all your current changes and execute it inside the VM from your personal computer:<br><br>```shell<br># Zip the local directory, excluding large directories from the zip. You may want to keep them.<br>zip -r local_changes.zip . -x ".git/*" "checkpoints/*" "data/*" "out/*"<br># Copy the .zip file to the TPU VM<br>gcloud compute tpus tpu-vm scp --worker=all local_changes.zip "litgpt:~"<br># Unzip on each host<br>gcloud compute tpus tpu-vm ssh litgpt --worker=all --command="cd ~; unzip -q -o local_changes.zip"<br><br># Example of a typical workflow<br>gcloud compute tpus tpu-vm ssh tmp --worker=all --command="cd ~; bash install_dependencies.sh"<br>gcloud compute tpus tpu-vm ssh tmp --worker=all --command="cd ~; bash prepare_checkpoints.sh"<br>gcloud compute tpus tpu-vm ssh tmp --worker=all --command="cd ~; bash run_desired_script.sh"</code> | <code>How does this project support TPUs?<br></code> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: steps
- `per_device_train_batch_size`: 5
- `per_device_eval_batch_size`: 5
- `num_train_epochs`: 5
- `warmup_ratio`: 0.1
- `fp16`: True
- `batch_sampler`: no_duplicates
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 5
- `per_device_eval_batch_size`: 5
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `learning_rate`: 5e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 5
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: True
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: False
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: proportional
</details>
### Training Logs
| Epoch | Step | Training Loss | loss |
|:-----:|:----:|:-------------:|:------:|
| 0.4 | 2 | 0.6407 | 0.4190 |
| 0.8 | 4 | 0.7873 | 0.2789 |
| 1.2 | 6 | 0.1871 | 0.2089 |
| 1.6 | 8 | 0.2125 | 0.1718 |
| 2.0 | 10 | 0.0374 | 0.1648 |
| 2.4 | 12 | 0.1923 | 0.1695 |
| 2.8 | 14 | 0.0183 | 0.1723 |
| 3.2 | 16 | 0.1582 | 0.1770 |
| 3.6 | 18 | 0.0032 | 0.1824 |
| 4.0 | 20 | 0.0015 | 0.1870 |
| 4.4 | 22 | 0.1399 | 0.1901 |
| 4.8   | 24   | 0.0020        | 0.1914 |
### Framework Versions
- Python: 3.10.12
- Sentence Transformers: 3.0.1
- Transformers: 4.41.2
- PyTorch: 2.3.0+cu121
- Accelerate: 0.27.0
- Datasets: 2.20.0
- Tokenizers: 0.19.1
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
--> |
John6666/pony-photo-real-merge-x3-v1-sdxl | John6666 | 2024-07-17T14:09:39Z | 108 | 0 | diffusers | [
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"stable-diffusion-xl",
"realistic",
"photorealistic",
"pony",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
]
| text-to-image | 2024-07-17T14:02:51Z | ---
license: other
license_name: faipl-1.0-sd
license_link: https://freedevproject.org/faipl-1.0-sd/
tags:
- text-to-image
- stable-diffusion
- stable-diffusion-xl
- realistic
- photorealistic
- pony
---
Original model is [here](https://civitai.com/models/584093/pony-photo-real-merge-x3?modelVersionId=651674).
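A usage sketch inferred from the repo tags (`diffusers` / `StableDiffusionXLPipeline`); the prompt below is a hypothetical example, not from the original card:
```python
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "John6666/pony-photo-real-merge-x3-v1-sdxl", torch_dtype=torch.float16
).to("cuda")
image = pipe("photo of a vintage car at sunset, realistic").images[0]
image.save("out.png")
```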
|
shreyasfadnavis/proc_bert_merge_finmed | shreyasfadnavis | 2024-07-17T14:04:53Z | 6 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"arxiv:2203.05482",
"base_model:PAIXAI/Astrid-7B-LLama-Med",
"base_model:merge:PAIXAI/Astrid-7B-LLama-Med",
"base_model:cxllin/Llama2-7b-Finance",
"base_model:merge:cxllin/Llama2-7b-Finance",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2024-07-17T13:30:28Z | ---
base_model:
- cxllin/Llama2-7b-Finance
- PAIXAI/Astrid-7B-LLama-Med
library_name: transformers
tags:
- mergekit
- merge
---
# proc_merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [linear](https://arxiv.org/abs/2203.05482) merge method.
### Models Merged
The following models were included in the merge:
* [cxllin/Llama2-7b-Finance](https://huggingface.co/cxllin/Llama2-7b-Finance)
* [PAIXAI/Astrid-7B-LLama-Med](https://huggingface.co/PAIXAI/Astrid-7B-LLama-Med)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: PAIXAI/Astrid-7B-LLama-Med
parameters:
weight: 0.5
- model: cxllin/Llama2-7b-Finance
parameters:
weight: 0.5
merge_method: linear
dtype: float16
```
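A minimal sketch for loading the resulting merge with Transformers (assuming the merged weights are the ones hosted in this repo):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "shreyasfadnavis/proc_bert_merge_finmed"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id, torch_dtype="auto")
```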
|
speechbrain/hifigan-wav2vec-l1-3-7-12-18-23-k1000-LibriTTS | speechbrain | 2024-07-17T14:04:40Z | 11 | 0 | speechbrain | [
"speechbrain",
"Vocoder",
"HiFIGAN",
"speech-synthesis",
"en",
"dataset:LibriTTS",
"arxiv:2406.10735",
"arxiv:2406.14294",
"license:apache-2.0",
"region:us"
]
| null | 2024-05-22T08:04:16Z | ---
language: "en"
inference: false
tags:
- Vocoder
- HiFIGAN
- speech-synthesis
- speechbrain
license: "apache-2.0"
datasets:
- LibriTTS
---
<iframe src="https://ghbtns.com/github-btn.html?user=speechbrain&repo=speechbrain&type=star&count=true&size=large&v=2" frameborder="0" scrolling="0" width="170" height="30" title="GitHub"></iframe>
<br/><br/>
# Vocoder with HiFIGAN Unit trained on LibriTTS
This repository provides all the necessary tools for using a [scalable HiFiGAN Unit](https://arxiv.org/abs/2406.10735) vocoder trained with [LibriTTS](https://www.openslr.org/141/).
The pre-trained model takes discrete self-supervised representations as input and produces a waveform as output. This makes it suitable for a wide range of generative tasks such as speech enhancement, separation, text-to-speech, voice cloning, etc. Please read [DASB - Discrete Audio and Speech Benchmark](https://arxiv.org/abs/2406.14294) for more information.
To generate the discrete self-supervised representations, we employ a K-means clustering model (k=1000) trained on hidden layers of `facebook/wav2vec2-large-960h-lv60-self`.
## Install SpeechBrain
First of all, please install Transformers and SpeechBrain with the following command:
```
pip install speechbrain transformers
```
Please note that we encourage you to read our tutorials to learn more about [SpeechBrain](https://speechbrain.github.io).
### Using the Vocoder
```python
import torch
from speechbrain.inference.vocoders import UnitHIFIGAN

# Load the pre-trained unit vocoder
hifi_gan_unit = UnitHIFIGAN.from_hparams(source="speechbrain/hifigan-wav2vec-l1-3-7-12-18-23-k1000-LibriTTS", savedir="pretrained_models/vocoder")
# Dummy sequence of 100 discrete units; replace with real K-means codes
codes = torch.randint(0, 99, (100, 1))
# Decode the discrete units into a waveform
waveform = hifi_gan_unit.decode_unit(codes)
```
### Inference on GPU
To perform inference on the GPU, add `run_opts={"device":"cuda"}` when calling the `from_hparams` method.
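For example, the same call as above with the extra argument:

```python
hifi_gan_unit = UnitHIFIGAN.from_hparams(
    source="speechbrain/hifigan-wav2vec-l1-3-7-12-18-23-k1000-LibriTTS",
    savedir="pretrained_models/vocoder",
    run_opts={"device": "cuda"},  # run inference on the GPU
)
```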
### Limitations
The SpeechBrain team does not provide any warranty on the performance achieved by this model when used on other datasets.
#### Referencing SpeechBrain
```
@misc{SB2021,
author = {Ravanelli, Mirco and Parcollet, Titouan and Rouhe, Aku and Plantinga, Peter and Rastorgueva, Elena and Lugosch, Loren and Dawalatabad, Nauman and Ju-Chieh, Chou and Heba, Abdel and Grondin, Francois and Aris, William and Liao, Chien-Feng and Cornell, Samuele and Yeh, Sung-Lin and Na, Hwidong and Gao, Yan and Fu, Szu-Wei and Subakan, Cem and De Mori, Renato and Bengio, Yoshua },
title = {SpeechBrain},
year = {2021},
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {\\url{https://github.com/speechbrain/speechbrain}},
}
```
#### About SpeechBrain
SpeechBrain is an open-source and all-in-one speech toolkit. It is designed to be simple, extremely flexible, and user-friendly. Competitive or state-of-the-art performance is obtained in various domains.
Website: https://speechbrain.github.io/
GitHub: https://github.com/speechbrain/speechbrain |
ArrayDice/car_orientation_classification | ArrayDice | 2024-07-17T14:01:16Z | 26 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"vit",
"image-classification",
"generated_from_trainer",
"base_model:google/vit-base-patch16-224-in21k",
"base_model:finetune:google/vit-base-patch16-224-in21k",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| image-classification | 2024-07-17T09:21:35Z | ---
license: apache-2.0
base_model: google/vit-base-patch16-224-in21k
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: car_orientation_classification2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# car_orientation_classification2
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6800
- Accuracy: 0.6926
## Model description
More information needed
## Intended uses & limitations
More information needed
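Although the card gives no usage details, a minimal inference sketch would look like the following (my assumption, not the author's documented usage; the label set depends on the undocumented training data):

```python
from transformers import pipeline

classifier = pipeline("image-classification", model="ArrayDice/car_orientation_classification")
print(classifier("path/to/car.jpg"))  # hypothetical image path
```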
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 40
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.9933 | 1.0 | 68 | 1.9084 | 0.4099 |
| 1.4721 | 2.0 | 136 | 1.2870 | 0.5124 |
| 1.1677 | 3.0 | 204 | 1.0780 | 0.5265 |
| 0.9919 | 4.0 | 272 | 0.9454 | 0.5760 |
| 0.8392 | 5.0 | 340 | 0.8184 | 0.6926 |
| 0.7778 | 6.0 | 408 | 0.8311 | 0.6431 |
| 0.7341 | 7.0 | 476 | 0.7425 | 0.6572 |
| 0.6695 | 8.0 | 544 | 0.6800 | 0.6926 |
### Framework versions
- Transformers 4.41.2
- Pytorch 2.3.0+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1
|
Deeokay/Phi-3-medium-4k-MYP-GGUF | Deeokay | 2024-07-17T13:54:08Z | 34 | 0 | transformers | [
"transformers",
"gguf",
"mistral",
"text-generation-inference",
"unsloth",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
]
| null | 2024-07-17T13:12:43Z | ---
base_model: unsloth/phi-3-mini-4k-instruct-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- gguf
---
# Uploaded model
- **Developed by:** Deeokay
- **License:** apache-2.0
- **Finetuned from model :** unsloth/phi-3-medium-4k-instruct-bnb-4bit
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
# README
This is a test model trained on the following:
- a private dataset focused on students in the MYP (IB) program, built for my niece
- Works with `ollama create` using just `FROM path/to/model` as the Modelfile (the standard template works with no issues)
# HOW TO USE
The whole point of the conversion for me was to be able to use the model through Ollama (or other local options).
For Ollama, the model needs to be a GGUF file. Once you have that, it is pretty straightforward (as long as the model uses a standard template, which this one does).
Quick Start:
- You must already have Ollama running on your system
- Download the unsloth.Q4_K_M.gguf model from Files
- In the same directory, create a file called "Modelfile"
- Inside the "Modelfile", type:
```
FROM ./unsloth.Q4_K_M.gguf # or whichever GGUF file you downloaded
```
- Save it and go back to the folder (the folder where the model + Modelfile exist)
- Now, in a terminal, make sure you are in that folder and type the following command:
```bash
ollama create mycustomai # "mycustomai" <- you can name it anything you want
```
This GGUF is based on unsloth/Phi-3-medium-4k-instruct, so Ollama doesn't need anything else to auto-configure the model.
After that you should be able to chat with the model!
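If you prefer scripting the chat, here is a minimal sketch using the `ollama` Python client (this is my assumption of a convenient workflow, not part of the original instructions):

```python
import ollama  # pip install ollama

# "mycustomai" is whatever name you gave in `ollama create`
response = ollama.chat(
    model="mycustomai",
    messages=[{"role": "user", "content": "Explain photosynthesis for an MYP student."}],
)
print(response["message"]["content"])
```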
# NOTE: DISCLAIMER
Please note this model is not intended for production use; it is the result of self-taught fine-tuning.
The special tokens were kept the same, and the training data uses the following template:
```
<s><|user|>{question}<|end|>
<|assistant|>{answer}<|end|></s>
```
|
kingabzpro/llama-3-8b-chat-doctor-Q4_K_M-GGUF | kingabzpro | 2024-07-17T13:52:29Z | 7 | 0 | transformers | [
"transformers",
"gguf",
"medical",
"llama-cpp",
"gguf-my-repo",
"question-answering",
"en",
"dataset:ruslanmv/ai-medical-chatbot",
"base_model:kingabzpro/llama-3-8b-chat-doctor",
"base_model:quantized:kingabzpro/llama-3-8b-chat-doctor",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
]
| question-answering | 2024-07-17T13:52:07Z | ---
base_model: kingabzpro/llama-3-8b-chat-doctor
datasets:
- ruslanmv/ai-medical-chatbot
language:
- en
library_name: transformers
license: apache-2.0
pipeline_tag: question-answering
tags:
- medical
- llama-cpp
- gguf-my-repo
---
# kingabzpro/llama-3-8b-chat-doctor-Q4_K_M-GGUF
This model was converted to GGUF format from [`kingabzpro/llama-3-8b-chat-doctor`](https://huggingface.co/kingabzpro/llama-3-8b-chat-doctor) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/kingabzpro/llama-3-8b-chat-doctor) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo kingabzpro/llama-3-8b-chat-doctor-Q4_K_M-GGUF --hf-file llama-3-8b-chat-doctor-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo kingabzpro/llama-3-8b-chat-doctor-Q4_K_M-GGUF --hf-file llama-3-8b-chat-doctor-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo kingabzpro/llama-3-8b-chat-doctor-Q4_K_M-GGUF --hf-file llama-3-8b-chat-doctor-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo kingabzpro/llama-3-8b-chat-doctor-Q4_K_M-GGUF --hf-file llama-3-8b-chat-doctor-q4_k_m.gguf -c 2048
```
|
Deeokay/Phi-3-medium-4k-MYP-4bit | Deeokay | 2024-07-17T13:51:53Z | 78 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
]
| text-generation | 2024-07-17T13:41:44Z | ---
base_model: unsloth/phi-3-medium-4k-instruct-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
- sft
---
# Uploaded model
- **Developed by:** Deeokay
- **License:** apache-2.0
- **Finetuned from model :** unsloth/phi-3-medium-4k-instruct-bnb-4bit
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Sajjad-underline/ppo-Huggy | Sajjad-underline | 2024-07-17T13:38:54Z | 7 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
]
| reinforcement-learning | 2024-07-17T13:38:41Z | ---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: Sajjad-underline/ppo-Huggy
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
Tinsae/Florence-Fish-8 | Tinsae | 2024-07-17T13:35:42Z | 108 | 0 | transformers | [
"transformers",
"safetensors",
"florence2",
"text-generation",
"custom_code",
"arxiv:1910.09700",
"autotrain_compatible",
"region:us"
]
| text-generation | 2024-07-16T21:38:10Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
udomsak/layoutlmv3-document-classification | udomsak | 2024-07-17T13:34:06Z | 134 | 0 | transformers | [
"transformers",
"safetensors",
"layoutlmv3",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2024-07-17T13:33:31Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Felladrin/gguf-Qwen1.5-0.5B-Chat | Felladrin | 2024-07-17T13:30:46Z | 155 | 2 | null | [
"gguf",
"base_model:Qwen/Qwen1.5-0.5B-Chat",
"base_model:quantized:Qwen/Qwen1.5-0.5B-Chat",
"endpoints_compatible",
"region:us",
"conversational"
]
| null | 2024-04-29T11:27:19Z | ---
base_model: Qwen/Qwen1.5-0.5B-Chat
---
GGUF version of [Qwen/Qwen1.5-0.5B-Chat](https://huggingface.co/Qwen/Qwen1.5-0.5B-Chat).
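A quick way to try it locally is via llama-cpp-python (a sketch; the `.gguf` filename below is hypothetical, use whichever quantization you downloaded):
```python
from llama_cpp import Llama  # pip install llama-cpp-python

llm = Llama(model_path="qwen1_5-0_5b-chat-q4_k_m.gguf", n_ctx=2048)
out = llm.create_chat_completion(messages=[{"role": "user", "content": "Hello!"}])
print(out["choices"][0]["message"]["content"])
```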
|
botbot-ai/CabraLlama3-70b | botbot-ai | 2024-07-17T13:23:43Z | 274 | 6 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"portuguese",
"cabra",
"llama-3",
"conversational",
"pt",
"dataset:botbot-ai/Cabra3k",
"license:llama3",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2024-06-19T16:03:03Z | ---
language:
- pt
license: llama3
library_name: transformers
tags:
- portuguese
- llama
- cabra
- llama-3
datasets:
- botbot-ai/Cabra3k
model-index:
- name: CabraLlama3-70b
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: ENEM Challenge (No Images)
type: eduagarcia/enem_challenge
split: train
args:
num_few_shot: 3
metrics:
- type: acc
value: 82.02
name: accuracy
source:
url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=botbot-ai/CabraLlama3-70b
name: Open Portuguese LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: BLUEX (No Images)
type: eduagarcia-temp/BLUEX_without_images
split: train
args:
num_few_shot: 3
metrics:
- type: acc
value: 70.1
name: accuracy
source:
url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=botbot-ai/CabraLlama3-70b
name: Open Portuguese LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: OAB Exams
type: eduagarcia/oab_exams
split: train
args:
num_few_shot: 3
metrics:
- type: acc
value: 68.52
name: accuracy
source:
url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=botbot-ai/CabraLlama3-70b
name: Open Portuguese LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Assin2 RTE
type: assin2
split: test
args:
num_few_shot: 15
metrics:
- type: f1_macro
value: 93.21
name: f1-macro
source:
url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=botbot-ai/CabraLlama3-70b
name: Open Portuguese LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Assin2 STS
type: eduagarcia/portuguese_benchmark
split: test
args:
num_few_shot: 15
metrics:
- type: pearson
value: 83.32
name: pearson
source:
url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=botbot-ai/CabraLlama3-70b
name: Open Portuguese LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: FaQuAD NLI
type: ruanchaves/faquad-nli
split: test
args:
num_few_shot: 15
metrics:
- type: f1_macro
value: 80.6
name: f1-macro
source:
url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=botbot-ai/CabraLlama3-70b
name: Open Portuguese LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HateBR Binary
type: ruanchaves/hatebr
split: test
args:
num_few_shot: 25
metrics:
- type: f1_macro
value: 81.62
name: f1-macro
source:
url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=botbot-ai/CabraLlama3-70b
name: Open Portuguese LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: PT Hate Speech Binary
type: hate_speech_portuguese
split: test
args:
num_few_shot: 25
metrics:
- type: f1_macro
value: 72.72
name: f1-macro
source:
url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=botbot-ai/CabraLlama3-70b
name: Open Portuguese LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: tweetSentBR
type: eduagarcia/tweetsentbr_fewshot
split: test
args:
num_few_shot: 25
metrics:
- type: f1_macro
value: 73.85
name: f1-macro
source:
url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=botbot-ai/CabraLlama3-70b
name: Open Portuguese LLM Leaderboard
---
# Cabra Llama-3 70B
Cabra Llama-3 70B is an improved version of Meta Llama 3 70B Instruct, refined with the Cabra 30k dataset. This model has been specially optimized to understand and respond in Portuguese (pt-br).
**Check out our other [models and datasets](https://huggingface.co/collections/botbot-ai/models-6604c2069ceef04f834ba99b), as well as [Cabra Llama 3 8b](https://huggingface.co/botbot-ai/CabraLlama3-8b).**
## Base model details
### Model: Meta-Llama-3-70B-Instruct
Meta developed and released the Llama 3 family of models, a collection of pretrained and instruction-tuned generative text models in 8B and 70B sizes. The instruction-tuned Llama 3 models are optimized for dialogue use cases and outperform many of the available open-source chat models on common industry benchmarks. Furthermore, great care was taken in developing these models to optimize helpfulness and safety.
Model architecture: Llama 3 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align with human preferences for helpfulness and safety.
### Dataset: Cabra 30k
Internal dataset used for fine-tuning. We will release it soon.
### Quantization / GGUF
We have uploaded several quantized (GGUF) versions to the "quantanization" branch.
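A minimal usage sketch with Transformers (not part of the original card; it assumes hardware able to serve a 70B model):
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "botbot-ai/CabraLlama3-70b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

messages = [{"role": "user", "content": "Quem foi Santos Dumont?"}]
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
outputs = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```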
# Open Portuguese LLM Leaderboard Evaluation Results
Detailed results can be found [here](https://huggingface.co/datasets/eduagarcia-temp/llm_pt_leaderboard_raw_results/tree/main/botbot-ai/CabraLlama3-70b) and on the [π Open Portuguese LLM Leaderboard](https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard)
| Metric | Value |
|--------------------------|---------|
|Average |**78.44**|
|ENEM Challenge (No Images)| 82.02|
|BLUEX (No Images) | 70.10|
|OAB Exams | 68.52|
|Assin2 RTE | 93.21|
|Assin2 STS | 83.32|
|FaQuAD NLI | 80.60|
|HateBR Binary | 81.62|
|PT Hate Speech Binary | 72.72|
|tweetSentBR | 73.85|
|
CAUKiel/JavaBERT | CAUKiel | 2024-07-17T13:21:37Z | 212 | 12 | transformers | [
"transformers",
"pytorch",
"safetensors",
"bert",
"fill-mask",
"code",
"arxiv:2110.10404",
"arxiv:1910.09700",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| fill-mask | 2022-03-02T23:29:04Z |
---
language:
- code
license: apache-2.0
widget:
- text: public [MASK] isOdd(Integer num) {if (num % 2 == 0) {return "even";} else
{return "odd";}}
---
# Model Card for JavaBERT
A BERT-like model pretrained on Java software code.
# Model Details
## Model Description
A BERT-like model pretrained on Java software code.
- **Developed by:** Christian-Albrechts-University of Kiel (CAUKiel)
- **Shared by [Optional]:** Hugging Face
- **Model type:** Fill-Mask
- **Language(s) (NLP):** en
- **License:** Apache-2.0
- **Related Models:** A version of this model using an uncased tokenizer is available at [CAUKiel/JavaBERT-uncased](https://huggingface.co/CAUKiel/JavaBERT-uncased).
- **Parent Model:** BERT
- **Resources for more information:**
- [Associated Paper](https://arxiv.org/pdf/2110.10404.pdf)
# Uses
## Direct Use
Fill-Mask
## Downstream Use [Optional]
More information needed.
## Out-of-Scope Use
The model should not be used to intentionally create hostile or alienating environments for people.
# Bias, Risks, and Limitations
Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)). Predictions generated by the model may include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups.
## Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
# Training Details
## Training Data
The model was trained on 2,998,345 Java files retrieved from open source projects on GitHub. A ```bert-base-cased``` tokenizer is used by this model.
## Training Procedure
### Training Objective
A MLM (Masked Language Model) objective was used to train this model.
### Preprocessing
More information needed.
### Speeds, Sizes, Times
More information needed.
# Evaluation
## Testing Data, Factors & Metrics
### Testing Data
More information needed.
### Factors
### Metrics
More information needed.
## Results
More information needed.
# Model Examination
More information needed.
# Environmental Impact
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** More information needed.
- **Hours used:** More information needed.
- **Cloud Provider:** More information needed.
- **Compute Region:** More information needed.
- **Carbon Emitted:** More information needed.
# Technical Specifications [optional]
## Model Architecture and Objective
More information needed.
## Compute Infrastructure
More information needed.
### Hardware
More information needed.
### Software
More information needed.
# Citation
**BibTeX:**
```
@inproceedings{De_Sousa_Hasselbring_2021,
address={Melbourne, Australia},
title={JavaBERT: Training a Transformer-Based Model for the Java Programming Language},
rights={https://ieeexplore.ieee.org/Xplorehelp/downloads/license-information/IEEE.html},
ISBN={9781665435833},
url={https://ieeexplore.ieee.org/document/9680322/},
DOI={10.1109/ASEW52652.2021.00028},
booktitle={2021 36th IEEE/ACM International Conference on Automated Software Engineering Workshops (ASEW)},
publisher={IEEE},
author={Tavares de Sousa, Nelson and Hasselbring, Wilhelm},
year={2021},
month=nov,
pages={90β95} }
```
**APA:**
More information needed.
# Glossary [optional]
More information needed.
# More Information [optional]
More information needed.
# Model Card Authors [optional]
Christian-Albrechts-University of Kiel (CAUKiel) in collaboration with Ezi Ozoani and the team at Hugging Face
# Model Card Contact
More information needed.
# How to Get Started with the Model
Use the code below to get started with the model.
<details>
<summary> Click to expand </summary>
```python
from transformers import pipeline

pipe = pipeline('fill-mask', model='CAUKiel/JavaBERT')
# Use '[MASK]' to mask tokens/words in your Java code.
code = 'public [MASK] isOdd(Integer num) {if (num % 2 == 0) {return "even";} else {return "odd";}}'
output = pipe(code)
```
</details>
|
PrunaAI/RLHFlow-pair-preference-model-LLaMA3-8B-AWQ-4bit-smashed | PrunaAI | 2024-07-17T13:12:29Z | 80 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"pruna-ai",
"conversational",
"base_model:RLHFlow/pair-preference-model-LLaMA3-8B",
"base_model:quantized:RLHFlow/pair-preference-model-LLaMA3-8B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"awq",
"region:us"
]
| text-generation | 2024-07-17T13:09:47Z | ---
thumbnail: "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg"
base_model: RLHFlow/pair-preference-model-LLaMA3-8B
metrics:
- memory_disk
- memory_inference
- inference_latency
- inference_throughput
- inference_CO2_emissions
- inference_energy_consumption
tags:
- pruna-ai
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<a href="https://www.pruna.ai/" target="_blank" rel="noopener noreferrer">
<img src="https://i.imgur.com/eDAlcgk.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</a>
</div>
<!-- header end -->
[](https://twitter.com/PrunaAI)
[](https://github.com/PrunaAI)
[](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following)
[](https://discord.gg/rskEr4BZJx)
# Simply make AI models cheaper, smaller, faster, and greener!
- Give a thumbs up if you like this model!
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- Read the documentation to learn more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/)
- Join Pruna AI community on Discord [here](https://discord.gg/CP4VSgck) to share feedback/suggestions or get help.
## Results

**Frequently Asked Questions**
- ***How does the compression work?*** The model is compressed with awq.
- ***How does the model quality change?*** The quality of the model output might vary compared to the base model.
- ***How is the model efficiency evaluated?*** These results were obtained on HARDWARE_NAME with configuration described in `model/smash_config.json` and are obtained after a hardware warmup. The smashed model is directly compared to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend to directly run them in the use-case conditions to know if the smashed model can benefit you.
- ***What is the model format?*** We use safetensors.
- ***What calibration data has been used?*** If needed by the compression method, we used WikiText as the calibration data.
- ***What is the naming convention for Pruna Huggingface models?*** We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model.
- ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- ***What are "first" metrics?*** Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due cuda overheads.
- ***What are "Sync" and "Async" metrics?*** "Sync" metrics are obtained by syncing all GPU processes and stop measurement when all of them are executed. "Async" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend to test the efficiency gains directly in your use-cases.
## Setup
You can run the smashed model with these steps:
0. Check that the requirements from the original repo RLHFlow/pair-preference-model-LLaMA3-8B are installed. In particular, check the Python, CUDA, and transformers versions.
1. Make sure that you have installed quantization related packages.
```bash
pip install autoawq
```
2. Load & run the model.
```python
from transformers import AutoTokenizer
from awq import AutoAWQForCausalLM

# Load the AWQ-quantized model and the original tokenizer
model = AutoAWQForCausalLM.from_quantized("PrunaAI/RLHFlow-pair-preference-model-LLaMA3-8B-AWQ-4bit-smashed", trust_remote_code=True, device_map='auto')
tokenizer = AutoTokenizer.from_pretrained("RLHFlow/pair-preference-model-LLaMA3-8B")

input_ids = tokenizer("What is the color of prunes?,", return_tensors='pt').to(model.device)["input_ids"]
outputs = model.generate(input_ids, max_new_tokens=216)
print(tokenizer.decode(outputs[0]))
```
## Configurations
The configuration info are in `smash_config.json`.
## Credits & License
The license of the smashed model follows the license of the original model. Please check the license of the original model RLHFlow/pair-preference-model-LLaMA3-8B before using this model which provided the base model. The license of the `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on Pypi.
## Want to compress other models?
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your own AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai). |
quim-motger/reviewRoBERTa-large | quim-motger | 2024-07-17T13:10:05Z | 164 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"fill-mask",
"license:gpl-3.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| fill-mask | 2024-07-17T12:55:25Z | ---
license: gpl-3.0
---
# reviewRoBERTa-large
This model is a fine-tuned version of [`roberta-large`](https://huggingface.co/FacebookAI/roberta-large) on a large dataset
of mobile app reviews. It is designed to understand and process text from mobile app reviews, providing enhanced performance
on tasks such as feature extraction, sentiment analysis, and review summarization.
## Model Details
- **Model Architecture**: RoBERTa (Robustly Optimized BERT Approach)
- **Base Model**: `roberta-large`
- **Pre-training Extension**: Mobile app reviews dataset
- **Language**: English
## Dataset
The extended pre-training was performed using a diverse dataset of mobile app reviews collected from various app stores.
The dataset includes reviews of different lengths, sentiments, and topics, providing a robust foundation for understanding
the nuances of mobile app user feedback.
## Training Procedure
The model was fine-tuned using the following parameters (see the sketch after this list):
- **Batch Size**: 8
- **Learning Rate**: 2e-5
- **Epochs**: 2
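For orientation, a minimal sketch of how these values map onto a standard `TrainingArguments` setup (illustrative only; the actual training script is not part of this card, and the output path is hypothetical):

```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="reviewRoBERTa-large",  # hypothetical output path
    per_device_train_batch_size=8,
    learning_rate=2e-5,
    num_train_epochs=2,
)
```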
## Usage
### Load the model
```python
from transformers import RobertaTokenizer, RobertaForSequenceClassification
tokenizer = RobertaTokenizer.from_pretrained('quim-motger/reviewRoBERTa-large')
model = RobertaForSequenceClassification.from_pretrained('quim-motger/reviewRoBERTa-large')
```
### Example: Sentiment Analysis
```python
from transformers import pipeline
nlp = pipeline('sentiment-analysis', model=model, tokenizer=tokenizer)
review = "This app is fantastic! I love the user-friendly interface and features."
result = nlp(review)
print(result)
# Output: [{'label': 'POSITIVE', 'score': 0.98}]
```
### Example: Review Summarization
```python
from transformers import pipeline
summarizer = pipeline('summarization', model=model, tokenizer=tokenizer)
long_review = "I have been using this app for a while and it has significantly improved my productivity.
The range of features is excellent, and the user interface is intuitive. However, there are occasional
bugs that need fixing."
summary = summarizer(long_review, max_length=50, min_length=25, do_sample=False)
print(summary)
# Output: [{'summary_text': 'The app has significantly improved my productivity with its excellent features and intuitive user interface. However, occasional bugs need fixing.'}]
```
|
PrunaAI/RLHFlow-pair-preference-model-LLaMA3-8B-HQQ-1bit-smashed | PrunaAI | 2024-07-17T13:07:37Z | 4 | 0 | transformers | [
"transformers",
"llama",
"text-generation",
"pruna-ai",
"conversational",
"base_model:RLHFlow/pair-preference-model-LLaMA3-8B",
"base_model:finetune:RLHFlow/pair-preference-model-LLaMA3-8B",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-generation | 2024-07-17T13:06:04Z | ---
thumbnail: "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg"
base_model: RLHFlow/pair-preference-model-LLaMA3-8B
metrics:
- memory_disk
- memory_inference
- inference_latency
- inference_throughput
- inference_CO2_emissions
- inference_energy_consumption
tags:
- pruna-ai
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<a href="https://www.pruna.ai/" target="_blank" rel="noopener noreferrer">
<img src="https://i.imgur.com/eDAlcgk.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</a>
</div>
<!-- header end -->
[](https://twitter.com/PrunaAI)
[](https://github.com/PrunaAI)
[](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following)
[](https://discord.gg/rskEr4BZJx)
# Simply make AI models cheaper, smaller, faster, and greener!
- Give a thumbs up if you like this model!
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- Read the documentation to learn more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/)
- Join Pruna AI community on Discord [here](https://discord.gg/CP4VSgck) to share feedback/suggestions or get help.
## Results

**Frequently Asked Questions**
- ***How does the compression work?*** The model is compressed with hqq.
- ***How does the model quality change?*** The quality of the model output might vary compared to the base model.
- ***How is the model efficiency evaluated?*** These results were obtained on HARDWARE_NAME with configuration described in `model/smash_config.json` and are obtained after a hardware warmup. The smashed model is directly compared to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend to directly run them in the use-case conditions to know if the smashed model can benefit you.
- ***What is the model format?*** We use safetensors.
- ***What calibration data has been used?*** If needed by the compression method, we used WikiText as the calibration data.
- ***What is the naming convention for Pruna Huggingface models?*** We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model.
- ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- ***What are "first" metrics?*** Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due cuda overheads.
- ***What are "Sync" and "Async" metrics?*** "Sync" metrics are obtained by syncing all GPU processes and stop measurement when all of them are executed. "Async" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend to test the efficiency gains directly in your use-cases.
## Setup
You can run the smashed model with these steps:
0. Check that the requirements from the original repo RLHFlow/pair-preference-model-LLaMA3-8B are installed. In particular, check the Python, CUDA, and transformers versions.
1. Make sure that you have installed quantization related packages.
```bash
pip install hqq
```
2. Load & run the model.
```python
from transformers import AutoTokenizer
from hqq.engine.hf import HQQModelForCausalLM
from hqq.models.hf.base import AutoHQQHFModel

# Try the high-level loader first, then fall back to the generic HQQ loader
try:
    model = HQQModelForCausalLM.from_quantized("PrunaAI/RLHFlow-pair-preference-model-LLaMA3-8B-HQQ-1bit-smashed", device_map='auto')
except Exception:
    model = AutoHQQHFModel.from_quantized("PrunaAI/RLHFlow-pair-preference-model-LLaMA3-8B-HQQ-1bit-smashed")
tokenizer = AutoTokenizer.from_pretrained("RLHFlow/pair-preference-model-LLaMA3-8B")

input_ids = tokenizer("What is the color of prunes?,", return_tensors='pt').to(model.device)["input_ids"]
outputs = model.generate(input_ids, max_new_tokens=216)
print(tokenizer.decode(outputs[0]))
```
## Configurations
The configuration info are in `smash_config.json`.
## Credits & License
The license of the smashed model follows the license of the original model. Please check the license of the original model RLHFlow/pair-preference-model-LLaMA3-8B before using this model which provided the base model. The license of the `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on Pypi.
## Want to compress other models?
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your own AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai). |
tsavage68/Summary4500_L3_1000steps_1e7rate_03beta_CSFTDPO | tsavage68 | 2024-07-17T13:06:08Z | 6 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"trl",
"dpo",
"generated_from_trainer",
"conversational",
"base_model:tsavage68/Summary4500_L3_100steps_1e6rate_SFT",
"base_model:finetune:tsavage68/Summary4500_L3_100steps_1e6rate_SFT",
"license:llama3",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2024-07-17T12:59:57Z | ---
license: llama3
base_model: tsavage68/Summary4500_L3_100steps_1e6rate_SFT
tags:
- trl
- dpo
- generated_from_trainer
model-index:
- name: Summary4500_L3_1000steps_1e7rate_03beta_CSFTDPO
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Summary4500_L3_1000steps_1e7rate_03beta_CSFTDPO
This model is a fine-tuned version of [tsavage68/Summary4500_L3_100steps_1e6rate_SFT](https://huggingface.co/tsavage68/Summary4500_L3_100steps_1e6rate_SFT) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0025
- Rewards/chosen: 0.4191
- Rewards/rejected: -7.9725
- Rewards/accuracies: 0.9980
- Rewards/margins: 8.3916
- Logps/rejected: -159.7724
- Logps/chosen: -82.7927
- Logits/rejected: -1.1012
- Logits/chosen: -1.0642
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (an illustrative TRL setup is sketched after this list):
- learning_rate: 1e-07
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- training_steps: 1000
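For orientation, an illustrative sketch of wiring these values into TRL's `DPOTrainer` (this is an assumption about the setup, not the author's actual script; the preference file below is hypothetical and must provide `prompt`/`chosen`/`rejected` columns):

```python
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

base = "tsavage68/Summary4500_L3_100steps_1e6rate_SFT"
model = AutoModelForCausalLM.from_pretrained(base)
tokenizer = AutoTokenizer.from_pretrained(base)
dataset = load_dataset("json", data_files="preferences.jsonl", split="train")  # hypothetical file

config = DPOConfig(
    output_dir="out",
    beta=0.3,  # the "03beta" in the model name
    learning_rate=1e-7,
    per_device_train_batch_size=1,
    max_steps=1000,
    lr_scheduler_type="cosine",
    warmup_steps=100,
)
trainer = DPOTrainer(model=model, args=config, train_dataset=dataset, tokenizer=tokenizer)
trainer.train()
```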
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen |
|:-------------:|:------:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:|
| 0.6455 | 0.0112 | 50 | 0.6189 | 0.0168 | -0.1483 | 0.7940 | 0.1651 | -133.6916 | -84.1338 | -1.0991 | -1.0690 |
| 0.1915 | 0.0224 | 100 | 0.2381 | 0.0998 | -1.2728 | 0.9980 | 1.3727 | -137.4402 | -83.8570 | -1.1007 | -1.0693 |
| 0.0069 | 0.0336 | 150 | 0.0340 | 0.2244 | -3.5445 | 0.9980 | 3.7690 | -145.0125 | -83.4417 | -1.1014 | -1.0678 |
| 0.0017 | 0.0448 | 200 | 0.0098 | 0.2714 | -5.2540 | 0.9980 | 5.5254 | -150.7106 | -83.2852 | -1.1013 | -1.0664 |
| 0.0014 | 0.0559 | 250 | 0.0058 | 0.3321 | -6.1233 | 0.9980 | 6.4554 | -153.6084 | -83.0827 | -1.1013 | -1.0655 |
| 0.0001 | 0.0671 | 300 | 0.0044 | 0.3409 | -6.6530 | 0.9980 | 6.9939 | -155.3742 | -83.0536 | -1.1000 | -1.0641 |
| 0.0005 | 0.0783 | 350 | 0.0037 | 0.3524 | -7.0398 | 0.9980 | 7.3922 | -156.6634 | -83.0152 | -1.1004 | -1.0643 |
| 0.0001 | 0.0895 | 400 | 0.0031 | 0.3703 | -7.3960 | 0.9980 | 7.7663 | -157.8508 | -82.9556 | -1.1006 | -1.0643 |
| 0.0 | 0.1007 | 450 | 0.0029 | 0.4041 | -7.5392 | 0.9980 | 7.9433 | -158.3280 | -82.8429 | -1.1006 | -1.0640 |
| 0.0 | 0.1119 | 500 | 0.0028 | 0.3938 | -7.6566 | 0.9980 | 8.0503 | -158.7193 | -82.8773 | -1.1011 | -1.0644 |
| 0.0 | 0.1231 | 550 | 0.0027 | 0.3960 | -7.7988 | 0.9980 | 8.1949 | -159.1935 | -82.8697 | -1.1004 | -1.0635 |
| 0.0001 | 0.1343 | 600 | 0.0026 | 0.4050 | -7.8907 | 0.9980 | 8.2958 | -159.4998 | -82.8397 | -1.1008 | -1.0638 |
| 0.0 | 0.1454 | 650 | 0.0025 | 0.4102 | -7.9529 | 0.9980 | 8.3630 | -159.7068 | -82.8226 | -1.1006 | -1.0637 |
| 0.0 | 0.1566 | 700 | 0.0025 | 0.4105 | -7.9650 | 0.9980 | 8.3755 | -159.7473 | -82.8215 | -1.1011 | -1.0642 |
| 0.0037 | 0.1678 | 750 | 0.0025 | 0.4133 | -7.9730 | 0.9980 | 8.3863 | -159.7740 | -82.8120 | -1.1009 | -1.0641 |
| 0.0 | 0.1790 | 800 | 0.0025 | 0.4059 | -7.9812 | 0.9980 | 8.3871 | -159.8014 | -82.8367 | -1.1012 | -1.0644 |
| 0.0004 | 0.1902 | 850 | 0.0025 | 0.4003 | -7.9906 | 0.9980 | 8.3909 | -159.8326 | -82.8553 | -1.1015 | -1.0645 |
| 0.0 | 0.2014 | 900 | 0.0025 | 0.4050 | -7.9764 | 0.9980 | 8.3814 | -159.7853 | -82.8397 | -1.1014 | -1.0645 |
| 0.0 | 0.2126 | 950 | 0.0025 | 0.4187 | -7.9726 | 0.9980 | 8.3913 | -159.7726 | -82.7940 | -1.1012 | -1.0642 |
| 0.0 | 0.2238 | 1000 | 0.0025 | 0.4191 | -7.9725 | 0.9980 | 8.3916 | -159.7724 | -82.7927 | -1.1012 | -1.0642 |
### Framework versions
- Transformers 4.42.4
- Pytorch 2.0.0+cu117
- Datasets 2.20.0
- Tokenizers 0.19.1
|
PrunaAI/RLHFlow-pair-preference-model-LLaMA3-8B-bnb-4bit-smashed | PrunaAI | 2024-07-17T13:00:27Z | 78 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"pruna-ai",
"conversational",
"base_model:RLHFlow/pair-preference-model-LLaMA3-8B",
"base_model:quantized:RLHFlow/pair-preference-model-LLaMA3-8B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
]
| text-generation | 2024-07-17T12:57:46Z | ---
thumbnail: "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg"
base_model: RLHFlow/pair-preference-model-LLaMA3-8B
metrics:
- memory_disk
- memory_inference
- inference_latency
- inference_throughput
- inference_CO2_emissions
- inference_energy_consumption
tags:
- pruna-ai
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<a href="https://www.pruna.ai/" target="_blank" rel="noopener noreferrer">
<img src="https://i.imgur.com/eDAlcgk.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</a>
</div>
<!-- header end -->
[](https://twitter.com/PrunaAI)
[](https://github.com/PrunaAI)
[](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following)
[](https://discord.gg/rskEr4BZJx)
# Simply make AI models cheaper, smaller, faster, and greener!
- Give a thumbs up if you like this model!
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- Read the documentation to learn more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/).
- Join Pruna AI community on Discord [here](https://discord.gg/CP4VSgck) to share feedback/suggestions or get help.
## Results

**Frequently Asked Questions**
- ***How does the compression work?*** The model is compressed with llm-int8.
- ***How does the model quality change?*** The quality of the model output might vary compared to the base model.
- ***How is the model efficiency evaluated?*** These results were obtained on HARDWARE_NAME with the configuration described in `model/smash_config.json`, measured after a hardware warmup. The smashed model is compared directly to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend running the model directly in your use-case conditions to see whether the smashed model benefits you.
- ***What is the model format?*** We use safetensors.
- ***What calibration data has been used?*** If needed by the compression method, we used WikiText as the calibration data.
- ***What is the naming convention for Pruna Huggingface models?*** We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model.
- ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- ***What are "first" metrics?*** Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due cuda overheads.
- ***What are "Sync" and "Async" metrics?*** "Sync" metrics are obtained by syncing all GPU processes and stop measurement when all of them are executed. "Async" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend to test the efficiency gains directly in your use-cases.
## Setup
You can run the smashed model with these steps:
0. Check that the requirements from the original repo RLHFlow/pair-preference-model-LLaMA3-8B are installed. In particular, check the python, cuda, and transformers versions.
1. Make sure that you have installed quantization related packages.
```bash
pip install transformers accelerate "bitsandbytes>0.37.0"
```
2. Load & run the model.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model = AutoModelForCausalLM.from_pretrained("PrunaAI/RLHFlow-pair-preference-model-LLaMA3-8B-bnb-4bit-smashed", trust_remote_code=True, device_map='auto')
tokenizer = AutoTokenizer.from_pretrained("RLHFlow/pair-preference-model-LLaMA3-8B")
input_ids = tokenizer("What is the color of prunes?,", return_tensors='pt').to(model.device)["input_ids"]
outputs = model.generate(input_ids, max_new_tokens=216)
tokenizer.decode(outputs[0])
```
## Configurations
The configuration info is in `smash_config.json`.
## Credits & License
The license of the smashed model follows the license of the original model. Please check the license of the original model RLHFlow/pair-preference-model-LLaMA3-8B, which provided the base model, before using this model. The license of the `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on Pypi.
## Want to compress other models?
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your own AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai). |
crapthings/beans | crapthings | 2024-07-17T12:58:02Z | 195 | 0 | transformers | [
"transformers",
"safetensors",
"vit",
"image-classification",
"vision",
"generated_from_trainer",
"base_model:google/vit-base-patch16-224-in21k",
"base_model:finetune:google/vit-base-patch16-224-in21k",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| image-classification | 2024-07-17T12:50:20Z | ---
license: apache-2.0
base_model: google/vit-base-patch16-224-in21k
tags:
- image-classification
- vision
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: beans
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# beans
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the beans dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0677
- Accuracy: 0.9925
## Model description
More information needed
## Intended uses & limitations
More information needed
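Until the author fills this in, here is a minimal inference sketch, not part of the original card; it assumes the checkpoint is public and ships its image processor, and the image path is hypothetical.
```python
# Hedged inference sketch (assumptions noted above).
from transformers import pipeline

classifier = pipeline("image-classification", model="crapthings/beans")
predictions = classifier("bean_leaf.jpg")  # hypothetical local image path
print(predictions)  # e.g. [{"label": "...", "score": ...}, ...]
```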
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 1337
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.2825 | 1.0 | 130 | 0.2142 | 0.9624 |
| 0.1331 | 2.0 | 260 | 0.1347 | 0.9699 |
| 0.1427 | 3.0 | 390 | 0.0999 | 0.9774 |
| 0.0808 | 4.0 | 520 | 0.0677 | 0.9925 |
| 0.1156 | 5.0 | 650 | 0.0839 | 0.9774 |
### Framework versions
- Transformers 4.43.0.dev0
- Pytorch 2.3.1+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1
|
nvhf/coedit-xl-composite-Q6_K-GGUF | nvhf | 2024-07-17T12:52:47Z | 5 | 0 | null | [
"gguf",
"llama-cpp",
"gguf-my-repo",
"en",
"dataset:asset",
"dataset:wi_locness",
"dataset:GEM/wiki_auto_asset_turk",
"dataset:discofuse",
"dataset:zaemyung/IteraTeR_plus",
"dataset:jfleg",
"dataset:grammarly/coedit",
"base_model:grammarly/coedit-xl-composite",
"base_model:quantized:grammarly/coedit-xl-composite",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
]
| null | 2024-07-17T12:52:37Z | ---
base_model: grammarly/coedit-xl-composite
datasets:
- asset
- wi_locness
- GEM/wiki_auto_asset_turk
- discofuse
- zaemyung/IteraTeR_plus
- jfleg
- grammarly/coedit
language:
- en
license: cc-by-nc-4.0
metrics:
- sari
- bleu
- accuracy
tags:
- llama-cpp
- gguf-my-repo
---
# nvhf/coedit-xl-composite-Q6_K-GGUF
This model was converted to GGUF format from [`grammarly/coedit-xl-composite`](https://huggingface.co/grammarly/coedit-xl-composite) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/grammarly/coedit-xl-composite) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo nvhf/coedit-xl-composite-Q6_K-GGUF --hf-file coedit-xl-composite-q6_k.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo nvhf/coedit-xl-composite-Q6_K-GGUF --hf-file coedit-xl-composite-q6_k.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo nvhf/coedit-xl-composite-Q6_K-GGUF --hf-file coedit-xl-composite-q6_k.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo nvhf/coedit-xl-composite-Q6_K-GGUF --hf-file coedit-xl-composite-q6_k.gguf -c 2048
```
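If you prefer Python over the CLI, the llama-cpp-python bindings can load the same file. This is a hedged sketch; it assumes a recent llama-cpp-python release that exposes `Llama.from_pretrained` (which also requires `huggingface-hub`).
```python
# Hedged Python alternative to the CLI commands above.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="nvhf/coedit-xl-composite-Q6_K-GGUF",
    filename="coedit-xl-composite-q6_k.gguf",
    n_ctx=2048,
)
out = llm("Fix the grammar: When I grow up, I start to understand what he said is quite right.", max_tokens=64)
print(out["choices"][0]["text"])
```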
|
emplitude/rubywork | emplitude | 2024-07-17T12:51:46Z | 7 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"arxiv:2309.00071",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2024-07-17T12:45:16Z | ---
license: apache-2.0
---
**Model Name: Qwen2 orca_mini_v7_7b**
# Qwen2 orca_mini_v7_7b is trained with various SFT Datasets
<img src="https://huggingface.co/pankajmathur/orca_mini_v5_8b/resolve/main/orca_minis_small.jpeg" width="auto" />
<strong>
Passionate about Generative AI? I help companies to privately train and deploy custom LLM/MLLM affordably. For startups, I can even assist with securing GPU grants to get you started. Let's chat!
<a href="https://www.linkedin.com/in/pankajam" target="_blank">https://www.linkedin.com/in/pankajam</a> Looking forward to connecting!
</strong>
<br>
### NOTICE
By providing proper credit and attribution, you are granted permission to use this model as a foundational base for further full fine-tuning, DPO, PPO, or ORPO tuning and any kind of merges.
I actively encourage users to customize and enhance the model according to their specific needs, as this version is designed to be a comprehensive general model.
Dive in and innovate!
### Evaluation
Coming Soon..
### Example Usage
Here is the ChatML prompt format
```
<|im_start|>system
You are Orca Mini, a helpful AI assistant.<|im_end|>
<|im_start|>user
Hello Orca Mini, what can you do for me?<|im_end|>
<|im_start|>assistant
```
Below is a code example showing how to use this model
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_slug = "pankajmathur/orca_mini_v7_7b"
# Use the causal-LM class so the model has a generation head
model = AutoModelForCausalLM.from_pretrained(model_slug)
tokenizer = AutoTokenizer.from_pretrained(model_slug)
messages = [
    {"role": "system", "content": "You are Orca Mini, a helpful AI assistant."},
    {"role": "user", "content": "Hello Orca Mini, what can you do for me?"}
]
# apply_chat_template returns a tensor of input ids, not a dict
gen_input = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt")
outputs = model.generate(gen_input, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
**Quants**
GGUF: Coming Soon
AWQ: Coming Soon
### Processing Long Texts (Based upon Qwen2-7B-Instruct suggestions at https://huggingface.co/Qwen/Qwen2-7B-Instruct)
To handle extensive inputs exceeding 32,768 tokens, we utilize [YARN](https://arxiv.org/abs/2309.00071), a technique for enhancing model length extrapolation, ensuring optimal performance on lengthy texts.
For deployment, we recommend using vLLM. You can enable the long-context capabilities by following these steps:
1. **Install vLLM**: You can install vLLM by running the following command.
```bash
pip install "vllm>=0.4.3"
```
Or you can install vLLM from [source](https://github.com/vllm-project/vllm/).
2. **Configure Model Settings**: After downloading the model weights, modify the `config.json` file by including the below snippet:
```json
{
"architectures": [
"Qwen2ForCausalLM"
],
// ...
"vocab_size": 152064,
// adding the following snippets
"rope_scaling": {
"factor": 4.0,
"original_max_position_embeddings": 32768,
"type": "yarn"
}
}
```
This snippet enables YARN support for longer contexts.
3. **Model Deployment**: Utilize vLLM to deploy your model. For instance, you can set up an OpenAI-compatible server using the command:
```bash
python -u -m vllm.entrypoints.openai.api_server --model pankajmathur/orca_mini_v7_7b
```
Then you can access the Chat API by:
```bash
curl http://localhost:8000/v1/chat/completions \
-H "Content-Type: application/json" \
-d '{
"model": "pankajmathur/orca_mini_v7_7b",
"messages": [
{"role": "system", "content": "You are Orca Mini, a helpful AI assistant."},
{"role": "user", "content": "Hello Orca Mini, what can you do for me?"}
]
}'
```
**Note**: Presently, vLLM only supports static YARN, which means the scaling factor remains constant regardless of input length, **potentially impacting performance on shorter texts**. We advise adding the `rope_scaling` configuration only when processing long contexts is required.
|
mradermacher/Llama-3-Groq-8B-Tool-Use-GGUF | mradermacher | 2024-07-17T12:48:50Z | 27 | 0 | transformers | [
"transformers",
"gguf",
"facebook",
"meta",
"pytorch",
"llama",
"llama-3",
"groq",
"tool-use",
"function-calling",
"en",
"base_model:Groq/Llama-3-Groq-8B-Tool-Use",
"base_model:quantized:Groq/Llama-3-Groq-8B-Tool-Use",
"license:llama3",
"endpoints_compatible",
"region:us",
"conversational"
]
| null | 2024-07-17T09:02:31Z | ---
base_model: Groq/Llama-3-Groq-8B-Tool-Use
language:
- en
library_name: transformers
license: llama3
quantized_by: mradermacher
tags:
- facebook
- meta
- pytorch
- llama
- llama-3
- groq
- tool-use
- function-calling
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/Groq/Llama-3-Groq-8B-Tool-Use
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Llama-3-Groq-8B-Tool-Use-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
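As a concrete illustration, a split quant is reassembled by byte-concatenating its parts in order; the filenames below are hypothetical placeholders (the quants in this repo are single files).
```python
# Hypothetical reassembly of a multi-part GGUF; equivalent to
# `cat part1 part2 > whole` on the shell.
parts = ["model.Q8_0.gguf.part1of2", "model.Q8_0.gguf.part2of2"]
with open("model.Q8_0.gguf", "wb") as out:
    for part in parts:
        with open(part, "rb") as f:
            out.write(f.read())
```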
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Groq-8B-Tool-Use-GGUF/resolve/main/Llama-3-Groq-8B-Tool-Use.Q2_K.gguf) | Q2_K | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Groq-8B-Tool-Use-GGUF/resolve/main/Llama-3-Groq-8B-Tool-Use.IQ3_XS.gguf) | IQ3_XS | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Groq-8B-Tool-Use-GGUF/resolve/main/Llama-3-Groq-8B-Tool-Use.Q3_K_S.gguf) | Q3_K_S | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Groq-8B-Tool-Use-GGUF/resolve/main/Llama-3-Groq-8B-Tool-Use.IQ3_S.gguf) | IQ3_S | 3.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Groq-8B-Tool-Use-GGUF/resolve/main/Llama-3-Groq-8B-Tool-Use.IQ3_M.gguf) | IQ3_M | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Groq-8B-Tool-Use-GGUF/resolve/main/Llama-3-Groq-8B-Tool-Use.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Groq-8B-Tool-Use-GGUF/resolve/main/Llama-3-Groq-8B-Tool-Use.Q3_K_L.gguf) | Q3_K_L | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Groq-8B-Tool-Use-GGUF/resolve/main/Llama-3-Groq-8B-Tool-Use.IQ4_XS.gguf) | IQ4_XS | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Groq-8B-Tool-Use-GGUF/resolve/main/Llama-3-Groq-8B-Tool-Use.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Groq-8B-Tool-Use-GGUF/resolve/main/Llama-3-Groq-8B-Tool-Use.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Groq-8B-Tool-Use-GGUF/resolve/main/Llama-3-Groq-8B-Tool-Use.Q5_K_S.gguf) | Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Groq-8B-Tool-Use-GGUF/resolve/main/Llama-3-Groq-8B-Tool-Use.Q5_K_M.gguf) | Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Groq-8B-Tool-Use-GGUF/resolve/main/Llama-3-Groq-8B-Tool-Use.Q6_K.gguf) | Q6_K | 6.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Groq-8B-Tool-Use-GGUF/resolve/main/Llama-3-Groq-8B-Tool-Use.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-Groq-8B-Tool-Use-GGUF/resolve/main/Llama-3-Groq-8B-Tool-Use.f16.gguf) | f16 | 16.2 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
limaatulya/my_awesome_billsum_model_2 | limaatulya | 2024-07-17T12:41:34Z | 125 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bart",
"text2text-generation",
"generated_from_trainer",
"base_model:facebook/bart-large-cnn",
"base_model:finetune:facebook/bart-large-cnn",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2024-06-17T10:38:47Z | ---
license: mit
base_model: facebook/bart-large-cnn
tags:
- generated_from_trainer
model-index:
- name: my_awesome_billsum_model_2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_billsum_model_2
This model is a fine-tuned version of [facebook/bart-large-cnn](https://huggingface.co/facebook/bart-large-cnn) on an unspecified dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
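Until the author fills this in, here is a hedged summarization sketch, not part of the original card; it assumes the checkpoint is public and keeps bart-large-cnn's summarization head, and the input text is hypothetical.
```python
# Hedged summarization sketch (assumptions noted above).
from transformers import pipeline

summarizer = pipeline("summarization", model="limaatulya/my_awesome_billsum_model_2")
text = "The bill amends the Internal Revenue Code to ..."  # hypothetical input
print(summarizer(text, max_length=60, min_length=10)[0]["summary_text"])
```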
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 100
### Training results
### Framework versions
- Transformers 4.41.2
- Pytorch 2.3.0+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1
|
choiruzzia/best_berita_bert_model_fold_5 | choiruzzia | 2024-07-17T12:38:21Z | 109 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:ayameRushia/bert-base-indonesian-1.5G-sentiment-analysis-smsa",
"base_model:finetune:ayameRushia/bert-base-indonesian-1.5G-sentiment-analysis-smsa",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2024-07-17T12:37:54Z | ---
license: mit
base_model: ayameRushia/bert-base-indonesian-1.5G-sentiment-analysis-smsa
tags:
- generated_from_trainer
metrics:
- accuracy
- precision
- recall
- f1
model-index:
- name: best_berita_bert_model_fold_5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# best_berita_bert_model_fold_5
This model is a fine-tuned version of [ayameRushia/bert-base-indonesian-1.5G-sentiment-analysis-smsa](https://huggingface.co/ayameRushia/bert-base-indonesian-1.5G-sentiment-analysis-smsa) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0859
- Accuracy: 0.9833
- Precision: 0.9834
- Recall: 0.9830
- F1: 0.9832
## Model description
More information needed
## Intended uses & limitations
More information needed
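Until the author fills this in, here is a hedged classification sketch, not part of the original card; the example headline is hypothetical and the label names are assumptions inherited from the base Indonesian sentiment model.
```python
# Hedged sentiment-classification sketch (assumptions noted above).
from transformers import pipeline

clf = pipeline("text-classification", model="choiruzzia/best_berita_bert_model_fold_5")
print(clf("Pemerintah mengumumkan kebijakan baru hari ini."))  # hypothetical headline
```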
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|
| 0.5542 | 1.0 | 601 | 0.3531 | 0.9142 | 0.9204 | 0.9129 | 0.9108 |
| 0.266 | 2.0 | 1202 | 0.1554 | 0.9625 | 0.9634 | 0.9620 | 0.9618 |
| 0.1215 | 3.0 | 1803 | 0.0859 | 0.9833 | 0.9834 | 0.9830 | 0.9832 |
| 0.0721 | 4.0 | 2404 | 0.1634 | 0.9725 | 0.9736 | 0.9721 | 0.9720 |
| 0.0227 | 5.0 | 3005 | 0.4132 | 0.9484 | 0.9527 | 0.9475 | 0.9470 |
| 0.0242 | 6.0 | 3606 | 0.2816 | 0.9609 | 0.9632 | 0.9602 | 0.9599 |
| 0.0083 | 7.0 | 4207 | 0.2295 | 0.9717 | 0.9731 | 0.9712 | 0.9712 |
| 0.0 | 8.0 | 4808 | 0.1644 | 0.9792 | 0.9800 | 0.9788 | 0.9789 |
| 0.0002 | 9.0 | 5409 | 0.1868 | 0.9784 | 0.9792 | 0.9780 | 0.9781 |
| 0.0 | 10.0 | 6010 | 0.1901 | 0.9784 | 0.9792 | 0.9780 | 0.9781 |
### Framework versions
- Transformers 4.42.3
- Pytorch 2.1.2
- Datasets 2.20.0
- Tokenizers 0.19.1
|
m1b/act_koch_pick_red_lego_distributed | m1b | 2024-07-17T12:34:48Z | 51 | 0 | transformers | [
"transformers",
"safetensors",
"model_hub_mixin",
"pytorch_model_hub_mixin",
"endpoints_compatible",
"region:us"
]
| null | 2024-07-17T12:34:47Z | ---
tags:
- model_hub_mixin
- pytorch_model_hub_mixin
---
This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration:
- Library: [More Information Needed]
- Docs: [More Information Needed] |
PrunaAI/Replete-AI-Llama-3-11.5B-Instruct-V2-HQQ-2bit-smashed | PrunaAI | 2024-07-17T12:31:49Z | 4 | 0 | transformers | [
"transformers",
"llama",
"text-generation",
"pruna-ai",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-generation | 2024-07-17T12:29:30Z | ---
thumbnail: "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg"
base_model: Replete-AI/Llama-3-11.5B-Instruct-V2
metrics:
- memory_disk
- memory_inference
- inference_latency
- inference_throughput
- inference_CO2_emissions
- inference_energy_consumption
tags:
- pruna-ai
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<a href="https://www.pruna.ai/" target="_blank" rel="noopener noreferrer">
<img src="https://i.imgur.com/eDAlcgk.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</a>
</div>
<!-- header end -->
[](https://twitter.com/PrunaAI)
[](https://github.com/PrunaAI)
[](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following)
[](https://discord.gg/rskEr4BZJx)
# Simply make AI models cheaper, smaller, faster, and greener!
- Give a thumbs up if you like this model!
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- Read the documentation to learn more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/).
- Join Pruna AI community on Discord [here](https://discord.gg/CP4VSgck) to share feedback/suggestions or get help.
## Results

**Frequently Asked Questions**
- ***How does the compression work?*** The model is compressed with hqq.
- ***How does the model quality change?*** The quality of the model output might vary compared to the base model.
- ***How is the model efficiency evaluated?*** These results were obtained on HARDWARE_NAME with the configuration described in `model/smash_config.json`, measured after a hardware warmup. The smashed model is compared directly to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend running the model directly in your use-case conditions to see whether the smashed model benefits you.
- ***What is the model format?*** We use safetensors.
- ***What calibration data has been used?*** If needed by the compression method, we used WikiText as the calibration data.
- ***What is the naming convention for Pruna Huggingface models?*** We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model.
- ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- ***What are "first" metrics?*** Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due cuda overheads.
- ***What are "Sync" and "Async" metrics?*** "Sync" metrics are obtained by syncing all GPU processes and stop measurement when all of them are executed. "Async" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend to test the efficiency gains directly in your use-cases.
## Setup
You can run the smashed model with these steps:
0. Check that the requirements from the original repo Replete-AI/Llama-3-11.5B-Instruct-V2 are installed. In particular, check the python, cuda, and transformers versions.
1. Make sure that you have installed quantization related packages.
```bash
pip install hqq
```
2. Load & run the model.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from hqq.engine.hf import HQQModelForCausalLM
from hqq.models.hf.base import AutoHQQHFModel
try:
    model = HQQModelForCausalLM.from_quantized("PrunaAI/Replete-AI-Llama-3-11.5B-Instruct-V2-HQQ-2bit-smashed", device_map='auto')
except Exception:
    # Fall back to the generic HQQ loader if the causal-LM wrapper fails
    model = AutoHQQHFModel.from_quantized("PrunaAI/Replete-AI-Llama-3-11.5B-Instruct-V2-HQQ-2bit-smashed")
tokenizer = AutoTokenizer.from_pretrained("Replete-AI/Llama-3-11.5B-Instruct-V2")
input_ids = tokenizer("What is the color of prunes?,", return_tensors='pt').to(model.device)["input_ids"]
outputs = model.generate(input_ids, max_new_tokens=216)
tokenizer.decode(outputs[0])
```
## Configurations
The configuration info is in `smash_config.json`.
## Credits & License
The license of the smashed model follows the license of the original model. Please check the license of the original model Replete-AI/Llama-3-11.5B-Instruct-V2, which provided the base model, before using this model. The license of the `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on Pypi.
## Want to compress other models?
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your own AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai). |
nvhf/coedit-large-Q6_K-GGUF | nvhf | 2024-07-17T12:26:49Z | 28 | 0 | null | [
"gguf",
"llama-cpp",
"gguf-my-repo",
"en",
"dataset:facebook/asset",
"dataset:wi_locness",
"dataset:GEM/wiki_auto_asset_turk",
"dataset:discofuse",
"dataset:zaemyung/IteraTeR_plus",
"dataset:jfleg",
"dataset:grammarly/coedit",
"base_model:grammarly/coedit-large",
"base_model:quantized:grammarly/coedit-large",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
]
| null | 2024-07-17T12:26:45Z | ---
base_model: grammarly/coedit-large
datasets:
- facebook/asset
- wi_locness
- GEM/wiki_auto_asset_turk
- discofuse
- zaemyung/IteraTeR_plus
- jfleg
- grammarly/coedit
language:
- en
license: cc-by-nc-4.0
metrics:
- sari
- bleu
- accuracy
tags:
- llama-cpp
- gguf-my-repo
widget:
- text: 'Fix the grammar: When I grow up, I start to understand what he said is quite
right.'
example_title: Fluency
- text: 'Make this text coherent: Their flight is weak. They run quickly through the
tree canopy.'
example_title: Coherence
- text: 'Rewrite to make this easier to understand: A storm surge is what forecasters
consider a hurricane''s most treacherous aspect.'
example_title: Simplification
- text: 'Paraphrase this: Do you know where I was born?'
example_title: Paraphrase
- text: 'Write this more formally: omg i love that song im listening to it right now'
example_title: Formalize
- text: 'Write in a more neutral way: The authors'' exposé on nutrition studies.'
example_title: Neutralize
---
# nvhf/coedit-large-Q6_K-GGUF
This model was converted to GGUF format from [`grammarly/coedit-large`](https://huggingface.co/grammarly/coedit-large) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/grammarly/coedit-large) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo nvhf/coedit-large-Q6_K-GGUF --hf-file coedit-large-q6_k.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo nvhf/coedit-large-Q6_K-GGUF --hf-file coedit-large-q6_k.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo nvhf/coedit-large-Q6_K-GGUF --hf-file coedit-large-q6_k.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo nvhf/coedit-large-Q6_K-GGUF --hf-file coedit-large-q6_k.gguf -c 2048
```
|
m3face/FaceControlNet | m3face | 2024-07-17T12:26:20Z | 5 | 0 | null | [
"region:us"
]
| null | 2024-07-17T09:44:52Z | ---
{}
---
# Model Card for Face ControlNet Models
You can find the multilingual and English controlnet model checkpoints in the corresponding revisions:
- **Multilingual**: [`segmentation-mlin`](https://huggingface.co/m3face/ControlnetModels/tree/segmentation-mlin) and [`landmark-mlin`](https://huggingface.co/m3face/ControlnetModels/tree/landmark-mlin).
- **English**: [`segmentation-english`](https://huggingface.co/m3face/ControlnetModels/tree/segmentation-english) and [`landmark-english`](https://huggingface.co/m3face/ControlnetModels/tree/landmark-english). |
choiruzzia/best_berita_bert_model_fold_3 | choiruzzia | 2024-07-17T12:15:07Z | 109 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:ayameRushia/bert-base-indonesian-1.5G-sentiment-analysis-smsa",
"base_model:finetune:ayameRushia/bert-base-indonesian-1.5G-sentiment-analysis-smsa",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2024-07-17T12:14:45Z | ---
license: mit
base_model: ayameRushia/bert-base-indonesian-1.5G-sentiment-analysis-smsa
tags:
- generated_from_trainer
metrics:
- accuracy
- precision
- recall
- f1
model-index:
- name: best_berita_bert_model_fold_3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# best_berita_bert_model_fold_3
This model is a fine-tuned version of [ayameRushia/bert-base-indonesian-1.5G-sentiment-analysis-smsa](https://huggingface.co/ayameRushia/bert-base-indonesian-1.5G-sentiment-analysis-smsa) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1732
- Accuracy: 0.9808
- Precision: 0.9809
- Recall: 0.9811
- F1: 0.9809
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|
| 0.5469 | 1.0 | 601 | 0.2814 | 0.9409 | 0.9416 | 0.9414 | 0.9410 |
| 0.21 | 2.0 | 1202 | 0.1697 | 0.9600 | 0.9602 | 0.9605 | 0.9600 |
| 0.1002 | 3.0 | 1803 | 0.2227 | 0.9667 | 0.9674 | 0.9673 | 0.9666 |
| 0.0847 | 4.0 | 2404 | 0.2771 | 0.9584 | 0.9599 | 0.9592 | 0.9581 |
| 0.029 | 5.0 | 3005 | 0.1732 | 0.9808 | 0.9809 | 0.9811 | 0.9809 |
| 0.0095 | 6.0 | 3606 | 0.2415 | 0.9734 | 0.9737 | 0.9738 | 0.9733 |
| 0.0134 | 7.0 | 4207 | 0.2048 | 0.9767 | 0.9769 | 0.9771 | 0.9766 |
| 0.0001 | 8.0 | 4808 | 0.2916 | 0.9692 | 0.9697 | 0.9698 | 0.9691 |
| 0.0039 | 9.0 | 5409 | 0.2201 | 0.9784 | 0.9786 | 0.9787 | 0.9784 |
| 0.0 | 10.0 | 6010 | 0.2293 | 0.9742 | 0.9745 | 0.9746 | 0.9742 |
### Framework versions
- Transformers 4.41.2
- Pytorch 2.1.2
- Datasets 2.19.2
- Tokenizers 0.19.1
|
hsan512/vistral_lora_merged_ver2 | hsan512 | 2024-07-17T12:13:02Z | 6 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2024-07-17T11:57:16Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a π€ transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Cnam-LMSSC/vibravox_phonemizers | Cnam-LMSSC | 2024-07-17T12:11:50Z | 0 | 4 | null | [
"fr",
"dataset:Cnam-LMSSC/vibravox",
"arxiv:2407.11828",
"license:mit",
"region:us"
]
| null | 2024-05-30T22:17:41Z | ---
license: mit
datasets:
- Cnam-LMSSC/vibravox
language:
- fr
---
# Master Model Card: Vibravox Speech-to-Phonemes Models
<p align="center">
<img src="https://cdn-uploads.huggingface.co/production/uploads/65302a613ecbe51d6a6ddcec/zhB1fh-c0pjlj-Tr4Vpmr.png" style="object-fit:contain; width:280px; height:280px;" >
</p>
## Overview
This master model card serves as an entry point for exploring [multiple speech-to-phoneme models](https://huggingface.co/Cnam-LMSSC/vibravox_phonemizers#available-models) trained on different sensor data from the [Vibravox dataset](https://huggingface.co/datasets/Cnam-LMSSC/vibravox).
These models are designed to convert French speech into sequences of International Phonetic Alphabet (IPA) encoded words, and are fine-tuned on specific sensors to address various audio capture scenarios using **body conducted** sound and vibration sensors.
## Disclaimer
Each of these models has been trained for **specific non-conventional speech sensors** and is intended to be used with **in-domain data**. The only exception is the headset microphone phonemizer, which can certainly be used for many applications using audio data captured by airborne microphones.
Please be advised that using these models outside their intended sensor data may result in suboptimal performance.
## Task Description
The primary task for these models is an ASR task in the speech-to-phoneme context. Each model takes audio input and outputs a sequence of phonemes encoded in the IPA, facilitating precise phonetic transcription of French speech. Users unfamiliar with the phonetic alphabet can use tools like the [IPA reader](http://ipa-reader.xyz) to convert the transcript back to synthetic speech and evaluate the transcription quality.
## Usage
All models are finetuned versions of [facebook/wav2vec2-base-fr-voxpopuli-v2](https://huggingface.co/facebook/wav2vec2-base-fr-voxpopuli-v2) and adapted to different sensor inputs. They are intended to be used at a sample rate of 16kHz.
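Since the Vibravox recordings are 48 kHz while these models expect 16 kHz input, resample before inference; a minimal sketch, assuming `torchaudio` (the per-model cards linked below include full inference scripts):
```python
# Minimal resampling sketch: Vibravox audio is 48 kHz, the models expect 16 kHz.
import torch
import torchaudio

audio_48k = torch.randn(48_000)  # placeholder one-second waveform
audio_16k = torchaudio.functional.resample(audio_48k, orig_freq=48_000, new_freq=16_000)
```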
## Training Procedure
The models were each finetuned for 10 epochs with a constant learning rate of 1e-5. Detailed instructions for reproducing the experiments are available on the [jhauret/vibravox](https://github.com/jhauret/vibravox) Github repository and in the [VibraVox paper on arXiV](https://arxiv.org/abs/2407.11828).
## Available Models
The following models are available, **each trained on a different sensor**, on the `speech_clean` subset of the [Vibravox dataset](https://huggingface.co/datasets/Cnam-LMSSC/vibravox):
| **Transducer** | **Huggingface model link** |
|:---------------------------|:---------------------|
| Reference headset microphone | [phonemizer_headset_microphone](https://huggingface.co/Cnam-LMSSC/phonemizer_headset_microphone) |
| In-ear comply foam-embedded microphone |[phonemizer_soft_in_ear_microphone](https://huggingface.co/Cnam-LMSSC/phonemizer_soft_in_ear_microphone) |
| In-ear rigid earpiece-embedded microphone | [phonemizer_rigid_in_ear_microphone](https://huggingface.co/Cnam-LMSSC/phonemizer_rigid_in_ear_microphone) |
| Forehead miniature vibration sensor | [phonemizer_forehead_accelerometer](https://huggingface.co/Cnam-LMSSC/phonemizer_forehead_accelerometer) |
| Temple vibration pickup | [phonemizer_temple_vibration_pickup](https://huggingface.co/Cnam-LMSSC/phonemizer_temple_vibration_pickup) |
| Laryngophone | [phonemizer_throat_microphone](https://huggingface.co/Cnam-LMSSC/phonemizer_throat_microphone) | |
Cnam-LMSSC/phonemizer_headset_microphone | Cnam-LMSSC | 2024-07-17T12:10:49Z | 332 | 4 | transformers | [
"transformers",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"speech",
"phonemize",
"phoneme",
"fr",
"dataset:Cnam-LMSSC/vibravox",
"arxiv:2407.11828",
"license:mit",
"model-index",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2024-06-17T09:01:16Z | ---
library_name: transformers
license: mit
language: fr
datasets:
- Cnam-LMSSC/vibravox
metrics:
- per
tags:
- audio
- automatic-speech-recognition
- speech
- phonemize
- phoneme
model-index:
- name: Wav2Vec2-base French finetuned for Speech-to-Phoneme by LMSSC
results:
- task:
name: Speech-to-Phoneme
type: automatic-speech-recognition
dataset:
name: Vibravox["headset_microphone"]
type: Cnam-LMSSC/vibravox
args: fr
metrics:
- name: Test PER, in-domain training
type: per
value: 2.8
---
<p align="center">
<img src="https://cdn-uploads.huggingface.co/production/uploads/65302a613ecbe51d6a6ddcec/zhB1fh-c0pjlj-Tr4Vpmr.png" style="object-fit:contain; width:280px; height:280px;" >
</p>
# Model Card
- **Developed by:** [Cnam-LMSSC](https://huggingface.co/Cnam-LMSSC)
- **Model type:** [Wav2Vec2ForCTC](https://huggingface.co/transformers/v4.9.2/model_doc/wav2vec2.html#transformers.Wav2Vec2ForCTC)
- **Language:** French
- **License:** MIT
- **Finetuned from model:** [facebook/wav2vec2-base-fr-voxpopuli-v2](https://huggingface.co/facebook/wav2vec2-base-fr-voxpopuli-v2)
- **Finetuned dataset:** `headset_microphone` audio of the `speech_clean` subset of [Cnam-LMSSC/vibravox](https://huggingface.co/datasets/Cnam-LMSSC/vibravox) (see [VibraVox paper on arXiV](https://arxiv.org/abs/2407.11828))
- **Samplerate for usage:** 16kHz
## Output
As this model is specifically trained for a speech-to-phoneme task, the output is a sequence of [IPA-encoded](https://en.wikipedia.org/wiki/International_Phonetic_Alphabet) words, without punctuation.
If you don't read the phonetic alphabet fluently, you can use this excellent [IPA reader website](http://ipa-reader.xyz) to convert the transcript back to synthetic speech and check the quality of the phonetic transcription.
## Link to phonemizer models trained on other body conducted sensors:
An entry point to all **phonemizers** models trained on different sensor data from the [Vibravox dataset](https://huggingface.co/datasets/Cnam-LMSSC/vibravox) is available at [https://huggingface.co/Cnam-LMSSC/vibravox_phonemizers](https://huggingface.co/Cnam-LMSSC/vibravox_phonemizers).
### Disclaimer
Each of these models has been trained for a **specific non-conventional speech sensor** and is intended to be used with **in-domain data**. The only exception is the headset microphone phonemizer, which can certainly be used for many applications using audio data captured by airborne microphones.
Please be advised that using these models outside their intended sensor data may result in suboptimal performance.
## Training procedure
The model has been finetuned for 10 epochs with a constant learning rate of *1e-5*. To reproduce the experiments, please visit [jhauret/vibravox](https://github.com/jhauret/vibravox).
## Inference script:
```python
import torch, torchaudio
from transformers import AutoProcessor, AutoModelForCTC
from datasets import load_dataset

# Load the processor and CTC model finetuned on headset-microphone audio
processor = AutoProcessor.from_pretrained("Cnam-LMSSC/phonemizer_headset_microphone")
model = AutoModelForCTC.from_pretrained("Cnam-LMSSC/phonemizer_headset_microphone")

# Stream one test utterance and resample its 48 kHz audio to the expected 16 kHz
test_dataset = load_dataset("Cnam-LMSSC/vibravox", "speech_clean", split="test", streaming=True)
audio_48kHz = torch.Tensor(next(iter(test_dataset))["audio.headset_microphone"]["array"])
audio_16kHz = torchaudio.functional.resample(audio_48kHz, orig_freq=48_000, new_freq=16_000)

# Greedy CTC decoding over the phoneme vocabulary
inputs = processor(audio_16kHz, sampling_rate=16_000, return_tensors="pt")
logits = model(inputs.input_values).logits
predicted_ids = torch.argmax(logits, dim=-1)
transcription = processor.batch_decode(predicted_ids)
print("Phonetic transcription : ", transcription)
```
|
Cnam-LMSSC/phonemizer_temple_vibration_pickup | Cnam-LMSSC | 2024-07-17T12:08:22Z | 123 | 2 | transformers | [
"transformers",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"speech",
"phonemize",
"phoneme",
"fr",
"dataset:Cnam-LMSSC/vibravox",
"arxiv:2407.11828",
"license:mit",
"model-index",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2024-06-17T09:04:00Z | ---
library_name: transformers
license: mit
language: fr
datasets:
- Cnam-LMSSC/vibravox
metrics:
- per
tags:
- audio
- automatic-speech-recognition
- speech
- phonemize
- phoneme
model-index:
- name: Wav2Vec2-base French finetuned for Speech-to-Phoneme by LMSSC
results:
- task:
name: Speech-to-Phoneme
type: automatic-speech-recognition
dataset:
name: Vibravox["temple_vibration_pickup"]
type: Cnam-LMSSC/vibravox
args: fr
metrics:
- name: Test PER, in-domain training
type: per
value: 14.2
---
<p align="center">
<img src="https://cdn-uploads.huggingface.co/production/uploads/65302a613ecbe51d6a6ddcec/zhB1fh-c0pjlj-Tr4Vpmr.png" style="object-fit:contain; width:280px; height:280px;" >
</p>
# Model Card
- **Developed by:** [Cnam-LMSSC](https://huggingface.co/Cnam-LMSSC)
- **Model type:** [Wav2Vec2ForCTC](https://huggingface.co/transformers/v4.9.2/model_doc/wav2vec2.html#transformers.Wav2Vec2ForCTC)
- **Language:** French
- **License:** MIT
- **Finetuned from model:** [facebook/wav2vec2-base-fr-voxpopuli-v2](https://huggingface.co/facebook/wav2vec2-base-fr-voxpopuli-v2)
- **Finetuned dataset:** `temple_vibration_pickup` audio of the `speech_clean` subset of [Cnam-LMSSC/vibravox](https://huggingface.co/datasets/Cnam-LMSSC/vibravox) (see [VibraVox paper on arXiV](https://arxiv.org/abs/2407.11828))
- **Samplerate for usage:** 16kHz
## Output
As this model is specifically trained for a speech-to-phoneme task, the output is a sequence of [IPA-encoded](https://en.wikipedia.org/wiki/International_Phonetic_Alphabet) words, without punctuation.
If you don't read the phonetic alphabet fluently, you can use this excellent [IPA reader website](http://ipa-reader.xyz) to convert the transcript back to synthetic speech and check the quality of the phonetic transcription.
## Link to phonemizer models trained on other body conducted sensors:
An entry point to all **phonemizers** models trained on different sensor data from the [Vibravox dataset](https://huggingface.co/datasets/Cnam-LMSSC/vibravox) is available at [https://huggingface.co/Cnam-LMSSC/vibravox_phonemizers](https://huggingface.co/Cnam-LMSSC/vibravox_phonemizers).
### Disclaimer
Each of these models has been trained for a **specific non-conventional speech sensor** and is intended to be used with **in-domain data**. The only exception is the headset microphone phonemizer, which can certainly be used for many applications using audio data captured by airborne microphones.
Please be advised that using these models outside their intended sensor data may result in suboptimal performance.
## Training procedure
The model has been finetuned for 10 epochs with a constant learning rate of *1e-5*. To reproduce the experiments, please visit [jhauret/vibravox](https://github.com/jhauret/vibravox).
## Inference script:
```python
import torch, torchaudio
from transformers import AutoProcessor, AutoModelForCTC
from datasets import load_dataset
processor = AutoProcessor.from_pretrained("Cnam-LMSSC/phonemizer_temple_vibration_pickup")
model = AutoModelForCTC.from_pretrained("Cnam-LMSSC/phonemizer_temple_vibration_pickup")
test_dataset = load_dataset("Cnam-LMSSC/vibravox", "speech_clean", split="test", streaming=True)
audio_48kHz = torch.Tensor(next(iter(test_dataset))["audio.temple_vibration_pickup"]["array"])
audio_16kHz = torchaudio.functional.resample(audio_48kHz, orig_freq=48_000, new_freq=16_000)
inputs = processor(audio_16kHz, sampling_rate=16_000, return_tensors="pt")
logits = model(inputs.input_values).logits
predicted_ids = torch.argmax(logits, dim=-1)
transcription = processor.batch_decode(predicted_ids)
print("Phonetic transcription : ", transcription)
```
|
PrunaAI/Vikhrmodels-it-5.2-fp16-cp-bnb-8bit-smashed | PrunaAI | 2024-07-17T12:01:10Z | 81 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"pruna-ai",
"conversational",
"base_model:Vikhrmodels/it-5.2-fp16-cp",
"base_model:quantized:Vikhrmodels/it-5.2-fp16-cp",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"8-bit",
"bitsandbytes",
"region:us"
]
| text-generation | 2024-07-17T11:57:31Z | ---
thumbnail: "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg"
base_model: Vikhrmodels/it-5.2-fp16-cp
metrics:
- memory_disk
- memory_inference
- inference_latency
- inference_throughput
- inference_CO2_emissions
- inference_energy_consumption
tags:
- pruna-ai
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<a href="https://www.pruna.ai/" target="_blank" rel="noopener noreferrer">
<img src="https://i.imgur.com/eDAlcgk.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</a>
</div>
<!-- header end -->
[](https://twitter.com/PrunaAI)
[](https://github.com/PrunaAI)
[](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following)
[](https://discord.gg/rskEr4BZJx)
# Simply make AI models cheaper, smaller, faster, and greener!
- Give a thumbs up if you like this model!
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- Read the documentation to learn more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/).
- Join Pruna AI community on Discord [here](https://discord.gg/CP4VSgck) to share feedback/suggestions or get help.
## Results

**Frequently Asked Questions**
- ***How does the compression work?*** The model is compressed with llm-int8.
- ***How does the model quality change?*** The quality of the model output might vary compared to the base model.
- ***How is the model efficiency evaluated?*** These results were obtained on HARDWARE_NAME with the configuration described in `model/smash_config.json`, measured after a hardware warmup. The smashed model is compared directly to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend running the model directly in your use-case conditions to see whether the smashed model benefits you.
- ***What is the model format?*** We use safetensors.
- ***What calibration data has been used?*** If needed by the compression method, we used WikiText as the calibration data.
- ***What is the naming convention for Pruna Huggingface models?*** We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model.
- ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- ***What are "first" metrics?*** Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due cuda overheads.
- ***What are "Sync" and "Async" metrics?*** "Sync" metrics are obtained by syncing all GPU processes and stop measurement when all of them are executed. "Async" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend to test the efficiency gains directly in your use-cases.
## Setup
You can run the smashed model with these steps:
0. Check that the requirements from the original repo Vikhrmodels/it-5.2-fp16-cp are installed. In particular, check the Python, CUDA, and transformers versions.
1. Make sure that you have installed quantization related packages.
```bash
pip install transformers accelerate "bitsandbytes>0.37.0"
```
2. Load & run the model.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model = AutoModelForCausalLM.from_pretrained("PrunaAI/Vikhrmodels-it-5.2-fp16-cp-bnb-8bit-smashed", trust_remote_code=True, device_map='auto')
tokenizer = AutoTokenizer.from_pretrained("Vikhrmodels/it-5.2-fp16-cp")
input_ids = tokenizer("What is the color of prunes?", return_tensors='pt').to(model.device)["input_ids"]
outputs = model.generate(input_ids, max_new_tokens=216)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
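To sanity-check the memory savings from the 8-bit weights, you can inspect the loaded footprint; `get_memory_footprint()` is a standard `transformers` helper, and this minimal sketch simply reuses `model` from the snippet above:
```python
# Report the memory taken by the quantized weights (and buffers).
footprint_gb = model.get_memory_footprint() / 1e9
print(f"8-bit smashed model footprint: {footprint_gb:.2f} GB")
```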
## Configurations
The configuration info is in `smash_config.json`.
## Credits & License
The license of the smashed model follows the license of the original model. Please check the license of the original model Vikhrmodels/it-5.2-fp16-cp, which provided the base model, before using this model. The license of the `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on PyPI.
## Want to compress other models?
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your own AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai). |
PrunaAI/Vikhrmodels-it-5.2-fp16-cp-bnb-4bit-smashed | PrunaAI | 2024-07-17T11:59:48Z | 78 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"pruna-ai",
"conversational",
"base_model:Vikhrmodels/it-5.2-fp16-cp",
"base_model:quantized:Vikhrmodels/it-5.2-fp16-cp",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
]
| text-generation | 2024-07-17T11:57:29Z | ---
thumbnail: "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg"
base_model: Vikhrmodels/it-5.2-fp16-cp
metrics:
- memory_disk
- memory_inference
- inference_latency
- inference_throughput
- inference_CO2_emissions
- inference_energy_consumption
tags:
- pruna-ai
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<a href="https://www.pruna.ai/" target="_blank" rel="noopener noreferrer">
<img src="https://i.imgur.com/eDAlcgk.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</a>
</div>
<!-- header end -->
[](https://twitter.com/PrunaAI)
[](https://github.com/PrunaAI)
[](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following)
[](https://discord.gg/rskEr4BZJx)
# Simply make AI models cheaper, smaller, faster, and greener!
- Give a thumbs up if you like this model!
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- Read the documentation to learn more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/).
- Join Pruna AI community on Discord [here](https://discord.gg/CP4VSgck) to share feedback/suggestions or get help.
## Results

**Frequently Asked Questions**
- ***How does the compression work?*** The model is compressed with llm-int8.
- ***How does the model quality change?*** The quality of the model output might vary compared to the base model.
- ***How is the model efficiency evaluated?*** These results were obtained on HARDWARE_NAME with the configuration described in `model/smash_config.json`, after a hardware warmup. The smashed model is compared directly to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend benchmarking directly in your use-case conditions to see whether the smashed model benefits you.
- ***What is the model format?*** We use safetensors.
- ***What calibration data has been used?*** If needed by the compression method, we used WikiText as the calibration data.
- ***What is the naming convention for Pruna Huggingface models?*** We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model.
- ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- ***What are "first" metrics?*** Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due cuda overheads.
- ***What are "Sync" and "Async" metrics?*** "Sync" metrics are obtained by syncing all GPU processes and stop measurement when all of them are executed. "Async" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend to test the efficiency gains directly in your use-cases.
## Setup
You can run the smashed model with these steps:
0. Check that the requirements from the original repo Vikhrmodels/it-5.2-fp16-cp are installed. In particular, check the Python, CUDA, and transformers versions.
1. Make sure that you have installed quantization related packages.
```bash
pip install transformers accelerate "bitsandbytes>0.37.0"
```
2. Load & run the model.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model = AutoModelForCausalLM.from_pretrained("PrunaAI/Vikhrmodels-it-5.2-fp16-cp-bnb-4bit-smashed", trust_remote_code=True, device_map='auto')
tokenizer = AutoTokenizer.from_pretrained("Vikhrmodels/it-5.2-fp16-cp")
input_ids = tokenizer("What is the color of prunes?", return_tensors='pt').to(model.device)["input_ids"]
outputs = model.generate(input_ids, max_new_tokens=216)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
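To see the "Sync" vs. "Async" distinction from the FAQ above in practice, the sketch below times a single forward pass both ways on CUDA. It reuses `model` and `input_ids` from the snippet above and is only an illustration, not the benchmarking harness used for the reported results:
```python
import time

import torch

def timed_forward(sync: bool) -> float:
    start = time.perf_counter()
    with torch.no_grad():
        model(input_ids)          # queue the forward pass on the GPU
    if sync and torch.cuda.is_available():
        torch.cuda.synchronize()  # "Sync": wait for all GPU work to finish
    return time.perf_counter() - start

print(f"sync latency:  {timed_forward(sync=True):.4f} s")
print(f"async latency: {timed_forward(sync=False):.4f} s")
```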
## Configurations
The configuration info is in `smash_config.json`.
## Credits & License
The license of the smashed model follows the license of the original model. Please check the license of the original model Vikhrmodels/it-5.2-fp16-cp, which provided the base model, before using this model. The license of the `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on PyPI.
## Want to compress other models?
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your own AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai). |
jonathanjordan21/mos-mamba-18x130m-trainer-dgx | jonathanjordan21 | 2024-07-17T11:54:51Z | 135 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"MoSMamba",
"text-generation",
"generated_from_trainer",
"conversational",
"custom_code",
"base_model:jonathanjordan21/mos-mamba-6x130m-hf",
"base_model:finetune:jonathanjordan21/mos-mamba-6x130m-hf",
"autotrain_compatible",
"region:us"
]
| text-generation | 2024-07-17T10:40:17Z | ---
base_model: jonathanjordan21/mos-mamba-6x130m-hf
tags:
- generated_from_trainer
model-index:
- name: mos-mamba-18x130m-trainer-dgx
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>]()
# mos-mamba-18x130m-trainer-dgx
This model is a fine-tuned version of [jonathanjordan21/mos-mamba-6x130m-hf](https://huggingface.co/jonathanjordan21/mos-mamba-6x130m-hf) on an unspecified dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.95) and epsilon=1e-08
- lr_scheduler_type: cosine
- num_epochs: 6
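For reference, the list above maps onto `transformers` `TrainingArguments` roughly as follows; this is a reconstruction from the reported hyperparameters, not the original training script, and `output_dir` is a placeholder:
```python
from transformers import TrainingArguments

# Hypothetical reconstruction of the run configuration listed above.
args = TrainingArguments(
    output_dir="mos-mamba-18x130m-trainer-dgx",  # placeholder
    learning_rate=3e-5,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=8,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.95,
    adam_epsilon=1e-8,
    lr_scheduler_type="cosine",
    num_train_epochs=6,
)
```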
### Framework versions
- Transformers 4.42.4
- Pytorch 2.3.1+cu121
- Datasets 2.19.2
- Tokenizers 0.19.1
|
PrunaAI/HODACHI-EZO-Common-9B-gemma-2-it-HQQ-4bit-smashed | PrunaAI | 2024-07-17T11:53:10Z | 4 | 0 | transformers | [
"transformers",
"gemma2",
"text-generation",
"pruna-ai",
"conversational",
"base_model:AXCXEPT/EZO-Common-9B-gemma-2-it",
"base_model:finetune:AXCXEPT/EZO-Common-9B-gemma-2-it",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-generation | 2024-07-17T11:50:16Z | ---
thumbnail: "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg"
base_model: HODACHI/EZO-Common-9B-gemma-2-it
metrics:
- memory_disk
- memory_inference
- inference_latency
- inference_throughput
- inference_CO2_emissions
- inference_energy_consumption
tags:
- pruna-ai
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<a href="https://www.pruna.ai/" target="_blank" rel="noopener noreferrer">
<img src="https://i.imgur.com/eDAlcgk.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</a>
</div>
<!-- header end -->
[](https://twitter.com/PrunaAI)
[](https://github.com/PrunaAI)
[](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following)
[](https://discord.gg/rskEr4BZJx)
# Simply make AI models cheaper, smaller, faster, and greener!
- Give a thumbs up if you like this model!
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- Read the documentation to learn more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/).
- Join Pruna AI community on Discord [here](https://discord.gg/CP4VSgck) to share feedback/suggestions or get help.
## Results

**Frequently Asked Questions**
- ***How does the compression work?*** The model is compressed with hqq.
- ***How does the model quality change?*** The quality of the model output might vary compared to the base model.
- ***How is the model efficiency evaluated?*** These results were obtained on HARDWARE_NAME with the configuration described in `model/smash_config.json`, after a hardware warmup. The smashed model is compared directly to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend benchmarking directly in your use-case conditions to see whether the smashed model benefits you.
- ***What is the model format?*** We use safetensors.
- ***What calibration data has been used?*** If needed by the compression method, we used WikiText as the calibration data.
- ***What is the naming convention for Pruna Huggingface models?*** We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model.
- ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- ***What are "first" metrics?*** Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due cuda overheads.
- ***What are "Sync" and "Async" metrics?*** "Sync" metrics are obtained by syncing all GPU processes and stop measurement when all of them are executed. "Async" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend to test the efficiency gains directly in your use-cases.
## Setup
You can run the smashed model with these steps:
0. Check that the requirements from the original repo HODACHI/EZO-Common-9B-gemma-2-it are installed. In particular, check the Python, CUDA, and transformers versions.
1. Make sure that you have installed quantization related packages.
```bash
pip install hqq
```
2. Load & run the model.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from hqq.engine.hf import HQQModelForCausalLM
from hqq.models.hf.base import AutoHQQHFModel
try:
    model = HQQModelForCausalLM.from_quantized("PrunaAI/HODACHI-EZO-Common-9B-gemma-2-it-HQQ-4bit-smashed", device_map='auto')
except Exception:
    model = AutoHQQHFModel.from_quantized("PrunaAI/HODACHI-EZO-Common-9B-gemma-2-it-HQQ-4bit-smashed")
tokenizer = AutoTokenizer.from_pretrained("HODACHI/EZO-Common-9B-gemma-2-it")
input_ids = tokenizer("What is the color of prunes?", return_tensors='pt').to(model.device)["input_ids"]
outputs = model.generate(input_ids, max_new_tokens=216)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
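Because the base model is instruction-tuned, wrapping prompts in the model's chat template usually produces better answers than raw text. A minimal sketch, assuming the tokenizer ships a chat template (gemma-2 instruct tokenizers do), reusing `model` and `tokenizer` from above:
```python
messages = [{"role": "user", "content": "What is the color of prunes?"}]
chat_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(chat_ids, max_new_tokens=216)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][chat_ids.shape[-1]:], skip_special_tokens=True))
```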
## Configurations
The configuration info is in `smash_config.json`.
## Credits & License
The license of the smashed model follows the license of the original model. Please check the license of the original model HODACHI/EZO-Common-9B-gemma-2-it, which provided the base model, before using this model. The license of the `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on PyPI.
## Want to compress other models?
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your own AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai). |
twright8/gemma-9b-4bit-vllm | twright8 | 2024-07-17T11:51:54Z | 75 | 0 | transformers | [
"transformers",
"pytorch",
"gemma2",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"base_model:unsloth/gemma-2-9b-it-bnb-4bit",
"base_model:quantized:unsloth/gemma-2-9b-it-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
]
| text-generation | 2024-07-17T11:45:45Z | ---
base_model: unsloth/gemma-2-9b-it-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- gemma2
- trl
- sft
---
# Uploaded model
- **Developed by:** twright8
- **License:** apache-2.0
- **Finetuned from model :** unsloth/gemma-2-9b-it-bnb-4bit
This gemma2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
PrunaAI/Weyaxi-Einstein-v7-Qwen2-7B-AWQ-4bit-smashed | PrunaAI | 2024-07-17T11:46:38Z | 81 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"pruna-ai",
"conversational",
"base_model:Weyaxi/Einstein-v7-Qwen2-7B",
"base_model:quantized:Weyaxi/Einstein-v7-Qwen2-7B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"awq",
"region:us"
]
| text-generation | 2024-07-17T11:44:07Z | ---
thumbnail: "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg"
base_model: Weyaxi/Einstein-v7-Qwen2-7B
metrics:
- memory_disk
- memory_inference
- inference_latency
- inference_throughput
- inference_CO2_emissions
- inference_energy_consumption
tags:
- pruna-ai
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<a href="https://www.pruna.ai/" target="_blank" rel="noopener noreferrer">
<img src="https://i.imgur.com/eDAlcgk.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</a>
</div>
<!-- header end -->
[](https://twitter.com/PrunaAI)
[](https://github.com/PrunaAI)
[](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following)
[](https://discord.gg/rskEr4BZJx)
# Simply make AI models cheaper, smaller, faster, and greener!
- Give a thumbs up if you like this model!
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- Read the documentation to learn more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/).
- Join Pruna AI community on Discord [here](https://discord.gg/CP4VSgck) to share feedback/suggestions or get help.
## Results

**Frequently Asked Questions**
- ***How does the compression work?*** The model is compressed with awq.
- ***How does the model quality change?*** The quality of the model output might vary compared to the base model.
- ***How is the model efficiency evaluated?*** These results were obtained on HARDWARE_NAME with the configuration described in `model/smash_config.json`, after a hardware warmup. The smashed model is compared directly to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend benchmarking directly in your use-case conditions to see whether the smashed model benefits you.
- ***What is the model format?*** We use safetensors.
- ***What calibration data has been used?*** If needed by the compression method, we used WikiText as the calibration data.
- ***What is the naming convention for Pruna Huggingface models?*** We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model.
- ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- ***What are "first" metrics?*** Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due cuda overheads.
- ***What are "Sync" and "Async" metrics?*** "Sync" metrics are obtained by syncing all GPU processes and stop measurement when all of them are executed. "Async" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend to test the efficiency gains directly in your use-cases.
## Setup
You can run the smashed model with these steps:
0. Check that the requirements from the original repo Weyaxi/Einstein-v7-Qwen2-7B are installed. In particular, check the Python, CUDA, and transformers versions.
1. Make sure that you have installed quantization related packages.
```bash
pip install autoawq
```
2. Load & run the model.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from awq import AutoAWQForCausalLM
model = AutoAWQForCausalLM.from_quantized("PrunaAI/Weyaxi-Einstein-v7-Qwen2-7B-AWQ-4bit-smashed", trust_remote_code=True, device_map='auto')
tokenizer = AutoTokenizer.from_pretrained("Weyaxi/Einstein-v7-Qwen2-7B")
input_ids = tokenizer("What is the color of prunes?", return_tensors='pt').to(model.device)["input_ids"]
outputs = model.generate(input_ids, max_new_tokens=216)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
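For interactive use you can stream tokens as they are generated instead of waiting for the full decode; `TextStreamer` is a standard `transformers` utility, and this sketch reuses `model`, `tokenizer`, and `input_ids` from the snippet above:
```python
from transformers import TextStreamer

# Print tokens to stdout as soon as they are generated.
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)
model.generate(input_ids, max_new_tokens=216, streamer=streamer)
```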
## Configurations
The configuration info is in `smash_config.json`.
## Credits & License
The license of the smashed model follows the license of the original model. Please check the license of the original model Weyaxi/Einstein-v7-Qwen2-7B, which provided the base model, before using this model. The license of the `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on PyPI.
## Want to compress other models?
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your own AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai). |
BrundaL/struc_pruned-bert-mrpc | BrundaL | 2024-07-17T11:44:14Z | 108 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2024-07-17T11:44:03Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
BrundaL/bert_mrpc_trained | BrundaL | 2024-07-17T11:44:02Z | 108 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2024-07-17T11:43:48Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
PrunaAI/Weyaxi-Einstein-v7-Qwen2-7B-HQQ-1bit-smashed | PrunaAI | 2024-07-17T11:31:55Z | 6 | 0 | transformers | [
"transformers",
"qwen2",
"text-generation",
"pruna-ai",
"conversational",
"base_model:Weyaxi/Einstein-v7-Qwen2-7B",
"base_model:finetune:Weyaxi/Einstein-v7-Qwen2-7B",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-generation | 2024-07-17T11:30:17Z | ---
thumbnail: "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg"
base_model: Weyaxi/Einstein-v7-Qwen2-7B
metrics:
- memory_disk
- memory_inference
- inference_latency
- inference_throughput
- inference_CO2_emissions
- inference_energy_consumption
tags:
- pruna-ai
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<a href="https://www.pruna.ai/" target="_blank" rel="noopener noreferrer">
<img src="https://i.imgur.com/eDAlcgk.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</a>
</div>
<!-- header end -->
[](https://twitter.com/PrunaAI)
[](https://github.com/PrunaAI)
[](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following)
[](https://discord.gg/rskEr4BZJx)
# Simply make AI models cheaper, smaller, faster, and greener!
- Give a thumbs up if you like this model!
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- Read the documentation to learn more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/).
- Join Pruna AI community on Discord [here](https://discord.gg/CP4VSgck) to share feedback/suggestions or get help.
## Results

**Frequently Asked Questions**
- ***How does the compression work?*** The model is compressed with hqq.
- ***How does the model quality change?*** The quality of the model output might vary compared to the base model.
- ***How is the model efficiency evaluated?*** These results were obtained on HARDWARE_NAME with the configuration described in `model/smash_config.json`, after a hardware warmup. The smashed model is compared directly to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend benchmarking directly in your use-case conditions to see whether the smashed model benefits you.
- ***What is the model format?*** We use safetensors.
- ***What calibration data has been used?*** If needed by the compression method, we used WikiText as the calibration data.
- ***What is the naming convention for Pruna Huggingface models?*** We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model.
- ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- ***What are "first" metrics?*** Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due cuda overheads.
- ***What are "Sync" and "Async" metrics?*** "Sync" metrics are obtained by syncing all GPU processes and stop measurement when all of them are executed. "Async" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend to test the efficiency gains directly in your use-cases.
## Setup
You can run the smashed model with these steps:
0. Check that the requirements from the original repo Weyaxi/Einstein-v7-Qwen2-7B are installed. In particular, check the Python, CUDA, and transformers versions.
1. Make sure that you have installed quantization related packages.
```bash
pip install hqq
```
2. Load & run the model.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from hqq.engine.hf import HQQModelForCausalLM
from hqq.models.hf.base import AutoHQQHFModel
try:
    model = HQQModelForCausalLM.from_quantized("PrunaAI/Weyaxi-Einstein-v7-Qwen2-7B-HQQ-1bit-smashed", device_map='auto')
except Exception:
    model = AutoHQQHFModel.from_quantized("PrunaAI/Weyaxi-Einstein-v7-Qwen2-7B-HQQ-1bit-smashed")
tokenizer = AutoTokenizer.from_pretrained("Weyaxi/Einstein-v7-Qwen2-7B")
input_ids = tokenizer("What is the color of prunes?", return_tensors='pt').to(model.device)["input_ids"]
outputs = model.generate(input_ids, max_new_tokens=216)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
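1-bit weight quantization is aggressive, so it is worth sanity-checking output quality before deploying. One quick (very rough) check is perplexity on a short reference sentence; a minimal sketch reusing `model` and `tokenizer` from above, with an arbitrary sample text:
```python
import torch

text = "Prunes are dried plums and are typically dark purple to black."
ids = tokenizer(text, return_tensors="pt").to(model.device)["input_ids"]
with torch.no_grad():
    loss = model(ids, labels=ids).loss  # causal-LM cross-entropy
print(f"perplexity on sample text: {torch.exp(loss).item():.1f}")
```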
## Configurations
The configuration info is in `smash_config.json`.
## Credits & License
The license of the smashed model follows the license of the original model. Please check the license of the original model Weyaxi/Einstein-v7-Qwen2-7B, which provided the base model, before using this model. The license of the `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on PyPI.
## Want to compress other models?
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your own AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai). |
PrunaAI/Weyaxi-Einstein-v7-Qwen2-7B-bnb-4bit-smashed | PrunaAI | 2024-07-17T11:30:11Z | 78 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"pruna-ai",
"conversational",
"base_model:Weyaxi/Einstein-v7-Qwen2-7B",
"base_model:quantized:Weyaxi/Einstein-v7-Qwen2-7B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
]
| text-generation | 2024-07-17T11:27:28Z | ---
thumbnail: "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg"
base_model: Weyaxi/Einstein-v7-Qwen2-7B
metrics:
- memory_disk
- memory_inference
- inference_latency
- inference_throughput
- inference_CO2_emissions
- inference_energy_consumption
tags:
- pruna-ai
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<a href="https://www.pruna.ai/" target="_blank" rel="noopener noreferrer">
<img src="https://i.imgur.com/eDAlcgk.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</a>
</div>
<!-- header end -->
[](https://twitter.com/PrunaAI)
[](https://github.com/PrunaAI)
[](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following)
[](https://discord.gg/rskEr4BZJx)
# Simply make AI models cheaper, smaller, faster, and greener!
- Give a thumbs up if you like this model!
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- Read the documentation to learn more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/).
- Join Pruna AI community on Discord [here](https://discord.gg/CP4VSgck) to share feedback/suggestions or get help.
## Results

**Frequently Asked Questions**
- ***How does the compression work?*** The model is compressed with llm-int8.
- ***How does the model quality change?*** The quality of the model output might vary compared to the base model.
- ***How is the model efficiency evaluated?*** These results were obtained on HARDWARE_NAME with the configuration described in `model/smash_config.json`, after a hardware warmup. The smashed model is compared directly to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend benchmarking directly in your use-case conditions to see whether the smashed model benefits you.
- ***What is the model format?*** We use safetensors.
- ***What calibration data has been used?*** If needed by the compression method, we used WikiText as the calibration data.
- ***What is the naming convention for Pruna Huggingface models?*** We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model.
- ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- ***What are "first" metrics?*** Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due cuda overheads.
- ***What are "Sync" and "Async" metrics?*** "Sync" metrics are obtained by syncing all GPU processes and stop measurement when all of them are executed. "Async" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend to test the efficiency gains directly in your use-cases.
## Setup
You can run the smashed model with these steps:
0. Check that the requirements from the original repo Weyaxi/Einstein-v7-Qwen2-7B are installed. In particular, check the Python, CUDA, and transformers versions.
1. Make sure that you have installed quantization related packages.
```bash
pip install transformers accelerate "bitsandbytes>0.37.0"
```
2. Load & run the model.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model = AutoModelForCausalLM.from_pretrained("PrunaAI/Weyaxi-Einstein-v7-Qwen2-7B-bnb-4bit-smashed", trust_remote_code=True, device_map='auto')
tokenizer = AutoTokenizer.from_pretrained("Weyaxi/Einstein-v7-Qwen2-7B")
input_ids = tokenizer("What is the color of prunes?", return_tensors='pt').to(model.device)["input_ids"]
outputs = model.generate(input_ids, max_new_tokens=216)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
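If you want explicit control over the 4-bit settings rather than relying on the quantization config stored with the checkpoint, you can pass a `BitsAndBytesConfig` yourself. The values below (nf4, bf16 compute) are common defaults and an assumption on our part; the settings actually used for this model are recorded in `smash_config.json`:
```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# Assumed settings for illustration; check smash_config.json for the real ones.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(
    "PrunaAI/Weyaxi-Einstein-v7-Qwen2-7B-bnb-4bit-smashed",
    quantization_config=bnb_config,
    device_map="auto",
)
```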
## Configurations
The configuration info is in `smash_config.json`.
## Credits & License
The license of the smashed model follows the license of the original model. Please check the license of the original model Weyaxi/Einstein-v7-Qwen2-7B, which provided the base model, before using this model. The license of the `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on PyPI.
## Want to compress other models?
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your own AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai). |
mradermacher/llama3_Loradent-GGUF | mradermacher | 2024-07-17T11:30:00Z | 24 | 1 | transformers | [
"transformers",
"gguf",
"en",
"base_model:ckyip/llama3_Loradent",
"base_model:quantized:ckyip/llama3_Loradent",
"endpoints_compatible",
"region:us",
"conversational"
]
| null | 2024-07-17T11:03:35Z | ---
base_model: ckyip/llama3_Loradent
language:
- en
library_name: transformers
quantized_by: mradermacher
tags: []
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/ckyip/llama3_Loradent
<!-- provided-files -->
weighted/imatrix quants do not appear to be available (from me) at this time. If they have not shown up a week or so after the static ones, I have probably not planned them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
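If you prefer Python over the llama.cpp CLI, the `llama-cpp-python` bindings can load any quant from the table below directly; a minimal sketch, assuming you downloaded the Q4_K_M file:
```python
from llama_cpp import Llama  # pip install llama-cpp-python

llm = Llama(model_path="llama3_Loradent.Q4_K_M.gguf", n_ctx=4096)
out = llm("What are GGUF quants?", max_tokens=128)
print(out["choices"][0]["text"])
```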
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/llama3_Loradent-GGUF/resolve/main/llama3_Loradent.Q2_K.gguf) | Q2_K | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/llama3_Loradent-GGUF/resolve/main/llama3_Loradent.IQ3_XS.gguf) | IQ3_XS | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/llama3_Loradent-GGUF/resolve/main/llama3_Loradent.Q3_K_S.gguf) | Q3_K_S | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/llama3_Loradent-GGUF/resolve/main/llama3_Loradent.IQ3_S.gguf) | IQ3_S | 3.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/llama3_Loradent-GGUF/resolve/main/llama3_Loradent.IQ3_M.gguf) | IQ3_M | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/llama3_Loradent-GGUF/resolve/main/llama3_Loradent.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/llama3_Loradent-GGUF/resolve/main/llama3_Loradent.Q3_K_L.gguf) | Q3_K_L | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/llama3_Loradent-GGUF/resolve/main/llama3_Loradent.IQ4_XS.gguf) | IQ4_XS | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/llama3_Loradent-GGUF/resolve/main/llama3_Loradent.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/llama3_Loradent-GGUF/resolve/main/llama3_Loradent.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/llama3_Loradent-GGUF/resolve/main/llama3_Loradent.Q5_K_S.gguf) | Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/llama3_Loradent-GGUF/resolve/main/llama3_Loradent.Q5_K_M.gguf) | Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/llama3_Loradent-GGUF/resolve/main/llama3_Loradent.Q6_K.gguf) | Q6_K | 6.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/llama3_Loradent-GGUF/resolve/main/llama3_Loradent.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/llama3_Loradent-GGUF/resolve/main/llama3_Loradent.f16.gguf) | f16 | 16.2 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
Autsadin/SeaLLM-7B-v2.5_16bit | Autsadin | 2024-07-17T11:29:45Z | 6 | 0 | transformers | [
"transformers",
"safetensors",
"gemma",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
]
| text-generation | 2024-07-16T10:16:14Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
tsavage68/Summary4500_L3_1000steps_1e8rate_01beta_CSFTDPO | tsavage68 | 2024-07-17T11:22:02Z | 6 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"trl",
"dpo",
"generated_from_trainer",
"conversational",
"base_model:tsavage68/Summary4500_L3_100steps_1e6rate_SFT",
"base_model:finetune:tsavage68/Summary4500_L3_100steps_1e6rate_SFT",
"license:llama3",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2024-07-17T11:15:42Z | ---
license: llama3
base_model: tsavage68/Summary4500_L3_100steps_1e6rate_SFT
tags:
- trl
- dpo
- generated_from_trainer
model-index:
- name: Summary4500_L3_1000steps_1e8rate_01beta_CSFTDPO
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Summary4500_L3_1000steps_1e8rate_01beta_CSFTDPO
This model is a fine-tuned version of [tsavage68/Summary4500_L3_100steps_1e6rate_SFT](https://huggingface.co/tsavage68/Summary4500_L3_100steps_1e6rate_SFT) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6901
- Rewards/chosen: 0.0028
- Rewards/rejected: -0.0045
- Rewards/accuracies: 0.5440
- Rewards/margins: 0.0073
- Logps/rejected: -133.2426
- Logps/chosen: -84.1618
- Logits/rejected: -1.0994
- Logits/chosen: -1.0693
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-08
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- training_steps: 1000
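For reference, these settings map onto `trl`'s DPO training roughly as below. This is a reconstruction from the list (plus the `01beta` in the model name, which suggests beta=0.1), not the original script; the config would then be passed to `DPOTrainer` together with the SFT model and preference dataset:
```python
from trl import DPOConfig  # consumed by trl's DPOTrainer

# Hypothetical reconstruction of the run configuration listed above.
config = DPOConfig(
    output_dir="Summary4500_L3_1000steps_1e8rate_01beta_CSFTDPO",  # placeholder
    beta=0.1,  # assumption: inferred from "01beta" in the model name
    learning_rate=1e-8,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=1,
    seed=42,
    lr_scheduler_type="cosine",
    warmup_steps=100,
    max_steps=1000,
)
```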
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen |
|:-------------:|:------:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:|
| 0.7202 | 0.0112 | 50 | 0.6920 | 0.0038 | 0.0005 | 0.5100 | 0.0034 | -133.1928 | -84.1516 | -1.0987 | -1.0683 |
| 0.6983 | 0.0224 | 100 | 0.6938 | 0.0029 | 0.0030 | 0.4940 | -0.0001 | -133.1671 | -84.1607 | -1.0980 | -1.0678 |
| 0.6799 | 0.0336 | 150 | 0.6932 | 0.0057 | 0.0046 | 0.5060 | 0.0010 | -133.1511 | -84.1332 | -1.0980 | -1.0676 |
| 0.6921 | 0.0448 | 200 | 0.6896 | 0.0039 | -0.0043 | 0.5800 | 0.0081 | -133.2399 | -84.1511 | -1.0984 | -1.0683 |
| 0.6904 | 0.0559 | 250 | 0.6923 | 0.0024 | -0.0007 | 0.5280 | 0.0030 | -133.2041 | -84.1661 | -1.0985 | -1.0684 |
| 0.6725 | 0.0671 | 300 | 0.6877 | 0.0016 | -0.0105 | 0.5980 | 0.0121 | -133.3022 | -84.1739 | -1.0990 | -1.0689 |
| 0.6848 | 0.0783 | 350 | 0.6888 | 0.0057 | -0.0041 | 0.5500 | 0.0099 | -133.2388 | -84.1326 | -1.0992 | -1.0690 |
| 0.7158 | 0.0895 | 400 | 0.6916 | 0.0032 | -0.0012 | 0.5400 | 0.0044 | -133.2096 | -84.1577 | -1.0988 | -1.0687 |
| 0.6992 | 0.1007 | 450 | 0.6912 | 0.0007 | -0.0043 | 0.5260 | 0.0050 | -133.2402 | -84.1823 | -1.0988 | -1.0686 |
| 0.6827 | 0.1119 | 500 | 0.6885 | 0.0048 | -0.0057 | 0.5600 | 0.0105 | -133.2546 | -84.1417 | -1.0988 | -1.0687 |
| 0.6949 | 0.1231 | 550 | 0.6903 | 0.0025 | -0.0045 | 0.5440 | 0.0069 | -133.2422 | -84.1652 | -1.0988 | -1.0687 |
| 0.7093 | 0.1343 | 600 | 0.6915 | 0.0015 | -0.0031 | 0.5300 | 0.0046 | -133.2279 | -84.1744 | -1.0988 | -1.0687 |
| 0.7026 | 0.1454 | 650 | 0.6894 | 0.0048 | -0.0038 | 0.5480 | 0.0086 | -133.2351 | -84.1415 | -1.0992 | -1.0691 |
| 0.6781 | 0.1566 | 700 | 0.6896 | 0.0052 | -0.0030 | 0.5400 | 0.0082 | -133.2273 | -84.1380 | -1.0992 | -1.0691 |
| 0.7174 | 0.1678 | 750 | 0.6888 | 0.0036 | -0.0063 | 0.5780 | 0.0099 | -133.2603 | -84.1535 | -1.0992 | -1.0690 |
| 0.7065 | 0.1790 | 800 | 0.6895 | 0.0071 | -0.0013 | 0.5580 | 0.0084 | -133.2102 | -84.1191 | -1.0992 | -1.0691 |
| 0.7018 | 0.1902 | 850 | 0.6904 | 0.0027 | -0.0042 | 0.5280 | 0.0069 | -133.2389 | -84.1626 | -1.0994 | -1.0693 |
| 0.6894 | 0.2014 | 900 | 0.6901 | 0.0028 | -0.0045 | 0.5440 | 0.0073 | -133.2426 | -84.1618 | -1.0994 | -1.0693 |
| 0.686 | 0.2126 | 950 | 0.6901 | 0.0028 | -0.0045 | 0.5440 | 0.0073 | -133.2426 | -84.1618 | -1.0994 | -1.0693 |
| 0.6778 | 0.2238 | 1000 | 0.6901 | 0.0028 | -0.0045 | 0.5440 | 0.0073 | -133.2426 | -84.1618 | -1.0994 | -1.0693 |
### Framework versions
- Transformers 4.42.4
- Pytorch 2.0.0+cu117
- Datasets 2.20.0
- Tokenizers 0.19.1
|
tej11iit/fine_tuned_colbertv2.0 | tej11iit | 2024-07-17T11:18:08Z | 109 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"feature-extraction",
"arxiv:1910.09700",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
]
| feature-extraction | 2024-07-17T07:59:39Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a π€ transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
PrunaAI/nyu-visionx-cambrian-phi3-3b-bnb-4bit-smashed | PrunaAI | 2024-07-17T11:16:30Z | 112 | 0 | transformers | [
"transformers",
"safetensors",
"phi3",
"text-generation",
"pruna-ai",
"conversational",
"custom_code",
"base_model:nyu-visionx/cambrian-phi3-3b",
"base_model:quantized:nyu-visionx/cambrian-phi3-3b",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
]
| text-generation | 2024-07-17T11:15:21Z | ---
thumbnail: "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg"
base_model: nyu-visionx/cambrian-phi3-3b
metrics:
- memory_disk
- memory_inference
- inference_latency
- inference_throughput
- inference_CO2_emissions
- inference_energy_consumption
tags:
- pruna-ai
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<a href="https://www.pruna.ai/" target="_blank" rel="noopener noreferrer">
<img src="https://i.imgur.com/eDAlcgk.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</a>
</div>
<!-- header end -->
[](https://twitter.com/PrunaAI)
[](https://github.com/PrunaAI)
[](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following)
[](https://discord.gg/rskEr4BZJx)
# Simply make AI models cheaper, smaller, faster, and greener!
- Give a thumbs up if you like this model!
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- Read the documentation to learn more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/).
- Join Pruna AI community on Discord [here](https://discord.gg/CP4VSgck) to share feedback/suggestions or get help.
## Results

**Frequently Asked Questions**
- ***How does the compression work?*** The model is compressed with llm-int8.
- ***How does the model quality change?*** The quality of the model output might vary compared to the base model.
- ***How is the model efficiency evaluated?*** These results were obtained on HARDWARE_NAME with the configuration described in `model/smash_config.json`, after a hardware warmup. The smashed model is directly compared to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend running the benchmarks directly in your use-case conditions to know whether the smashed model can benefit you.
- ***What is the model format?*** We use safetensors.
- ***What calibration data has been used?*** If needed by the compression method, we used WikiText as the calibration data.
- ***What is the naming convention for Pruna Huggingface models?*** We take the original model name and append "turbo", "tiny", or "green" if the smashed model has, respectively, a measured inference speed, inference memory, or inference energy consumption less than 90% of the original base model's.
- ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- ***What are "first" metrics?*** Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due cuda overheads.
- ***What are "Sync" and "Async" metrics?*** "Sync" metrics are obtained by syncing all GPU processes and stop measurement when all of them are executed. "Async" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend to test the efficiency gains directly in your use-cases.
## Setup
You can run the smashed model with these steps:
0. Check that the requirements from the original repo nyu-visionx/cambrian-phi3-3b are installed. In particular, check the python, cuda, and transformers versions.
1. Make sure that you have installed quantization related packages.
```bash
pip install transformers accelerate "bitsandbytes>0.37.0"
```
2. Load & run the model.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# trust_remote_code is required by the custom Cambrian architecture;
# device_map='auto' places the 4-bit weights across available devices.
model = AutoModelForCausalLM.from_pretrained("PrunaAI/nyu-visionx-cambrian-phi3-3b-bnb-4bit-smashed", trust_remote_code=True, device_map='auto')
# The tokenizer comes from the original base model.
tokenizer = AutoTokenizer.from_pretrained("nyu-visionx/cambrian-phi3-3b")

input_ids = tokenizer("What is the color of prunes?", return_tensors='pt').to(model.device)["input_ids"]
outputs = model.generate(input_ids, max_new_tokens=216)
tokenizer.decode(outputs[0])
```
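To sanity-check that the checkpoint really loaded in 4-bit, something along these lines should work (a hedged sketch; `get_memory_footprint` is the standard transformers helper, and attaching the bitsandbytes settings to `model.config` is an assumption):
```python
# A 4-bit load should report roughly a quarter of the fp16 footprint.
print(f"Model memory footprint: {model.get_memory_footprint() / 1e9:.2f} GB")
# Assumption: the bitsandbytes settings are attached to the model config.
print(model.config.quantization_config)
```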
## Configurations
The configuration info is in `smash_config.json`.
## Credits & License
The license of the smashed model follows the license of the original model. Please check the license of the original model nyu-visionx/cambrian-phi3-3b, which provided the base model, before using this one. The license of the `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on Pypi.
## Want to compress other models?
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your own AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai). |
PrunaAI/nyu-visionx-cambrian-phi3-3b-HQQ-1bit-smashed | PrunaAI | 2024-07-17T11:15:12Z | 7 | 1 | transformers | [
"transformers",
"phi3",
"text-generation",
"pruna-ai",
"conversational",
"custom_code",
"base_model:nyu-visionx/cambrian-phi3-3b",
"base_model:finetune:nyu-visionx/cambrian-phi3-3b",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-generation | 2024-07-17T11:14:43Z | ---
thumbnail: "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg"
base_model: nyu-visionx/cambrian-phi3-3b
metrics:
- memory_disk
- memory_inference
- inference_latency
- inference_throughput
- inference_CO2_emissions
- inference_energy_consumption
tags:
- pruna-ai
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<a href="https://www.pruna.ai/" target="_blank" rel="noopener noreferrer">
<img src="https://i.imgur.com/eDAlcgk.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</a>
</div>
<!-- header end -->
[](https://twitter.com/PrunaAI)
[](https://github.com/PrunaAI)
[](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following)
[](https://discord.gg/rskEr4BZJx)
# Simply make AI models cheaper, smaller, faster, and greener!
- Give a thumbs up if you like this model!
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- Read the documentation to learn more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/).
- Join Pruna AI community on Discord [here](https://discord.gg/CP4VSgck) to share feedback/suggestions or get help.
## Results

**Frequently Asked Questions**
- ***How does the compression work?*** The model is compressed with hqq.
- ***How does the model quality change?*** The quality of the model output might vary compared to the base model.
- ***How is the model efficiency evaluated?*** These results were obtained on HARDWARE_NAME with the configuration described in `model/smash_config.json`, after a hardware warmup. The smashed model is directly compared to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend running the benchmarks directly in your use-case conditions to know whether the smashed model can benefit you.
- ***What is the model format?*** We use safetensors.
- ***What calibration data has been used?*** If needed by the compression method, we used WikiText as the calibration data.
- ***What is the naming convention for Pruna Huggingface models?*** We take the original model name and append "turbo", "tiny", or "green" if the smashed model has, respectively, a measured inference speed, inference memory, or inference energy consumption less than 90% of the original base model's.
- ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- ***What are "first" metrics?*** Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due cuda overheads.
- ***What are "Sync" and "Async" metrics?*** "Sync" metrics are obtained by syncing all GPU processes and stop measurement when all of them are executed. "Async" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend to test the efficiency gains directly in your use-cases.
## Setup
You can run the smashed model with these steps:
0. Check that the requirements from the original repo nyu-visionx/cambrian-phi3-3b are installed. In particular, check the python, cuda, and transformers versions.
1. Make sure that you have installed quantization related packages.
```bash
pip install hqq
```
2. Load & run the model.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from hqq.engine.hf import HQQModelForCausalLM
from hqq.models.hf.base import AutoHQQHFModel
try:
    # Try the HQQ engine loader first...
    model = HQQModelForCausalLM.from_quantized("PrunaAI/nyu-visionx-cambrian-phi3-3b-HQQ-1bit-smashed", device_map='auto')
except Exception:
    # ...and fall back to the AutoHQQHFModel loader if that path fails.
    model = AutoHQQHFModel.from_quantized("PrunaAI/nyu-visionx-cambrian-phi3-3b-HQQ-1bit-smashed")
tokenizer = AutoTokenizer.from_pretrained("nyu-visionx/cambrian-phi3-3b")
input_ids = tokenizer("What is the color of prunes?", return_tensors='pt').to(model.device)["input_ids"]
outputs = model.generate(input_ids, max_new_tokens=216)
tokenizer.decode(outputs[0])
```
## Configurations
The configuration info is in `smash_config.json`.
## Credits & License
The license of the smashed model follows the license of the original model. Please check the license of the original model nyu-visionx/cambrian-phi3-3b, which provided the base model, before using this one. The license of the `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on Pypi.
## Want to compress other models?
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your own AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai). |
mradermacher/Worktro-Small-v0.2-GGUF | mradermacher | 2024-07-17T11:06:42Z | 26 | 0 | transformers | [
"transformers",
"gguf",
"text-generation",
"ko",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us",
"conversational"
]
| text-generation | 2024-07-17T10:40:26Z | ---
base_model: Edentns/Worktro-Small-v0.2
language:
- ko
library_name: transformers
license: cc-by-nc-4.0
quantized_by: mradermacher
tags:
- text-generation
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/Edentns/Worktro-Small-v0.2
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
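As a concrete starting point, here is a hedged sketch using the `llama-cpp-python` bindings; the file name comes from the quant table below, everything else is an assumption:
```python
# Hedged sketch: run one of the quants listed below with llama-cpp-python.
from llama_cpp import Llama

llm = Llama(
    model_path="Worktro-Small-v0.2.Q4_K_M.gguf",  # "fast, recommended" per the table
    n_ctx=4096,
)
# The base model is Korean-focused (per the card's language tag), so Korean
# prompts are the intended use; any plain text prompt works mechanically.
out = llm("Summarize the following text: ...", max_tokens=128)
print(out["choices"][0]["text"])
```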
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similarly sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Worktro-Small-v0.2-GGUF/resolve/main/Worktro-Small-v0.2.Q2_K.gguf) | Q2_K | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/Worktro-Small-v0.2-GGUF/resolve/main/Worktro-Small-v0.2.IQ3_XS.gguf) | IQ3_XS | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/Worktro-Small-v0.2-GGUF/resolve/main/Worktro-Small-v0.2.Q3_K_S.gguf) | Q3_K_S | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/Worktro-Small-v0.2-GGUF/resolve/main/Worktro-Small-v0.2.IQ3_S.gguf) | IQ3_S | 3.6 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Worktro-Small-v0.2-GGUF/resolve/main/Worktro-Small-v0.2.IQ3_M.gguf) | IQ3_M | 3.7 | |
| [GGUF](https://huggingface.co/mradermacher/Worktro-Small-v0.2-GGUF/resolve/main/Worktro-Small-v0.2.Q3_K_M.gguf) | Q3_K_M | 3.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Worktro-Small-v0.2-GGUF/resolve/main/Worktro-Small-v0.2.Q3_K_L.gguf) | Q3_K_L | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/Worktro-Small-v0.2-GGUF/resolve/main/Worktro-Small-v0.2.IQ4_XS.gguf) | IQ4_XS | 4.3 | |
| [GGUF](https://huggingface.co/mradermacher/Worktro-Small-v0.2-GGUF/resolve/main/Worktro-Small-v0.2.Q4_K_S.gguf) | Q4_K_S | 4.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Worktro-Small-v0.2-GGUF/resolve/main/Worktro-Small-v0.2.Q4_K_M.gguf) | Q4_K_M | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Worktro-Small-v0.2-GGUF/resolve/main/Worktro-Small-v0.2.Q5_K_S.gguf) | Q5_K_S | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/Worktro-Small-v0.2-GGUF/resolve/main/Worktro-Small-v0.2.Q5_K_M.gguf) | Q5_K_M | 5.5 | |
| [GGUF](https://huggingface.co/mradermacher/Worktro-Small-v0.2-GGUF/resolve/main/Worktro-Small-v0.2.Q6_K.gguf) | Q6_K | 6.4 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Worktro-Small-v0.2-GGUF/resolve/main/Worktro-Small-v0.2.Q8_0.gguf) | Q8_0 | 8.2 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Worktro-Small-v0.2-GGUF/resolve/main/Worktro-Small-v0.2.f16.gguf) | f16 | 15.3 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
jeggers/galactica-125m-cot-only | jeggers | 2024-07-17T10:55:07Z | 112 | 0 | transformers | [
"transformers",
"safetensors",
"opt",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2024-07-17T10:54:54Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a π€ transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
fosaber/feifeisha7_LoRA | fosaber | 2024-07-17T10:53:17Z | 4 | 0 | diffusers | [
"diffusers",
"tensorboard",
"text-to-image",
"diffusers-training",
"lora",
"template:sd-lora",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
]
| text-to-image | 2024-07-17T10:53:12Z | ---
base_model: stabilityai/stable-diffusion-xl-base-1.0
library_name: diffusers
license: openrail++
tags:
- text-to-image
- diffusers-training
- diffusers
- lora
- template:sd-lora
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
instance_prompt: a photo of TOK cute cartoon shark
widget: []
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# SDXL LoRA DreamBooth - fosaber/feifeisha7_LoRA
<Gallery />
## Model description
These are fosaber/feifeisha7_LoRA LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0.
The weights were trained using [DreamBooth](https://dreambooth.github.io/).
LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
## Trigger words
You should use `a photo of TOK cute cartoon shark` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](https://huggingface.co/fosaber/feifeisha7_LoRA/tree/main) them in the Files & versions tab.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
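Pending the official snippet above, here is a minimal sketch of how SDXL LoRA weights like these are typically loaded with diffusers; the repo id, trigger prompt, and VAE come from this card, the rest is an assumption:
```python
# Hedged sketch, not the author's verified snippet.
import torch
from diffusers import AutoencoderKL, DiffusionPipeline

# Special VAE used for training, per the model description above.
vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix",
                                    torch_dtype=torch.float16)
pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", vae=vae, torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("fosaber/feifeisha7_LoRA")

# Trigger phrase from the "Trigger words" section above.
image = pipe("a photo of TOK cute cartoon shark", num_inference_steps=30).images[0]
image.save("tok_shark.png")
```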
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model] |
Piotrasz/Llama-2-7b-hf-R_ROME-100-pl-3 | Piotrasz | 2024-07-17T10:48:03Z | 6 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2024-07-17T10:45:02Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a π€ transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Shaleen123/yi-1.5-9b-maths | Shaleen123 | 2024-07-17T10:46:16Z | 7 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"8-bit",
"bitsandbytes",
"region:us"
]
| text-generation | 2024-07-17T10:43:33Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a π€ transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
PrunaAI/Replete-AI-Llama-3-11.5B-V2-bnb-8bit-smashed | PrunaAI | 2024-07-17T10:35:45Z | 6 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"pruna-ai",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"8-bit",
"bitsandbytes",
"region:us"
]
| text-generation | 2024-07-17T10:29:51Z | ---
thumbnail: "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg"
base_model: Replete-AI/Llama-3-11.5B-V2
metrics:
- memory_disk
- memory_inference
- inference_latency
- inference_throughput
- inference_CO2_emissions
- inference_energy_consumption
tags:
- pruna-ai
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<a href="https://www.pruna.ai/" target="_blank" rel="noopener noreferrer">
<img src="https://i.imgur.com/eDAlcgk.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</a>
</div>
<!-- header end -->
[](https://twitter.com/PrunaAI)
[](https://github.com/PrunaAI)
[](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following)
[](https://discord.gg/rskEr4BZJx)
# Simply make AI models cheaper, smaller, faster, and greener!
- Give a thumbs up if you like this model!
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- Read the documentation to learn more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/).
- Join Pruna AI community on Discord [here](https://discord.gg/CP4VSgck) to share feedback/suggestions or get help.
## Results

**Frequently Asked Questions**
- ***How does the compression work?*** The model is compressed with llm-int8.
- ***How does the model quality change?*** The quality of the model output might vary compared to the base model.
- ***How is the model efficiency evaluated?*** These results were obtained on HARDWARE_NAME with the configuration described in `model/smash_config.json`, after a hardware warmup. The smashed model is directly compared to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend running the benchmarks directly in your use-case conditions to know whether the smashed model can benefit you.
- ***What is the model format?*** We use safetensors.
- ***What calibration data has been used?*** If needed by the compression method, we used WikiText as the calibration data.
- ***What is the naming convention for Pruna Huggingface models?*** We take the original model name and append "turbo", "tiny", or "green" if the smashed model has, respectively, a measured inference speed, inference memory, or inference energy consumption less than 90% of the original base model's.
- ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- ***What are "first" metrics?*** Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due cuda overheads.
- ***What are "Sync" and "Async" metrics?*** "Sync" metrics are obtained by syncing all GPU processes and stop measurement when all of them are executed. "Async" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend to test the efficiency gains directly in your use-cases.
## Setup
You can run the smashed model with these steps:
0. Check that the requirements from the original repo Replete-AI/Llama-3-11.5B-V2 are installed. In particular, check the python, cuda, and transformers versions.
1. Make sure that you have installed quantization related packages.
```bash
pip install transformers accelerate "bitsandbytes>0.37.0"
```
2. Load & run the model.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model = AutoModelForCausalLM.from_pretrained("PrunaAI/Replete-AI-Llama-3-11.5B-V2-bnb-8bit-smashed", trust_remote_code=True, device_map='auto')
tokenizer = AutoTokenizer.from_pretrained("Replete-AI/Llama-3-11.5B-V2")
input_ids = tokenizer("What is the color of prunes?", return_tensors='pt').to(model.device)["input_ids"]
outputs = model.generate(input_ids, max_new_tokens=216)
tokenizer.decode(outputs[0])
```
## Configurations
The configuration info is in `smash_config.json`.
## Credits & License
The license of the smashed model follows the license of the original model. Please check the license of the original model Replete-AI/Llama-3-11.5B-V2, which provided the base model, before using this one. The license of the `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on Pypi.
## Want to compress other models?
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your own AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai). |
ZidanAf/Zidan_model_output_v13 | ZidanAf | 2024-07-17T10:34:14Z | 114 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:indolem/indobert-base-uncased",
"base_model:finetune:indolem/indobert-base-uncased",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2024-07-17T09:59:59Z | ---
license: mit
base_model: indolem/indobert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: Zidan_model_output_v13
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Zidan_model_output_v13
This model is a fine-tuned version of [indolem/indobert-base-uncased](https://huggingface.co/indolem/indobert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8657
- Accuracy: 0.8
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 12
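Read together with the base model named above, these settings correspond to a standard `Trainer` run roughly like the sketch below (hedged; the dataset and label set are unknown, so those lines are placeholders):
```python
# Hedged sketch (assumptions marked), matching the hyperparameters above.
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

model = AutoModelForSequenceClassification.from_pretrained(
    "indolem/indobert-base-uncased", num_labels=3)  # num_labels is a guess
tokenizer = AutoTokenizer.from_pretrained("indolem/indobert-base-uncased")

args = TrainingArguments(
    output_dir="Zidan_model_output_v13",
    learning_rate=5e-5,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=12,
)
# train_ds / eval_ds: your tokenized datasets; the card does not say which
# data was used for training and evaluation.
trainer = Trainer(model=model, args=args,
                  train_dataset=train_ds, eval_dataset=eval_ds)
trainer.train()
```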
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 248 | 1.1254 | 0.6909 |
| No log | 2.0 | 496 | 1.0021 | 0.5636 |
| 0.9773 | 3.0 | 744 | 1.1986 | 0.5091 |
| 0.9773 | 4.0 | 992 | 0.9418 | 0.6364 |
| 0.8463 | 5.0 | 1240 | 0.9185 | 0.5091 |
| 0.8463 | 6.0 | 1488 | 0.8848 | 0.5273 |
| 0.7379 | 7.0 | 1736 | 0.9933 | 0.5091 |
| 0.7379 | 8.0 | 1984 | 0.9484 | 0.5273 |
| 0.6888 | 9.0 | 2232 | 0.9290 | 0.7636 |
| 0.6888 | 10.0 | 2480 | 0.9072 | 0.7636 |
| 0.5859 | 11.0 | 2728 | 0.8754 | 0.7818 |
| 0.5859 | 12.0 | 2976 | 0.8657 | 0.8 |
### Framework versions
- Transformers 4.41.2
- Pytorch 2.3.0+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1
|
jrcastropy/mt0-small-query-extraction-v4 | jrcastropy | 2024-07-17T10:27:58Z | 108 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"mt5",
"text2text-generation",
"generated_from_trainer",
"base_model:bigscience/mt0-small",
"base_model:finetune:bigscience/mt0-small",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2024-07-17T10:27:14Z | ---
license: apache-2.0
base_model: bigscience/mt0-small
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: mt0-small-query-extraction-v4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mt0-small-query-extraction-v4
This model is a fine-tuned version of [bigscience/mt0-small](https://huggingface.co/bigscience/mt0-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0016
- Rouge1: 55.6206
- Rouge2: 48.0808
- Rougel: 55.6125
- Rougelsum: 55.6119
- Gen Len: 19.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:------:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 0.0121 | 1.0 | 110364 | 0.0040 | 55.607 | 48.038 | 55.5908 | 55.5903 | 19.0 |
| 0.0045 | 2.0 | 220728 | 0.0021 | 55.6226 | 48.0812 | 55.6123 | 55.6125 | 19.0 |
| 0.003 | 3.0 | 331092 | 0.0016 | 55.6206 | 48.0808 | 55.6125 | 55.6119 | 19.0 |
### Framework versions
- Transformers 4.42.4
- Pytorch 2.3.1+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1
|
mradermacher/Mixtral-seed-pollux-v0.1-GGUF | mradermacher | 2024-07-17T10:22:42Z | 12 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:TeamDelta/Mixtral-seed-pollux-v0.1",
"base_model:quantized:TeamDelta/Mixtral-seed-pollux-v0.1",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2024-07-17T10:19:56Z | ---
base_model: TeamDelta/Mixtral-seed-pollux-v0.1
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/TeamDelta/Mixtral-seed-pollux-v0.1
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similarly sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Mixtral-seed-pollux-v0.1-GGUF/resolve/main/Mixtral-seed-pollux-v0.1.Q2_K.gguf) | Q2_K | 0.3 | |
| [GGUF](https://huggingface.co/mradermacher/Mixtral-seed-pollux-v0.1-GGUF/resolve/main/Mixtral-seed-pollux-v0.1.IQ3_XS.gguf) | IQ3_XS | 0.3 | |
| [GGUF](https://huggingface.co/mradermacher/Mixtral-seed-pollux-v0.1-GGUF/resolve/main/Mixtral-seed-pollux-v0.1.IQ3_S.gguf) | IQ3_S | 0.4 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Mixtral-seed-pollux-v0.1-GGUF/resolve/main/Mixtral-seed-pollux-v0.1.Q3_K_S.gguf) | Q3_K_S | 0.4 | |
| [GGUF](https://huggingface.co/mradermacher/Mixtral-seed-pollux-v0.1-GGUF/resolve/main/Mixtral-seed-pollux-v0.1.IQ3_M.gguf) | IQ3_M | 0.4 | |
| [GGUF](https://huggingface.co/mradermacher/Mixtral-seed-pollux-v0.1-GGUF/resolve/main/Mixtral-seed-pollux-v0.1.Q3_K_M.gguf) | Q3_K_M | 0.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Mixtral-seed-pollux-v0.1-GGUF/resolve/main/Mixtral-seed-pollux-v0.1.Q3_K_L.gguf) | Q3_K_L | 0.4 | |
| [GGUF](https://huggingface.co/mradermacher/Mixtral-seed-pollux-v0.1-GGUF/resolve/main/Mixtral-seed-pollux-v0.1.IQ4_XS.gguf) | IQ4_XS | 0.4 | |
| [GGUF](https://huggingface.co/mradermacher/Mixtral-seed-pollux-v0.1-GGUF/resolve/main/Mixtral-seed-pollux-v0.1.Q4_K_S.gguf) | Q4_K_S | 0.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Mixtral-seed-pollux-v0.1-GGUF/resolve/main/Mixtral-seed-pollux-v0.1.Q4_K_M.gguf) | Q4_K_M | 0.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Mixtral-seed-pollux-v0.1-GGUF/resolve/main/Mixtral-seed-pollux-v0.1.Q5_K_S.gguf) | Q5_K_S | 0.5 | |
| [GGUF](https://huggingface.co/mradermacher/Mixtral-seed-pollux-v0.1-GGUF/resolve/main/Mixtral-seed-pollux-v0.1.Q5_K_M.gguf) | Q5_K_M | 0.5 | |
| [GGUF](https://huggingface.co/mradermacher/Mixtral-seed-pollux-v0.1-GGUF/resolve/main/Mixtral-seed-pollux-v0.1.Q6_K.gguf) | Q6_K | 0.5 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Mixtral-seed-pollux-v0.1-GGUF/resolve/main/Mixtral-seed-pollux-v0.1.Q8_0.gguf) | Q8_0 | 0.7 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Mixtral-seed-pollux-v0.1-GGUF/resolve/main/Mixtral-seed-pollux-v0.1.f16.gguf) | f16 | 1.1 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
Piotrasz/Llama-2-7b-hf-R_ROME-50-pl-2 | Piotrasz | 2024-07-17T10:17:50Z | 6 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2024-07-17T10:14:49Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a π€ transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
morturr/flan-t5-base-headlines-text-classification-2024-07-17-seed-42 | morturr | 2024-07-17T10:13:20Z | 50 | 0 | transformers | [
"transformers",
"safetensors",
"t5",
"text-classification",
"generated_from_trainer",
"base_model:google/flan-t5-base",
"base_model:finetune:google/flan-t5-base",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-classification | 2024-07-17T09:48:52Z | ---
license: apache-2.0
base_model: google/flan-t5-base
tags:
- generated_from_trainer
model-index:
- name: flan-t5-base-headlines-text-classification-2024-07-17-seed-42
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# flan-t5-base-headlines-text-classification-2024-07-17-seed-42
This model is a fine-tuned version of [google/flan-t5-base](https://huggingface.co/google/flan-t5-base) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
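Pending the authors' own usage snippet, a minimal sketch (an assumption, not the authors' code) using the text-classification pipeline; the input headline is invented, and this assumes the checkpoint was saved with a sequence-classification head and label mapping.

```python
from transformers import pipeline

# Hypothetical example input; the label set depends on the saved config.
classifier = pipeline(
    "text-classification",
    model="morturr/flan-t5-base-headlines-text-classification-2024-07-17-seed-42",
)
print(classifier("Scientists discover water on distant exoplanet"))
```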
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.42.3
- Pytorch 2.3.1+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1
|
Aharneish/hindi-llama | Aharneish | 2024-07-17T10:10:57Z | 34 | 0 | peft | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:meta-llama/Llama-2-7b-hf",
"base_model:adapter:meta-llama/Llama-2-7b-hf",
"license:llama2",
"region:us"
]
| null | 2024-07-05T05:49:11Z | ---
license: llama2
library_name: peft
tags:
- generated_from_trainer
base_model: meta-llama/Llama-2-7b-hf
model-index:
- name: hindi-llama
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hindi-llama
This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1632
## Model description
More information needed
## Intended uses & limitations
More information needed
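Since this repository holds a PEFT adapter for Llama-2-7b-hf, a minimal loading sketch (not provided by the authors) could look as follows; access to the gated base model is required, and the Hindi prompt is an arbitrary example.

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the gated Llama-2 base model, then attach this repository's PEFT adapter.
base = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf", torch_dtype="auto", device_map="auto"
)
model = PeftModel.from_pretrained(base, "Aharneish/hindi-llama")
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")

inputs = tokenizer("नमस्ते, आप कैसे हैं?", return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```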
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:-----:|:---------------:|
| 1.5858 | 0.0188 | 1000 | 1.4610 |
| 1.3662 | 0.0375 | 2000 | 1.3469 |
| 1.3174 | 0.0563 | 3000 | 1.3143 |
| 1.3003 | 0.0750 | 4000 | 1.2895 |
| 1.2931 | 0.0938 | 5000 | 1.2762 |
| 1.2786 | 0.1125 | 6000 | 1.2649 |
| 1.2541 | 0.1313 | 7000 | 1.2556 |
| 1.2594 | 0.1500 | 8000 | 1.2481 |
| 1.2523 | 0.1688 | 9000 | 1.2415 |
| 1.244 | 0.1876 | 10000 | 1.2348 |
| 1.2274 | 0.2063 | 11000 | 1.2309 |
| 1.2167 | 0.2251 | 12000 | 1.2257 |
| 1.2359 | 0.2438 | 13000 | 1.2225 |
| 1.2156 | 0.2626 | 14000 | 1.2191 |
| 1.204 | 0.2813 | 15000 | 1.2146 |
| 1.2203 | 0.3001 | 16000 | 1.2109 |
| 1.2016 | 0.3188 | 17000 | 1.2094 |
| 1.2117 | 0.3376 | 18000 | 1.2057 |
| 1.2183 | 0.3563 | 19000 | 1.2038 |
| 1.2108 | 0.3751 | 20000 | 1.2005 |
| 1.2153 | 0.3939 | 21000 | 1.1981 |
| 1.189 | 0.4126 | 22000 | 1.1968 |
| 1.1857 | 0.4314 | 23000 | 1.1947 |
| 1.1688 | 0.4501 | 24000 | 1.1914 |
| 1.2028 | 0.4689 | 25000 | 1.1907 |
| 1.1916 | 0.4876 | 26000 | 1.1893 |
| 1.1797 | 0.5064 | 27000 | 1.1873 |
| 1.1897 | 0.5251 | 28000 | 1.1848 |
| 1.1817 | 0.5439 | 29000 | 1.1837 |
| 1.1837 | 0.5627 | 30000 | 1.1826 |
| 1.1889 | 0.5814 | 31000 | 1.1808 |
| 1.1754 | 0.6002 | 32000 | 1.1798 |
| 1.1868 | 0.6189 | 33000 | 1.1790 |
| 1.1792 | 0.6377 | 34000 | 1.1780 |
| 1.1772 | 0.6564 | 35000 | 1.1766 |
| 1.1763 | 0.6752 | 36000 | 1.1755 |
| 1.1719 | 0.6939 | 37000 | 1.1746 |
| 1.1804 | 0.7127 | 38000 | 1.1724 |
| 1.1763 | 0.7314 | 39000 | 1.1717 |
| 1.1715 | 0.7502 | 40000 | 1.1717 |
| 1.1732 | 0.7690 | 41000 | 1.1701 |
| 1.1808 | 0.7877 | 42000 | 1.1692 |
| 1.1713 | 0.8065 | 43000 | 1.1688 |
| 1.175 | 0.8252 | 44000 | 1.1678 |
| 1.1604 | 0.8440 | 45000 | 1.1668 |
| 1.1619 | 0.8627 | 46000 | 1.1658 |
| 1.1686 | 0.8815 | 47000 | 1.1650 |
| 1.1541 | 0.9002 | 48000 | 1.1647 |
| 1.1776 | 0.9190 | 49000 | 1.1641 |
| 1.1675 | 0.9378 | 50000 | 1.1640 |
| 1.1727 | 0.9565 | 51000 | 1.1636 |
| 1.1566 | 0.9753 | 52000 | 1.1633 |
| 1.1657 | 0.9940 | 53000 | 1.1632 |
### Framework versions
- PEFT 0.11.1
- Transformers 4.41.2
- Pytorch 2.1.2
- Datasets 2.19.2
- Tokenizers 0.19.1 |
vwxyzjn/online_dpo_llmjudge | vwxyzjn | 2024-07-17T10:09:59Z | 109 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"gpt_neox",
"text-generation",
"generated_from_trainer",
"conversational",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2024-07-16T19:52:34Z | ---
tags:
- generated_from_trainer
model-index:
- name: online_dpo_llmjudge
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/costa-huang/huggingface/runs/e52d1ezu)
# online_dpo_llmjudge
This model was trained from scratch on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
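No usage snippet is provided; the following is a minimal, hypothetical generation sketch. The prompt format this DPO-trained checkpoint expects is not documented here, so the input is an arbitrary placeholder.

```python
from transformers import pipeline

# Hypothetical usage; the card does not document the expected prompt format.
generator = pipeline("text-generation", model="vwxyzjn/online_dpo_llmjudge")
print(generator("The quick brown fox", max_new_tokens=64)[0]["generated_text"])
```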
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-06
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 4
- total_train_batch_size: 512
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Framework versions
- Transformers 4.42.3
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
|
morturr/flan-t5-base-one_liners-text-classification-2024-07-17-seed-42 | morturr | 2024-07-17T10:07:32Z | 48 | 0 | transformers | [
"transformers",
"safetensors",
"t5",
"text-classification",
"generated_from_trainer",
"base_model:google/flan-t5-base",
"base_model:finetune:google/flan-t5-base",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-classification | 2024-07-17T09:48:43Z | ---
license: apache-2.0
base_model: google/flan-t5-base
tags:
- generated_from_trainer
model-index:
- name: flan-t5-base-one_liners-text-classification-2024-07-17-seed-42
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# flan-t5-base-one_liners-text-classification-2024-07-17-seed-42
This model is a fine-tuned version of [google/flan-t5-base](https://huggingface.co/google/flan-t5-base) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.42.3
- Pytorch 2.3.1+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1
|
tekloon/autotrain-agent-experience | tekloon | 2024-07-17T10:06:53Z | 6 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"llama",
"text-generation",
"autotrain",
"text-generation-inference",
"peft",
"conversational",
"base_model:meta-llama/Meta-Llama-3-8B-Instruct",
"base_model:finetune:meta-llama/Meta-Llama-3-8B-Instruct",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-generation | 2024-07-17T01:52:55Z | ---
base_model: meta-llama/Meta-Llama-3-8B-Instruct
library_name: transformers
license: other
tags:
- autotrain
- text-generation-inference
- text-generation
- peft
widget:
- messages:
- role: user
content: What is your favorite condiment?
---
# Model Trained Using AutoTrain
This model was trained using AutoTrain. For more information, please visit [AutoTrain](https://hf.co/docs/autotrain).
# Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_path = "PATH_TO_THIS_REPO"
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
model_path,
device_map="auto",
torch_dtype='auto'
).eval()
# Prompt content: "hi"
messages = [
{"role": "user", "content": "hi"}
]
input_ids = tokenizer.apply_chat_template(conversation=messages, tokenize=True, add_generation_prompt=True, return_tensors='pt')
output_ids = model.generate(input_ids.to('cuda'))
response = tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True)
# Model response: "Hello! How can I assist you today?"
print(response)
``` |
zhoukz/qwen1.5-14b-a2.7b-chat-4bit | zhoukz | 2024-07-17T10:04:43Z | 74 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2_moe",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
]
| text-generation | 2024-07-17T09:48:06Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
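Pending details from the authors, here is a minimal sketch inferred from the repository tags (Qwen2-MoE, 4-bit, bitsandbytes): loading should pick up the saved quantization config, which requires `bitsandbytes` and a CUDA GPU; the prompt is an arbitrary example.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "zhoukz/qwen1.5-14b-a2.7b-chat-4bit"
tokenizer = AutoTokenizer.from_pretrained(model_id)
# The bitsandbytes 4-bit quantization config stored with the checkpoint is
# applied automatically on load (assumption based on the repo tags).
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [{"role": "user", "content": "Give me a short introduction to large language models."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output_ids = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True))
```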
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Frinkles/RiPPL-Alpha | Frinkles | 2024-07-17T09:57:22Z | 4 | 0 | transformers | [
"transformers",
"safetensors",
"phi3",
"text-generation",
"conversational",
"custom_code",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2024-07-08T05:36:04Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
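Pending details from the authors, a minimal hypothetical sketch based on the repository tags (a Phi-3-style model that ships custom code, hence `trust_remote_code=True`); the message content is an arbitrary example.

```python
from transformers import pipeline

# Hypothetical usage sketch; requires a recent transformers with chat-style pipelines.
pipe = pipeline(
    "text-generation",
    model="Frinkles/RiPPL-Alpha",
    trust_remote_code=True,
    device_map="auto",
)
messages = [{"role": "user", "content": "Hello! Briefly introduce yourself."}]
print(pipe(messages, max_new_tokens=64)[0]["generated_text"])
```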
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
apwic/summarization-base-0 | apwic | 2024-07-17T09:51:49Z | 5 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"t5",
"generated_from_trainer",
"base_model:LazarusNLP/IndoNanoT5-base",
"base_model:finetune:LazarusNLP/IndoNanoT5-base",
"license:apache-2.0",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| null | 2024-06-30T18:04:33Z | ---
license: apache-2.0
base_model: LazarusNLP/IndoNanoT5-base
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: summarization-base-0
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# summarization-base-0
This model is a fine-tuned version of [LazarusNLP/IndoNanoT5-base](https://huggingface.co/LazarusNLP/IndoNanoT5-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7149
- Rouge1: 0.4331
- Rouge2: 0.0
- Rougel: 0.4333
- Rougelsum: 0.4319
- Gen Len: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
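Pending details from the authors, a minimal seq2seq sketch (an assumption based on the IndoNanoT5 base model); note that the ROUGE-2 of 0.0 and generation length of 1.0 reported above suggest generations may be degenerate, so treat outputs with care.

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

# Hypothetical usage; the input text is a placeholder Indonesian passage.
model_id = "apwic/summarization-base-0"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

text = "Teks berita berbahasa Indonesia yang ingin diringkas ..."
inputs = tokenizer(text, return_tensors="pt", truncation=True)
summary_ids = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```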
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| 1.5249 | 1.0 | 3566 | 0.8858 | 0.417 | 0.0 | 0.4187 | 0.414 | 1.0 |
| 0.8202 | 2.0 | 7132 | 0.7492 | 0.4125 | 0.0 | 0.412 | 0.4102 | 1.0 |
| 0.6232 | 3.0 | 10698 | 0.6953 | 0.4015 | 0.0 | 0.3971 | 0.3965 | 1.0 |
| 0.4728 | 4.0 | 14264 | 0.6717 | 0.4319 | 0.0 | 0.4307 | 0.4292 | 1.0 |
| 0.3238 | 5.0 | 17830 | 0.7149 | 0.4331 | 0.0 | 0.4333 | 0.4319 | 1.0 |
### Framework versions
- Transformers 4.40.2
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
|
pranavSIT/F2-TB-activities-20240717_093325 | pranavSIT | 2024-07-17T09:43:10Z | 105 | 0 | transformers | [
"transformers",
"safetensors",
"florence2",
"text-generation",
"custom_code",
"arxiv:1910.09700",
"autotrain_compatible",
"region:us"
]
| text-generation | 2024-07-17T09:33:25Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
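Pending details from the authors, a minimal sketch assuming this fine-tune keeps the base Florence-2 interface (an `AutoProcessor` plus task-prompt tokens such as `<CAPTION>`); the image URL and task prompt are placeholders, not documented behavior of this checkpoint.

```python
import requests
from PIL import Image
from transformers import AutoModelForCausalLM, AutoProcessor

model_id = "pranavSIT/F2-TB-activities-20240717_093325"
processor = AutoProcessor.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True)

# Placeholder image; replace with a real input.
image = Image.open(requests.get("https://example.com/sample.jpg", stream=True).raw)
inputs = processor(text="<CAPTION>", images=image, return_tensors="pt")
generated_ids = model.generate(
    input_ids=inputs["input_ids"], pixel_values=inputs["pixel_values"], max_new_tokens=128
)
print(processor.batch_decode(generated_ids, skip_special_tokens=True)[0])
```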
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
ABHIiiii1/LaBSE-Fine-Tuned-EN-KHA | ABHIiiii1 | 2024-07-17T09:42:00Z | 6 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"bert",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:23999",
"loss:MultipleNegativesRankingLoss",
"arxiv:1908.10084",
"arxiv:1705.00652",
"base_model:sentence-transformers/LaBSE",
"base_model:finetune:sentence-transformers/LaBSE",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
]
| sentence-similarity | 2024-07-17T09:30:23Z | ---
base_model: sentence-transformers/LaBSE
datasets: []
language: []
library_name: sentence-transformers
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:23999
- loss:MultipleNegativesRankingLoss
widget:
- source_sentence: Who led thee through that great and terrible wilderness , wherein
were fiery serpents , and scorpions , and drought , where there was no water ;
who brought thee forth water out of the rock of flint ;
sentences:
- bad u ai ïa ki ha u Aaron bad ki khun shynrang jong u .
- U la ïalam ïa phi lyngba ka ri shyiap kaba ïar bad kaba ishyrkhei eh , ha kaba
la don ki bseiñ kiba don bih bad ki ñianglartham . Ha kata ka ri kaba tyrkhong
bad ka bym don um , u la pynmih um na u mawsiang na ka bynta jong phi .
- Ki paidbah na ki jait ba na shatei ki phah khot ïa u , bad nangta ma ki baroh
ki ïaleit lang sha u Rehoboam bad ki ong ha u ,
- source_sentence: And , behold , Boaz came from Beth-lehem , and said unto the reapers
, The Lord be with you . And they answered him , The Lord bless thee .
sentences:
- Ko ki briew bymïaineh , to wan noh ; phi long ki jong nga . Ngan shim iwei na
phi na kawei kawei ka shnong bad ar ngut na kawei kawei ka kur , bad ngan wallam
pat ïa phi sha u lum Seïon .
- Hadien katto katne por u Boas da lade hi u wan poi na Bethlehem bad u ai khublei
ïa ki nongtrei . To U Trai un long ryngkat bad phi ! u ong . U Trai u kyrkhu
ïa phi ! ki jubab .
- U Trai u la ong ha u , Khreh bad leit sha " Ka Lynti Ba-beit ," bad ha ka ïing
jong u Judas kylli ïa u briew na Tarsos uba kyrteng u Saul .
- source_sentence: Jehovah used the prehuman Jesus as his "master worker" in creating
all other things in heaven and on earth .
sentences:
- Shuwa ba un wan long briew U Jehobah u la pyndonkam ïa u Jisu kum u "rangbah nongtrei"
ha kaba thaw ïa kiei kiei baroh kiba don ha bneng bad ha khyndew .
- Shisien la don u briew uba la leit ban bet symbai . Katba u dang bet ïa u symbai
, katto katne na u , ki la hap ha shi lynter ka lynti ïaid kjat , ha kaba ki la
shah ïuh , bad ki sim ki la bam lut .
- Ngan ïathuh ïa ka shatei ban shah ïa ki ban leit bad ïa ka shathie ban ym bat
noh ïa ki . Ai ba ki briew jong nga ki wan phai na ki ri bajngai , na man la ki
bynta baroh jong ka pyrthei .
- source_sentence: 'The like figure whereunto even baptism doth also now save us (
not the putting away of the filth of the flesh , but the answer of a good conscience
toward God , ) by the resurrection of Jesus Christ :'
sentences:
- kaba long ka dak kaba kdew sha ka jingpynbaptis , kaba pyllait im ïa phi mynta
. Kam dei ka jingsait noh ïa ka jakhlia na ka met , hynrei ka jingkular ba la
pynlong sha U Blei na ka jingïatiplem babha . Ka pynim ïa phi da ka jingmihpat
jong U Jisu Khrist ,
- Ki briew kiba sniew kin ïoh ïa kaei kaba ki dei ban ïoh . Ki briew kiba bha kin
ïoh bainong na ka bynta ki kam jong ki .
- Nangta nga la ïohi ïa ka bneng bathymmai bad ïa ka pyrthei bathymmai . Ka bneng
banyngkong bad ka pyrthei banyngkong ki la jah noh , bad ka duriaw kam don shuh
.
- source_sentence: On that day they read in the book of Moses in the audience of the
people ; and therein was found written , that the Ammonite and the Moabite should
not come into the congregation of God for ever ;
sentences:
- U Elisha u la ïap bad la tep ïa u . Man la ka snem ki kynhun jong ki Moab ki ju
wan tur thma ïa ka ri Israel .
- Katba dang pule jam ïa ka Hukum u Moses ha u paidbah , ki poi ha ka bynta kaba
ong ba ym dei ban shah ïa u nong Amon ne u nong Moab ban ïasnohlang bad ki briew
jong U Blei .
- U angel u la jubab , U Mynsiem Bakhuid un sa wan ha pha , bad ka bor jong U Blei
kan shong halor jong pha . Na kane ka daw , ïa i khunlung bakhuid yn khot U Khun
U Blei .
---
# SentenceTransformer based on sentence-transformers/LaBSE
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [sentence-transformers/LaBSE](https://huggingface.co/sentence-transformers/LaBSE). It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [sentence-transformers/LaBSE](https://huggingface.co/sentence-transformers/LaBSE) <!-- at revision e34fab64a3011d2176c99545a93d5cbddc9a91b7 -->
- **Maximum Sequence Length:** 256 tokens
- **Output Dimensionality:** 768 dimensions
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Dense({'in_features': 768, 'out_features': 768, 'bias': True, 'activation_function': 'torch.nn.modules.activation.Tanh'})
(3): Normalize()
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("ABHIiiii1/LaBSE-Fine-Tuned-EN-KHA")
# Run inference
sentences = [
'On that day they read in the book of Moses in the audience of the people ; and therein was found written , that the Ammonite and the Moabite should not come into the congregation of God for ever ;',
'Katba dang pule jam ïa ka Hukum u Moses ha u paidbah , ki poi ha ka bynta kaba ong ba ym dei ban shah ïa u nong Amon ne u nong Moab ban ïasnohlang bad ki briew jong U Blei .',
'U Elisha u la ïap bad la tep ïa u . Man la ka snem ki kynhun jong ki Moab ki ju wan tur thma ïa ka ri Israel .',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### Unnamed Dataset
* Size: 23,999 training samples
* Columns: <code>sentence_0</code> and <code>sentence_1</code>
* Approximate statistics based on the first 1000 samples:
| | sentence_0 | sentence_1 |
|:--------|:----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 6 tokens</li><li>mean: 34.89 tokens</li><li>max: 87 tokens</li></ul> | <ul><li>min: 8 tokens</li><li>mean: 51.51 tokens</li><li>max: 127 tokens</li></ul> |
* Samples:
| sentence_0 | sentence_1 |
|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| <code>And Moses went out from Pharaoh , and entreated the Lord .</code> | <code>U Moses u mihnoh na u Pharaoh , bad u kyrpad ïa U Trai ,</code> |
| <code>In the ninth year of Hoshea the king of Assyria took Samaria , and carried Israel away into Assyria , and placed them in Halah and in Habor by the river of Gozan , and in the cities of the Medes .</code> | <code>kaba long ka snem kaba khyndai jong ka jingsynshar u Hoshea , u patsha ka Assyria u kurup ïa ka Samaria , u rah ïa ki Israel sha Assyria kum ki koidi , bad pynsah katto katne ngut na ki ha ka nongbah Halah , katto katne pat hajan ka wah Habor ha ka distrik Gosan , bad katto katne ha ki nongbah jong ka Media .</code> |
| <code>And the king said unto Cushi , Is the young man Absalom safe ? And Cushi answered , The enemies of my lord the king , and all that rise against thee to do thee hurt , be as that young man is .</code> | <code>Hato u samla Absalom u dang im ? u syiem u kylli . U mraw u jubab , Ko Kynrad , nga sngew ba kaei kaba la jia ha u kan jin da la jia ha baroh ki nongshun jong ngi , bad ha baroh kiba ïaleh pyrshah ïa phi .</code> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
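For reference, constructing this loss with the parameters above is a one-liner in sentence-transformers; the sketch below (not from the authors) shows the setup, where each (English, Khasi) pair acts as a positive and the other in-batch pairs serve as negatives.

```python
from sentence_transformers import SentenceTransformer, losses

model = SentenceTransformer("sentence-transformers/LaBSE")
# scale=20.0 and cosine similarity match the configuration listed above.
loss = losses.MultipleNegativesRankingLoss(model, scale=20.0)
```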
### Training Hyperparameters
#### Non-Default Hyperparameters
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `multi_dataset_batch_sampler`: round_robin
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: no
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `learning_rate`: 5e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1
- `num_train_epochs`: 3
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.0
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: False
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: round_robin
</details>
### Training Logs
| Epoch | Step | Training Loss |
|:------:|:----:|:-------------:|
| 0.3333 | 500 | 0.542 |
| 0.6667 | 1000 | 0.135 |
| 1.0 | 1500 | 0.0926 |
| 1.3333 | 2000 | 0.0535 |
| 1.6667 | 2500 | 0.0226 |
| 2.0 | 3000 | 0.018 |
| 2.3333 | 3500 | 0.0124 |
| 2.6667 | 4000 | 0.0057 |
| 3.0 | 4500 | 0.0053 |
### Framework Versions
- Python: 3.10.13
- Sentence Transformers: 3.0.1
- Transformers: 4.42.3
- PyTorch: 2.1.2
- Accelerate: 0.32.1
- Datasets: 2.20.0
- Tokenizers: 0.19.1
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
--> |
vrclc/W2V2-BERT-Malayalam-studio | vrclc | 2024-07-17T09:39:27Z | 8 | 1 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"wav2vec2-bert",
"automatic-speech-recognition",
"generated_from_trainer",
"base_model:facebook/w2v-bert-2.0",
"base_model:finetune:facebook/w2v-bert-2.0",
"license:mit",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2024-07-17T04:27:27Z | ---
license: mit
base_model: facebook/w2v-bert-2.0
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: w2v-bert-studio
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# w2v-bert-studio
This model is a fine-tuned version of [facebook/w2v-bert-2.0](https://huggingface.co/facebook/w2v-bert-2.0) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1587
- Wer: 0.1157
## Model description
More information needed
## Intended uses & limitations
More information needed
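Pending details from the authors, a minimal transcription sketch (not from the authors); `audio.wav` is a placeholder path to a Malayalam speech recording.

```python
from transformers import pipeline

# Wav2Vec2-BERT CTC checkpoints are supported by the ASR pipeline.
asr = pipeline("automatic-speech-recognition", model="vrclc/W2V2-BERT-Malayalam-studio")
print(asr("audio.wav")["text"])
```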
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:------:|:-----:|:---------------:|:------:|
| 1.0335 | 0.4932 | 600 | 0.3654 | 0.4387 |
| 0.1531 | 0.9864 | 1200 | 0.2373 | 0.3332 |
| 0.1074 | 1.4797 | 1800 | 0.2069 | 0.2953 |
| 0.0928 | 1.9729 | 2400 | 0.2146 | 0.2814 |
| 0.0734 | 2.4661 | 3000 | 0.1947 | 0.2433 |
| 0.0678 | 2.9593 | 3600 | 0.1938 | 0.2406 |
| 0.0522 | 3.4525 | 4200 | 0.1566 | 0.2053 |
| 0.0493 | 3.9457 | 4800 | 0.1649 | 0.1988 |
| 0.0366 | 4.4390 | 5400 | 0.1417 | 0.1834 |
| 0.0372 | 4.9322 | 6000 | 0.1542 | 0.1749 |
| 0.028 | 5.4254 | 6600 | 0.1476 | 0.1620 |
| 0.0263 | 5.9186 | 7200 | 0.1388 | 0.1622 |
| 0.0195 | 6.4118 | 7800 | 0.1384 | 0.1495 |
| 0.0185 | 6.9051 | 8400 | 0.1351 | 0.1383 |
| 0.0136 | 7.3983 | 9000 | 0.1404 | 0.1344 |
| 0.0119 | 7.8915 | 9600 | 0.1253 | 0.1276 |
| 0.0087 | 8.3847 | 10200 | 0.1443 | 0.1284 |
| 0.0066 | 8.8779 | 10800 | 0.1475 | 0.1252 |
| 0.0049 | 9.3711 | 11400 | 0.1577 | 0.1227 |
| 0.0038 | 9.8644 | 12000 | 0.1587 | 0.1157 |
### Framework versions
- Transformers 4.42.2
- Pytorch 2.1.1+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1
|
quarkss/indobert-large-stsb | quarkss | 2024-07-17T09:38:18Z | 35 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"bert",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:5749",
"loss:CosineSimilarityLoss",
"dataset:quarkss/stsb-indo-mt",
"arxiv:1908.10084",
"base_model:indobenchmark/indobert-large-p2",
"base_model:finetune:indobenchmark/indobert-large-p2",
"model-index",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
]
| sentence-similarity | 2024-07-16T09:14:25Z | ---
base_model: indobenchmark/indobert-large-p2
datasets:
- quarkss/stsb-indo-mt
library_name: sentence-transformers
metrics:
- pearson_cosine
- spearman_cosine
- pearson_manhattan
- spearman_manhattan
- pearson_euclidean
- spearman_euclidean
- pearson_dot
- spearman_dot
- pearson_max
- spearman_max
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:5749
- loss:CosineSimilarityLoss
widget:
- source_sentence: Dua ekor anjing berenang di kolam renang.
sentences:
- Anjing-anjing sedang berenang di kolam renang.
- Seekor binatang sedang berjalan di atas tanah.
- Seorang pria sedang menyeka pinggiran mangkuk.
- source_sentence: Seorang anak perempuan sedang mengiris mentega menjadi dua bagian.
sentences:
- Seorang wanita sedang mengiris tahu.
- Dua orang berkelahi.
- Seorang pria sedang menari.
- source_sentence: Seorang gadis sedang makan kue mangkuk.
sentences:
- Seorang pria sedang mengiris bawang putih dengan alat pengiris mandolin.
- Seorang pria sedang memotong dan memotong bawang.
- Seorang wanita sedang makan kue mangkuk.
- source_sentence: Sebuah helikopter mendarat di landasan helikopter.
sentences:
- Seorang pria sedang mengiris mentimun.
- Seorang pria sedang memotong batang pohon dengan kapak.
- Sebuah helikopter mendarat.
- source_sentence: Seorang pria sedang berjalan dengan seekor kuda.
sentences:
- Seorang pria sedang menuntun seekor kuda dengan tali kekang.
- Seorang pria sedang menembakkan pistol.
- Seorang wanita sedang memetik tomat.
model-index:
- name: SentenceTransformer based on indobenchmark/indobert-large-p2
results:
- task:
type: semantic-similarity
name: Semantic Similarity
dataset:
name: Unknown
type: unknown
metrics:
- type: pearson_cosine
value: 0.8691840566814281
name: Pearson Cosine
- type: spearman_cosine
value: 0.8676618157111291
name: Spearman Cosine
- type: pearson_manhattan
value: 0.8591936899214765
name: Pearson Manhattan
- type: spearman_manhattan
value: 0.8625729388794413
name: Spearman Manhattan
- type: pearson_euclidean
value: 0.8599101625523397
name: Pearson Euclidean
- type: spearman_euclidean
value: 0.8632992102966184
name: Spearman Euclidean
- type: pearson_dot
value: 0.8440663965451926
name: Pearson Dot
- type: spearman_dot
value: 0.8392116432595296
name: Spearman Dot
- type: pearson_max
value: 0.8691840566814281
name: Pearson Max
- type: spearman_max
value: 0.8676618157111291
name: Spearman Max
- type: pearson_cosine
value: 0.8401688802461491
name: Pearson Cosine
- type: spearman_cosine
value: 0.8365597846163649
name: Spearman Cosine
- type: pearson_manhattan
value: 0.8276067064758832
name: Pearson Manhattan
- type: spearman_manhattan
value: 0.8315689286193226
name: Spearman Manhattan
- type: pearson_euclidean
value: 0.8277930159560367
name: Pearson Euclidean
- type: spearman_euclidean
value: 0.831557090168861
name: Spearman Euclidean
- type: pearson_dot
value: 0.8170329546065831
name: Pearson Dot
- type: spearman_dot
value: 0.8083098402255348
name: Spearman Dot
- type: pearson_max
value: 0.8401688802461491
name: Pearson Max
- type: spearman_max
value: 0.8365597846163649
name: Spearman Max
---
# SentenceTransformer based on indobenchmark/indobert-large-p2
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [indobenchmark/indobert-large-p2](https://huggingface.co/indobenchmark/indobert-large-p2). It maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## STSB Test
| Model | Spearman Correlation |
|:----------------------------------------|-----------------------:|
| quarkss/indobert-large-stsb | 0.8366 |
| quarkss/indobert-base-stsb | 0.8123 |
| sentence-transformers/all-MiniLM-L6-v2 | 0.5952 |
| indobenchmark/indobert-large-p2 | 0.5673 |
| sentence-transformers/all-mpnet-base-v2 | 0.5531 |
| sentence-transformers/stsb-bert-base | 0.5349 |
| indobenchmark/indobert-base-p2 | 0.5309 |
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [indobenchmark/indobert-large-p2](https://huggingface.co/indobenchmark/indobert-large-p2) <!-- at revision 4b280c3bfcc1ed2d6b4589be5c876076b7d73568 -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 1024 dimensions
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("quarkss/indobert-large-stsb")
# Run inference
sentences = [
'Seorang pria sedang berjalan dengan seekor kuda.',
'Seorang pria sedang menuntun seekor kuda dengan tali kekang.',
'Seorang pria sedang menembakkan pistol.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 1024]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Semantic Similarity
* Evaluated with [<code>EmbeddingSimilarityEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.EmbeddingSimilarityEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| pearson_cosine | 0.8692 |
| **spearman_cosine** | **0.8677** |
| pearson_manhattan | 0.8592 |
| spearman_manhattan | 0.8626 |
| pearson_euclidean | 0.8599 |
| spearman_euclidean | 0.8633 |
| pearson_dot | 0.8441 |
| spearman_dot | 0.8392 |
| pearson_max | 0.8692 |
| spearman_max | 0.8677 |
#### Semantic Similarity
* Evaluated with [<code>EmbeddingSimilarityEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.EmbeddingSimilarityEvaluator)
| Metric | Value |
|:-------------------|:-----------|
| pearson_cosine | 0.8402 |
| spearman_cosine | 0.8366 |
| pearson_manhattan | 0.8276 |
| spearman_manhattan | 0.8316 |
| pearson_euclidean | 0.8278 |
| spearman_euclidean | 0.8316 |
| pearson_dot | 0.817 |
| spearman_dot | 0.8083 |
| pearson_max | 0.8402 |
| **spearman_max** | **0.8366** |
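As a concrete illustration of the evaluation above, here is a minimal sketch (not from the authors) of running `EmbeddingSimilarityEvaluator`; the three pairs are taken from the training samples shown under Training Details below and merely stand in for the STSB test split.

```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import EmbeddingSimilarityEvaluator

model = SentenceTransformer("quarkss/indobert-large-stsb")
evaluator = EmbeddingSimilarityEvaluator(
    sentences1=[
        "Sebuah pesawat sedang lepas landas.",
        "Seorang pria sedang memainkan seruling besar.",
        "Seorang pria sedang mengoleskan keju parut di atas pizza.",
    ],
    sentences2=[
        "Sebuah pesawat terbang sedang lepas landas.",
        "Seorang pria sedang memainkan seruling.",
        "Seorang pria sedang mengoleskan keju parut di atas pizza yang belum matang.",
    ],
    scores=[1.0, 0.76, 0.76],
    name="stsb-indo-sample",
)
print(evaluator(model))
```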
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### Unnamed Dataset
* Size: 5,749 training samples
* Columns: <code>sentence1</code>, <code>sentence2</code>, and <code>score</code>
* Approximate statistics based on the first 1000 samples:
| | sentence1 | sentence2 | score |
|:--------|:---------------------------------------------------------------------------------|:---------------------------------------------------------------------------------|:---------------------------------------------------------------|
| type | string | string | float |
| details | <ul><li>min: 6 tokens</li><li>mean: 9.65 tokens</li><li>max: 25 tokens</li></ul> | <ul><li>min: 6 tokens</li><li>mean: 9.59 tokens</li><li>max: 24 tokens</li></ul> | <ul><li>min: 0.0</li><li>mean: 0.54</li><li>max: 1.0</li></ul> |
* Samples:
| sentence1 | sentence2 | score |
|:-----------------------------------------------------------------------|:-----------------------------------------------------------------------------------------|:------------------|
| <code>Sebuah pesawat sedang lepas landas.</code> | <code>Sebuah pesawat terbang sedang lepas landas.</code> | <code>1.0</code> |
| <code>Seorang pria sedang memainkan seruling besar.</code> | <code>Seorang pria sedang memainkan seruling.</code> | <code>0.76</code> |
| <code>Seorang pria sedang mengoleskan keju parut di atas pizza.</code> | <code>Seorang pria sedang mengoleskan keju parut di atas pizza yang belum matang.</code> | <code>0.76</code> |
* Loss: [<code>CosineSimilarityLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#cosinesimilarityloss) with these parameters:
```json
{
"loss_fct": "torch.nn.modules.loss.MSELoss"
}
```
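In other words, the model is trained to regress the cosine similarity of each sentence pair's embeddings onto the gold score with an MSE objective. A minimal fine-tuning sketch with this loss — the base checkpoint and toy dataset are illustrative stand-ins, not the exact training script:
```python
from datasets import Dataset
from sentence_transformers import SentenceTransformer, SentenceTransformerTrainer
from sentence_transformers.losses import CosineSimilarityLoss

model = SentenceTransformer("indobenchmark/indobert-large-p2")

# Toy stand-in for the 5,749 (sentence1, sentence2, score) training pairs
train_dataset = Dataset.from_dict({
    "sentence1": ["Sebuah pesawat sedang lepas landas."],
    "sentence2": ["Sebuah pesawat terbang sedang lepas landas."],
    "score": [1.0],
})

loss = CosineSimilarityLoss(model)  # wraps torch.nn.MSELoss over cosine scores
trainer = SentenceTransformerTrainer(model=model, train_dataset=train_dataset, loss=loss)
trainer.train()
```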
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: steps
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `learning_rate`: 2e-05
- `weight_decay`: 0.01
- `num_train_epochs`: 5
- `warmup_ratio`: 0.1
- `fp16`: True
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `learning_rate`: 2e-05
- `weight_decay`: 0.01
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 5
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: True
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: False
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: proportional
</details>
### Training Logs
| Epoch | Step | Training Loss | spearman_cosine | spearman_max |
|:------:|:----:|:-------------:|:---------------:|:------------:|
| 0.2778 | 100 | 0.0867 | - | - |
| 0.5556 | 200 | 0.0351 | - | - |
| 0.8333 | 300 | 0.0303 | - | - |
| 1.1111 | 400 | 0.0202 | - | - |
| 1.3889 | 500 | 0.0154 | 0.8612 | - |
| 1.6667 | 600 | 0.0136 | - | - |
| 1.9444 | 700 | 0.0145 | - | - |
| 2.2222 | 800 | 0.0082 | - | - |
| 2.5 | 900 | 0.0072 | - | - |
| 2.7778 | 1000 | 0.0068 | 0.8660 | - |
| 3.0556 | 1100 | 0.0065 | - | - |
| 3.3333 | 1200 | 0.0044 | - | - |
| 3.6111 | 1300 | 0.0044 | - | - |
| 3.8889 | 1400 | 0.0045 | - | - |
| 4.1667 | 1500 | 0.0038 | 0.8677 | - |
| 4.4444 | 1600 | 0.0038 | - | - |
| 4.7222 | 1700 | 0.0035 | - | - |
| 5.0 | 1800 | 0.0034 | - | 0.8366 |
### Framework Versions
- Python: 3.10.13
- Sentence Transformers: 3.0.1
- Transformers: 4.42.4
- PyTorch: 2.0.1+cu117
- Accelerate: 0.32.1
- Datasets: 2.17.0
- Tokenizers: 0.19.1
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
--> |
quarkss/indobert-base-stsb | quarkss | 2024-07-17T09:37:35Z | 11 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"bert",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:5749",
"loss:CosineSimilarityLoss",
"dataset:quarkss/stsb-indo-mt",
"arxiv:1908.10084",
"base_model:indobenchmark/indobert-base-p2",
"base_model:finetune:indobenchmark/indobert-base-p2",
"model-index",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
]
| sentence-similarity | 2024-07-16T08:47:08Z | ---
base_model: indobenchmark/indobert-base-p2
datasets:
- quarkss/stsb-indo-mt
library_name: sentence-transformers
metrics:
- pearson_cosine
- spearman_cosine
- pearson_manhattan
- spearman_manhattan
- pearson_euclidean
- spearman_euclidean
- pearson_dot
- spearman_dot
- pearson_max
- spearman_max
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:5749
- loss:CosineSimilarityLoss
widget:
- source_sentence: Dua ekor anjing berenang di kolam renang.
sentences:
- Anjing-anjing sedang berenang di kolam renang.
- Seekor binatang sedang berjalan di atas tanah.
- Seorang pria sedang menyeka pinggiran mangkuk.
- source_sentence: Seorang anak perempuan sedang mengiris mentega menjadi dua bagian.
sentences:
- Seorang wanita sedang mengiris tahu.
- Dua orang berkelahi.
- Seorang pria sedang menari.
- source_sentence: Seorang gadis sedang makan kue mangkuk.
sentences:
- Seorang pria sedang mengiris bawang putih dengan alat pengiris mandolin.
- Seorang pria sedang memotong dan memotong bawang.
- Seorang wanita sedang makan kue mangkuk.
- source_sentence: Sebuah helikopter mendarat di landasan helikopter.
sentences:
- Seorang pria sedang mengiris mentimun.
- Seorang pria sedang memotong batang pohon dengan kapak.
- Sebuah helikopter mendarat.
- source_sentence: Seorang pria sedang berjalan dengan seekor kuda.
sentences:
- Seorang pria sedang menuntun seekor kuda dengan tali kekang.
- Seorang pria sedang menembakkan pistol.
- Seorang wanita sedang memetik tomat.
model-index:
- name: SentenceTransformer based on indobenchmark/indobert-base-p2
results:
- task:
type: semantic-similarity
name: Semantic Similarity
dataset:
name: Unknown
type: unknown
metrics:
- type: pearson_cosine
value: 0.8577280779646681
name: Pearson Cosine
- type: spearman_cosine
value: 0.8588776334781149
name: Spearman Cosine
- type: pearson_manhattan
value: 0.8315261521874587
name: Pearson Manhattan
- type: spearman_manhattan
value: 0.8355406849443783
name: Spearman Manhattan
- type: pearson_euclidean
value: 0.8318083198603524
name: Pearson Euclidean
- type: spearman_euclidean
value: 0.8359194889385243
name: Spearman Euclidean
- type: pearson_dot
value: 0.7767060276322824
name: Pearson Dot
- type: spearman_dot
value: 0.783607744137448
name: Spearman Dot
- type: pearson_max
value: 0.8577280779646681
name: Pearson Max
- type: spearman_max
value: 0.8588776334781149
name: Spearman Max
- type: pearson_cosine
value: 0.8122790124383042
name: Pearson Cosine
- type: spearman_cosine
value: 0.8123119892530147
name: Spearman Cosine
- type: pearson_manhattan
value: 0.7987643661729152
name: Pearson Manhattan
- type: spearman_manhattan
value: 0.7966661480553803
name: Spearman Manhattan
- type: pearson_euclidean
value: 0.7992882233155829
name: Pearson Euclidean
- type: spearman_euclidean
value: 0.797227936168015
name: Spearman Euclidean
- type: pearson_dot
value: 0.712195542080357
name: Pearson Dot
- type: spearman_dot
value: 0.7014898656834544
name: Spearman Dot
- type: pearson_max
value: 0.8122790124383042
name: Pearson Max
- type: spearman_max
value: 0.8123119892530147
name: Spearman Max
---
# SentenceTransformer based on indobenchmark/indobert-base-p2
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [indobenchmark/indobert-base-p2](https://huggingface.co/indobenchmark/indobert-base-p2). It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## STSB Test
Spearman correlation on the Indonesian STSB test set (higher is better):
| Model | Spearman Correlation |
|:----------------------------------------|-----------------------:|
| quarkss/indobert-large-stsb | 0.8366 |
| quarkss/indobert-base-stsb | 0.8123 |
| sentence-transformers/all-MiniLM-L6-v2 | 0.5952 |
| indobenchmark/indobert-large-p2 | 0.5673 |
| sentence-transformers/all-mpnet-base-v2 | 0.5531 |
| sentence-transformers/stsb-bert-base | 0.5349 |
| indobenchmark/indobert-base-p2 | 0.5309 |
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [indobenchmark/indobert-base-p2](https://huggingface.co/indobenchmark/indobert-base-p2) <!-- at revision 94b4e0a82081fa57f227fcc2024d1ea89b57ac1f -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 768 dimensions
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the π€ Hub
model = SentenceTransformer("quarkss/indobert-base-stsb")
# Run inference
sentences = [
'Seorang pria sedang berjalan dengan seekor kuda.',
'Seorang pria sedang menuntun seekor kuda dengan tali kekang.',
'Seorang pria sedang menembakkan pistol.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Semantic Similarity (validation split)
* Evaluated with [<code>EmbeddingSimilarityEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.EmbeddingSimilarityEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| pearson_cosine | 0.8577 |
| **spearman_cosine** | **0.8589** |
| pearson_manhattan | 0.8315 |
| spearman_manhattan | 0.8355 |
| pearson_euclidean | 0.8318 |
| spearman_euclidean | 0.8359 |
| pearson_dot | 0.7767 |
| spearman_dot | 0.7836 |
| pearson_max | 0.8577 |
| spearman_max | 0.8589 |
#### Semantic Similarity (STSB test split)
* Evaluated with [<code>EmbeddingSimilarityEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.EmbeddingSimilarityEvaluator)
| Metric | Value |
|:-------------------|:-----------|
| pearson_cosine | 0.8123 |
| spearman_cosine | 0.8123 |
| pearson_manhattan | 0.7988 |
| spearman_manhattan | 0.7967 |
| pearson_euclidean | 0.7993 |
| spearman_euclidean | 0.7972 |
| pearson_dot | 0.7122 |
| spearman_dot | 0.7015 |
| pearson_max | 0.8123 |
| **spearman_max** | **0.8123** |
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### Unnamed Dataset
* Size: 5,749 training samples
* Columns: <code>sentence1</code>, <code>sentence2</code>, and <code>score</code>
* Approximate statistics based on the first 1000 samples:
| | sentence1 | sentence2 | score |
|:--------|:---------------------------------------------------------------------------------|:---------------------------------------------------------------------------------|:---------------------------------------------------------------|
| type | string | string | float |
| details | <ul><li>min: 6 tokens</li><li>mean: 9.65 tokens</li><li>max: 25 tokens</li></ul> | <ul><li>min: 6 tokens</li><li>mean: 9.59 tokens</li><li>max: 24 tokens</li></ul> | <ul><li>min: 0.0</li><li>mean: 0.54</li><li>max: 1.0</li></ul> |
* Samples:
| sentence1 | sentence2 | score |
|:-----------------------------------------------------------------------|:-----------------------------------------------------------------------------------------|:------------------|
| <code>Sebuah pesawat sedang lepas landas.</code> | <code>Sebuah pesawat terbang sedang lepas landas.</code> | <code>1.0</code> |
| <code>Seorang pria sedang memainkan seruling besar.</code> | <code>Seorang pria sedang memainkan seruling.</code> | <code>0.76</code> |
| <code>Seorang pria sedang mengoleskan keju parut di atas pizza.</code> | <code>Seorang pria sedang mengoleskan keju parut di atas pizza yang belum matang.</code> | <code>0.76</code> |
* Loss: [<code>CosineSimilarityLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#cosinesimilarityloss) with these parameters:
```json
{
"loss_fct": "torch.nn.modules.loss.MSELoss"
}
```
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: steps
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `learning_rate`: 2e-05
- `weight_decay`: 0.01
- `num_train_epochs`: 5
- `warmup_ratio`: 0.1
- `fp16`: True
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `learning_rate`: 2e-05
- `weight_decay`: 0.01
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 5
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: True
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: False
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: proportional
</details>
### Training Logs
| Epoch | Step | Training Loss | spearman_cosine | spearman_max |
|:------:|:----:|:-------------:|:---------------:|:------------:|
| 0.2778 | 100 | 0.0615 | - | - |
| 0.5556 | 200 | 0.0336 | - | - |
| 0.8333 | 300 | 0.0331 | - | - |
| 1.1111 | 400 | 0.0235 | - | - |
| 1.3889 | 500 | 0.018 | 0.8472 | - |
| 1.6667 | 600 | 0.0164 | - | - |
| 1.9444 | 700 | 0.0159 | - | - |
| 2.2222 | 800 | 0.0097 | - | - |
| 2.5 | 900 | 0.0085 | - | - |
| 2.7778 | 1000 | 0.0084 | 0.8563 | - |
| 3.0556 | 1100 | 0.0076 | - | - |
| 3.3333 | 1200 | 0.0056 | - | - |
| 3.6111 | 1300 | 0.0054 | - | - |
| 3.8889 | 1400 | 0.0052 | - | - |
| 4.1667 | 1500 | 0.0047 | 0.8589 | - |
| 4.4444 | 1600 | 0.0045 | - | - |
| 4.7222 | 1700 | 0.004 | - | - |
| 5.0 | 1800 | 0.0042 | - | 0.8123 |
### Framework Versions
- Python: 3.10.13
- Sentence Transformers: 3.0.1
- Transformers: 4.42.4
- PyTorch: 2.0.1+cu117
- Accelerate: 0.32.1
- Datasets: 2.17.0
- Tokenizers: 0.19.1
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
--> |
YL95/4bit_VLLM_copa_v_wright_SFT_mistral_file_folder_path_2epoch | YL95 | 2024-07-17T09:36:40Z | 78 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"en",
"base_model:unsloth/mistral-7b-bnb-4bit",
"base_model:quantized:unsloth/mistral-7b-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
]
| text-generation | 2024-07-17T09:34:35Z | ---
base_model: unsloth/mistral-7b-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
---
# Uploaded model
- **Developed by:** YL95
- **License:** apache-2.0
- **Finetuned from model:** unsloth/mistral-7b-bnb-4bit
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
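For a quick test of the checkpoint, a minimal inference sketch with Unsloth (the prompt and generation settings are illustrative):
```python
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="YL95/4bit_VLLM_copa_v_wright_SFT_mistral_file_folder_path_2epoch",
    max_seq_length=2048,
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # enable Unsloth's faster inference path

inputs = tokenizer("Hello!", return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```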
|
atgarcia/wav2vec2part12 | atgarcia | 2024-07-17T09:28:27Z | 108 | 0 | transformers | [
"transformers",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2024-07-17T07:30:16Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a π€ transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
firstmango/Project_02_LLAMA3_unsloth7B-gguf | firstmango | 2024-07-17T09:28:27Z | 6 | 0 | transformers | [
"transformers",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"en",
"base_model:beomi/Llama-3-Open-Ko-8B-Instruct-preview",
"base_model:quantized:beomi/Llama-3-Open-Ko-8B-Instruct-preview",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
]
| null | 2024-07-17T09:21:23Z | ---
base_model: beomi/Llama-3-Open-Ko-8B-Instruct-preview
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
---
# Uploaded model
- **Developed by:** firstmango
- **License:** apache-2.0
- **Finetuned from model:** beomi/Llama-3-Open-Ko-8B-Instruct-preview
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
MaziyarPanahi/Llama-3-Groq-8B-Tool-Use-GGUF | MaziyarPanahi | 2024-07-17T09:25:57Z | 712,903 | 7 | transformers | [
"transformers",
"gguf",
"mistral",
"quantized",
"2-bit",
"3-bit",
"4-bit",
"5-bit",
"6-bit",
"8-bit",
"GGUF",
"text-generation",
"base_model:Groq/Llama-3-Groq-8B-Tool-Use",
"base_model:quantized:Groq/Llama-3-Groq-8B-Tool-Use",
"region:us",
"conversational"
]
| text-generation | 2024-07-17T08:47:20Z | ---
tags:
- quantized
- 2-bit
- 3-bit
- 4-bit
- 5-bit
- 6-bit
- 8-bit
- GGUF
- text-generation
model_name: Llama-3-Groq-8B-Tool-Use-GGUF
base_model: Groq/Llama-3-Groq-8B-Tool-Use
inference: false
model_creator: Groq
pipeline_tag: text-generation
quantized_by: MaziyarPanahi
---
# [MaziyarPanahi/Llama-3-Groq-8B-Tool-Use-GGUF](https://huggingface.co/MaziyarPanahi/Llama-3-Groq-8B-Tool-Use-GGUF)
- Model creator: [Groq](https://huggingface.co/Groq)
- Original model: [Groq/Llama-3-Groq-8B-Tool-Use](https://huggingface.co/Groq/Llama-3-Groq-8B-Tool-Use)
## Description
[MaziyarPanahi/Llama-3-Groq-8B-Tool-Use-GGUF](https://huggingface.co/MaziyarPanahi/Llama-3-Groq-8B-Tool-Use-GGUF) contains GGUF format model files for [Groq/Llama-3-Groq-8B-Tool-Use](https://huggingface.co/Groq/Llama-3-Groq-8B-Tool-Use).
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and an OpenAI-compatible API server (see the loading sketch after this list).
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration; Linux support is available in beta as of 27/11/2023.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for storytelling.
* [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and an OpenAI-compatible API server. Note that, as of the time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models.
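As a concrete starting point, a minimal sketch of loading one of these files with llama-cpp-python; the `Q4_K_M` filename glob is an assumption about which quantizations the repo contains:
```python
from llama_cpp import Llama  # pip install llama-cpp-python

# Downloads a matching .gguf from the Hub (requires huggingface_hub installed)
llm = Llama.from_pretrained(
    repo_id="MaziyarPanahi/Llama-3-Groq-8B-Tool-Use-GGUF",
    filename="*Q4_K_M.gguf",  # assumed quantization choice
)
print(llm("What is GGUF?", max_tokens=64)["choices"][0]["text"])
```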
## Special thanks
π Special thanks to [Georgi Gerganov](https://github.com/ggerganov) and the whole team working on [llama.cpp](https://github.com/ggerganov/llama.cpp/) for making all of this possible. |
Piotrasz/Llama-2-7b-hf-ROME-50-pl-3 | Piotrasz | 2024-07-17T09:24:26Z | 6 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2024-07-17T09:21:12Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a π€ transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Kunalbhagat88/Mixral_7B_instruct_finetuned | Kunalbhagat88 | 2024-07-17T09:23:22Z | 6 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2024-07-17T09:11:48Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a π€ transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Niggendar/WeirdInCase | Niggendar | 2024-07-17T09:22:27Z | 107 | 0 | diffusers | [
"diffusers",
"safetensors",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
]
| text-to-image | 2024-07-17T09:12:15Z | ---
library_name: diffusers
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 𧨠diffusers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
kr-manish/fine-tune-embedding-bge-base-HrPolicy_vfinal | kr-manish | 2024-07-17T09:21:15Z | 7 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"bert",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:160",
"loss:MatryoshkaLoss",
"loss:MultipleNegativesRankingLoss",
"arxiv:1908.10084",
"arxiv:2205.13147",
"arxiv:1705.00652",
"base_model:BAAI/bge-base-en-v1.5",
"base_model:finetune:BAAI/bge-base-en-v1.5",
"model-index",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
]
| sentence-similarity | 2024-07-17T09:20:36Z | ---
base_model: BAAI/bge-base-en-v1.5
datasets: []
language: []
library_name: sentence-transformers
metrics:
- cosine_accuracy@1
- cosine_accuracy@3
- cosine_accuracy@5
- cosine_accuracy@10
- cosine_precision@1
- cosine_precision@3
- cosine_precision@5
- cosine_precision@10
- cosine_recall@1
- cosine_recall@3
- cosine_recall@5
- cosine_recall@10
- cosine_ndcg@10
- cosine_mrr@10
- cosine_map@100
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:160
- loss:MatryoshkaLoss
- loss:MultipleNegativesRankingLoss
widget:
- source_sentence: Priya Softweb has specific guidelines for managing the arrival
of international shipments. To ensure smooth customs clearance, the company requires
an authorization letter from the client, written on their company letterhead.
This letter must clearly state that the shipment is "Not for commercial purposes"
to prevent the application of duty charges by the customs office. All international
shipments should be addressed to Keyur Patel at Priya Softweb Solutions Pvt. Ltd.,
with the company's full address and contact information clearly indicated. Employees
are advised to contact the HR department for the correct format of the authorization
letter and to inform Keyur Patel about the expected arrival of such shipments.
These procedures streamline the handling of international shipments and help avoid
potential customs-related delays or complications.
sentences:
- Female employees at Priya Softweb are allowed to wear:- Formal trousers/jeans
and shirts- Sarees- Formal skirts- T-shirts with collars- Chudidars & Kurtis-
Salwar SuitsHowever, they are not allowed to wear:- Round neck, deep neck, cold
shoulder, and fancy T-shirts- Low waist jeans, short T-shirts, and short shirts-
Transparent wear- Wear with deep-cut sleeves- Capris- Slippers- Visible tattoos
& piercingsPriya Softweb emphasizes a professional appearance for its employees
while providing flexibility in choosing appropriate attire within the defined
guidelines.
- Priya Softweb has specific guidelines for managing the arrival of international
shipments. To ensure smooth customs clearance, the company requires an authorization
letter from the client, written on their company letterhead. This letter must
clearly state that the shipment is "Not for commercial purposes" to prevent the
application of duty charges by the customs office. All international shipments
should be addressed to Keyur Patel at Priya Softweb Solutions Pvt. Ltd., with
the company's full address and contact information clearly indicated. Employees
are advised to contact the HR department for the correct format of the authorization
letter and to inform Keyur Patel about the expected arrival of such shipments.
These procedures streamline the handling of international shipments and help avoid
potential customs-related delays or complications.
- Priya Softweb has a structured onboarding process for new employees. Upon joining,
new hires undergo an induction program conducted by the HR department. This program
introduces them to the company's culture, values, processes, and policies, ensuring
they are well-acquainted with the work environment and expectations. HR also facilitates
introductions to the relevant department and sends out a company-wide email announcing
the new employee's arrival. Additionally, new employees are required to complete
quarterly Ethics & Compliance training to familiarize themselves with the company's
ethical standards and compliance requirements. This comprehensive onboarding approach
helps new employees integrate seamlessly into the company and quickly become productive
members of the team.
- source_sentence: The sanctioning and approving authority for Casual Leave, Sick
Leave, and Privilege Leave at Priya Softweb is the Leader/Manager.
sentences:
- Even if an employee utilizes the 'Hybrid' Work From Home model for only half a
day, a full count is deducted from their monthly allowance of 4 WFH days. This
clarifies that any utilization of the 'Hybrid' model, regardless of the duration,
is considered a full WFH day and counts towards the monthly limit.
- The sanctioning and approving authority for Casual Leave, Sick Leave, and Privilege
Leave at Priya Softweb is the Leader/Manager.
- To be eligible for gratuity at Priya Softweb, an employee must have completed
a minimum of 5 continuous years of service. This ensures that only long-term employees
are entitled to this benefit.
- source_sentence: 'Priya Softweb utilizes Employee Agreements/Bonds as a mechanism
to retain talent within the company. These agreements are implemented in various
situations, including: * **Retention:** When the company seeks to retain valuable
employees who have resigned, a 15-month bond may be applied based on the company''s
requirements. * **Freshers:** New employees with 0 to 1 year of experience are
generally subject to an 18-month bond. * **Rejoining:** When former employees
are rehired, a 15-month bond is typically implemented. These bond periods vary
based on the specific circumstances and aim to ensure a certain level of commitment
from employees, especially in roles that require significant investment in training
and development.'
sentences:
- To claim gratuity, employees must submit an application form to the Accounts department.
This formal process ensures proper documentation and timely processing of the
gratuity payment.
- Priya Softweb acknowledges the efforts of employees who work late hours. Employees
working more than 11 hours on weekdays are eligible for reimbursement of up to
Rs. 250/- for their dinner expenses. However, this reimbursement is subject to
approval from their Department Head. This policy recognizes the extra effort put
in by employees working extended hours and provides some financial compensation
for their meals.
- 'Priya Softweb utilizes Employee Agreements/Bonds as a mechanism to retain talent
within the company. These agreements are implemented in various situations, including:
* **Retention:** When the company seeks to retain valuable employees who have
resigned, a 15-month bond may be applied based on the company''s requirements.
* **Freshers:** New employees with 0 to 1 year of experience are generally subject
to an 18-month bond. * **Rejoining:** When former employees are rehired, a 15-month
bond is typically implemented. These bond periods vary based on the specific circumstances
and aim to ensure a certain level of commitment from employees, especially in
roles that require significant investment in training and development.'
- source_sentence: Chewing tobacco, gutka, gum, or smoking within the office premises
is strictly prohibited at Priya Softweb. Bringing such substances inside the office
will lead to penalties and potentially harsh decisions from management. This strict
policy reflects Priya Softweb's commitment to a healthy and clean work environment.
sentences:
- Chewing tobacco, gutka, gum, or smoking within the office premises is strictly
prohibited at Priya Softweb. Bringing such substances inside the office will lead
to penalties and potentially harsh decisions from management. This strict policy
reflects Priya Softweb's commitment to a healthy and clean work environment.
- In situations of 'Bad Weather', the HR department at Priya Softweb will enable
the 'Work From Home' option within the OMS system based on the severity of the
weather and potential safety risks for employees commuting to the office. This
proactive approach prioritizes employee safety and allows for flexible work arrangements
during adverse weather events.
- Priya Softweb employees are entitled to 5 Casual Leaves (CL) per year.
- source_sentence: Priya Softweb prioritizes the health and wellness of its employees.
The company strongly prohibits chewing tobacco, gutka, gum, or smoking within
the office premises. Penalties and harsh decisions from management await anyone
found bringing such substances into the office. Furthermore, carrying food to
the desk is not permitted. Employees are encouraged to use the terrace dining
facility for lunch, snacks, and dinner. Priya Softweb also emphasizes cleanliness
and orderliness in the workspace. Employees are responsible for maintaining their
designated work areas, keeping them clean, organized, and free from unnecessary
items. Spitting gutka, gum, or tobacco in the washrooms is strictly prohibited.
These policies contribute to a healthier and more pleasant work environment for
everyone.
sentences:
- Priya Softweb prioritizes the health and wellness of its employees. The company
strongly prohibits chewing tobacco, gutka, gum, or smoking within the office premises.
Penalties and harsh decisions from management await anyone found bringing such
substances into the office. Furthermore, carrying food to the desk is not permitted.
Employees are encouraged to use the terrace dining facility for lunch, snacks,
and dinner. Priya Softweb also emphasizes cleanliness and orderliness in the workspace.
Employees are responsible for maintaining their designated work areas, keeping
them clean, organized, and free from unnecessary items. Spitting gutka, gum, or
tobacco in the washrooms is strictly prohibited. These policies contribute to
a healthier and more pleasant work environment for everyone.
- The Performance Appraisal at Priya Softweb is solely based on the employee's performance
evaluation. The evaluation score is compiled by the Team Leader/Project Manager,
who also gives the final rating to the team member. Detailed recommendations are
provided by the TL/PM, and increment or promotion is granted accordingly. This
process ensures that performance is the primary factor driving salary revisions
and promotions.
- Priya Softweb actively promotes diversity in its hiring practices. The company
focuses on recruiting individuals from a wide range of backgrounds, including
different races, ethnicities, religions, political beliefs, education levels,
socio-economic backgrounds, geographical locations, languages, and cultures. This
commitment to diversity enriches the company culture and brings in a variety of
perspectives and experiences.
model-index:
- name: SentenceTransformer based on BAAI/bge-base-en-v1.5
results:
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 768
type: dim_768
metrics:
- type: cosine_accuracy@1
value: 1.0
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 1.0
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 1.0
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 1.0
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 1.0
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.33333333333333326
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.20000000000000004
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.10000000000000002
name: Cosine Precision@10
- type: cosine_recall@1
value: 1.0
name: Cosine Recall@1
- type: cosine_recall@3
value: 1.0
name: Cosine Recall@3
- type: cosine_recall@5
value: 1.0
name: Cosine Recall@5
- type: cosine_recall@10
value: 1.0
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 1.0
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 1.0
name: Cosine Mrr@10
- type: cosine_map@100
value: 1.0
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 512
type: dim_512
metrics:
- type: cosine_accuracy@1
value: 1.0
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 1.0
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 1.0
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 1.0
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 1.0
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.33333333333333326
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.20000000000000004
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.10000000000000002
name: Cosine Precision@10
- type: cosine_recall@1
value: 1.0
name: Cosine Recall@1
- type: cosine_recall@3
value: 1.0
name: Cosine Recall@3
- type: cosine_recall@5
value: 1.0
name: Cosine Recall@5
- type: cosine_recall@10
value: 1.0
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 1.0
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 1.0
name: Cosine Mrr@10
- type: cosine_map@100
value: 1.0
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 256
type: dim_256
metrics:
- type: cosine_accuracy@1
value: 1.0
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 1.0
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 1.0
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 1.0
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 1.0
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.33333333333333326
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.20000000000000004
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.10000000000000002
name: Cosine Precision@10
- type: cosine_recall@1
value: 1.0
name: Cosine Recall@1
- type: cosine_recall@3
value: 1.0
name: Cosine Recall@3
- type: cosine_recall@5
value: 1.0
name: Cosine Recall@5
- type: cosine_recall@10
value: 1.0
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 1.0
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 1.0
name: Cosine Mrr@10
- type: cosine_map@100
value: 1.0
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 128
type: dim_128
metrics:
- type: cosine_accuracy@1
value: 1.0
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 1.0
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 1.0
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 1.0
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 1.0
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.33333333333333326
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.20000000000000004
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.10000000000000002
name: Cosine Precision@10
- type: cosine_recall@1
value: 1.0
name: Cosine Recall@1
- type: cosine_recall@3
value: 1.0
name: Cosine Recall@3
- type: cosine_recall@5
value: 1.0
name: Cosine Recall@5
- type: cosine_recall@10
value: 1.0
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 1.0
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 1.0
name: Cosine Mrr@10
- type: cosine_map@100
value: 1.0
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 64
type: dim_64
metrics:
- type: cosine_accuracy@1
value: 1.0
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 1.0
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 1.0
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 1.0
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 1.0
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.33333333333333326
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.20000000000000004
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.10000000000000002
name: Cosine Precision@10
- type: cosine_recall@1
value: 1.0
name: Cosine Recall@1
- type: cosine_recall@3
value: 1.0
name: Cosine Recall@3
- type: cosine_recall@5
value: 1.0
name: Cosine Recall@5
- type: cosine_recall@10
value: 1.0
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 1.0
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 1.0
name: Cosine Mrr@10
- type: cosine_map@100
value: 1.0
name: Cosine Map@100
---
# SentenceTransformer based on BAAI/bge-base-en-v1.5
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [BAAI/bge-base-en-v1.5](https://huggingface.co/BAAI/bge-base-en-v1.5). It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [BAAI/bge-base-en-v1.5](https://huggingface.co/BAAI/bge-base-en-v1.5) <!-- at revision a5beb1e3e68b9ab74eb54cfd186867f64f240e1a -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 768 dimensions
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': True}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the π€ Hub
model = SentenceTransformer("kr-manish/fine-tune-embedding-bge-base-HrPolicy_vfinal")
# Run inference
sentences = [
'Priya Softweb prioritizes the health and wellness of its employees. The company strongly prohibits chewing tobacco, gutka, gum, or smoking within the office premises. Penalties and harsh decisions from management await anyone found bringing such substances into the office. Furthermore, carrying food to the desk is not permitted. Employees are encouraged to use the terrace dining facility for lunch, snacks, and dinner. Priya Softweb also emphasizes cleanliness and orderliness in the workspace. Employees are responsible for maintaining their designated work areas, keeping them clean, organized, and free from unnecessary items. Spitting gutka, gum, or tobacco in the washrooms is strictly prohibited. These policies contribute to a healthier and more pleasant work environment for everyone.',
'Priya Softweb prioritizes the health and wellness of its employees. The company strongly prohibits chewing tobacco, gutka, gum, or smoking within the office premises. Penalties and harsh decisions from management await anyone found bringing such substances into the office. Furthermore, carrying food to the desk is not permitted. Employees are encouraged to use the terrace dining facility for lunch, snacks, and dinner. Priya Softweb also emphasizes cleanliness and orderliness in the workspace. Employees are responsible for maintaining their designated work areas, keeping them clean, organized, and free from unnecessary items. Spitting gutka, gum, or tobacco in the washrooms is strictly prohibited. These policies contribute to a healthier and more pleasant work environment for everyone.',
"The Performance Appraisal at Priya Softweb is solely based on the employee's performance evaluation. The evaluation score is compiled by the Team Leader/Project Manager, who also gives the final rating to the team member. Detailed recommendations are provided by the TL/PM, and increment or promotion is granted accordingly. This process ensures that performance is the primary factor driving salary revisions and promotions.",
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Information Retrieval
* Dataset: `dim_768`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)
| Metric | Value |
|:--------------------|:--------|
| cosine_accuracy@1 | 1.0 |
| cosine_accuracy@3 | 1.0 |
| cosine_accuracy@5 | 1.0 |
| cosine_accuracy@10 | 1.0 |
| cosine_precision@1 | 1.0 |
| cosine_precision@3 | 0.3333 |
| cosine_precision@5 | 0.2 |
| cosine_precision@10 | 0.1 |
| cosine_recall@1 | 1.0 |
| cosine_recall@3 | 1.0 |
| cosine_recall@5 | 1.0 |
| cosine_recall@10 | 1.0 |
| cosine_ndcg@10 | 1.0 |
| cosine_mrr@10 | 1.0 |
| **cosine_map@100** | **1.0** |
#### Information Retrieval
* Dataset: `dim_512`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)
| Metric | Value |
|:--------------------|:--------|
| cosine_accuracy@1 | 1.0 |
| cosine_accuracy@3 | 1.0 |
| cosine_accuracy@5 | 1.0 |
| cosine_accuracy@10 | 1.0 |
| cosine_precision@1 | 1.0 |
| cosine_precision@3 | 0.3333 |
| cosine_precision@5 | 0.2 |
| cosine_precision@10 | 0.1 |
| cosine_recall@1 | 1.0 |
| cosine_recall@3 | 1.0 |
| cosine_recall@5 | 1.0 |
| cosine_recall@10 | 1.0 |
| cosine_ndcg@10 | 1.0 |
| cosine_mrr@10 | 1.0 |
| **cosine_map@100** | **1.0** |
#### Information Retrieval
* Dataset: `dim_256`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)
| Metric | Value |
|:--------------------|:--------|
| cosine_accuracy@1 | 1.0 |
| cosine_accuracy@3 | 1.0 |
| cosine_accuracy@5 | 1.0 |
| cosine_accuracy@10 | 1.0 |
| cosine_precision@1 | 1.0 |
| cosine_precision@3 | 0.3333 |
| cosine_precision@5 | 0.2 |
| cosine_precision@10 | 0.1 |
| cosine_recall@1 | 1.0 |
| cosine_recall@3 | 1.0 |
| cosine_recall@5 | 1.0 |
| cosine_recall@10 | 1.0 |
| cosine_ndcg@10 | 1.0 |
| cosine_mrr@10 | 1.0 |
| **cosine_map@100** | **1.0** |
#### Information Retrieval
* Dataset: `dim_128`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)
| Metric | Value |
|:--------------------|:--------|
| cosine_accuracy@1 | 1.0 |
| cosine_accuracy@3 | 1.0 |
| cosine_accuracy@5 | 1.0 |
| cosine_accuracy@10 | 1.0 |
| cosine_precision@1 | 1.0 |
| cosine_precision@3 | 0.3333 |
| cosine_precision@5 | 0.2 |
| cosine_precision@10 | 0.1 |
| cosine_recall@1 | 1.0 |
| cosine_recall@3 | 1.0 |
| cosine_recall@5 | 1.0 |
| cosine_recall@10 | 1.0 |
| cosine_ndcg@10 | 1.0 |
| cosine_mrr@10 | 1.0 |
| **cosine_map@100** | **1.0** |
#### Information Retrieval
* Dataset: `dim_64`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)
| Metric | Value |
|:--------------------|:--------|
| cosine_accuracy@1 | 1.0 |
| cosine_accuracy@3 | 1.0 |
| cosine_accuracy@5 | 1.0 |
| cosine_accuracy@10 | 1.0 |
| cosine_precision@1 | 1.0 |
| cosine_precision@3 | 0.3333 |
| cosine_precision@5 | 0.2 |
| cosine_precision@10 | 0.1 |
| cosine_recall@1 | 1.0 |
| cosine_recall@3 | 1.0 |
| cosine_recall@5 | 1.0 |
| cosine_recall@10 | 1.0 |
| cosine_ndcg@10 | 1.0 |
| cosine_mrr@10 | 1.0 |
| **cosine_map@100** | **1.0** |
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### Unnamed Dataset
* Size: 160 training samples
* Columns: <code>positive</code> and <code>anchor</code>
* Approximate statistics based on the first 1000 samples:
| | positive | anchor |
|:--------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 16 tokens</li><li>mean: 90.76 tokens</li><li>max: 380 tokens</li></ul> | <ul><li>min: 16 tokens</li><li>mean: 90.76 tokens</li><li>max: 380 tokens</li></ul> |
* Samples:
| positive | anchor |
|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| <code>The general timings for the Marketing team vary: BD works from 1:00 PM to 10:00 PM or 3:00 PM to 12:00 AM, while BA/SEO works from 11:00 AM to 8:00 PM.</code> | <code>The general timings for the Marketing team vary: BD works from 1:00 PM to 10:00 PM or 3:00 PM to 12:00 AM, while BA/SEO works from 11:00 AM to 8:00 PM.</code> |
| <code>Priya Softweb acknowledges the efforts of employees who work late hours. Employees working more than 11 hours on weekdays are eligible for reimbursement of up to Rs. 250/- for their dinner expenses. However, this reimbursement is subject to approval from their Department Head. This policy recognizes the extra effort put in by employees working extended hours and provides some financial compensation for their meals.</code> | <code>Priya Softweb acknowledges the efforts of employees who work late hours. Employees working more than 11 hours on weekdays are eligible for reimbursement of up to Rs. 250/- for their dinner expenses. However, this reimbursement is subject to approval from their Department Head. This policy recognizes the extra effort put in by employees working extended hours and provides some financial compensation for their meals.</code> |
| <code>While Priya Softweb allows employees to keep their cell phones during work hours for emergency purposes, excessive personal mobile phone usage and lengthy calls within the office premises are strictly prohibited. Excessive use may result in disciplinary actions. This policy aims to strike a balance between allowing accessibility for emergencies and maintaining a productive work environment free from distractions.</code> | <code>While Priya Softweb allows employees to keep their cell phones during work hours for emergency purposes, excessive personal mobile phone usage and lengthy calls within the office premises are strictly prohibited. Excessive use may result in disciplinary actions. This policy aims to strike a balance between allowing accessibility for emergencies and maintaining a productive work environment free from distractions.</code> |
* Loss: [<code>MatryoshkaLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#matryoshkaloss) with these parameters:
```json
{
"loss": "MultipleNegativesRankingLoss",
"matryoshka_dims": [
768,
512,
256,
128,
64
],
"matryoshka_weights": [
1,
1,
1,
1,
1
],
"n_dims_per_step": -1
}
```
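For reference, this configuration corresponds roughly to the following Sentence Transformers setup (a minimal sketch reconstructed from the parameters above, not the original training script):
```python
from sentence_transformers import SentenceTransformer, losses

model = SentenceTransformer("BAAI/bge-base-en-v1.5")

# Inner objective: in-batch negatives over (anchor, positive) pairs
inner_loss = losses.MultipleNegativesRankingLoss(model)

# Apply the same objective at every truncated embedding size, equally weighted
loss = losses.MatryoshkaLoss(
    model,
    inner_loss,
    matryoshka_dims=[768, 512, 256, 128, 64],
)
```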
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: epoch
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `gradient_accumulation_steps`: 16
- `learning_rate`: 3e-05
- `num_train_epochs`: 15
- `lr_scheduler_type`: cosine
- `warmup_ratio`: 0.1
- `fp16`: True
- `load_best_model_at_end`: True
- `optim`: adamw_torch_fused
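The non-default values above map onto `SentenceTransformerTrainingArguments` roughly as follows (a sketch; `output_dir` and `save_strategy` are assumptions, not taken from this card):
```python
from sentence_transformers import SentenceTransformerTrainingArguments

args = SentenceTransformerTrainingArguments(
    output_dir="bge-base-hrpolicy",  # hypothetical output path
    eval_strategy="epoch",
    save_strategy="epoch",           # assumed: must match eval_strategy with load_best_model_at_end
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    gradient_accumulation_steps=16,
    learning_rate=3e-5,
    num_train_epochs=15,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    fp16=True,
    load_best_model_at_end=True,
    optim="adamw_torch_fused",
)
```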
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: epoch
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 16
- `eval_accumulation_steps`: None
- `learning_rate`: 3e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 15
- `max_steps`: -1
- `lr_scheduler_type`: cosine
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: True
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: True
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch_fused
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: False
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: proportional
</details>
### Training Logs
| Epoch | Step | Training Loss | dim_128_cosine_map@100 | dim_256_cosine_map@100 | dim_512_cosine_map@100 | dim_64_cosine_map@100 | dim_768_cosine_map@100 |
|:-------:|:-----:|:-------------:|:----------------------:|:----------------------:|:----------------------:|:---------------------:|:----------------------:|
| 0 | 0 | - | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 |
| **1.0** | **1** | **-** | **1.0** | **1.0** | **1.0** | **1.0** | **1.0** |
| 2.0 | 3 | - | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 |
| 3.0 | 4 | - | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 |
| 4.0 | 6 | - | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 |
| 5.0 | 8 | - | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 |
| 6.0 | 9 | - | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 |
| 6.4 | 10 | 0.0767 | - | - | - | - | - |
| 7.0 | 11 | - | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 |
| 8.0 | 12 | - | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 |
| 9.0 | 13 | - | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 |
| 10.0 | 15 | - | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 |
* The bold row denotes the saved checkpoint.
### Framework Versions
- Python: 3.10.12
- Sentence Transformers: 3.0.1
- Transformers: 4.41.2
- PyTorch: 2.1.2+cu121
- Accelerate: 0.32.1
- Datasets: 2.19.1
- Tokenizers: 0.19.1
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### MatryoshkaLoss
```bibtex
@misc{kusupati2024matryoshka,
title={Matryoshka Representation Learning},
author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
year={2024},
eprint={2205.13147},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
```
#### MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
--> |
hmart824/layoutlmv3-finetuned_1.1 | hmart824 | 2024-07-17T09:17:40Z | 106 | 0 | transformers | [
"transformers",
"safetensors",
"layoutlmv3",
"token-classification",
"generated_from_trainer",
"base_model:microsoft/layoutlmv3-base",
"base_model:finetune:microsoft/layoutlmv3-base",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| token-classification | 2024-07-17T09:17:12Z | ---
license: cc-by-nc-sa-4.0
base_model: microsoft/layoutlmv3-base
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: layoutlmv3-finetuned_1.1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# layoutlmv3-finetuned_1.1
This model is a fine-tuned version of [microsoft/layoutlmv3-base](https://huggingface.co/microsoft/layoutlmv3-base) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0191
- Precision: 0.9972
- Recall: 0.9966
- F1: 0.9968
- Accuracy: 0.9960
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.3713 | 1.0 | 60 | 0.0460 | 0.9874 | 0.9914 | 0.9893 | 0.9901 |
| 0.068 | 2.0 | 120 | 0.0281 | 0.9947 | 0.9930 | 0.9938 | 0.9930 |
| 0.0612 | 3.0 | 180 | 0.0191 | 0.9972 | 0.9966 | 0.9968 | 0.9960 |
### Framework versions
- Transformers 4.42.4
- Pytorch 2.3.0+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1
|
firstmango/Project_02_LLAMA3_unsloth7B | firstmango | 2024-07-17T09:16:17Z | 6 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"unsloth",
"trl",
"sft",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2024-07-17T09:05:01Z | ---
library_name: transformers
tags:
- unsloth
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a π€ transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
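This card leaves usage unspecified; given the conversational Llama tags, a standard `transformers` chat sketch would look like this (the prompt is illustrative, not from the card):
```python
import torch
from transformers import pipeline

# Assumed usage: the repo tags indicate a conversational Llama SFT checkpoint.
generator = pipeline(
    "text-generation",
    model="firstmango/Project_02_LLAMA3_unsloth7B",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

messages = [{"role": "user", "content": "Hello!"}]  # illustrative prompt
print(generator(messages, max_new_tokens=64)[0]["generated_text"])
```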
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
QuantFactory/Llama-3-Patronus-Lynx-8B-Instruct-GGUF | QuantFactory | 2024-07-17T09:16:04Z | 233 | 1 | transformers | [
"transformers",
"gguf",
"text-generation",
"pytorch",
"Lynx",
"Patronus AI",
"evaluation",
"hallucination-detection",
"en",
"base_model:PatronusAI/Llama-3-Patronus-Lynx-8B-Instruct",
"base_model:quantized:PatronusAI/Llama-3-Patronus-Lynx-8B-Instruct",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us",
"conversational"
]
| text-generation | 2024-07-12T16:33:16Z | ---
library_name: transformers
tags:
- text-generation
- pytorch
- Lynx
- Patronus AI
- evaluation
- hallucination-detection
license: cc-by-nc-4.0
language:
- en
base_model: PatronusAI/Llama-3-Patronus-Lynx-8B-Instruct
pipeline_tag: text-generation
---
# QuantFactory/Llama-3-Patronus-Lynx-8B-Instruct-GGUF
This is a quantized version of [PatronusAI/Llama-3-Patronus-Lynx-8B-Instruct](https://huggingface.co/PatronusAI/Llama-3-Patronus-Lynx-8B-Instruct), created using llama.cpp.
# Model Description
Lynx is an open-source hallucination evaluation model. Patronus-Lynx-8B-Instruct was trained on a mix of datasets including CovidQA, PubmedQA, DROP, and RAGTruth.
The datasets contain a mix of hand-annotated and synthetic data. The maximum sequence length is 8000 tokens.
## Model Details
- **Model Type:** Patronus-Lynx-8B-Instruct is a fine-tuned version of meta-llama/Meta-Llama-3-8B-Instruct model.
- **Language:** Primarily English
- **Developed by:** Patronus AI
- **License:** [https://creativecommons.org/licenses/by-nc/4.0/](https://creativecommons.org/licenses/by-nc/4.0/)
### Model Sources
<!-- Provide the basic links for the model. -->
- **Repository:** [https://github.com/patronus-ai/Lynx-hallucination-detection](https://github.com/patronus-ai/Lynx-hallucination-detection)
## How to Get Started with the Model
The model is fine-tuned to detect hallucinations in a RAG setting. Given a document, question, and answer, it evaluates whether the answer is faithful to the document.
To use the model, we recommend using the prompt we used for fine-tuning:
```
PROMPT = """
Given the following QUESTION, DOCUMENT and ANSWER you must analyze the provided answer and determine whether it is faithful to the contents of the DOCUMENT. The ANSWER must not offer new information beyond the context provided in the DOCUMENT. The ANSWER also must not contradict information provided in the DOCUMENT. Output your final verdict by strictly following this format: "PASS" if the answer is faithful to the DOCUMENT and "FAIL" if the answer is not faithful to the DOCUMENT. Show your reasoning.
--
QUESTION (THIS DOES NOT COUNT AS BACKGROUND INFORMATION):
{question}
--
DOCUMENT:
{context}
--
ANSWER:
{answer}
--
Your output should be in JSON FORMAT with the keys "REASONING" and "SCORE":
{{"REASONING": <your reasoning as bullet points>, "SCORE": <your final score>}}
"""
```
The model outputs the score as 'PASS' if the answer is faithful to the document, or 'FAIL' if it is not.
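As a minimal sketch of running this quantized checkpoint locally with `llama-cpp-python` (the GGUF filename pattern and sampling settings are assumptions, not from the original card):
```python
from llama_cpp import Llama

# PROMPT is the template defined above. The filename pattern is an assumption;
# pick the actual quantization level from this repo's file list.
llm = Llama.from_pretrained(
    repo_id="QuantFactory/Llama-3-Patronus-Lynx-8B-Instruct-GGUF",
    filename="*Q4_K_M.gguf",
    n_ctx=8192,  # the card states an 8000-token maximum sequence length
)

response = llm.create_chat_completion(
    messages=[{"role": "user", "content": PROMPT.format(
        question="What is the capital of France?",
        context="Paris is the capital and largest city of France.",
        answer="The capital of France is Paris.",
    )}],
    temperature=0.0,  # deterministic judgments
)
print(response["choices"][0]["message"]["content"])
```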
## Training Details
The model was fine-tuned for 3 epochs on H100s using a dataset of 2,400 samples. We used the [lion](https://github.com/lucidrains/lion-pytorch) optimizer with lr=5.0e-7. For more details on data generation, please check out our GitHub repo.
### Training Data
We train on 2,400 samples drawn from CovidQA, PubmedQA, DROP, and RAGTruth. For datasets that do not contain hallucinated samples, we generate perturbations to introduce hallucinations into the data. For more details about the data generation process, refer to the paper.
## Evaluation
The model was evaluated on [PatronusAI/HaluBench](https://huggingface.co/datasets/PatronusAI/HaluBench).
It outperforms GPT-3.5-Turbo, GPT-4-Turbo, GPT-4o and Claude Sonnet.
## Model Card Contact
[@sunitha-ravi](https://huggingface.co/sunitha-ravi) |
langyatest/new_to_return_2 | langyatest | 2024-07-17T09:15:56Z | 105 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-cased",
"base_model:finetune:google-bert/bert-base-cased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2024-07-17T08:50:03Z | ---
license: apache-2.0
base_model: google-bert/bert-base-cased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: new_to_return_2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# new_to_return_2
This model is a fine-tuned version of [google-bert/bert-base-cased](https://huggingface.co/google-bert/bert-base-cased) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9778
- Accuracy: 0.5458
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 115 | 1.0336 | 0.5 |
| No log | 2.0 | 230 | 0.9290 | 0.5430 |
| No log | 3.0 | 345 | 0.9143 | 0.5572 |
| No log | 4.0 | 460 | 0.9428 | 0.5496 |
| 0.893 | 5.0 | 575 | 0.9778 | 0.5458 |
### Framework versions
- Transformers 4.41.2
- Pytorch 2.3.0+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1
|
hw2942/bert-base-chinese-climate-transition-physical-risk-prediction-1 | hw2942 | 2024-07-17T09:01:50Z | 108 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"climate",
"zh",
"dataset:hw2942/climate-transition-risk_0-physical-risk_1",
"base_model:google-bert/bert-base-chinese",
"base_model:finetune:google-bert/bert-base-chinese",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2024-07-16T08:19:24Z | ---
base_model: bert-base-chinese
metrics:
- accuracy
tags:
- generated_from_trainer
- climate
model-index:
- name: bert-base-chinese-climate-transition-physical-risk-prediction-1
results: []
datasets:
- hw2942/climate-transition-risk_0-physical-risk_1
language:
- zh
pipeline_tag: text-classification
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-chinese-climate-transition-physical-risk-prediction-1
This model is a fine-tuned version of [bert-base-chinese](https://huggingface.co/bert-base-chinese) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0001
- Accuracy: 1.0
## Model description
Classifies a Chinese sentence as describing either climate transition risk or physical risk.
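A minimal usage sketch with the `transformers` pipeline (the example sentence is illustrative; check the model config for the actual id-to-label mapping):
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="hw2942/bert-base-chinese-climate-transition-physical-risk-prediction-1",
)

# Per the training dataset name, label 0 = transition risk and label 1 = physical
# risk (an assumption; verify against the model's id2label mapping).
print(classifier("台风导致沿海工厂停产"))  # e.g. [{'label': 'LABEL_1', 'score': ...}]
```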
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 57 | 0.6517 | 0.88 |
| No log | 2.0 | 114 | 0.1019 | 0.98 |
| No log | 3.0 | 171 | 0.0003 | 1.0 |
| No log | 4.0 | 228 | 0.0002 | 1.0 |
| No log | 5.0 | 285 | 0.0001 | 1.0 |
| No log | 6.0 | 342 | 0.0001 | 1.0 |
| No log | 7.0 | 399 | 0.0001 | 1.0 |
| No log | 8.0 | 456 | 0.0001 | 1.0 |
| 0.0465 | 9.0 | 513 | 0.0001 | 1.0 |
| 0.0465 | 10.0 | 570 | 0.0001 | 1.0 |
### Framework versions
- Transformers 4.41.2
- Pytorch 2.3.0+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1 |
Niggendar/InLightMix | Niggendar | 2024-07-17T08:59:09Z | 95 | 0 | diffusers | [
"diffusers",
"safetensors",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
]
| text-to-image | 2024-07-17T08:50:37Z | ---
library_name: diffusers
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 𧨠diffusers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
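The tags indicate a `StableDiffusionXLPipeline` checkpoint, so loading presumably follows the usual diffusers pattern (a sketch; the prompt and dtype are assumptions):
```python
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "Niggendar/InLightMix", torch_dtype=torch.float16
).to("cuda")

image = pipe("a sunlit studio interior, soft light").images[0]  # illustrative prompt
image.save("out.png")
```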
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
cadene/2024_07_16_diffusion_koch_pick_place_lego_simple_v2_latency_040000 | cadene | 2024-07-17T08:51:24Z | 51 | 0 | transformers | [
"transformers",
"safetensors",
"model_hub_mixin",
"pytorch_model_hub_mixin",
"endpoints_compatible",
"region:us"
]
| null | 2024-07-17T08:51:19Z | ---
tags:
- model_hub_mixin
- pytorch_model_hub_mixin
---
This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration:
- Library: [More Information Needed]
- Docs: [More Information Needed] |
naserahmadi/adapt-fa-t5-small | naserahmadi | 2024-07-17T08:44:17Z | 162 | 0 | transformers | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"en",
"fa",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2023-10-04T06:58:25Z | ---
license: mit
language:
- en
- fa
library_name: transformers
pipeline_tag: text2text-generation
---
This model was created by keeping only the Persian and English token embeddings of the mT5-small model, following the instructions in this [link](https://towardsdatascience.com/how-to-adapt-a-multilingual-t5-model-for-a-single-language-b9f94f3d9c90). Trimming the vocabulary shrinks the embedding matrix while preserving the weights of the retained tokens.
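A minimal loading sketch (the example input is illustrative; the card does not document a specific downstream task):
```python
from transformers import AutoTokenizer, T5ForConditionalGeneration

tokenizer = AutoTokenizer.from_pretrained("naserahmadi/adapt-fa-t5-small")
model = T5ForConditionalGeneration.from_pretrained("naserahmadi/adapt-fa-t5-small")

# Illustrative Persian prompt ("translate:"); quality depends on further fine-tuning.
inputs = tokenizer("ترجمه کنید: Hello, world!", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
|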
TopGxn/llama3_Inst_8B_septupled_json_MA | TopGxn | 2024-07-17T08:43:38Z | 78 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"trl",
"sft",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
]
| text-generation | 2024-07-17T07:16:09Z | ---
library_name: transformers
tags:
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a π€ transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
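Given the 4-bit bitsandbytes tags, loading presumably looks like the sketch below (an assumption; if the checkpoint already embeds a quantization config, a plain `from_pretrained` call suffices):
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "TopGxn/llama3_Inst_8B_septupled_json_MA"

# 4-bit loading to match the repo's bitsandbytes tags (an assumption about intended use).
bnb_config = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.bfloat16)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, quantization_config=bnb_config, device_map="auto"
)

messages = [{"role": "user", "content": "Return a JSON object with keys a and b."}]  # illustrative
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
print(tokenizer.decode(model.generate(inputs, max_new_tokens=64)[0], skip_special_tokens=True))
```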
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |