modelId: string | author: string | last_modified: timestamp[us, tz=UTC] | downloads: int64 | likes: int64 | library_name: string | tags: sequence | pipeline_tag: string | createdAt: timestamp[us, tz=UTC] | card: string
---|---|---|---|---|---|---|---|---|---
ppbuck/rb | ppbuck | 2024-05-14T23:59:08Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2024-05-14T23:57:54Z | ---
license: apache-2.0
---
|
superlazycoder/test_trainer | superlazycoder | 2024-05-14T23:57:40Z | 182 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-cased",
"base_model:finetune:google-bert/bert-base-cased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-05-14T23:57:36Z | ---
license: apache-2.0
base_model: bert-base-cased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: test_trainer
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# test_trainer
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5744
- Accuracy: 0.28
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a matching `TrainingArguments` sketch follows the list):
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
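For reference, here is a minimal `TrainingArguments` sketch (an addition, not part of the original card) mirroring the values above; `output_dir` is an assumption.
```python
# A minimal sketch of TrainingArguments matching the listed hyperparameters.
# Adam betas=(0.9, 0.999) and epsilon=1e-08 are the transformers defaults.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="test_trainer",      # assumed output directory
    learning_rate=5e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=3.0,
)
```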
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 13 | 1.6041 | 0.25 |
| No log | 2.0 | 26 | 1.5780 | 0.29 |
| No log | 3.0 | 39 | 1.5744 | 0.28 |
### Framework versions
- Transformers 4.40.1
- Pytorch 2.3.0
- Datasets 2.19.1
- Tokenizers 0.19.1
|
alparslanahmed/phi3-tr-lora_model | alparslanahmed | 2024-05-14T23:53:04Z | 76 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"conversational",
"en",
"base_model:unsloth/Phi-3-mini-4k-instruct-bnb-4bit",
"base_model:finetune:unsloth/Phi-3-mini-4k-instruct-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-14T21:58:39Z | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
base_model: unsloth/Phi-3-mini-4k-instruct-bnb-4bit
---
# Uploaded model
- **Developed by:** alparslanahmed
- **License:** apache-2.0
- **Finetuned from model:** unsloth/Phi-3-mini-4k-instruct-bnb-4bit
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
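A minimal usage sketch (an addition, not part of the original card; it assumes the repo contains merged weights loadable directly with `transformers`, and the prompt is only an illustration):
```python
# A hedged sketch: load the fine-tune and generate text. device_map="auto"
# assumes the `accelerate` package is installed.
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("alparslanahmed/phi3-tr-lora_model")
model = AutoModelForCausalLM.from_pretrained("alparslanahmed/phi3-tr-lora_model", device_map="auto")

inputs = tok("Merhaba, kendini tanıtır mısın?", return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=64)
print(tok.decode(out[0], skip_special_tokens=True))
```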
|
Litzy619/G0514HMA23H | Litzy619 | 2024-05-14T23:45:13Z | 0 | 0 | null | [
"safetensors",
"generated_from_trainer",
"base_model:google/gemma-2b",
"base_model:finetune:google/gemma-2b",
"license:gemma",
"region:us"
] | null | 2024-05-14T22:57:35Z | ---
license: gemma
base_model: google/gemma-2b
tags:
- generated_from_trainer
model-index:
- name: G0514HMA23H
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# G0514HMA23H
This model is a fine-tuned version of [google/gemma-2b](https://huggingface.co/google/gemma-2b) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: -17.8940
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine_with_restarts
- lr_scheduler_warmup_steps: 80
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.8474 | 0.09 | 10 | -0.1051 |
| -1.133 | 0.18 | 20 | -2.6746 |
| -4.0726 | 0.27 | 30 | -5.9791 |
| -7.4096 | 0.36 | 40 | -9.2948 |
| -10.5075 | 0.45 | 50 | -12.1604 |
| -13.1872 | 0.54 | 60 | -14.5712 |
| -15.3198 | 0.63 | 70 | -16.1577 |
| -16.5601 | 0.73 | 80 | -17.0062 |
| -17.1749 | 0.82 | 90 | -17.3669 |
| -17.4459 | 0.91 | 100 | -17.5280 |
| -17.5636 | 1.0 | 110 | -17.6099 |
| -17.6344 | 1.09 | 120 | -17.6593 |
| -17.6708 | 1.18 | 130 | -17.6865 |
| -17.6958 | 1.27 | 140 | -17.7099 |
| -17.7175 | 1.36 | 150 | -17.7283 |
| -17.7369 | 1.45 | 160 | -17.7437 |
| -17.7549 | 1.54 | 170 | -17.7646 |
| -17.7752 | 1.63 | 180 | -17.7824 |
| -17.785 | 1.72 | 190 | -17.7920 |
| -17.8012 | 1.81 | 200 | -17.8080 |
| -17.8109 | 1.9 | 210 | -17.8184 |
| -17.8264 | 1.99 | 220 | -17.8386 |
| -17.85 | 2.08 | 230 | -17.8633 |
| -17.8652 | 2.18 | 240 | -17.8736 |
| -17.8735 | 2.27 | 250 | -17.8818 |
| -17.8791 | 2.36 | 260 | -17.8860 |
| -17.8821 | 2.45 | 270 | -17.8882 |
| -17.8883 | 2.54 | 280 | -17.8912 |
| -17.891 | 2.63 | 290 | -17.8924 |
| -17.8909 | 2.72 | 300 | -17.8933 |
| -17.8886 | 2.81 | 310 | -17.8938 |
| -17.8926 | 2.9 | 320 | -17.8939 |
| -17.8924 | 2.99 | 330 | -17.8940 |
### Framework versions
- Transformers 4.36.0.dev0
- Pytorch 2.1.2+cu121
- Datasets 2.14.6
- Tokenizers 0.14.0
|
Litzy619/G0514HMA24H | Litzy619 | 2024-05-14T23:45:05Z | 0 | 0 | null | [
"safetensors",
"generated_from_trainer",
"base_model:google/gemma-2b",
"base_model:finetune:google/gemma-2b",
"license:gemma",
"region:us"
] | null | 2024-05-14T22:57:35Z | ---
license: gemma
base_model: google/gemma-2b
tags:
- generated_from_trainer
model-index:
- name: G0514HMA24H
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# G0514HMA24H
This model is a fine-tuned version of [google/gemma-2b](https://huggingface.co/google/gemma-2b) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: -17.9022
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine_with_restarts
- lr_scheduler_warmup_steps: 80
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.8009 | 0.09 | 10 | -0.2103 |
| -1.0796 | 0.18 | 20 | -2.5087 |
| -3.9095 | 0.27 | 30 | -5.8273 |
| -7.2606 | 0.36 | 40 | -9.1358 |
| -10.3742 | 0.45 | 50 | -12.0428 |
| -13.0694 | 0.54 | 60 | -14.4715 |
| -15.2349 | 0.63 | 70 | -16.0930 |
| -16.5217 | 0.73 | 80 | -16.9969 |
| -17.1884 | 0.82 | 90 | -17.3938 |
| -17.4707 | 0.91 | 100 | -17.5554 |
| -17.5868 | 1.0 | 110 | -17.6315 |
| -17.6564 | 1.09 | 120 | -17.6735 |
| -17.688 | 1.18 | 130 | -17.7003 |
| -17.709 | 1.27 | 140 | -17.7200 |
| -17.7262 | 1.36 | 150 | -17.7362 |
| -17.7401 | 1.45 | 160 | -17.7476 |
| -17.7557 | 1.54 | 170 | -17.7664 |
| -17.7777 | 1.63 | 180 | -17.7896 |
| -17.7948 | 1.72 | 190 | -17.8078 |
| -17.8232 | 1.81 | 200 | -17.8337 |
| -17.8393 | 1.9 | 210 | -17.8518 |
| -17.8561 | 1.99 | 220 | -17.8679 |
| -17.8673 | 2.08 | 230 | -17.8730 |
| -17.8748 | 2.18 | 240 | -17.8887 |
| -17.8874 | 2.27 | 250 | -17.8931 |
| -17.8901 | 2.36 | 260 | -17.8972 |
| -17.8918 | 2.45 | 270 | -17.8974 |
| -17.8952 | 2.54 | 280 | -17.9002 |
| -17.898 | 2.63 | 290 | -17.9012 |
| -17.8994 | 2.72 | 300 | -17.9019 |
| -17.8999 | 2.81 | 310 | -17.9020 |
| -17.9012 | 2.9 | 320 | -17.9022 |
| -17.8998 | 2.99 | 330 | -17.9022 |
### Framework versions
- Transformers 4.36.0.dev0
- Pytorch 2.1.2+cu121
- Datasets 2.14.6
- Tokenizers 0.14.0
|
allenai/specter2_aug2023refresh_adhoc_query | allenai | 2024-05-14T23:39:46Z | 150 | 1 | adapter-transformers | [
"adapter-transformers",
"bert",
"dataset:allenai/scirepeval",
"region:us"
] | null | 2023-08-14T19:28:11Z | ---
tags:
- bert
- adapter-transformers
datasets:
- allenai/scirepeval
---
## SPECTER2
<!-- Provide a quick summary of what the model is/does. -->
SPECTER2 is a family of models that succeeds [SPECTER](https://huggingface.co/allenai/specter) and is capable of generating task-specific embeddings for scientific tasks when paired with [adapters](https://huggingface.co/models?search=allenai/specter-2_).
Given the combination of title and abstract of a scientific paper, or a short textual query, the model can be used to generate effective embeddings for downstream applications.
**Note:** For general embedding purposes, please use [allenai/specter2](https://huggingface.co/allenai/specter2).
**To get the best performance on a downstream task type, please load the associated adapter with the base model as in the example below.**
**Dec 2023 Update:**
Model usage has been updated to be compatible with the latest versions of the transformers and adapters (the newly released update to adapter-transformers) libraries.
**\*\*\*\*\*\*Update\*\*\*\*\*\***
This update introduces a new set of SPECTER 2.0 models with the base transformer encoder pre-trained on an extended citation dataset containing more recent papers.
For benchmarking purposes, please use the existing SPECTER 2.0 models without the **aug2023refresh** suffix, viz. [allenai/specter2_base](https://huggingface.co/allenai/specter2_base).
# Adapter `allenai/specter2_aug2023refresh_adhoc_query` for `allenai/specter2_aug2023refresh_base`
An [adapter](https://adapterhub.ml) for the `allenai/specter2_aug2023refresh_base` model that was trained on the [allenai/scirepeval](https://huggingface.co/datasets/allenai/scirepeval/) dataset.
This adapter was created for usage with the **[adapters](https://github.com/adapter-hub/adapters)** library.
## Usage
First, install `adapters`:
```
pip install -U adapters
```
_Note: adapters is built as an add-on to transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml)_
Now, the adapter can be loaded and activated like this:
```python
from adapters import AutoAdapterModel
model = AutoAdapterModel.from_pretrained("allenai/specter2_aug2023refresh_base")
adapter_name = model.load_adapter("allenai/specter2_aug2023refresh_adhoc_query", source="hf", set_active=True)
```
# Model Details
## Model Description
SPECTER 2.0 has been trained on over 6M triplets of scientific paper citations, which are available [here](https://huggingface.co/datasets/allenai/scirepeval/viewer/cite_prediction_new/evaluation).
It is then trained with additionally attached task-format-specific adapter modules on all the [SciRepEval](https://huggingface.co/datasets/allenai/scirepeval) training tasks.
Task Formats trained on:
- Classification
- Regression
- Proximity
- Adhoc Search
**This is the adhoc search query specific adapter. For tasks where papers have to be retrieved for a short textual query, use this adapter to encode the query and [allenai/specter2_proximity](https://huggingface.co/allenai/specter2_proximity) to encode the candidates.**
It builds on the work done in [SciRepEval: A Multi-Format Benchmark for Scientific Document Representations](https://api.semanticscholar.org/CorpusID:254018137) and we evaluate the trained model on this benchmark as well.
- **Developed by:** Amanpreet Singh, Mike D'Arcy, Arman Cohan, Doug Downey, Sergey Feldman
- **Shared by:** Allen AI
- **Model type:** bert-base-uncased + adapters
- **License:** Apache 2.0
- **Finetuned from model:** [allenai/scibert](https://huggingface.co/allenai/scibert_scivocab_uncased).
## Model Sources
<!-- Provide the basic links for the model. -->
- **Repository:** [https://github.com/allenai/SPECTER2_0](https://github.com/allenai/SPECTER2_0)
- **Paper:** [https://api.semanticscholar.org/CorpusID:254018137](https://api.semanticscholar.org/CorpusID:254018137)
- **Demo:** [Usage](https://github.com/allenai/SPECTER2_0/blob/main/README.md)
# Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
## Direct Use
|Model|Name and HF link|Description|
|--|--|--|
|Proximity*|[allenai/specter2_aug2023refresh](https://huggingface.co/allenai/specter2_aug2023refresh)|Encode papers as queries and candidates eg. Link Prediction, Nearest Neighbor Search|
|Adhoc Query|[allenai/specter2_aug2023refresh_adhoc_query](https://huggingface.co/allenai/specter2_aug2023refresh_adhoc_query)|Encode short raw text queries for search tasks. (Candidate papers can be encoded with the proximity adapter)|
|Classification|[allenai/specter2_aug2023refresh_classification](https://huggingface.co/allenai/specter2_aug2023refresh_classification)|Encode papers to feed into linear classifiers as features|
|Regression|[allenai/specter2_aug2023refresh_regression](https://huggingface.co/allenai/specter2_aug2023refresh_regression)|Encode papers to feed into linear regressors as features|
*Proximity model should suffice for downstream task types not mentioned above
```python
from typing import List

import torch
from transformers import AutoTokenizer
from adapters import AutoAdapterModel
from sklearn.metrics.pairwise import euclidean_distances

def embed_input(text_batch: List[str]):
    # preprocess the input
    inputs = tokenizer(text_batch, padding=True, truncation=True,
                       return_tensors="pt", return_token_type_ids=False, max_length=512)
    with torch.no_grad():
        output = model(**inputs)
    # take the first token of each sequence as the embedding
    return output.last_hidden_state[:, 0, :]

# load model and tokenizer
tokenizer = AutoTokenizer.from_pretrained('allenai/specter2_aug2023refresh_base')
# load base model
model = AutoAdapterModel.from_pretrained('allenai/specter2_aug2023refresh_base')

# load the query adapter, provide an identifier for it in the load_as argument and activate it
model.load_adapter("allenai/specter2_aug2023refresh_adhoc_query", source="hf", load_as="specter2_adhoc_query", set_active=True)
query = ["Bidirectional transformers"]
query_embedding = embed_input(query)

# load the proximity adapter, provide an identifier for it in the load_as argument and activate it
model.load_adapter("allenai/specter2_aug2023refresh", source="hf", load_as="specter2_proximity", set_active=True)
papers = [{'title': 'BERT', 'abstract': 'We introduce a new language representation model called BERT'},
          {'title': 'Attention is all you need', 'abstract': 'The dominant sequence transduction models are based on complex recurrent or convolutional neural networks'}]
# concatenate title and abstract
text_papers_batch = [d['title'] + tokenizer.sep_token + (d.get('abstract') or '') for d in papers]
paper_embeddings = embed_input(text_papers_batch)

# calculate L2 distance between the query and the papers
l2_distance = euclidean_distances(paper_embeddings.numpy(), query_embedding.numpy()).flatten()
```
## Downstream Use
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
For evaluation and downstream usage, please refer to [https://github.com/allenai/scirepeval/blob/main/evaluation/INFERENCE.md](https://github.com/allenai/scirepeval/blob/main/evaluation/INFERENCE.md).
# Training Details
## Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
The base model is trained on citation links between papers, and the adapters are trained on 8 large-scale tasks across the four formats.
All the data is part of the SciRepEval benchmark and is available [here](https://huggingface.co/datasets/allenai/scirepeval).
The citation links are triplets of the form
```json
{"query": {"title": ..., "abstract": ...}, "pos": {"title": ..., "abstract": ...}, "neg": {"title": ..., "abstract": ...}}
```
consisting of a query paper, a positive citation, and a negative paper, which can be from the same or a different field of study as the query, or a citation of a citation.
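To make the objective concrete, here is a hedged sketch (an addition; see the SPECTER paper for the exact formulation) of the kind of triplet margin loss such triplets are used with:
```python
# A minimal sketch, assuming PyTorch: pull the query embedding toward the
# positive (cited) paper and push it away from the negative by a margin.
import torch
import torch.nn.functional as F

def triplet_loss(q, pos, neg, margin: float = 1.0):
    d_pos = F.pairwise_distance(q, pos)  # query <-> positive citation
    d_neg = F.pairwise_distance(q, neg)  # query <-> negative paper
    return torch.clamp(d_pos - d_neg + margin, min=0).mean()
```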
## Training Procedure
Please refer to the [SPECTER paper](https://api.semanticscholar.org/CorpusID:215768677).
### Training Hyperparameters
The model is trained in two stages using [SciRepEval](https://github.com/allenai/scirepeval/blob/main/training/TRAINING.md):
- Base Model: First, a base model is trained on the above citation triplets.
  `batch size = 1024, max input length = 512, learning rate = 2e-5, epochs = 2, warmup steps = 10%, fp16`
- Adapters: Thereafter, task-format-specific adapters are trained on the SciRepEval training tasks, where 600K triplets are sampled from above and added to the training data as well.
  `batch size = 256, max input length = 512, learning rate = 1e-4, epochs = 6, warmup = 1000 steps, fp16`
# Evaluation
We evaluate the model on [SciRepEval](https://github.com/allenai/scirepeval), a large-scale evaluation benchmark for scientific embedding tasks, which has SciDocs as a subset.
We also evaluate and establish a new SoTA on [MDCR](https://github.com/zoranmedic/mdcr), a large scale citation recommendation benchmark.
|Model|SciRepEval In-Train|SciRepEval Out-of-Train|SciRepEval Avg|MDCR(MAP, Recall@5)|
|--|--|--|--|--|
|[BM-25](https://api.semanticscholar.org/CorpusID:252199740)|n/a|n/a|n/a|(33.7, 28.5)|
|[SPECTER](https://huggingface.co/allenai/specter)|54.7|57.4|68.0|(30.6, 25.5)|
|[SciNCL](https://huggingface.co/malteos/scincl)|55.6|57.8|69.0|(32.6, 27.3)|
|[SciRepEval-Adapters](https://huggingface.co/models?search=scirepeval)|61.9|59.0|70.9|(35.3, 29.6)|
|[SPECTER 2.0-Adapters](https://huggingface.co/models?search=allenai/specter-2)|**62.3**|**59.2**|**71.2**|**(38.4, 33.0)**|
Please cite the following works if you end up using SPECTER 2.0:
[SPECTER paper](https://api.semanticscholar.org/CorpusID:215768677):
```bibtex
@inproceedings{specter2020cohan,
title={{SPECTER: Document-level Representation Learning using Citation-informed Transformers}},
author={Arman Cohan and Sergey Feldman and Iz Beltagy and Doug Downey and Daniel S. Weld},
booktitle={ACL},
year={2020}
}
```
[SciRepEval paper](https://api.semanticscholar.org/CorpusID:254018137)
```bibtex
@inproceedings{Singh2022SciRepEvalAM,
title={SciRepEval: A Multi-Format Benchmark for Scientific Document Representations},
author={Amanpreet Singh and Mike D'Arcy and Arman Cohan and Doug Downey and Sergey Feldman},
booktitle={Conference on Empirical Methods in Natural Language Processing},
year={2022},
url={https://api.semanticscholar.org/CorpusID:254018137}
}
```
|
allenai/specter2_aug2023refresh_base | allenai | 2024-05-14T23:39:35Z | 341,878 | 3 | transformers | [
"transformers",
"pytorch",
"bert",
"feature-extraction",
"dataset:allenai/scirepeval",
"license:apache-2.0",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | feature-extraction | 2023-08-14T19:16:06Z | ---
license: apache-2.0
datasets:
- allenai/scirepeval
---
## SPECTER2
<!-- Provide a quick summary of what the model is/does. -->
SPECTER2 is a family of models that succeeds [SPECTER](https://huggingface.co/allenai/specter) and is capable of generating task-specific embeddings for scientific tasks when paired with [adapters](https://huggingface.co/models?search=allenai/specter-2_).
This is the base encoder to be used with the relevant task-specific adapters.
Given the combination of title and abstract of a scientific paper, or a short textual query, the model can be used to generate effective embeddings for downstream applications.
**Note:** For general embedding purposes, please use [allenai/specter2](https://huggingface.co/allenai/specter2).
**To get the best performance on a downstream task type, please load the associated adapter with the base model as in the example below.**
**Dec 2023 Update:**
Model usage has been updated to be compatible with the latest versions of the transformers and adapters (the newly released update to adapter-transformers) libraries.
**\*\*\*\*\*\*Update\*\*\*\*\*\***
This update introduces a new set of SPECTER2 models with the base transformer encoder pre-trained on an extended citation dataset containing more recent papers.
For benchmarking purposes, please use the existing SPECTER2 [models](https://huggingface.co/allenai/specter2) without the **aug2023refresh** suffix.
# Model Details
## Model Description
SPECTER2 has been trained on over 6M triplets of scientific paper citations, which are available [here](https://huggingface.co/datasets/allenai/scirepeval/viewer/cite_prediction_new/evaluation).
It is then trained with additionally attached task-format-specific adapter modules on all the [SciRepEval](https://huggingface.co/datasets/allenai/scirepeval) training tasks.
Task Formats trained on:
- Classification
- Regression
- Proximity
- Adhoc Search
It builds on the work done in [SciRepEval: A Multi-Format Benchmark for Scientific Document Representations](https://api.semanticscholar.org/CorpusID:254018137) and we evaluate the trained model on this benchmark as well.
- **Developed by:** Amanpreet Singh, Mike D'Arcy, Arman Cohan, Doug Downey, Sergey Feldman
- **Shared by:** Allen AI
- **Model type:** bert-base-uncased + adapters
- **License:** Apache 2.0
- **Finetuned from model:** [allenai/scibert](https://huggingface.co/allenai/scibert_scivocab_uncased).
## Model Sources
<!-- Provide the basic links for the model. -->
- **Repository:** [https://github.com/allenai/SPECTER2](https://github.com/allenai/SPECTER2)
- **Paper:** [https://api.semanticscholar.org/CorpusID:254018137](https://api.semanticscholar.org/CorpusID:254018137)
- **Demo:** [Usage](https://github.com/allenai/SPECTER2/blob/main/README.md)
# Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
## Direct Use
|Model|Name and HF link|Description|
|--|--|--|
|Proximity*|[allenai/specter2_aug2023refresh](https://huggingface.co/allenai/specter2_aug2023refresh)|Encode papers as queries and candidates eg. Link Prediction, Nearest Neighbor Search|
|Adhoc Query|[allenai/specter2_aug2023refresh_adhoc_query](https://huggingface.co/allenai/specter2_aug2023refresh_adhoc_query)|Encode short raw text queries for search tasks. (Candidate papers can be encoded with the proximity adapter)|
|Classification|[allenai/specter2_aug2023refresh_classification](https://huggingface.co/allenai/specter2_aug2023refresh_classification)|Encode papers to feed into linear classifiers as features|
|Regression|[allenai/specter2_aug2023refresh_regression](https://huggingface.co/allenai/specter2_aug2023refresh_regression)|Encode papers to feed into linear regressors as features|
*Proximity model should suffice for downstream task types not mentioned above
```python
from transformers import AutoTokenizer
from adapters import AutoAdapterModel
# load model and tokenizer
tokenizer = AutoTokenizer.from_pretrained('allenai/specter2_aug2023refresh_base')
#load base model
model = AutoAdapterModel.from_pretrained('allenai/specter2_aug2023refresh_base')
#load the adapter(s) as per the required task, provide an identifier for the adapter in load_as argument and activate it
model.load_adapter("allenai/specter2_aug2023refresh", source="hf", load_as="proximity", set_active=True)
#other possibilities: allenai/specter2_aug2023refresh_<classification|regression|adhoc_query>
papers = [{'title': 'BERT', 'abstract': 'We introduce a new language representation model called BERT'},
{'title': 'Attention is all you need', 'abstract': ' The dominant sequence transduction models are based on complex recurrent or convolutional neural networks'}]
# concatenate title and abstract
text_batch = [d['title'] + tokenizer.sep_token + (d.get('abstract') or '') for d in papers]
# preprocess the input
inputs = tokenizer(text_batch, padding=True, truncation=True,
                   return_tensors="pt", return_token_type_ids=False, max_length=512)
output = model(**inputs)
# take the first token in the batch as the embedding
embeddings = output.last_hidden_state[:, 0, :]
```
## Downstream Use
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
For evaluation and downstream usage, please refer to [https://github.com/allenai/scirepeval/blob/main/evaluation/INFERENCE.md](https://github.com/allenai/scirepeval/blob/main/evaluation/INFERENCE.md).
# Training Details
## Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
The base model is trained on citation links between papers, and the adapters are trained on 8 large-scale tasks across the four formats.
All the data is part of the SciRepEval benchmark and is available [here](https://huggingface.co/datasets/allenai/scirepeval).
The citation links are triplets of the form
```json
{"query": {"title": ..., "abstract": ...}, "pos": {"title": ..., "abstract": ...}, "neg": {"title": ..., "abstract": ...}}
```
consisting of a query paper, a positive citation, and a negative paper, which can be from the same or a different field of study as the query, or a citation of a citation.
## Training Procedure
Please refer to the [SPECTER paper](https://api.semanticscholar.org/CorpusID:215768677).
### Training Hyperparameters
The model is trained in two stages using [SciRepEval](https://github.com/allenai/scirepeval/blob/main/training/TRAINING.md):
- Base Model: First, a base model is trained on the above citation triplets.
  `batch size = 1024, max input length = 512, learning rate = 2e-5, epochs = 2, warmup steps = 10%, fp16`
- Adapters: Thereafter, task-format-specific adapters are trained on the SciRepEval training tasks, where 600K triplets are sampled from above and added to the training data as well.
  `batch size = 256, max input length = 512, learning rate = 1e-4, epochs = 6, warmup = 1000 steps, fp16`
# Evaluation
We evaluate the model on [SciRepEval](https://github.com/allenai/scirepeval), a large-scale evaluation benchmark for scientific embedding tasks, which has SciDocs as a subset.
We also evaluate and establish a new SoTA on [MDCR](https://github.com/zoranmedic/mdcr), a large scale citation recommendation benchmark.
|Model|SciRepEval In-Train|SciRepEval Out-of-Train|SciRepEval Avg|MDCR(MAP, Recall@5)|
|--|--|--|--|--|
|[BM-25](https://api.semanticscholar.org/CorpusID:252199740)|n/a|n/a|n/a|(33.7, 28.5)|
|[SPECTER](https://huggingface.co/allenai/specter)|54.7|57.4|68.0|(30.6, 25.5)|
|[SciNCL](https://huggingface.co/malteos/scincl)|55.6|57.8|69.0|(32.6, 27.3)|
|[SciRepEval-Adapters](https://huggingface.co/models?search=scirepeval)|61.9|59.0|70.9|(35.3, 29.6)|
|[SPECTER2-Adapters](https://huggingface.co/models?search=allenai/specter-2)|**62.3**|**59.2**|**71.2**|**(38.4, 33.0)**|
Please cite the following works if you end up using SPECTER2:
[SPECTER paper](https://api.semanticscholar.org/CorpusID:215768677):
```bibtex
@inproceedings{specter2020cohan,
title={{SPECTER: Document-level Representation Learning using Citation-informed Transformers}},
author={Arman Cohan and Sergey Feldman and Iz Beltagy and Doug Downey and Daniel S. Weld},
booktitle={ACL},
year={2020}
}
```
[SciRepEval paper](https://api.semanticscholar.org/CorpusID:254018137)
```bibtex
@inproceedings{Singh2022SciRepEvalAM,
title={SciRepEval: A Multi-Format Benchmark for Scientific Document Representations},
author={Amanpreet Singh and Mike D'Arcy and Arman Cohan and Doug Downey and Sergey Feldman},
booktitle={Conference on Empirical Methods in Natural Language Processing},
year={2022},
url={https://api.semanticscholar.org/CorpusID:254018137}
}
```
|
Shadowmachete/asr_training | Shadowmachete | 2024-05-14T23:38:55Z | 116 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2024-05-14T11:47:50Z | ---
tags:
- generated_from_trainer
model-index:
- name: asr_training
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# asr_training
This model was trained from scratch on an unspecified dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.40.1
- Pytorch 2.3.0+cu118
- Datasets 2.19.0
- Tokenizers 0.19.1
|
mradermacher/Med-Yi-1.5-9B-GGUF | mradermacher | 2024-05-14T23:36:01Z | 19 | 0 | transformers | [
"transformers",
"gguf",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"sft",
"en",
"dataset:Kabatubare/medical-alpaca",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-05-14T21:02:15Z | ---
base_model: NotAiLOL/Med-Yi-1.5-9B
datasets:
- Kabatubare/medical-alpaca
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/NotAiLOL/Med-Yi-1.5-9B
<!-- provided-files -->
Weighted/imatrix quants do not seem to be available (from me) at this time. If they do not show up within a week or so after the static ones, I have probably not planned them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
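As a concrete starting point, here is a hedged sketch (an addition, not part of the original card) that downloads one of the quants listed below with `huggingface_hub` and runs it via the `llama-cpp-python` bindings; the file name is taken from the Q4_K_M row of the table.
```python
# A minimal sketch, assuming `huggingface-hub` and `llama-cpp-python` are installed.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

gguf_path = hf_hub_download(
    repo_id="mradermacher/Med-Yi-1.5-9B-GGUF",
    filename="Med-Yi-1.5-9B.Q4_K_M.gguf",  # name from the quant table below
)
llm = Llama(model_path=gguf_path, n_ctx=2048)
out = llm("Question: What are common symptoms of iron deficiency?\nAnswer:", max_tokens=128)
print(out["choices"][0]["text"])
```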
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar-sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Med-Yi-1.5-9B-GGUF/resolve/main/Med-Yi-1.5-9B.Q2_K.gguf) | Q2_K | 3.5 | |
| [GGUF](https://huggingface.co/mradermacher/Med-Yi-1.5-9B-GGUF/resolve/main/Med-Yi-1.5-9B.IQ3_XS.gguf) | IQ3_XS | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/Med-Yi-1.5-9B-GGUF/resolve/main/Med-Yi-1.5-9B.Q3_K_S.gguf) | Q3_K_S | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/Med-Yi-1.5-9B-GGUF/resolve/main/Med-Yi-1.5-9B.IQ3_S.gguf) | IQ3_S | 4.0 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Med-Yi-1.5-9B-GGUF/resolve/main/Med-Yi-1.5-9B.IQ3_M.gguf) | IQ3_M | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/Med-Yi-1.5-9B-GGUF/resolve/main/Med-Yi-1.5-9B.Q3_K_M.gguf) | Q3_K_M | 4.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Med-Yi-1.5-9B-GGUF/resolve/main/Med-Yi-1.5-9B.Q3_K_L.gguf) | Q3_K_L | 4.8 | |
| [GGUF](https://huggingface.co/mradermacher/Med-Yi-1.5-9B-GGUF/resolve/main/Med-Yi-1.5-9B.IQ4_XS.gguf) | IQ4_XS | 4.9 | |
| [GGUF](https://huggingface.co/mradermacher/Med-Yi-1.5-9B-GGUF/resolve/main/Med-Yi-1.5-9B.Q4_K_S.gguf) | Q4_K_S | 5.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Med-Yi-1.5-9B-GGUF/resolve/main/Med-Yi-1.5-9B.Q4_K_M.gguf) | Q4_K_M | 5.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Med-Yi-1.5-9B-GGUF/resolve/main/Med-Yi-1.5-9B.Q5_K_S.gguf) | Q5_K_S | 6.2 | |
| [GGUF](https://huggingface.co/mradermacher/Med-Yi-1.5-9B-GGUF/resolve/main/Med-Yi-1.5-9B.Q5_K_M.gguf) | Q5_K_M | 6.4 | |
| [GGUF](https://huggingface.co/mradermacher/Med-Yi-1.5-9B-GGUF/resolve/main/Med-Yi-1.5-9B.Q6_K.gguf) | Q6_K | 7.3 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Med-Yi-1.5-9B-GGUF/resolve/main/Med-Yi-1.5-9B.Q8_0.gguf) | Q8_0 | 9.5 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Med-Yi-1.5-9B-GGUF/resolve/main/Med-Yi-1.5-9B.f16.gguf) | f16 | 17.8 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
MarkBW/lola-sawyer-xl | MarkBW | 2024-05-14T23:34:43Z | 2 | 1 | diffusers | [
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"template:sd-lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"region:us"
] | text-to-image | 2024-05-14T23:33:49Z | ---
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
- template:sd-lora
widget:
- text: '-'
output:
url: images/VwehRUvM.jpg
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: lola-sawyer
---
# lola-sawyer-xl
<Gallery />
## Trigger words
You should use `lola-sawyer` to trigger the image generation.
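As a usage sketch (an addition, not part of the original card), the LoRA can be applied on top of the SDXL base model with `diffusers`; loading the weights straight from this repo is an assumption, so check the Files & versions tab for the actual file name if it fails.
```python
# A minimal sketch assuming `diffusers` with LoRA support and a CUDA GPU.
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("MarkBW/lola-sawyer-xl")  # assumed to resolve the repo's .safetensors

# `lola-sawyer` is the documented trigger word
image = pipe("lola-sawyer, portrait photo, soft light").images[0]
image.save("lola-sawyer.png")
```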
## Download model
Weights for this model are available in Safetensors format.
[Download](/MarkBW/lola-sawyer-xl/tree/main) them in the Files & versions tab.
|
DucPhanBa/llama2-hf-finetuned-qlora | DucPhanBa | 2024-05-14T23:30:53Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-05-14T23:30:46Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
JoshuaKelleyDs/quickdraw-MobileVITV2-2.0-Finetune | JoshuaKelleyDs | 2024-05-14T23:29:15Z | 143 | 0 | transformers | [
"transformers",
"onnx",
"safetensors",
"mobilevitv2",
"image-classification",
"generated_from_trainer",
"base_model:shehan97/mobilevitv2-2.0-imagenet1k-256",
"base_model:quantized:shehan97/mobilevitv2-2.0-imagenet1k-256",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2024-05-14T23:25:01Z | ---
base_model: shehan97/mobilevitv2-2.0-imagenet1k-256
tags:
- generated_from_trainer
model-index:
- name: quickdraw-MobileVITV2-1.0-Pretrained
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# quickdraw-MobileVITV2-1.0-Pretrained
This model is a fine-tuned version of [shehan97/mobilevitv2-2.0-imagenet1k-256](https://huggingface.co/shehan97/mobilevitv2-2.0-imagenet1k-256) on an unknown dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.9671
- eval_accuracy: 0.7622
- eval_runtime: 16.2585
- eval_samples_per_second: 15376.569
- eval_steps_per_second: 30.077
- epoch: 6.2626
- step: 55048
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0008
- train_batch_size: 512
- eval_batch_size: 512
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10000
- num_epochs: 10
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.40.2
- Pytorch 2.2.1
- Datasets 2.19.1
- Tokenizers 0.19.1
|
bdpoff/llama-3-8B-json-editor | bdpoff | 2024-05-14T23:20:09Z | 2 | 0 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"dataset:bdpoff/webllm-json-editor",
"license:llama3",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-14T22:33:44Z | ---
license: llama3
tags:
- text-generation-inference
- unsloth
- trl
- sft
datasets:
- bdpoff/webllm-json-editor
---
|
Litzy619/G0514HMA13H | Litzy619 | 2024-05-14T23:18:25Z | 0 | 0 | null | [
"safetensors",
"generated_from_trainer",
"base_model:google/gemma-2b",
"base_model:finetune:google/gemma-2b",
"license:gemma",
"region:us"
] | null | 2024-05-14T22:09:40Z | ---
license: gemma
base_model: google/gemma-2b
tags:
- generated_from_trainer
model-index:
- name: G0514HMA13H
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# G0514HMA13H
This model is a fine-tuned version of [google/gemma-2b](https://huggingface.co/google/gemma-2b) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: -17.8971
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine_with_restarts
- lr_scheduler_warmup_steps: 100
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.9162 | 0.09 | 10 | 0.0830 |
| -0.7664 | 0.18 | 20 | -2.0466 |
| -3.27 | 0.27 | 30 | -4.9484 |
| -6.242 | 0.36 | 40 | -7.9963 |
| -9.1799 | 0.45 | 50 | -10.7742 |
| -11.7646 | 0.54 | 60 | -13.2319 |
| -14.1063 | 0.63 | 70 | -15.1473 |
| -15.7143 | 0.73 | 80 | -16.3945 |
| -16.7127 | 0.82 | 90 | -17.0741 |
| -17.2299 | 0.91 | 100 | -17.4041 |
| -17.4683 | 1.0 | 110 | -17.5479 |
| -17.5857 | 1.09 | 120 | -17.6235 |
| -17.6418 | 1.18 | 130 | -17.6631 |
| -17.6771 | 1.27 | 140 | -17.6957 |
| -17.703 | 1.36 | 150 | -17.7160 |
| -17.7218 | 1.45 | 160 | -17.7272 |
| -17.7369 | 1.54 | 170 | -17.7463 |
| -17.7561 | 1.63 | 180 | -17.7646 |
| -17.7704 | 1.72 | 190 | -17.7808 |
| -17.7897 | 1.81 | 200 | -17.7972 |
| -17.8056 | 1.9 | 210 | -17.8223 |
| -17.8326 | 1.99 | 220 | -17.8447 |
| -17.8508 | 2.08 | 230 | -17.8658 |
| -17.8699 | 2.18 | 240 | -17.8773 |
| -17.8777 | 2.27 | 250 | -17.8862 |
| -17.8827 | 2.36 | 260 | -17.8912 |
| -17.889 | 2.45 | 270 | -17.8936 |
| -17.8917 | 2.54 | 280 | -17.8948 |
| -17.8936 | 2.63 | 290 | -17.8942 |
| -17.8949 | 2.72 | 300 | -17.8967 |
| -17.8934 | 2.81 | 310 | -17.8970 |
| -17.8964 | 2.9 | 320 | -17.8971 |
| -17.8956 | 2.99 | 330 | -17.8971 |
### Framework versions
- Transformers 4.36.0.dev0
- Pytorch 2.1.2+cu121
- Datasets 2.14.6
- Tokenizers 0.14.0
|
Danieljacobsen/Helsinki-DA-SV-v7 | Danieljacobsen | 2024-05-14T23:16:55Z | 111 | 0 | transformers | [
"transformers",
"safetensors",
"marian",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2024-05-14T20:10:35Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Nadahass/MREG-Orig-LORA8-13b-4batches-300epochs-q8-3e-4 | Nadahass | 2024-05-14T23:14:15Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-05-14T23:14:12Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
aliosmanbey/llama3-8b-oig-unsloth-merged | aliosmanbey | 2024-05-14T23:11:01Z | 4 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"en",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"base_model:finetune:unsloth/llama-3-8b-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-14T23:04:58Z | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
base_model: unsloth/llama-3-8b-bnb-4bit
---
# Uploaded model
- **Developed by:** aliosmanbey
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
LarryAIDraw/mio2 | LarryAIDraw | 2024-05-14T22:49:55Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2024-05-14T22:48:39Z | ---
license: creativeml-openrail-m
---
https://civitai.com/models/49843/takamiya-miodate-a-live |
VinyVan/my_model_merged2 | VinyVan | 2024-05-14T22:48:01Z | 2 | 0 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"en",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"base_model:finetune:unsloth/llama-3-8b-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-14T22:41:01Z | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
base_model: unsloth/llama-3-8b-bnb-4bit
---
# Uploaded model
- **Developed by:** VinyVan
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
WlappaAI/Mistral-7B-wikipedia_ru_pruned-0.1_merged | WlappaAI | 2024-05-14T22:46:11Z | 4 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"generated_from_trainer",
"ru",
"dataset:danasone/wikipedia_ru",
"base_model:mistralai/Mistral-7B-v0.1",
"base_model:finetune:mistralai/Mistral-7B-v0.1",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-14T22:07:15Z | ---
license: apache-2.0
pipeline_tag: text-generation
language:
- ru
tags:
- generated_from_trainer
base_model: mistralai/Mistral-7B-v0.1
datasets:
- danasone/wikipedia_ru
model-index:
- name: Mistral-7B-wikipedia_ru_pruned-0.1_merged
results: []
---
[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.0`
```yaml
base_model: mistralai/Mistral-7B-v0.1
model_type: MistralForCausalLM
tokenizer_type: LlamaTokenizer
is_mistral_derived_model: true
load_in_8bit: false
load_in_4bit: true
strict: false
datasets:
- path: ./datasets/ruwiki-pruned
type: completion
field: text
dataset_prepared_path: last_run_prepared
val_set_size: 0.01
output_dir: ./models/output
adapter: qlora
lora_model_dir:
sequence_len: 1024
sample_packing: true
pad_to_sequence_len: true
lora_r: 32
lora_alpha: 16
lora_dropout: 0.05
lora_target_linear: true
lora_fan_in_fan_out:
lora_target_modules:
- gate_proj
- down_proj
- up_proj
- q_proj
- v_proj
- k_proj
- o_proj
wandb_project:
wandb_entity:
wandb_watch:
wandb_name:
wandb_log_model:
gradient_accumulation_steps: 1
micro_batch_size: 11
num_epochs: 1
optimizer: adamw_bnb_8bit
lr_scheduler: cosine
learning_rate: 0.0002
train_on_inputs: false
group_by_length: false
bf16: auto
fp16:
tf32: false
gradient_checkpointing: true
early_stopping_patience:
resume_from_checkpoint:
local_rank:
logging_steps:
xformers_attention:
flash_attention: true
loss_watchdog_threshold: 5.0
loss_watchdog_patience: 3
warmup_steps: 10
evals_per_epoch:
eval_table_size:
eval_max_new_tokens: 128
saves_per_epoch: 1
debug:
deepspeed:
weight_decay: 0.0
fsdp:
fsdp_config:
special_tokens:
```
</details><br>
# Mistral-7B-wikipedia_ru_pruned-0.1_merged
This model is a merge of [WlappaAI/Mistral-7B-v0.1-wikipedia_ru_pruned-0.1](https://huggingface.co/WlappaAI/Mistral-7B-v0.1-wikipedia_ru_pruned-0.1) with [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1). It was trained on a modified version of the [danasone/wikipedia_ru](https://huggingface.co/datasets/danasone/wikipedia_ru) dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1876
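The merge step itself is not part of the axolotl config above. A minimal sketch of how a QLoRA adapter is typically folded back into its base model with PEFT's `merge_and_unload`; the repo names come from the links above, and everything else is an assumption rather than the authors' actual script:

```python
from transformers import AutoModelForCausalLM
from peft import PeftModel

# Load the base model, then attach the trained QLoRA adapter on top of it.
base = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-v0.1", torch_dtype="auto")
model = PeftModel.from_pretrained(base, "WlappaAI/Mistral-7B-v0.1-wikipedia_ru_pruned-0.1")

# Fold the LoRA deltas into the base weights and drop the adapter wrappers.
merged = model.merge_and_unload()
merged.save_pretrained("./Mistral-7B-wikipedia_ru_pruned-0.1_merged")
```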
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 11
- eval_batch_size: 11
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.5643 | 0.0 | 0 | |
| 1.012 | 1.0 | 1100 | 1.1876 |
### Framework versions
- PEFT 0.10.0
- Transformers 4.40.0.dev0
- Pytorch 2.2.2+cu121
- Datasets 2.18.0
- Tokenizers 0.15.0 |
M3W051/JapaneseConvo-1.0 | M3W051 | 2024-05-14T22:41:59Z | 0 | 0 | peft | [
"peft",
"pytorch",
"safetensors",
"llama",
"arxiv:1910.09700",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"base_model:adapter:unsloth/llama-3-8b-bnb-4bit",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2024-05-14T21:17:50Z | ---
library_name: peft
base_model: unsloth/llama-3-8b-bnb-4bit
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.10.0 |
dtorber/BioNLP-2024-dtorber-BioBART-baseline-eLife | dtorber | 2024-05-14T22:40:57Z | 15 | 0 | transformers | [
"transformers",
"safetensors",
"bart",
"text2text-generation",
"summarization",
"generated_from_trainer",
"base_model:GanjinZero/biobart-v2-large",
"base_model:finetune:GanjinZero/biobart-v2-large",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | summarization | 2024-05-14T18:01:34Z | ---
license: apache-2.0
base_model: GanjinZero/biobart-v2-large
tags:
- summarization
- generated_from_trainer
model-index:
- name: BioNLP-2024-dtorber-BioBART-baseline-eLife
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# BioNLP-2024-dtorber-BioBART-baseline-eLife
This model is a fine-tuned version of [GanjinZero/biobart-v2-large](https://huggingface.co/GanjinZero/biobart-v2-large) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1.3739167643078955e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.40.1
- Pytorch 2.3.0+cu121
- Datasets 2.18.0
- Tokenizers 0.19.1
|
Litzy619/G0514HMA1H | Litzy619 | 2024-05-14T22:37:35Z | 0 | 0 | null | [
"safetensors",
"generated_from_trainer",
"base_model:google/gemma-2b",
"base_model:finetune:google/gemma-2b",
"license:gemma",
"region:us"
] | null | 2024-05-14T21:11:30Z | ---
license: gemma
base_model: google/gemma-2b
tags:
- generated_from_trainer
model-index:
- name: G0514HMA1H
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# G0514HMA1H
This model is a fine-tuned version of [google/gemma-2b](https://huggingface.co/google/gemma-2b) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: -17.7428
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (an equivalent `TrainingArguments` sketch follows the list):
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine_with_restarts
- lr_scheduler_warmup_steps: 80
- num_epochs: 3
- mixed_precision_training: Native AMP
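For readers who want to reproduce this setup, here is a hedged sketch of a `transformers` `TrainingArguments` object matching the values listed above; the actual training script is not published, so treat this as an illustration rather than the authors' code:

```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="G0514HMA1H",            # hypothetical output directory
    learning_rate=3e-4,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    gradient_accumulation_steps=16,     # 8 x 16 = total train batch size 128
    lr_scheduler_type="cosine_with_restarts",
    warmup_steps=80,
    num_train_epochs=3,
    fp16=True,                          # "Native AMP" mixed precision
)
```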
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.9766 | 0.09 | 10 | 0.1938 |
| -0.7095 | 0.18 | 20 | -2.1543 |
| -3.5129 | 0.27 | 30 | -5.3325 |
| -6.6654 | 0.36 | 40 | -8.4889 |
| -9.7041 | 0.45 | 50 | -11.4077 |
| -12.4428 | 0.54 | 60 | -13.8134 |
| -14.5638 | 0.63 | 70 | -15.4354 |
| -15.9411 | 0.73 | 80 | -16.4692 |
| -16.7235 | 0.82 | 90 | -17.0250 |
| -17.1404 | 0.91 | 100 | -17.2723 |
| -17.3354 | 1.0 | 110 | -17.4128 |
| -17.4522 | 1.09 | 120 | -17.4974 |
| -17.5272 | 1.18 | 130 | -17.5592 |
| -17.5718 | 1.27 | 140 | -17.6055 |
| -17.6154 | 1.36 | 150 | -17.6336 |
| -17.6416 | 1.45 | 160 | -17.6542 |
| -17.656 | 1.54 | 170 | -17.6683 |
| -17.6769 | 1.63 | 180 | -17.6832 |
| -17.6876 | 1.72 | 190 | -17.6950 |
| -17.6975 | 1.81 | 200 | -17.7028 |
| -17.7043 | 1.9 | 210 | -17.7114 |
| -17.7137 | 1.99 | 220 | -17.7188 |
| -17.7217 | 2.08 | 230 | -17.7245 |
| -17.7306 | 2.18 | 240 | -17.7293 |
| -17.7293 | 2.27 | 250 | -17.7330 |
| -17.7348 | 2.36 | 260 | -17.7365 |
| -17.741 | 2.45 | 270 | -17.7386 |
| -17.7415 | 2.54 | 280 | -17.7403 |
| -17.7442 | 2.63 | 290 | -17.7410 |
| -17.7456 | 2.72 | 300 | -17.7422 |
| -17.7426 | 2.81 | 310 | -17.7425 |
| -17.7471 | 2.9 | 320 | -17.7427 |
| -17.7473 | 2.99 | 330 | -17.7428 |
### Framework versions
- Transformers 4.36.0.dev0
- Pytorch 2.1.2+cu121
- Datasets 2.14.6
- Tokenizers 0.14.0
|
jonruida/Spotify | jonruida | 2024-05-14T22:34:54Z | 0 | 0 | null | [
"graph-ml",
"arxiv:1910.09700",
"license:apache-2.0",
"region:us"
] | graph-ml | 2024-05-10T11:49:58Z | ---
license: apache-2.0
pipeline_tag: graph-ml
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
netcat420/MFANN3bv0.9-GGUF | netcat420 | 2024-05-14T22:25:40Z | 0 | 0 | null | [
"gguf",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-05-14T22:06:27Z | ---
license: apache-2.0
---
|
sambanovasystems/SambaLingo-Arabic-Chat-70B | sambanovasystems | 2024-05-14T22:20:17Z | 2,797 | 3 | transformers | [
"transformers",
"pytorch",
"safetensors",
"llama",
"text-generation",
"conversational",
"ar",
"en",
"dataset:HuggingFaceH4/ultrachat_200k",
"dataset:HuggingFaceH4/ultrafeedback_binarized",
"dataset:HuggingFaceH4/cai-conversation-harmless",
"arxiv:2404.05829",
"license:llama2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-04-15T20:06:06Z | ---
license: llama2
datasets:
- HuggingFaceH4/ultrachat_200k
- HuggingFaceH4/ultrafeedback_binarized
- HuggingFaceH4/cai-conversation-harmless
language:
- ar
- en
---
# SambaLingo-Arabic-Chat-70B
<img src="SambaLingo_Logo.png" width="340" style="margin-left:'auto' margin-right:'auto' display:'block'"/>
<!-- Provide a quick summary of what the model is/does. -->
SambaLingo-Arabic-Chat-70B is a human-aligned chat model trained in Arabic and English. It is trained using direct preference optimization on top of the base model [SambaLingo-Arabic-Base-70B](https://huggingface.co/sambanovasystems/SambaLingo-Arabic-Base-70B). The base model adapts [Llama-2-70b](https://huggingface.co/meta-llama/Llama-2-70b-hf) to Arabic by training on 28 billion tokens from the Arabic split of the [Cultura-X](https://huggingface.co/datasets/uonlp/CulturaX) dataset. Try this model at the [SambaLingo-chat-space](https://huggingface.co/spaces/sambanovasystems/SambaLingo-chat-space).
## Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [SambaNova Systems](https://sambanova.ai/)
- **Model type:** Language Model
- **Language(s):** Arabic, English
- **Finetuned from model:** [Llama-2-70b](https://huggingface.co/meta-llama/Llama-2-70b-hf)
- **Paper:** [SambaLingo: Teaching Large Language Models New Languages](https://arxiv.org/abs/2404.05829)
- **Blog Post**: [sambalingo-open-source-language-experts](https://sambanova.ai/blog/sambalingo-open-source-language-experts)
## Getting Started
### Loading Model With Hugging Face
Please make sure to set use_fast=False when loading the tokenizer.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("sambanovasystems/SambaLingo-Arabic-Chat-70B", use_fast=False)
model = AutoModelForCausalLM.from_pretrained("sambanovasystems/SambaLingo-Arabic-Chat-70B", device_map="auto", torch_dtype="auto")
```
### Interacting With Model Pipeline
Please make sure to set use_fast=False when loading the tokenizer.
```python
from transformers import pipeline
pipe = pipeline("text-generation", model="sambanovasystems/SambaLingo-Arabic-Chat-70B", device_map="auto", use_fast=False)
messages = [
{"role": "user", "content": {YOUR_QUESTION}},
]
prompt = pipe.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
outputs = pipe(prompt)[0]
outputs = outputs["generated_text"]
```
### Suggested Inference Parameters
- Temperature: 0.8
- Repetition penalty: 1.0
- Top-p: 0.9
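These values map directly onto the pipeline's generation kwargs. A minimal sketch reusing the `pipe` object from the snippet above (`max_new_tokens` is not specified in this card and is an assumption):

```python
outputs = pipe(
    prompt,
    do_sample=True,          # sampling is required for temperature/top-p to take effect
    temperature=0.8,
    repetition_penalty=1.0,
    top_p=0.9,
    max_new_tokens=256,
)
```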
### Prompting Guidelines
To prompt this model, please use the following chat template:
```
<|user|>\n{question}</s>\n<|assistant|>\n
```
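Equivalently, the template can be filled in by hand when calling `generate` directly. A minimal sketch that reuses the `tokenizer` and `model` loaded above; the question string and `max_new_tokens` are placeholders:

```python
question = "ما هي عاصمة فرنسا؟"  # "What is the capital of France?"
prompt = f"<|user|>\n{question}</s>\n<|assistant|>\n"

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, do_sample=True, temperature=0.8, top_p=0.9, max_new_tokens=256)

# Decode only the newly generated tokens, skipping the prompt.
answer = tokenizer.decode(output_ids[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True)
print(answer)
```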
### Example Prompts and Generations
```
<|user|>
كيف تصف الالوان لشخص ولد أعمى</s>
<|assistant|>
ليس لدي القدرة على الرؤية، ولكن يمكنني تقديم وصف للألوان بناءً على ما تعلمته من البيانات.\n\nبالنسبة لشخص ولد أعمى، قد يكون من الصعب وصف الألوان لأنها لم تختبرها من قبل. ومع ذلك، يمكننا استخدام الاستعارات والمقارنات لمساعدة هذا الشخص على فهم الألوان. على سبيل المثال، يمكننا وصف اللون الأحمر بأنه دافئ وحيوي، مثل لهب النار أو ثمرة الفراولة الناضجة. يمكننا وصف اللون الأزرق بأنه هادئ ومهدئ، مثل السماء أو المحيط. يمكننا وصف اللون الأخضر بأنه منعش وطبيعي، مثل أوراق الشجر أو العشب. يمكننا وصف اللون الأصفر بأنه مشرق ومبهج، مثل الشمس أو زهرة عباد الشمس. يمكننا وصف اللون الأرجواني بأنه ملكي وفاخر، مثل لون الجواهر أو غروب الشمس. في النهاية، قد يكون من الصعب وصف الألوان لشخص ولد أعمى، ولكن باستخدام الاستعارات والمقارنات، يمكننا مساعدتهم على فهم الفروق الدقيقة في الألوان بشكل أفضل.
```
## Training Details
The alignment phase follows the recipe for [Zephyr-7B](https://huggingface.co/HuggingFaceH4/zephyr-7b-beta) and comprises two stages: supervised fine-tuning (SFT) and Direct Preference Optimization (DPO).
The SFT phase was done on the [ultrachat_200k](https://huggingface.co/datasets/HuggingFaceH4/ultrachat_200k) dataset mixed with the Google translated version of the ultrachat_200k dataset. It was trained for one epoch with global batch size 512 and max sequence length 2048 tokens. We used a linear decay learning rate of 2e-5 and 10% warmup.
The DPO phase was done on the [ultrafeedback](https://huggingface.co/datasets/HuggingFaceH4/ultrafeedback_binarized) dataset and the [cai-conversation-harmless](https://huggingface.co/datasets/HuggingFaceH4/cai-conversation-harmless) dataset, mixed with a Google-translated copy of 10% of the data. It was trained with global batch size 32 for three epochs. We used a linear decay learning rate of 5e-7, 10% warmup, and β=0.1 as the regularization factor for DPO.
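The card names the recipe but does not publish the training code. A rough sketch of what the DPO stage could look like with TRL's `DPOTrainer`, using the β=0.1 and learning rate stated above; the SFT checkpoint path, the per-device batch size, and the dataset wiring are assumptions:

```python
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from trl import DPOTrainer

model = AutoModelForCausalLM.from_pretrained("path/to/sft-checkpoint")      # hypothetical path
tokenizer = AutoTokenizer.from_pretrained("path/to/sft-checkpoint", use_fast=False)

# Flatten the preference dataset into the prompt/chosen/rejected strings DPO expects.
ds = load_dataset("HuggingFaceH4/ultrafeedback_binarized", split="train_prefs")
train_dataset = ds.map(
    lambda ex: {
        "prompt": ex["prompt"],
        "chosen": ex["chosen"][-1]["content"],
        "rejected": ex["rejected"][-1]["content"],
    },
    remove_columns=ds.column_names,
)

trainer = DPOTrainer(
    model,
    ref_model=None,      # TRL clones the policy as the frozen reference when None
    beta=0.1,            # regularization factor from the card
    args=TrainingArguments(
        output_dir="dpo-out",
        learning_rate=5e-7,
        lr_scheduler_type="linear",
        warmup_ratio=0.1,
        num_train_epochs=3,
        per_device_train_batch_size=4,   # global batch 32 depends on GPU count
        remove_unused_columns=False,     # required by older TRL versions
    ),
    train_dataset=train_dataset,
    tokenizer=tokenizer,
)
trainer.train()
```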
## Tokenizer Details
We extended the vocabulary of the base llama model from 32,000 tokens to 57,000 tokens by adding up to 25,000 non-overlapping tokens from the new language.
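The mechanics of such a vocabulary extension are standard in `transformers`. A minimal sketch; the token list shown is a tiny placeholder for the real, language-specific vocabulary:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-70b-hf", use_fast=False)
model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-70b-hf")

# In practice the added tokens come from a tokenizer trained on Arabic text.
new_tokens = ["مرحبا", "شكرا"]
tokenizer.add_tokens(new_tokens)

# Grow the embedding matrix so the new ids get (randomly initialized) rows,
# which are then learned during continued pretraining.
model.resize_token_embeddings(len(tokenizer))
```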
## Evaluation
For evaluation results see our paper: [SambaLingo: Teaching Large Language Models New Languages](https://arxiv.org/abs/2404.05829)
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
Use of this model is governed by the Meta’s [Llama 2 Community License Agreement](https://ai.meta.com/llama/license/). Please review and accept the license before downloading the model weights.
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
SambaLingo should NOT be used for:
- Mission-critical applications
- Applications that involve the safety of others
- Making highly important decisions
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
Like all LLMs, SambaLingo has certain limitations:
- Hallucination: Model may sometimes generate responses that contain plausible-sounding but factually incorrect or irrelevant information.
- Code Switching: The model might unintentionally switch between languages or dialects within a single response, affecting the coherence and understandability of the output.
- Repetition: The Model may produce repetitive phrases or sentences, leading to less engaging and informative responses.
- Coding and Math: The model's performance in generating accurate code or solving complex mathematical problems may be limited.
- Toxicity: The model could inadvertently generate responses containing inappropriate or harmful content.
## Acknowledgments
We extend our heartfelt gratitude to the open-source AI community; this endeavor would not have been possible without open source. SambaNova embraces the open-source community and aspires to actively contribute to this initiative.
We would like to give special thanks to the following groups:
- Meta for open sourcing Llama 2 and the FLORES-200 dataset
- Nguyen et al. for open sourcing the CulturaX dataset
- CohereAI for releasing AYA-101 and open sourcing a multilingual instruction tuning dataset
- EleutherAI for their open-source evaluation framework
- The Hugging Face H4 team for open sourcing the Zephyr training recipe and the alignment handbook repo
## Cite SambaLingo
```
@misc{csaki2024sambalingo,
title={SambaLingo: Teaching Large Language Models New Languages},
author={Zoltan Csaki and Bo Li and Jonathan Li and Qiantong Xu and Pian Pawakapan and Leon Zhang and Yun Du and Hengyu Zhao and Changran Hu and Urmish Thakker},
year={2024},
eprint={2404.05829},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` |
abc88767/22c59 | abc88767 | 2024-05-14T22:17:01Z | 130 | 0 | transformers | [
"transformers",
"safetensors",
"stablelm",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-14T22:15:20Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
tomaszki/llama-22-b | tomaszki | 2024-05-14T22:16:50Z | 5 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-14T22:14:21Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
jeiku/Puppy_Purpose_0.69-Q4_K_M-GGUF | jeiku | 2024-05-14T22:15:45Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"llama-cpp",
"gguf-my-repo",
"base_model:ResplendentAI/Aura_Uncensored_l3_8B",
"base_model:quantized:ResplendentAI/Aura_Uncensored_l3_8B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-05-14T22:15:31Z | ---
license: apache-2.0
library_name: transformers
tags:
- llama-cpp
- gguf-my-repo
base_model:
- jeiku/OrthoPoppy
- ResplendentAI/NoWarning_Llama3
- jeiku/UnPoppy_8B
- ResplendentAI/Smarts_Llama3
- ResplendentAI/Aura_Uncensored_l3_8B
- ResplendentAI/Theory_of_Mind_Llama3
- jeiku/UnPoppy_8B
---
# jeiku/Puppy_Purpose_0.69-Q4_K_M-GGUF
This model was converted to GGUF format from [`ChaoticNeutrals/Puppy_Purpose_0.69`](https://huggingface.co/ChaoticNeutrals/Puppy_Purpose_0.69) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/ChaoticNeutrals/Puppy_Purpose_0.69) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew.
```bash
brew install ggerganov/ggerganov/llama.cpp
```
Invoke the llama.cpp server or the CLI.
CLI:
```bash
llama-cli --hf-repo jeiku/Puppy_Purpose_0.69-Q4_K_M-GGUF --model puppy_purpose_0.69.Q4_K_M.gguf -p "The meaning to life and the universe is"
```
Server:
```bash
llama-server --hf-repo jeiku/Puppy_Purpose_0.69-Q4_K_M-GGUF --model puppy_purpose_0.69.Q4_K_M.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
```
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m puppy_purpose_0.69.Q4_K_M.gguf -n 128
```
|
tomaszki/llama-22-a | tomaszki | 2024-05-14T22:13:53Z | 4 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-14T22:10:19Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
ChaoticNeutrals/Puppy_Purpose_0.69 | ChaoticNeutrals | 2024-05-14T22:13:44Z | 71 | 6 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"base_model:ResplendentAI/Aura_Uncensored_l3_8B",
"base_model:finetune:ResplendentAI/Aura_Uncensored_l3_8B",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-12T23:14:20Z | ---
base_model:
- jeiku/OrthoPoppy
- ResplendentAI/NoWarning_Llama3
- jeiku/UnPoppy_8B
- ResplendentAI/Smarts_Llama3
- ResplendentAI/Aura_Uncensored_l3_8B
- ResplendentAI/Theory_of_Mind_Llama3
- jeiku/UnPoppy_8B
library_name: transformers
license: apache-2.0
---
# Puppy Purpose 0.69

Say hello to your puppy princess, she is pawsitively pleased to play with you!
A combination of model merges and LoRA merges using my signature datasets. I'm not too sure how this one will turn out; I made it for my own usage, but it should serve others well too. This model is compatible with our Chaotic Neutrals Llama3 mmproj files. Good luck and have fun!
DuckyBlender/bawialniagpt-GGUF | DuckyBlender | 2024-05-14T22:13:34Z | 1 | 0 | null | [
"gguf",
"nonsense",
"low quality",
"pl",
"dataset:DuckyBlender/bawialnia-dataset",
"license:gpl-3.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-05-14T21:50:33Z | ---
license: gpl-3.0
datasets:
- DuckyBlender/bawialnia-dataset
language:
- pl
tags:
- nonsense
- gguf
- low quality
---
### GGUF version of bawialniaGPT. More info in the model card here: https://huggingface.co/DuckyBlender/bawialniagpt |
abc88767/2c59 | abc88767 | 2024-05-14T22:13:33Z | 130 | 0 | transformers | [
"transformers",
"safetensors",
"stablelm",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-14T22:11:49Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Litzy619/G0514HMA8H | Litzy619 | 2024-05-14T22:08:54Z | 0 | 0 | null | [
"safetensors",
"generated_from_trainer",
"base_model:google/gemma-2b",
"base_model:finetune:google/gemma-2b",
"license:gemma",
"region:us"
] | null | 2024-05-14T20:58:11Z | ---
license: gemma
base_model: google/gemma-2b
tags:
- generated_from_trainer
model-index:
- name: G0514HMA8H
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# G0514HMA8H
This model is a fine-tuned version of [google/gemma-2b](https://huggingface.co/google/gemma-2b) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: -17.7793
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine_with_restarts
- lr_scheduler_warmup_steps: 100
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.9907 | 0.09 | 10 | 0.2549 |
| -0.3452 | 0.18 | 20 | -1.4911 |
| -2.6745 | 0.27 | 30 | -4.3029 |
| -5.5459 | 0.36 | 40 | -7.2444 |
| -8.4583 | 0.45 | 50 | -10.1413 |
| -11.1772 | 0.54 | 60 | -12.6543 |
| -13.5601 | 0.63 | 70 | -14.6708 |
| -15.3021 | 0.73 | 80 | -16.0377 |
| -16.4202 | 0.82 | 90 | -16.8732 |
| -17.0486 | 0.91 | 100 | -17.2545 |
| -17.3372 | 1.0 | 110 | -17.4412 |
| -17.4961 | 1.09 | 120 | -17.5444 |
| -17.5738 | 1.18 | 130 | -17.6050 |
| -17.6197 | 1.27 | 140 | -17.6459 |
| -17.6553 | 1.36 | 150 | -17.6713 |
| -17.6808 | 1.45 | 160 | -17.6919 |
| -17.6964 | 1.54 | 170 | -17.7067 |
| -17.7138 | 1.63 | 180 | -17.7237 |
| -17.7268 | 1.72 | 190 | -17.7349 |
| -17.7396 | 1.81 | 200 | -17.7434 |
| -17.7455 | 1.9 | 210 | -17.7509 |
| -17.7517 | 1.99 | 220 | -17.7574 |
| -17.7593 | 2.08 | 230 | -17.7623 |
| -17.7681 | 2.18 | 240 | -17.7664 |
| -17.7669 | 2.27 | 250 | -17.7696 |
| -17.7705 | 2.36 | 260 | -17.7729 |
| -17.7774 | 2.45 | 270 | -17.7750 |
| -17.7756 | 2.54 | 280 | -17.7766 |
| -17.7797 | 2.63 | 290 | -17.7777 |
| -17.7804 | 2.72 | 300 | -17.7787 |
| -17.7775 | 2.81 | 310 | -17.7792 |
| -17.7822 | 2.9 | 320 | -17.7793 |
| -17.7813 | 2.99 | 330 | -17.7793 |
### Framework versions
- Transformers 4.36.0.dev0
- Pytorch 2.1.2+cu121
- Datasets 2.14.6
- Tokenizers 0.14.0
|
Litzy619/G0514HMA9H | Litzy619 | 2024-05-14T22:07:53Z | 0 | 0 | null | [
"safetensors",
"generated_from_trainer",
"base_model:google/gemma-2b",
"base_model:finetune:google/gemma-2b",
"license:gemma",
"region:us"
] | null | 2024-05-14T20:58:11Z | ---
license: gemma
base_model: google/gemma-2b
tags:
- generated_from_trainer
model-index:
- name: G0514HMA9H
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# G0514HMA9H
This model is a fine-tuned version of [google/gemma-2b](https://huggingface.co/google/gemma-2b) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: -17.8302
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine_with_restarts
- lr_scheduler_warmup_steps: 100
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.9517 | 0.09 | 10 | 0.1638 |
| -0.6528 | 0.18 | 20 | -1.9084 |
| -3.1267 | 0.27 | 30 | -4.7902 |
| -6.0596 | 0.36 | 40 | -7.8054 |
| -9.0118 | 0.45 | 50 | -10.6366 |
| -11.6312 | 0.54 | 60 | -13.0868 |
| -13.9766 | 0.63 | 70 | -15.0345 |
| -15.6124 | 0.73 | 80 | -16.3032 |
| -16.6378 | 0.82 | 90 | -17.0207 |
| -17.1714 | 0.91 | 100 | -17.3497 |
| -17.4216 | 1.0 | 110 | -17.5126 |
| -17.5543 | 1.09 | 120 | -17.5999 |
| -17.6217 | 1.18 | 130 | -17.6473 |
| -17.6609 | 1.27 | 140 | -17.6818 |
| -17.6899 | 1.36 | 150 | -17.7041 |
| -17.7101 | 1.45 | 160 | -17.7147 |
| -17.7215 | 1.54 | 170 | -17.7306 |
| -17.7352 | 1.63 | 180 | -17.7425 |
| -17.7484 | 1.72 | 190 | -17.7559 |
| -17.7629 | 1.81 | 200 | -17.7670 |
| -17.77 | 1.9 | 210 | -17.7756 |
| -17.7798 | 1.99 | 220 | -17.7847 |
| -17.7898 | 2.08 | 230 | -17.7911 |
| -17.799 | 2.18 | 240 | -17.7988 |
| -17.8001 | 2.27 | 250 | -17.8040 |
| -17.807 | 2.36 | 260 | -17.8101 |
| -17.8173 | 2.45 | 270 | -17.8156 |
| -17.8186 | 2.54 | 280 | -17.8207 |
| -17.8237 | 2.63 | 290 | -17.8248 |
| -17.8266 | 2.72 | 300 | -17.8279 |
| -17.8269 | 2.81 | 310 | -17.8294 |
| -17.8338 | 2.9 | 320 | -17.8301 |
| -17.831 | 2.99 | 330 | -17.8302 |
### Framework versions
- Transformers 4.36.0.dev0
- Pytorch 2.1.2+cu121
- Datasets 2.14.6
- Tokenizers 0.14.0
|
ekolasky/Llama3ForResultExtraction | ekolasky | 2024-05-14T22:07:21Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-05-01T16:50:10Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
rvukasin/Reinforce-Pixelcopter-PLE-v0-local | rvukasin | 2024-05-14T22:06:28Z | 0 | 0 | null | [
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] | reinforcement-learning | 2024-05-14T13:30:32Z | ---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-Pixelcopter-PLE-v0-local
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 18.80 +/- 13.08
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0**.
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
Leohimself/ppo-LunarLander-v2 | Leohimself | 2024-05-14T22:05:24Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2024-05-14T22:05:06Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 282.45 +/- 15.62
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal sketch for loading the trained agent from the Hub (the checkpoint filename follows the usual `huggingface_sb3` convention and is an assumption):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Download the checkpoint from the Hub and load it (filename is an assumption)
checkpoint = load_from_hub(repo_id="Leohimself/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
mradermacher/Math-Yi-1.5-9B-GGUF | mradermacher | 2024-05-14T22:03:50Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"sft",
"en",
"dataset:microsoft/orca-math-word-problems-200k",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-05-14T20:44:58Z | ---
base_model: NotAiLOL/Math-Yi-1.5-9B
datasets:
- microsoft/orca-math-word-problems-200k
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/NotAiLOL/Math-Yi-1.5-9B
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
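If you just want to try one of the files quickly, a typical llama.cpp invocation looks like the following; a minimal sketch, assuming a local llama.cpp build (as of this writing the CLI binary is `main`) and with the model path pointing at one of the quants below:
```
./main -m Math-Yi-1.5-9B.Q4_K_M.gguf -p "Question: what is 12 * 7? Answer:" -n 64
```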
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Math-Yi-1.5-9B-GGUF/resolve/main/Math-Yi-1.5-9B.Q2_K.gguf) | Q2_K | 3.5 | |
| [GGUF](https://huggingface.co/mradermacher/Math-Yi-1.5-9B-GGUF/resolve/main/Math-Yi-1.5-9B.IQ3_XS.gguf) | IQ3_XS | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/Math-Yi-1.5-9B-GGUF/resolve/main/Math-Yi-1.5-9B.Q3_K_S.gguf) | Q3_K_S | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/Math-Yi-1.5-9B-GGUF/resolve/main/Math-Yi-1.5-9B.IQ3_S.gguf) | IQ3_S | 4.0 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Math-Yi-1.5-9B-GGUF/resolve/main/Math-Yi-1.5-9B.IQ3_M.gguf) | IQ3_M | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/Math-Yi-1.5-9B-GGUF/resolve/main/Math-Yi-1.5-9B.Q3_K_M.gguf) | Q3_K_M | 4.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Math-Yi-1.5-9B-GGUF/resolve/main/Math-Yi-1.5-9B.Q3_K_L.gguf) | Q3_K_L | 4.8 | |
| [GGUF](https://huggingface.co/mradermacher/Math-Yi-1.5-9B-GGUF/resolve/main/Math-Yi-1.5-9B.IQ4_XS.gguf) | IQ4_XS | 4.9 | |
| [GGUF](https://huggingface.co/mradermacher/Math-Yi-1.5-9B-GGUF/resolve/main/Math-Yi-1.5-9B.Q4_K_S.gguf) | Q4_K_S | 5.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Math-Yi-1.5-9B-GGUF/resolve/main/Math-Yi-1.5-9B.Q4_K_M.gguf) | Q4_K_M | 5.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Math-Yi-1.5-9B-GGUF/resolve/main/Math-Yi-1.5-9B.Q5_K_S.gguf) | Q5_K_S | 6.2 | |
| [GGUF](https://huggingface.co/mradermacher/Math-Yi-1.5-9B-GGUF/resolve/main/Math-Yi-1.5-9B.Q5_K_M.gguf) | Q5_K_M | 6.4 | |
| [GGUF](https://huggingface.co/mradermacher/Math-Yi-1.5-9B-GGUF/resolve/main/Math-Yi-1.5-9B.Q6_K.gguf) | Q6_K | 7.3 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Math-Yi-1.5-9B-GGUF/resolve/main/Math-Yi-1.5-9B.Q8_0.gguf) | Q8_0 | 9.5 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Math-Yi-1.5-9B-GGUF/resolve/main/Math-Yi-1.5-9B.f16.gguf) | f16 | 17.8 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/Meta-Llama-3-7B-29Layers-GGUF | mradermacher | 2024-05-14T21:59:33Z | 76 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"llama3",
"en",
"base_model:aloobun/Meta-Llama-3-7B-29Layers",
"base_model:quantized:aloobun/Meta-Llama-3-7B-29Layers",
"license:llama3",
"endpoints_compatible",
"region:us"
] | null | 2024-05-14T21:33:50Z | ---
base_model: aloobun/Meta-Llama-3-7B-29Layers
language:
- en
library_name: transformers
license: llama3
quantized_by: mradermacher
tags:
- mergekit
- merge
- llama3
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/aloobun/Meta-Llama-3-7B-29Layers
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Meta-Llama-3-7B-29Layers-GGUF/resolve/main/Meta-Llama-3-7B-29Layers.Q2_K.gguf) | Q2_K | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/Meta-Llama-3-7B-29Layers-GGUF/resolve/main/Meta-Llama-3-7B-29Layers.IQ3_XS.gguf) | IQ3_XS | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/Meta-Llama-3-7B-29Layers-GGUF/resolve/main/Meta-Llama-3-7B-29Layers.Q3_K_S.gguf) | Q3_K_S | 3.5 | |
| [GGUF](https://huggingface.co/mradermacher/Meta-Llama-3-7B-29Layers-GGUF/resolve/main/Meta-Llama-3-7B-29Layers.IQ3_S.gguf) | IQ3_S | 3.5 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Meta-Llama-3-7B-29Layers-GGUF/resolve/main/Meta-Llama-3-7B-29Layers.IQ3_M.gguf) | IQ3_M | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/Meta-Llama-3-7B-29Layers-GGUF/resolve/main/Meta-Llama-3-7B-29Layers.Q3_K_M.gguf) | Q3_K_M | 3.8 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Meta-Llama-3-7B-29Layers-GGUF/resolve/main/Meta-Llama-3-7B-29Layers.Q3_K_L.gguf) | Q3_K_L | 4.1 | |
| [GGUF](https://huggingface.co/mradermacher/Meta-Llama-3-7B-29Layers-GGUF/resolve/main/Meta-Llama-3-7B-29Layers.IQ4_XS.gguf) | IQ4_XS | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/Meta-Llama-3-7B-29Layers-GGUF/resolve/main/Meta-Llama-3-7B-29Layers.Q4_K_S.gguf) | Q4_K_S | 4.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Meta-Llama-3-7B-29Layers-GGUF/resolve/main/Meta-Llama-3-7B-29Layers.Q4_K_M.gguf) | Q4_K_M | 4.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Meta-Llama-3-7B-29Layers-GGUF/resolve/main/Meta-Llama-3-7B-29Layers.Q5_K_S.gguf) | Q5_K_S | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/Meta-Llama-3-7B-29Layers-GGUF/resolve/main/Meta-Llama-3-7B-29Layers.Q5_K_M.gguf) | Q5_K_M | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/Meta-Llama-3-7B-29Layers-GGUF/resolve/main/Meta-Llama-3-7B-29Layers.Q6_K.gguf) | Q6_K | 6.2 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Meta-Llama-3-7B-29Layers-GGUF/resolve/main/Meta-Llama-3-7B-29Layers.Q8_0.gguf) | Q8_0 | 7.9 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Meta-Llama-3-7B-29Layers-GGUF/resolve/main/Meta-Llama-3-7B-29Layers.f16.gguf) | f16 | 14.9 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
CCRss/Yolo-on-custom-dataset | CCRss | 2024-05-14T21:58:54Z | 0 | 0 | null | [
"license:mit",
"region:us"
] | null | 2024-05-14T21:06:52Z | ---
license: mit
---
# YOLO Human Detection Model
This model is trained to detect people in images. It uses the YOLO architecture and was trained on the [Human dataset](https://universe.roboflow.com/raman-hrynkevich/people_dataset/dataset/1).
## Usage
```python
from ultralytics import YOLO

# Load the trained weights (path is a placeholder)
model = YOLO('path/to/yolo-human-detection.pt')

# Run inference on an image
results = model('path/to/your/image.jpg')

# Display the results
results.show()
```
|
llm-wizard/leagaleasy-llama-3-instruct-v3 | llm-wizard | 2024-05-14T21:58:15Z | 0 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"dataset:generator",
"base_model:meta-llama/Meta-Llama-3-8B-Instruct",
"base_model:adapter:meta-llama/Meta-Llama-3-8B-Instruct",
"license:llama3",
"region:us"
] | null | 2024-05-14T21:57:08Z | ---
license: llama3
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: meta-llama/Meta-Llama-3-8B-Instruct
datasets:
- generator
model-index:
- name: leagaleasy-llama-3-instruct-v3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# leagaleasy-llama-3-instruct-v3
This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) on the generator dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 2
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- PEFT 0.10.0
- Transformers 4.40.2
- Pytorch 2.2.1+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1 |
tuhinatripathi/gemma2b-5kdata | tuhinatripathi | 2024-05-14T21:56:07Z | 0 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"trl",
"sft",
"unsloth",
"generated_from_trainer",
"base_model:unsloth/gemma-2b-it-bnb-4bit",
"base_model:adapter:unsloth/gemma-2b-it-bnb-4bit",
"license:apache-2.0",
"region:us"
] | null | 2024-05-14T18:26:29Z | ---
license: apache-2.0
library_name: peft
tags:
- trl
- sft
- unsloth
- generated_from_trainer
base_model: unsloth/gemma-2b-it-bnb-4bit
model-index:
- name: gemma2b-5kdata
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gemma2b-5kdata
This model is a fine-tuned version of [unsloth/gemma-2b-it-bnb-4bit](https://huggingface.co/unsloth/gemma-2b-it-bnb-4bit) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9939
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 1
- eval_batch_size: 8
- seed: 3407
- gradient_accumulation_steps: 4
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 135
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.036 | 1.0 | 1250 | 0.9939 |
### Framework versions
- PEFT 0.10.0
- Transformers 4.40.2
- Pytorch 2.2.1+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1 |
rvukasin/ppo-SnowballTarget | rvukasin | 2024-05-14T21:55:52Z | 7 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"SnowballTarget",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
] | reinforcement-learning | 2024-05-14T21:55:48Z | ---
library_name: ml-agents
tags:
- SnowballTarget
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: rvukasin/ppo-SnowballTarget
3. Select your *.nn or *.onnx file
4. Click on Watch the agent play 👀
|
sakren/deberta-emotion | sakren | 2024-05-14T21:54:04Z | 104 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"deberta-v2",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"base_model:microsoft/deberta-v3-base",
"base_model:finetune:microsoft/deberta-v3-base",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-05-14T21:23:31Z | ---
license: mit
base_model: microsoft/deberta-v3-base
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- f1
model-index:
- name: deberta-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: split
split: validation
args: split
metrics:
- name: F1
type: f1
value: 0.9352884200987154
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# deberta-emotion
This model is a fine-tuned version of [microsoft/deberta-v3-base](https://huggingface.co/microsoft/deberta-v3-base) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1592
- F1: 0.9353
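A minimal usage sketch with the `transformers` pipeline API (the example sentence is an arbitrary placeholder):
```python
from transformers import pipeline

# Load the fine-tuned emotion classifier from the Hub
classifier = pipeline("text-classification", model="sakren/deberta-emotion")
classifier("I feel wonderful today!")
```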
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.1784 | 1.0 | 250 | 0.1746 | 0.9325 |
| 0.1273 | 2.0 | 500 | 0.1672 | 0.9332 |
| 0.1008 | 3.0 | 750 | 0.1592 | 0.9353 |
### Framework versions
- Transformers 4.39.3
- Pytorch 2.1.2
- Datasets 2.18.0
- Tokenizers 0.15.2
|
DattaBS/llama2-best-strat-newalpha | DattaBS | 2024-05-14T21:53:42Z | 3 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-14T05:40:56Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Norphel/dzoQAmodel | Norphel | 2024-05-14T21:50:28Z | 123 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"roberta",
"question-answering",
"generated_from_trainer",
"base_model:sangjeedondrub/tibetan-roberta-base",
"base_model:finetune:sangjeedondrub/tibetan-roberta-base",
"license:mit",
"endpoints_compatible",
"region:us"
] | question-answering | 2024-05-09T15:45:13Z | ---
license: mit
base_model: sangjeedondrub/tibetan-roberta-base
tags:
- generated_from_trainer
model-index:
- name: dzoQAmodel
results: []
widget:
- text: ངག་དབང་རྣམ་རྒྱལ་འདི་འབྲུག་རྒྱལ་ཁབ་ལུ་ནམ་འོང་ཡི
context: >-
ངག་དབང་རྣམ་རྒྱལ་འདི་འབྲུག་རྒྱལ་ཁབ་ནང་སྤྱི་ལོ་༡༦༡༦ ལུ་འོང་ནུག། ཁོ་གིས་འབྲུག་རྒྱལ་ཁབ་འདི་ཁྲིམས་ལུགས་གཅིག་འོག་ལུ་གཅིག་མཐུན་བཟོ་ནུག།ཁོ་གིས་འབྲུག་རྒྱལ་ཁབ་ལུ་མ་འོང་པའི་ཧེ་མ་ ལུང་ཕྱོགས་སོ་སོ་ནང་འགོ་ཁྲིདཔ་ལེ་ཤ་སྡོད་ནུག། ཁོ་ལུ་གུས་ཞབས་འབད་དགོ་མནོ་མི་འདི་འབད་ཡི།
- text: འབྲུག་ལུ་མི་རློབས་ག་དེམ་ཅིག་སྨོ
context: >-
འབྲུག་འདི་རྒྱ་ནག་དང་རྒྱ་གར་གྱི་བར་ན་ཨིན། འབྲུག་ལུ་མི་རློབས་༧༠༠༠༠༠ དེ་ཅིག་ཡོད། ལོ་ལྟར་གྱི་འོང་འབབ་འདི་ལྟ་བཤལ་དང་ཆུ་གློག་མེ་ཐོན་སྐྱེད་ལས་འབྱུང་ཨིན། རྒྱལ་ཁབ་གཞན་གྱི་མི་ལེ་ཤ་གིས་ འབྲུག་ནང་སྡོད་ནི་ལུ་དགའ།
- text: སྤྱི་ལོ་༡༩༧༤ ལུ་ལྟ་བཤལ་པ་ག་དེམ་ཅིག་གིས་འབྲུག་ཡུལ་བལྟ་སྐོར་འབད་ཡི་ག
context: >-
འབྲུག་རྒྱལ་ཁབ་ནང་ ལྟ་བཤལ་འདི་སྤྱི་ལོ་༡༩༧༤ ལུ་ འབྲུག་གཞུང་གིས་འོང་འབབ་ཡར་སེང་འབད་ནི་དང་ འབྲུག་གི་ལམ་སྲོལ་དང་སྲོལ་ལུགས་ཚུ་ཕྱི་རྒྱལ་ལུ་ཡར་རྒྱས་གཏང་ནི་གི་དཔའ་བཅམ་སྟེ་ འབྲུག་རྒྱལ་ཁབ་སོ་སོ་འདི་ལྟ་བཤལ་པ་ཚུ་ལུ་སྒོ་ཕྱེ་ཞིནམ་ལས་འགོ་བཙུགས་ནུག།སྤྱི་ལོ་༡༩༧༤ ལུ་ འབྲུག་རྒྱལ་ཁབ་ལུ་མི་ ༢༧༤ རྐྱངམ་གཅིག་ལྟ་སྐོར་འབད་ཡི་ དེ་འབདཝ་ད་ ལོ་ལེ་ཤ་ཅིག་གིས་ཨ་ནཱི་གྱངས་ཁ་འདི་ད་རེས་ཀྱི་ལོ་ཚུ་ནང་གནམ་མེད་ས་མེད་ཡར་སེང་སོང་ནུག།
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# dzoQAmodel
This model is a fine-tuned version of [sangjeedondrub/tibetan-roberta-base](https://huggingface.co/sangjeedondrub/tibetan-roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 5.6732
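A minimal sketch for querying the model with the `transformers` pipeline API; the question and (shortened) context are taken from the widget examples in this card:
```python
from transformers import pipeline

qa = pipeline("question-answering", model="Norphel/dzoQAmodel")
# Question/context taken from the widget examples above (context shortened)
qa(question="འབྲུག་ལུ་མི་རློབས་ག་དེམ་ཅིག་སྨོ",
   context="འབྲུག་འདི་རྒྱ་ནག་དང་རྒྱ་གར་གྱི་བར་ན་ཨིན། འབྲུག་ལུ་མི་རློབས་༧༠༠༠༠༠ དེ་ཅིག་ཡོད།")
```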
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 16 | 5.5230 |
| No log | 2.0 | 32 | 5.3383 |
| No log | 3.0 | 48 | 5.2939 |
| No log | 4.0 | 64 | 5.2087 |
| No log | 5.0 | 80 | 5.1862 |
| No log | 6.0 | 96 | 5.2205 |
| No log | 7.0 | 112 | 5.2804 |
| No log | 8.0 | 128 | 5.3362 |
| No log | 9.0 | 144 | 5.4572 |
| No log | 10.0 | 160 | 5.4701 |
| No log | 11.0 | 176 | 5.5276 |
| No log | 12.0 | 192 | 5.5678 |
| No log | 13.0 | 208 | 5.6553 |
| No log | 14.0 | 224 | 5.6671 |
| No log | 15.0 | 240 | 5.6732 |
### Framework versions
- Transformers 4.40.2
- Pytorch 2.2.1+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
|
mukel/Meta-Llama-3-8B-Instruct-GGUF | mukel | 2024-05-14T21:46:00Z | 80 | 3 | null | [
"gguf",
"java",
"llama",
"llama3",
"llama3.java",
"license:llama3",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-05-14T09:37:40Z | ---
license: llama3
tags:
- java
- llama
- llama3
- gguf
- llama3.java
---
# GGUF models for llama3.java
Pure .gguf `Q4_0` and `Q8_0` quantizations of Llama 3 8B Instruct, ready to be consumed by [llama3.java](https://github.com/mukel/llama3.java).
In the wild, `Q8_0` quantizations are fine, but `Q4_0` quantizations are rarely pure; e.g. the `output.weights` tensor is often quantized with `Q6_K` instead of `Q4_0`.
A pure `Q4_0` quantization can be generated from a high-precision (F32, F16, BFLOAT16) .gguf source with the `quantize` utility from llama.cpp as follows:
```
./quantize --pure ./Meta-Llama-3-8B-Instruct-F32.gguf ./Meta-Llama-3-8B-Instruct-Q4_0.gguf Q4_0
```
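Once you have a pure `Q4_0` file, it can be consumed by llama3.java; the invocation below follows the project README via jbang, and the exact flags are an assumption if your version differs:
```
jbang Llama3.java --model Meta-Llama-3-8B-Instruct-Q4_0.gguf --chat
```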
# Meta-Llama-3-8B-Instruct-GGUF
- This is a GGUF quantized version of [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) created using llama.cpp
- **Re-uploaded with new end token**
## Model Details
Meta developed and released the Meta Llama 3 family of large language models (LLMs), a collection of pretrained and instruction tuned generative text models in 8 and 70B sizes. The Llama 3 instruction tuned models are optimized for dialogue use cases and outperform many of the available open source chat models on common industry benchmarks. Further, in developing these models, we took great care to optimize helpfulness and safety.
**Model developers** Meta
**Variations** Llama 3 comes in two sizes — 8B and 70B parameters — in pre-trained and instruction tuned variants.
**Input** Models input text only.
**Output** Models generate text and code only.
**Model Architecture** Llama 3 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align with human preferences for helpfulness and safety.
<table>
<tr>
<td>
</td>
<td><strong>Training Data</strong>
</td>
<td><strong>Params</strong>
</td>
<td><strong>Context length</strong>
</td>
<td><strong>GQA</strong>
</td>
<td><strong>Token count</strong>
</td>
<td><strong>Knowledge cutoff</strong>
</td>
</tr>
<tr>
<td rowspan="2" >Llama 3
</td>
<td rowspan="2" >A new mix of publicly available online data.
</td>
<td>8B
</td>
<td>8k
</td>
<td>Yes
</td>
<td rowspan="2" >15T+
</td>
<td>March, 2023
</td>
</tr>
<tr>
<td>70B
</td>
<td>8k
</td>
<td>Yes
</td>
<td>December, 2023
</td>
</tr>
</table>
**Llama 3 family of models**. Token counts refer to pretraining data only. Both the 8 and 70B versions use Grouped-Query Attention (GQA) for improved inference scalability.
**Model Release Date** April 18, 2024.
**Status** This is a static model trained on an offline dataset. Future versions of the tuned models will be released as we improve model safety with community feedback.
**License** A custom commercial license is available at: [https://llama.meta.com/llama3/license](https://llama.meta.com/llama3/license)
**Where to send questions or comments about the model** Instructions on how to provide feedback or comments on the model can be found in the model [README](https://github.com/meta-llama/llama3). For more technical information about generation parameters and recipes for how to use Llama 3 in applications, please go [here](https://github.com/meta-llama/llama-recipes).
|
asiansoul/Joah-Remix-Llama-3-KoEn-8B-Reborn | asiansoul | 2024-05-14T21:38:55Z | 9 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"arxiv:2311.03099",
"arxiv:2306.01708",
"base_model:Locutusque/llama-3-neural-chat-v2.2-8B",
"base_model:merge:Locutusque/llama-3-neural-chat-v2.2-8B",
"base_model:NousResearch/Hermes-2-Pro-Llama-3-8B",
"base_model:merge:NousResearch/Hermes-2-Pro-Llama-3-8B",
"base_model:NousResearch/Meta-Llama-3-8B",
"base_model:merge:NousResearch/Meta-Llama-3-8B",
"base_model:NousResearch/Meta-Llama-3-8B-Instruct",
"base_model:merge:NousResearch/Meta-Llama-3-8B-Instruct",
"base_model:aaditya/Llama3-OpenBioLLM-8B",
"base_model:merge:aaditya/Llama3-OpenBioLLM-8B",
"base_model:abacusai/Llama-3-Smaug-8B",
"base_model:merge:abacusai/Llama-3-Smaug-8B",
"base_model:asiansoul/Joah-Llama-3-MAAL-MLP-KoEn-8B-Reborn",
"base_model:merge:asiansoul/Joah-Llama-3-MAAL-MLP-KoEn-8B-Reborn",
"base_model:beomi/Llama-3-KoEn-8B-Instruct-preview",
"base_model:merge:beomi/Llama-3-KoEn-8B-Instruct-preview",
"base_model:cognitivecomputations/dolphin-2.9.1-llama-3-8b",
"base_model:merge:cognitivecomputations/dolphin-2.9.1-llama-3-8b",
"base_model:defog/llama-3-sqlcoder-8b",
"base_model:merge:defog/llama-3-sqlcoder-8b",
"base_model:rombodawg/Llama-3-8B-Base-Coder-v3.5-10k",
"base_model:merge:rombodawg/Llama-3-8B-Base-Coder-v3.5-10k",
"base_model:rombodawg/Llama-3-8B-Instruct-Coder",
"base_model:merge:rombodawg/Llama-3-8B-Instruct-Coder",
"base_model:saltlux/Ko-Llama3-Luxia-8B",
"base_model:merge:saltlux/Ko-Llama3-Luxia-8B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-14T01:15:21Z | ---
base_model:
- beomi/Llama-3-KoEn-8B-Instruct-preview
- asiansoul/Joah-Llama-3-MAAL-MLP-KoEn-8B-Reborn
- NousResearch/Hermes-2-Pro-Llama-3-8B
- saltlux/Ko-Llama3-Luxia-8B
- defog/llama-3-sqlcoder-8b
- Locutusque/llama-3-neural-chat-v2.2-8B
- rombodawg/Llama-3-8B-Instruct-Coder
- NousResearch/Meta-Llama-3-8B-Instruct
- aaditya/Llama3-OpenBioLLM-8B
- rombodawg/Llama-3-8B-Base-Coder-v3.5-10k
- cognitivecomputations/dolphin-2.9.1-llama-3-8b
- abacusai/Llama-3-Smaug-8B
- NousResearch/Meta-Llama-3-8B
library_name: transformers
tags:
- mergekit
- merge
---
# Joah-Remix-Llama-3-KoEn-8B-Reborn
## Merge Details
### Merge Method
This model was merged using the [DARE](https://arxiv.org/abs/2311.03099) [TIES](https://arxiv.org/abs/2306.01708) merge method using [NousResearch/Meta-Llama-3-8B](https://huggingface.co/NousResearch/Meta-Llama-3-8B) as a base.
### Models Merged
The following models were included in the merge:
* [beomi/Llama-3-KoEn-8B-Instruct-preview](https://huggingface.co/beomi/Llama-3-KoEn-8B-Instruct-preview)
* [asiansoul/Joah-Llama-3-MAAL-MLP-KoEn-8B-Reborn](https://huggingface.co/asiansoul/Joah-Llama-3-MAAL-MLP-KoEn-8B-Reborn)
* [NousResearch/Hermes-2-Pro-Llama-3-8B](https://huggingface.co/NousResearch/Hermes-2-Pro-Llama-3-8B)
* [saltlux/Ko-Llama3-Luxia-8B](https://huggingface.co/saltlux/Ko-Llama3-Luxia-8B)
* [defog/llama-3-sqlcoder-8b](https://huggingface.co/defog/llama-3-sqlcoder-8b)
* [Locutusque/llama-3-neural-chat-v2.2-8B](https://huggingface.co/Locutusque/llama-3-neural-chat-v2.2-8B)
* [rombodawg/Llama-3-8B-Instruct-Coder](https://huggingface.co/rombodawg/Llama-3-8B-Instruct-Coder)
* [NousResearch/Meta-Llama-3-8B-Instruct](https://huggingface.co/NousResearch/Meta-Llama-3-8B-Instruct)
* [aaditya/Llama3-OpenBioLLM-8B](https://huggingface.co/aaditya/Llama3-OpenBioLLM-8B)
* [rombodawg/Llama-3-8B-Base-Coder-v3.5-10k](https://huggingface.co/rombodawg/Llama-3-8B-Base-Coder-v3.5-10k)
* [cognitivecomputations/dolphin-2.9.1-llama-3-8b](https://huggingface.co/cognitivecomputations/dolphin-2.9.1-llama-3-8b)
* [abacusai/Llama-3-Smaug-8B](https://huggingface.co/abacusai/Llama-3-Smaug-8B)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: NousResearch/Meta-Llama-3-8B
# Base model providing a general foundation without specific parameters
- model: NousResearch/Meta-Llama-3-8B-Instruct
parameters:
density: 0.65
weight: 0.25
- model: asiansoul/Joah-Llama-3-MAAL-MLP-KoEn-8B-Reborn
parameters:
density: 0.6
weight: 0.2
- model: beomi/Llama-3-KoEn-8B-Instruct-preview
parameters:
density: 0.55
weight: 0.125
- model: saltlux/Ko-Llama3-Luxia-8B
parameters:
density: 0.55
weight: 0.125
- model: cognitivecomputations/dolphin-2.9.1-llama-3-8b
parameters:
density: 0.55
weight: 0.05
- model: aaditya/Llama3-OpenBioLLM-8B
parameters:
density: 0.55
weight: 0.05
- model: rombodawg/Llama-3-8B-Instruct-Coder
parameters:
density: 0.55
weight: 0.05
- model: rombodawg/Llama-3-8B-Base-Coder-v3.5-10k
parameters:
density: 0.55
weight: 0.05
- model: defog/llama-3-sqlcoder-8b
parameters:
density: 0.55
weight: 0.05
- model: Locutusque/llama-3-neural-chat-v2.2-8B
parameters:
density: 0.55
weight: 0.05
- model: NousResearch/Hermes-2-Pro-Llama-3-8B
parameters:
density: 0.55
weight: 0.05
- model: abacusai/Llama-3-Smaug-8B
parameters:
density: 0.55
weight: 0.05
merge_method: dare_ties
base_model: NousResearch/Meta-Llama-3-8B
parameters:
int8_mask: true
dtype: bfloat16
```
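A configuration like this is typically rendered into a model with mergekit's CLI; a minimal sketch, assuming the YAML above is saved as `config.yaml` (the output directory is a placeholder):
```
mergekit-yaml config.yaml ./Joah-Remix-Llama-3-KoEn-8B-Reborn --cuda
```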
|
CNBOOMBOOM/peft-Llama-2-7b-chat-hf-10k-train | CNBOOMBOOM | 2024-05-14T21:37:21Z | 0 | 0 | peft | [
"peft",
"arxiv:1910.09700",
"base_model:meta-llama/Llama-2-7b-chat-hf",
"base_model:adapter:meta-llama/Llama-2-7b-chat-hf",
"region:us"
] | null | 2024-05-14T21:37:10Z | ---
library_name: peft
base_model: meta-llama/Llama-2-7b-chat-hf
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
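A minimal sketch for loading this adapter onto its base model with PEFT; both repo ids come from this card, and access to the gated base model is assumed:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Load the base model, then attach this LoRA adapter on top of it
base = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-chat-hf")
model = PeftModel.from_pretrained(base, "CNBOOMBOOM/peft-Llama-2-7b-chat-hf-10k-train")
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-chat-hf")
```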
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
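Expressed as a `transformers` quantization config, the settings above correspond to the following sketch (assuming recent `transformers` and `bitsandbytes` installs):
```python
import torch
from transformers import BitsAndBytesConfig

# Mirrors the bitsandbytes settings listed above
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float16,
)
```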
### Framework versions
- PEFT 0.6.2
|
CNBOOMBOOM/peft-Llama-2-7b-chat-hf-10k-train-parameters_v3 | CNBOOMBOOM | 2024-05-14T21:35:57Z | 1 | 0 | peft | [
"peft",
"arxiv:1910.09700",
"base_model:meta-llama/Llama-2-7b-chat-hf",
"base_model:adapter:meta-llama/Llama-2-7b-chat-hf",
"region:us"
] | null | 2024-05-14T21:35:37Z | ---
library_name: peft
base_model: meta-llama/Llama-2-7b-chat-hf
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
### Framework versions
- PEFT 0.6.2
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
### Framework versions
- PEFT 0.6.2
|
ehristoforu/Mistral-22B-2x | ehristoforu | 2024-05-14T21:31:01Z | 4 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"base_model:Vezora/Mistral-22B-v0.1",
"base_model:merge:Vezora/Mistral-22B-v0.1",
"base_model:Vezora/Mistral-22B-v0.2",
"base_model:merge:Vezora/Mistral-22B-v0.2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-14T21:24:11Z | ---
base_model:
- Vezora/Mistral-22B-v0.2
- Vezora/Mistral-22B-v0.1
library_name: transformers
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the passthrough merge method.
### Models Merged
The following models were included in the merge:
* [Vezora/Mistral-22B-v0.2](https://huggingface.co/Vezora/Mistral-22B-v0.2)
* [Vezora/Mistral-22B-v0.1](https://huggingface.co/Vezora/Mistral-22B-v0.1)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
slices:
- sources:
- model: Vezora/Mistral-22B-v0.2
layer_range: [0, 24]
- sources:
- model: Vezora/Mistral-22B-v0.1
layer_range: [8, 32]
merge_method: passthrough
dtype: bfloat16
```
|
mradermacher/rabbit-fish-8b-GGUF | mradermacher | 2024-05-14T21:27:20Z | 20 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:khoantap/rabbit-fish-8b",
"base_model:quantized:khoantap/rabbit-fish-8b",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-05-14T20:59:31Z | ---
base_model: khoantap/rabbit-fish-8b
language:
- en
library_name: transformers
quantized_by: mradermacher
tags: []
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hfhfix -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/khoantap/rabbit-fish-8b
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/rabbit-fish-8b-GGUF/resolve/main/rabbit-fish-8b.Q2_K.gguf) | Q2_K | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/rabbit-fish-8b-GGUF/resolve/main/rabbit-fish-8b.IQ3_XS.gguf) | IQ3_XS | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/rabbit-fish-8b-GGUF/resolve/main/rabbit-fish-8b.Q3_K_S.gguf) | Q3_K_S | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/rabbit-fish-8b-GGUF/resolve/main/rabbit-fish-8b.IQ3_S.gguf) | IQ3_S | 3.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/rabbit-fish-8b-GGUF/resolve/main/rabbit-fish-8b.IQ3_M.gguf) | IQ3_M | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/rabbit-fish-8b-GGUF/resolve/main/rabbit-fish-8b.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/rabbit-fish-8b-GGUF/resolve/main/rabbit-fish-8b.Q3_K_L.gguf) | Q3_K_L | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/rabbit-fish-8b-GGUF/resolve/main/rabbit-fish-8b.IQ4_XS.gguf) | IQ4_XS | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/rabbit-fish-8b-GGUF/resolve/main/rabbit-fish-8b.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/rabbit-fish-8b-GGUF/resolve/main/rabbit-fish-8b.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/rabbit-fish-8b-GGUF/resolve/main/rabbit-fish-8b.Q5_K_S.gguf) | Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/rabbit-fish-8b-GGUF/resolve/main/rabbit-fish-8b.Q5_K_M.gguf) | Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/rabbit-fish-8b-GGUF/resolve/main/rabbit-fish-8b.Q6_K.gguf) | Q6_K | 6.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/rabbit-fish-8b-GGUF/resolve/main/rabbit-fish-8b.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/rabbit-fish-8b-GGUF/resolve/main/rabbit-fish-8b.f16.gguf) | f16 | 16.2 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
JoPmt/mix-llama-3-8B-inst-line | JoPmt | 2024-05-14T21:20:50Z | 7 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"NousResearch/Meta-Llama-3-8B-Instruct",
"conversational",
"base_model:NousResearch/Meta-Llama-3-8B-Instruct",
"base_model:finetune:NousResearch/Meta-Llama-3-8B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-04-21T02:33:08Z | ---
tags:
- merge
- mergekit
- lazymergekit
- NousResearch/Meta-Llama-3-8B-Instruct
base_model:
- NousResearch/Meta-Llama-3-8B-Instruct
---
# mix-llama-3-8B-inst-line
mix-llama-3-8B-inst-line is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [NousResearch/Meta-Llama-3-8B-Instruct](https://huggingface.co/NousResearch/Meta-Llama-3-8B-Instruct)
## 🧩 Configuration
```yaml
dtype: bfloat16
merge_method: linear
slices:
- sources:
  - layer_range: [0, 32] # Both sources span all 32 layers of the model
    model: NousResearch/Meta-Llama-3-8B-Instruct
    parameters:
      weight: 1.0 # Equal weights: the linear merge averages two identical copies
  - layer_range: [0, 32]
    model: NousResearch/Meta-Llama-3-8B-Instruct
    parameters:
      weight: 1.0 # Equal weight for the second copy
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "JoPmt/mix-llama-3-8B-inst-line"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` |
EyaZr/gemma_coding | EyaZr | 2024-05-14T21:20:04Z | 151 | 0 | transformers | [
"transformers",
"safetensors",
"gemma",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-14T21:16:51Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
paysage-contemplator/mandarin-english-translator | paysage-contemplator | 2024-05-14T21:15:32Z | 0 | 0 | null | [
"translation",
"region:us"
] | translation | 2024-05-14T20:57:44Z | ---
tags:
- translation
---
# Mandarin-English translator
This seq2seq translation model was created as part of Assignment 3 (A3) of Stanford's CS224n NLP class.
It achieves a BLEU score of 19.37 on the test set. |
sakren/minilm-emotion | sakren | 2024-05-14T21:13:50Z | 106 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"base_model:microsoft/MiniLM-L12-H384-uncased",
"base_model:finetune:microsoft/MiniLM-L12-H384-uncased",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-05-14T16:32:36Z | ---
license: mit
base_model: microsoft/MiniLM-L12-H384-uncased
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- f1
model-index:
- name: minilm-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: split
split: validation
args: split
metrics:
- name: F1
type: f1
value: 0.7776664523081279
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# minilm-emotion
This model is a fine-tuned version of [microsoft/MiniLM-L12-H384-uncased](https://huggingface.co/microsoft/MiniLM-L12-H384-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7866
- F1: 0.7777
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 1.4807 | 1.0 | 250 | 1.1974 | 0.4675 |
| 1.062 | 2.0 | 500 | 0.8748 | 0.7255 |
| 0.8603 | 3.0 | 750 | 0.7866 | 0.7777 |
### Framework versions
- Transformers 4.39.3
- Pytorch 2.1.2
- Datasets 2.18.0
- Tokenizers 0.15.2
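A minimal inference sketch (assumes the fine-tuned checkpoint is available on the Hub under this id; the example sentence is illustrative):
```python
from transformers import pipeline
# The emotion dataset has six labels: sadness, joy, love, anger, fear, surprise.
classifier = pipeline("text-classification", model="sakren/minilm-emotion")
print(classifier("I can't stop smiling today!"))
```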
|
mradermacher/llama3-12b-instruct-GGUF | mradermacher | 2024-05-14T21:07:27Z | 1 | 1 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:ehristoforu/llama3-12b-instruct",
"base_model:quantized:ehristoforu/llama3-12b-instruct",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-05-14T17:59:46Z | ---
base_model: ehristoforu/llama3-12b-instruct
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/ehristoforu/llama3-12b-instruct
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/llama3-12b-instruct-GGUF/resolve/main/llama3-12b-instruct.Q2_K.gguf) | Q2_K | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/llama3-12b-instruct-GGUF/resolve/main/llama3-12b-instruct.IQ3_XS.gguf) | IQ3_XS | 5.0 | |
| [GGUF](https://huggingface.co/mradermacher/llama3-12b-instruct-GGUF/resolve/main/llama3-12b-instruct.Q3_K_S.gguf) | Q3_K_S | 5.3 | |
| [GGUF](https://huggingface.co/mradermacher/llama3-12b-instruct-GGUF/resolve/main/llama3-12b-instruct.IQ3_S.gguf) | IQ3_S | 5.3 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/llama3-12b-instruct-GGUF/resolve/main/llama3-12b-instruct.IQ3_M.gguf) | IQ3_M | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/llama3-12b-instruct-GGUF/resolve/main/llama3-12b-instruct.Q3_K_M.gguf) | Q3_K_M | 5.8 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/llama3-12b-instruct-GGUF/resolve/main/llama3-12b-instruct.Q3_K_L.gguf) | Q3_K_L | 6.3 | |
| [GGUF](https://huggingface.co/mradermacher/llama3-12b-instruct-GGUF/resolve/main/llama3-12b-instruct.IQ4_XS.gguf) | IQ4_XS | 6.5 | |
| [GGUF](https://huggingface.co/mradermacher/llama3-12b-instruct-GGUF/resolve/main/llama3-12b-instruct.Q4_K_S.gguf) | Q4_K_S | 6.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/llama3-12b-instruct-GGUF/resolve/main/llama3-12b-instruct.Q4_K_M.gguf) | Q4_K_M | 7.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/llama3-12b-instruct-GGUF/resolve/main/llama3-12b-instruct.Q5_K_S.gguf) | Q5_K_S | 8.1 | |
| [GGUF](https://huggingface.co/mradermacher/llama3-12b-instruct-GGUF/resolve/main/llama3-12b-instruct.Q5_K_M.gguf) | Q5_K_M | 8.3 | |
| [GGUF](https://huggingface.co/mradermacher/llama3-12b-instruct-GGUF/resolve/main/llama3-12b-instruct.Q6_K.gguf) | Q6_K | 9.6 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/llama3-12b-instruct-GGUF/resolve/main/llama3-12b-instruct.Q8_0.gguf) | Q8_0 | 12.3 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
lctzz540/bunboppo | lctzz540 | 2024-05-14T21:07:14Z | 48 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"gemma",
"trl",
"ppo",
"reinforcement-learning",
"license:apache-2.0",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | reinforcement-learning | 2024-05-12T16:32:39Z | ---
license: apache-2.0
tags:
- trl
- ppo
- transformers
- reinforcement-learning
---
# TRL Model
This is a [TRL language model](https://github.com/huggingface/trl) that has been fine-tuned with reinforcement learning to
guide the model outputs according to a value function or human feedback. The model can be used for text generation.
## Usage
To use this model for inference, first install the TRL library:
```bash
python -m pip install trl
```
You can then generate text as follows:
```python
from transformers import pipeline
generator = pipeline("text-generation", model="lctzz540/bunboppo")
outputs = generator("Hello, my llama is cute")
```
If you want to use the model for training or to obtain the outputs from the value head, load the model as follows:
```python
from transformers import AutoTokenizer
from trl import AutoModelForCausalLMWithValueHead
tokenizer = AutoTokenizer.from_pretrained("lctzz540/bunboppo")
model = AutoModelForCausalLMWithValueHead.from_pretrained("lctzz540/bunboppo")
inputs = tokenizer("Hello, my llama is cute", return_tensors="pt")
outputs = model(**inputs, labels=inputs["input_ids"])
```
|
yleo/EmertonOgno-7B | yleo | 2024-05-14T21:02:08Z | 5 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"yleo/OgnoMonarch-7B",
"yleo/EmertonMonarch-7B",
"base_model:yleo/EmertonMonarch-7B",
"base_model:merge:yleo/EmertonMonarch-7B",
"base_model:yleo/OgnoMonarch-7B",
"base_model:merge:yleo/OgnoMonarch-7B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-14T20:57:41Z | ---
tags:
- merge
- mergekit
- lazymergekit
- yleo/OgnoMonarch-7B
- yleo/EmertonMonarch-7B
base_model:
- yleo/OgnoMonarch-7B
- yleo/EmertonMonarch-7B
---
# yleo/EmertonOgno-7B
yleo/EmertonOgno-7B is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [yleo/OgnoMonarch-7B](https://huggingface.co/yleo/OgnoMonarch-7B)
* [yleo/EmertonMonarch-7B](https://huggingface.co/yleo/EmertonMonarch-7B)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: yleo/OgnoMonarch-7B
layer_range: [0, 32]
- model: yleo/EmertonMonarch-7B
layer_range: [0, 32]
merge_method: slerp
base_model: yleo/OgnoMonarch-7B
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "yleo/EmertonOgno-7B"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` |
Felladrin/gguf-sharded-smashed-WizardLM-2-7B | Felladrin | 2024-05-14T21:01:56Z | 1 | 0 | null | [
"gguf",
"base_model:PrunaAI/WizardLM-2-7B-GGUF-smashed",
"base_model:quantized:PrunaAI/WizardLM-2-7B-GGUF-smashed",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-05-14T20:53:28Z | ---
base_model: PrunaAI/WizardLM-2-7B-GGUF-smashed
---
Sharded GGUF version of [PrunaAI/WizardLM-2-7B-GGUF-smashed](https://huggingface.co/PrunaAI/WizardLM-2-7B-GGUF-smashed).
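With llama.cpp-based loaders, shards produced by `gguf-split` are loaded by pointing at the first shard, which locates the rest automatically. A sketch with `llama-cpp-python` (the exact shard file name in this repo is an assumption):
```python
from llama_cpp import Llama
# Passing the first split file is enough; the loader finds the remaining shards.
llm = Llama(model_path="WizardLM-2-7B.Q4_K_M-00001-of-00003.gguf", n_ctx=4096)
print(llm("Hello, ", max_tokens=32)["choices"][0]["text"])
```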
|
llmware/industry-bert-asset-management-v0.1 | llmware | 2024-05-14T21:00:09Z | 172 | 7 | transformers | [
"transformers",
"pytorch",
"bert",
"feature-extraction",
"arxiv:2104.06979",
"license:apache-2.0",
"text-embeddings-inference",
"region:us"
] | feature-extraction | 2023-09-29T20:51:37Z | ---
license: apache-2.0
inference: false
---
# industry-bert-asset-management-v0.1
<!-- Provide a quick summary of what the model is/does. -->
industry-bert-asset-management-v0.1 is part of a series of industry-fine-tuned sentence_transformer embedding models.
### Model Description
<!-- Provide a longer summary of what this model is. -->
industry-bert-asset-management-v0.1 is a domain fine-tuned, BERT-based Sentence Transformer model with 768-dimensional embeddings, intended as a "drop-in"
substitute for embeddings in the asset management domain. This model was trained on a wide range of publicly available documents regarding the asset management industry.
- **Developed by:** llmware
- **Model type:** BERT-based Industry domain fine-tuned Sentence Transformer architecture
- **Language(s) (NLP):** English
- **License:** Apache 2.0
- **Finetuned from model [optional]:** BERT-based model, fine-tuning methodology described below.
### Model Use
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("llmware/industry-bert-asset-management-v0.1")
model = AutoModel.from_pretrained("llmware/industry-bert-asset-management-v0.1")
```
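The model returns per-token hidden states; to get one vector per sentence, mean pooling over the last hidden state is a common approach. A self-contained sketch (not an official llmware recipe; the example sentence is illustrative):
```python
import torch
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("llmware/industry-bert-asset-management-v0.1")
model = AutoModel.from_pretrained("llmware/industry-bert-asset-management-v0.1")
sentences = ["The fund's net asset value rose 3% in Q2."]
batch = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt")
with torch.no_grad():
    token_embeddings = model(**batch).last_hidden_state  # (batch, seq_len, 768)
# Mean-pool over real tokens only, using the attention mask
mask = batch["attention_mask"].unsqueeze(-1).float()
embeddings = (token_embeddings * mask).sum(dim=1) / mask.sum(dim=1)
print(embeddings.shape)  # torch.Size([1, 768])
```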
## Bias, Risks, and Limitations
This is a semantic embedding model, fine-tuned on public domain documents regarding the business, financials and companies in the asset
management industry. Results may vary if used outside of this domain, and like any embedding model, there is always the potential for anomalies in the vector embedding space. No specific safeguards have been
put in place for safety or to mitigate potential bias in the dataset.
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
This model was fine-tuned using a custom self-supervised procedure and custom dataset that combined contrastive techniques
with stochastic injections of distortions in the samples. The methodology was derived, adapted and inspired primarily from
three research papers cited below: TSDAE (Reimers), DeClutr (Giorgi), and Contrastive Tension (Carlsson).
## Citation [optional]
The custom self-supervised training protocol used to train the model was derived from and inspired by the following papers:
```bibtex
@article{wang-2021-TSDAE,
  title = "TSDAE: Using Transformer-based Sequential Denoising Auto-Encoder for Unsupervised Sentence Embedding Learning",
  author = "Wang, Kexin and Reimers, Nils and Gurevych, Iryna",
  journal = "arXiv preprint arXiv:2104.06979",
  month = "4",
  year = "2021",
  url = "https://arxiv.org/abs/2104.06979",
}
@inproceedings{giorgi-etal-2021-declutr,
  title = {{D}e{CLUTR}: Deep Contrastive Learning for Unsupervised Textual Representations},
  author = {Giorgi, John and Nitski, Osvald and Wang, Bo and Bader, Gary},
  year = 2021,
  month = aug,
  booktitle = {Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)},
  publisher = {Association for Computational Linguistics},
  address = {Online},
  pages = {879--895},
  doi = {10.18653/v1/2021.acl-long.72},
  url = {https://aclanthology.org/2021.acl-long.72}
}
@article{Carlsson-2021-CT,
  title = {Semantic Re-tuning with Contrastive Tension},
  author = {Carlsson, Fredrik and Gyllensten, Amaru Cuba and Gogoulou, Evangelia and Hellqvist, Erik Ylipää and Sahlgren, Magnus},
  year = {2021},
  month = {January}
}
```
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
## Model Card Contact
Darren Oberst @ llmware
|
llmware/industry-bert-insurance-v0.1 | llmware | 2024-05-14T20:58:02Z | 273 | 11 | transformers | [
"transformers",
"pytorch",
"bert",
"feature-extraction",
"arxiv:2104.06979",
"license:apache-2.0",
"text-embeddings-inference",
"region:us"
] | feature-extraction | 2023-09-29T21:15:45Z | ---
license: apache-2.0
inference: false
---
# industry-bert-insurance-v0.1
<!-- Provide a quick summary of what the model is/does. -->
industry-bert-insurance-v0.1 is part of a series of industry-fine-tuned sentence_transformer embedding models.
### Model Description
<!-- Provide a longer summary of what this model is. -->
industry-bert-insurance-v0.1 is a domain fine-tuned, BERT-based Sentence Transformer model with 768-dimensional embeddings, intended as a "drop-in"
substitute for embeddings in the insurance industry domain. This model was trained on a wide range of publicly available documents on the insurance industry.
- **Developed by:** llmware
- **Model type:** BERT-based Industry domain fine-tuned Sentence Transformer architecture
- **Language(s) (NLP):** English
- **License:** Apache 2.0
- **Finetuned from model [optional]:** BERT-based model, fine-tuning methodology described below.
## Model Use
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("llmware/industry-bert-insurance-v0.1")
model = AutoModel.from_pretrained("llmware/industry-bert-insurance-v0.1")
```
## Bias, Risks, and Limitations
This is a semantic embedding model, fine-tuned on public domain documents about the insurance industry. Results may vary if used outside of this
domain, and like any embedding model, there is always the potential for anomalies in the vector embedding space. No specific safeguards have been
put in place for safety or to mitigate potential bias in the dataset.
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
This model was fine-tuned using a custom self-supervised procedure and custom dataset that combined contrastive techniques
with stochastic injections of distortions in the samples. The methodology was derived, adapted and inspired primarily from
three research papers cited below: TSDAE (Reimers), DeClutr (Giorgi), and Contrastive Tension (Carlsson).
## Citation [optional]
The custom self-supervised training protocol used to train the model was derived from and inspired by the following papers:
```bibtex
@article{wang-2021-TSDAE,
  title = "TSDAE: Using Transformer-based Sequential Denoising Auto-Encoder for Unsupervised Sentence Embedding Learning",
  author = "Wang, Kexin and Reimers, Nils and Gurevych, Iryna",
  journal = "arXiv preprint arXiv:2104.06979",
  month = "4",
  year = "2021",
  url = "https://arxiv.org/abs/2104.06979",
}
@inproceedings{giorgi-etal-2021-declutr,
  title = {{D}e{CLUTR}: Deep Contrastive Learning for Unsupervised Textual Representations},
  author = {Giorgi, John and Nitski, Osvald and Wang, Bo and Bader, Gary},
  year = 2021,
  month = aug,
  booktitle = {Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)},
  publisher = {Association for Computational Linguistics},
  address = {Online},
  pages = {879--895},
  doi = {10.18653/v1/2021.acl-long.72},
  url = {https://aclanthology.org/2021.acl-long.72}
}
@article{Carlsson-2021-CT,
  title = {Semantic Re-tuning with Contrastive Tension},
  author = {Carlsson, Fredrik and Gyllensten, Amaru Cuba and Gogoulou, Evangelia and Hellqvist, Erik Ylipää and Sahlgren, Magnus},
  year = {2021},
  month = {January}
}
```
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
## Model Card Contact
Darren Oberst @ llmware
|
llm-wizard/leagaleasy-llama-3-instruct-v2 | llm-wizard | 2024-05-14T20:56:19Z | 6 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"dataset:generator",
"base_model:meta-llama/Meta-Llama-3-8B-Instruct",
"base_model:adapter:meta-llama/Meta-Llama-3-8B-Instruct",
"license:llama3",
"region:us"
] | null | 2024-04-24T14:29:44Z | ---
license: llama3
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: meta-llama/Meta-Llama-3-8B-Instruct
datasets:
- generator
model-index:
- name: leagaleasy-llama-3-instruct-v2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# leagaleasy-llama-3-instruct-v2
This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) on the generator dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 6
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- PEFT 0.10.0
- Transformers 4.40.2
- Pytorch 2.2.1+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
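Since this repository holds a PEFT adapter rather than full model weights, it is loaded on top of the base model. A minimal sketch (assumes access to the gated Llama-3 base weights):
```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer
base_id = "meta-llama/Meta-Llama-3-8B-Instruct"
adapter_id = "llm-wizard/leagaleasy-llama-3-instruct-v2"
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.float16, device_map="auto"
)
model = PeftModel.from_pretrained(base, adapter_id)  # attaches the adapter weights
```
|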
aprioni/20240514-llama3-cami-dianne-qa | aprioni | 2024-05-14T20:56:05Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/llama-3-8b-Instruct-bnb-4bit",
"base_model:finetune:unsloth/llama-3-8b-Instruct-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-05-14T20:55:57Z | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
base_model: unsloth/llama-3-8b-Instruct-bnb-4bit
---
# Uploaded model
- **Developed by:** aprioni
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-Instruct-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
llmware/industry-bert-loans | llmware | 2024-05-14T20:52:55Z | 576 | 2 | transformers | [
"transformers",
"safetensors",
"bert",
"feature-extraction",
"arxiv:2104.06979",
"license:apache-2.0",
"text-embeddings-inference",
"region:us"
] | feature-extraction | 2024-02-28T09:13:40Z | ---
license: apache-2.0
inference: false
---
# industry-bert-loans
<!-- Provide a quick summary of what the model is/does. -->
industry-bert-loans is part of a series of industry-fine-tuned sentence_transformer embedding models.
### Model Description
<!-- Provide a longer summary of what this model is. -->
industry-bert-loans is a domain fine-tuned, BERT-based Sentence Transformer model with 768-dimensional embeddings, intended as a "drop-in"
embedding substitute optimized for loan agreements. This model was trained on a wide range of publicly available commercial lending agreements.
- **Developed by:** llmware
- **Model type:** BERT-based Industry domain fine-tuned Sentence Transformer architecture
- **Language(s) (NLP):** English
- **License:** Apache 2.0
- **Finetuned from model [optional]:** BERT-based model, fine-tuning methodology described below.
## Model Use
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("llmware/industry-bert-loans")
model = AutoModel.from_pretrained("llmware/industry-bert-loans")
```
## Bias, Risks, and Limitations
This is a semantic embedding model, fine-tuned on publicly available loan, security, credit and underwriting agreements. Results may vary if used outside of this
domain, and like any embedding model, there is always the potential for anomalies in the vector embedding space. No specific safeguards have been
put in place for safety or to mitigate potential bias in the dataset.
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
This model was fine-tuned using a custom self-supervised procedure and custom dataset that combined contrastive techniques
with stochastic injections of distortions in the samples. The methodology was derived, adapted and inspired primarily from
three research papers cited below: TSDAE (Reimers), DeClutr (Giorgi), and Contrastive Tension (Carlsson).
## Citation [optional]
The custom self-supervised training protocol used to train the model was derived from and inspired by the following papers:
```bibtex
@article{wang-2021-TSDAE,
  title = "TSDAE: Using Transformer-based Sequential Denoising Auto-Encoder for Unsupervised Sentence Embedding Learning",
  author = "Wang, Kexin and Reimers, Nils and Gurevych, Iryna",
  journal = "arXiv preprint arXiv:2104.06979",
  month = "4",
  year = "2021",
  url = "https://arxiv.org/abs/2104.06979",
}
@inproceedings{giorgi-etal-2021-declutr,
  title = {{D}e{CLUTR}: Deep Contrastive Learning for Unsupervised Textual Representations},
  author = {Giorgi, John and Nitski, Osvald and Wang, Bo and Bader, Gary},
  year = 2021,
  month = aug,
  booktitle = {Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)},
  publisher = {Association for Computational Linguistics},
  address = {Online},
  pages = {879--895},
  doi = {10.18653/v1/2021.acl-long.72},
  url = {https://aclanthology.org/2021.acl-long.72}
}
@article{Carlsson-2021-CT,
  title = {Semantic Re-tuning with Contrastive Tension},
  author = {Carlsson, Fredrik and Gyllensten, Amaru Cuba and Gogoulou, Evangelia and Hellqvist, Erik Ylipää and Sahlgren, Magnus},
  year = {2021},
  month = {January}
}
```
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
## Model Card Contact
Darren Oberst @ llmware
|
fhnw/Llama-3-pineapple-2x8B | fhnw | 2024-05-14T20:48:44Z | 6 | 0 | transformers | [
"transformers",
"safetensors",
"mixtral",
"text-generation",
"moe",
"frankenmoe",
"merge",
"mergekit",
"fhnw/Llama-3-8B-pineapple-pizza-orpo",
"fhnw/Llama-3-8B-pineapple-recipe-sft",
"conversational",
"base_model:fhnw/Llama-3-8B-pineapple-pizza-orpo",
"base_model:merge:fhnw/Llama-3-8B-pineapple-pizza-orpo",
"base_model:fhnw/Llama-3-8B-pineapple-recipe-sft",
"base_model:merge:fhnw/Llama-3-8B-pineapple-recipe-sft",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-14T20:40:38Z | ---
tags:
- moe
- frankenmoe
- merge
- mergekit
- fhnw/Llama-3-8B-pineapple-pizza-orpo
- fhnw/Llama-3-8B-pineapple-recipe-sft
base_model:
- fhnw/Llama-3-8B-pineapple-pizza-orpo
- fhnw/Llama-3-8B-pineapple-recipe-sft
---
# Llama-3-pineapple-2x8B
Llama-3-pineapple-2x8B is a Mixture of Experts (MoE) made with the following models:
* [fhnw/Llama-3-8B-pineapple-pizza-orpo](https://huggingface.co/fhnw/Llama-3-8B-pineapple-pizza-orpo)
* [fhnw/Llama-3-8B-pineapple-recipe-sft](https://huggingface.co/fhnw/Llama-3-8B-pineapple-recipe-sft)
## Configuration
```yaml
base_model: fhnw/Llama-3-8B-pineapple-pizza-orpo
experts:
- source_model: fhnw/Llama-3-8B-pineapple-pizza-orpo
positive_prompts: ["assistant", "chat"]
- source_model: fhnw/Llama-3-8B-pineapple-recipe-sft
positive_prompts: ["recipe"]
gate_mode: hidden
dtype: float16
```
## Usage
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
model_id = "fhnw/Llama-3-pineapple-2x8B"
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16).to(device)
messages = [
{"role": "system", "content": "You are a helpful assistant."},
{"role": "user", "content": "Is pineapple on a pizza a crime?"}
]
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(device)
terminators = [
tokenizer.eos_token_id,
tokenizer.convert_tokens_to_ids("<|eot_id|>")
]
outputs = model.generate(
input_ids,
max_new_tokens=256,
eos_token_id=terminators,
do_sample=True,
temperature=0.7,
top_p=0.9,
)
response = outputs[0][input_ids.shape[-1]:]
print(tokenizer.decode(response, skip_special_tokens=True))
``` |
Slvcxc/saiga_llama3_8b-kto-v3-8.0bpw-h8-exl2 | Slvcxc | 2024-05-14T20:47:15Z | 9 | 1 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"llama3",
"8-bit",
"conversational",
"ru",
"base_model:IlyaGusev/saiga_llama3_8b",
"base_model:quantized:IlyaGusev/saiga_llama3_8b",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"exl2",
"region:us"
] | text-generation | 2024-05-14T19:52:17Z | ---
language:
- ru
base_model:
- IlyaGusev/saiga_llama3_8b
license: other
license_name: llama3
license_link: https://llama.meta.com/llama3/license/
tags:
- llama3
- 8-bit
---
## **saiga_llama3_8b**
[exllamav2](https://github.com/turboderp/exllamav2) quant for [IlyaGusev/saiga_llama3_8b](https://huggingface.co/IlyaGusev/saiga_llama3_8b)
**Original model information:**
# Saiga/Llama3 8B, Russian Llama-3-based chatbot
Based on [Llama-3 8B Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct).
Llama.cpp version: [link](https://huggingface.co/IlyaGusev/saiga_llama3_8b_gguf)
**ОСТОРОЖНО! WARNING! LET OP!**
I've changed the prompt format from ChatML to **the original Llama-3 format in v4**. Don't forget to switch formats!
**v4**: Llama-3 prompt format:
```
<|begin_of_text|><|start_header_id|>system<|end_header_id|>
Ты — Сайга, русскоязычный автоматический ассистент. Ты разговариваешь с людьми и помогаешь им.<|eot_id|><|start_header_id|>user<|end_header_id|>
Как дела?<|eot_id|><|start_header_id|>assistant<|end_header_id|>
Отлично, а у тебя?<|eot_id|><|start_header_id|>user<|end_header_id|>
Шикарно. Как пройти в библиотеку?<|eot_id|><|start_header_id|>assistant<|end_header_id|>
```
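With a recent `transformers`, the v4 prompt above can also be built from the tokenizer's chat template instead of by hand (a sketch; assumes the repo ships the Llama-3 chat template):
```python
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("IlyaGusev/saiga_llama3_8b")
messages = [
    {"role": "system", "content": "Ты — Сайга, русскоязычный автоматический ассистент. Ты разговариваешь с людьми и помогаешь им."},
    {"role": "user", "content": "Как дела?"},
]
# Produces the <|start_header_id|>...<|eot_id|> layout shown above
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(prompt)
```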
**v2, v3**: ChatML prompt format:
```
<|im_start|>system
Ты — Сайга, русскоязычный автоматический ассистент. Ты разговариваешь с людьми и помогаешь им.<|im_end|>
<|im_start|>user
Как дела?<|im_end|>
<|im_start|>assistant
Отлично, а у тебя?<|im_end|>
<|im_start|>user
Шикарно. Как пройти в библиотеку?<|im_end|>
<|im_start|>assistant
```
Examples:
```
User: Почему трава зеленая?
Saiga: Зеленый цвет травы обусловлен наличием хлорофилла, пигмента, который поглощает свет и передает энергию для фотосинтеза. Хлорофилл содержится в клетках листьев растений и играет ключевую роль в процессе фотосинтеза, когда растения используют солнечный свет для превращения воды и углекислого газа в органические вещества, такие как глюкоза и олигоэфиры.
Хлорофилл состоит из двух основных форм: хлорофилла а и хлорофилла б. Хлорофилл а поглощает свет с длиной волны около 430 нанометров (нм), что соответствует синему или голубому свету, а хлорофилл б поглощает свет с длиной волны около 660 нм, что соответствует красному свету. Когда свет поглощается хлорофиллом, он передается в молекулы воды и углекислого газа, которые используются для фотосинтеза.
Таким образом, хлорофилл является основным компонентом, ответственным за зеленый цвет листьев растений, так как он поглощает свет и передает его энергию для фотосинтеза.
```
```
User: Сочини длинный рассказ, обязательно упоминая следующие объекты. Дано: Таня, мяч
Saiga: Однажды в солнечную погоду на поле для игры в футбол приехала группа друзей, чтобы провести время вместе. В числе их был и Таня, которая была страстной футбольной фанаткой и всегда радовал своих друзей своим энтузиазмом и умением забивать мячи.
В этот день, как обычно, друзья решили провести соревнование между собой, чтобы определить кто из них является лучшим футболистом. Таня была очень уверена в своих силах и готовилась к этому матчу с особой серьёзностью.
Когда все друзья собрались на поле, они увидели, что перед ними стоял огромный мяч, который должен был стать предметом состязания. Мяч был огромным и тяжелым, и его размеры были необычайно большими по сравнению с обычными мячами, которые используются в футболе.
Таня была первая, кто решил начать игру. Она подошла к мячу и начала его удерживать, стараясь выдержать его вес и силу. Но мяч оказался настолько тяжелым, что Таня не смогла удержать его и он упал на землю.
Друзья посмеялись над ее неудачей, но Таня не отчаивалась и продолжила пытаться удержать мяч. Она стала использовать все свои силы и умения, чтобы выдержать его вес и силу. Наконец, после долгих усилий, она смогла удержать мяч и начала его бросать в сторону.
Мяч летел высоко вверх, и друзья смотрели, как он пролетает над полем. Но мяч неожиданно повернул и стал лететь обратно к Тане. Она успела поймать его и продолжила играть, используя все свои навыки и умения.
```
kto:
- v4 base
- [8d878ef8566b5ce3ab8da72e42a815a6976a60a7](https://huggingface.co/IlyaGusev/saiga_llama3_8b/commit/8d878ef8566b5ce3ab8da72e42a815a6976a60a7)
- dataset: [lmsys_clean_ru_preferences](https://huggingface.co/datasets/IlyaGusev/lmsys_clean_ru_preferences)
- wandb [link](https://wandb.ai/ilyagusev/rulm_self_instruct/runs/yjddtu1q)
v4:
- [1cc945d4ca2c7901cf989e7edaac52ab24f1a7dd](https://huggingface.co/IlyaGusev/saiga_llama3_8b/commit/1cc945d4ca2c7901cf989e7edaac52ab24f1a7dd)
- dataset: [saiga_scored](https://huggingface.co/datasets/IlyaGusev/saiga_scored), scores >= 8, c66032920556c0f21bbbed05e7e04433ec954c3d
- wandb [link](https://wandb.ai/ilyagusev/rulm_self_instruct/runs/dcbs9ttt)
v3:
- [c588356cd60bdee54d52c2dd5a2445acca8aa5c3](https://huggingface.co/IlyaGusev/saiga_llama3_8b/commit/c588356cd60bdee54d52c2dd5a2445acca8aa5c3)
- dataset: [saiga_scored](https://huggingface.co/datasets/IlyaGusev/saiga_scored), scores >= 8, d51cf8060bdc90023da8cf1c3f113f9193d6569b
- wandb [link](https://wandb.ai/ilyagusev/rulm_self_instruct/runs/ltoqdsal)
v2:
- [ae61b4f9b34fac9856d361ea78c66284a00e4f0b](https://huggingface.co/IlyaGusev/saiga_llama3_8b/commit/ae61b4f9b34fac9856d361ea78c66284a00e4f0b)
- dataset code revision d0d123dd221e10bb2a3383bcb1c6e4efe1b4a28a
- wandb [link](https://wandb.ai/ilyagusev/huggingface/runs/r6u5juyk)
- 5 datasets: ru_turbo_saiga, ru_sharegpt_cleaned, oasst1_ru_main_branch, gpt_roleplay_realm, ru_instruct_gpt4
- Datasets merging script: [create_short_chat_set.py](https://github.com/IlyaGusev/rulm/blob/d0d123dd221e10bb2a3383bcb1c6e4efe1b4a28a/self_instruct/src/data_processing/create_short_chat_set.py)
# Evaluation
* Dataset: https://github.com/IlyaGusev/rulm/blob/master/self_instruct/data/tasks.jsonl
* Framework: https://github.com/tatsu-lab/alpaca_eval
* Evaluator: alpaca_eval_cot_gpt4_turbo_fn
| model | length_controlled_winrate | win_rate | standard_error | avg_length |
|-----|-----|-----|-----|-----|
|chatgpt_4_turbo | 76.04 | 90.00 |1.46 | 1270 |
|chatgpt_3_5_turbo | 50.00 | 50.00 | 0.00 | 536 |
|saiga_llama3_8b, kto | 48.72 | 73.37 | 2.17 | 1511 |
|saiga_llama3_8b, v4 | 43.64 | 65.90 | 2.31 | 1200 |
|saiga_llama3_8b, v3 | 36.97 | 61.08 | 2.38 | 1162 |
|saiga_llama3_8b, v2 | 33.07 | 48.19 | 2.45 | 1166 |
|saiga_mistral_7b | 23.38 | 35.99 | 2.34 | 949 |
|
mradermacher/Yi-1.5-dolphin-9B-GGUF | mradermacher | 2024-05-14T20:40:34Z | 96 | 1 | transformers | [
"transformers",
"gguf",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"sft",
"en",
"dataset:Vezora/Dolphin1m_gpt4_Alpaca_format",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-05-14T20:09:13Z | ---
base_model: NotAiLOL/Yi-1.5-dolphin-9B
datasets:
- Vezora/Dolphin1m_gpt4_Alpaca_format
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/NotAiLOL/Yi-1.5-dolphin-9B
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Yi-1.5-dolphin-9B-GGUF/resolve/main/Yi-1.5-dolphin-9B.Q2_K.gguf) | Q2_K | 3.5 | |
| [GGUF](https://huggingface.co/mradermacher/Yi-1.5-dolphin-9B-GGUF/resolve/main/Yi-1.5-dolphin-9B.IQ3_XS.gguf) | IQ3_XS | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/Yi-1.5-dolphin-9B-GGUF/resolve/main/Yi-1.5-dolphin-9B.Q3_K_S.gguf) | Q3_K_S | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/Yi-1.5-dolphin-9B-GGUF/resolve/main/Yi-1.5-dolphin-9B.IQ3_S.gguf) | IQ3_S | 4.0 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Yi-1.5-dolphin-9B-GGUF/resolve/main/Yi-1.5-dolphin-9B.IQ3_M.gguf) | IQ3_M | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/Yi-1.5-dolphin-9B-GGUF/resolve/main/Yi-1.5-dolphin-9B.Q3_K_M.gguf) | Q3_K_M | 4.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Yi-1.5-dolphin-9B-GGUF/resolve/main/Yi-1.5-dolphin-9B.Q3_K_L.gguf) | Q3_K_L | 4.8 | |
| [GGUF](https://huggingface.co/mradermacher/Yi-1.5-dolphin-9B-GGUF/resolve/main/Yi-1.5-dolphin-9B.IQ4_XS.gguf) | IQ4_XS | 4.9 | |
| [GGUF](https://huggingface.co/mradermacher/Yi-1.5-dolphin-9B-GGUF/resolve/main/Yi-1.5-dolphin-9B.Q4_K_S.gguf) | Q4_K_S | 5.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Yi-1.5-dolphin-9B-GGUF/resolve/main/Yi-1.5-dolphin-9B.Q4_K_M.gguf) | Q4_K_M | 5.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Yi-1.5-dolphin-9B-GGUF/resolve/main/Yi-1.5-dolphin-9B.Q5_K_S.gguf) | Q5_K_S | 6.2 | |
| [GGUF](https://huggingface.co/mradermacher/Yi-1.5-dolphin-9B-GGUF/resolve/main/Yi-1.5-dolphin-9B.Q5_K_M.gguf) | Q5_K_M | 6.4 | |
| [GGUF](https://huggingface.co/mradermacher/Yi-1.5-dolphin-9B-GGUF/resolve/main/Yi-1.5-dolphin-9B.Q6_K.gguf) | Q6_K | 7.3 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Yi-1.5-dolphin-9B-GGUF/resolve/main/Yi-1.5-dolphin-9B.Q8_0.gguf) | Q8_0 | 9.5 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Yi-1.5-dolphin-9B-GGUF/resolve/main/Yi-1.5-dolphin-9B.f16.gguf) | f16 | 17.8 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
Plasmoxy/drumold-tiny-224-rm0.2 | Plasmoxy | 2024-05-14T20:38:24Z | 196 | 1 | transformers | [
"transformers",
"safetensors",
"vit",
"image-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2024-05-14T20:38:18Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Mag0g/Ezekiel27_7 | Mag0g | 2024-05-14T20:36:53Z | 130 | 0 | transformers | [
"transformers",
"safetensors",
"stablelm",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-14T20:34:46Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Abin7/bangla-chinese-romania-hindi | Abin7 | 2024-05-14T20:35:13Z | 4 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"arxiv:2306.01708",
"base_model:BanglaLLM/bangla-llama-7b-instruct-v0.1",
"base_model:merge:BanglaLLM/bangla-llama-7b-instruct-v0.1",
"base_model:FlagAlpha/Llama2-Chinese-7b-Chat",
"base_model:merge:FlagAlpha/Llama2-Chinese-7b-Chat",
"base_model:Rishabh02/Llama-2-7b-hindi_fine_fine",
"base_model:merge:Rishabh02/Llama-2-7b-hindi_fine_fine",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-14T20:30:42Z | ---
base_model:
- andreidima/Llama-2-7b-Romanian
- Rishabh02/Llama-2-7b-hindi_fine_fine
- BanglaLLM/bangla-llama-7b-instruct-v0.1
- FlagAlpha/Llama2-Chinese-7b-Chat
library_name: transformers
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged with the [TIES](https://arxiv.org/abs/2306.01708) merge method, using [FlagAlpha/Llama2-Chinese-7b-Chat](https://huggingface.co/FlagAlpha/Llama2-Chinese-7b-Chat) as the base.
### Models Merged
The following models were included in the merge:
* [andreidima/Llama-2-7b-Romanian](https://huggingface.co/andreidima/Llama-2-7b-Romanian)
* [Rishabh02/Llama-2-7b-hindi_fine_fine](https://huggingface.co/Rishabh02/Llama-2-7b-hindi_fine_fine)
* [BanglaLLM/bangla-llama-7b-instruct-v0.1](https://huggingface.co/BanglaLLM/bangla-llama-7b-instruct-v0.1)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: "FlagAlpha/Llama2-Chinese-7b-Chat"
parameters:
density: 0.2
weight: 0.2
- model: "BanglaLLM/bangla-llama-7b-instruct-v0.1"
parameters:
density: 0.2
weight: 0.2
- model: "andreidima/Llama-2-7b-Romanian"
parameters:
density: 0.2
weight: 0.2
- model: "Rishabh02/Llama-2-7b-hindi_fine_fine"
parameters:
density: 0.4
weight: 0.4
merge_method: "ties"
base_model: "FlagAlpha/Llama2-Chinese-7b-Chat"
parameters:
normalize: false
int8_mask: true
dtype: float16
```
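The card stops at the merge configuration. For completeness, here is a hedged inference sketch for the merged checkpoint: it assumes the repo hosts a standard Llama-2-7B-class causal LM (per the merged models above) and that float16 weights fit on the available GPU; the Bangla prompt is only a placeholder.

```python
import torch
from transformers import pipeline

# Load the merged Llama-2-7B-class model; float16 keeps it near ~14 GB of VRAM.
pipe = pipeline(
    "text-generation",
    model="Abin7/bangla-chinese-romania-hindi",
    torch_dtype=torch.float16,
    device_map="auto",
)
print(pipe("আপনি কেমন আছেন?", max_new_tokens=50)[0]["generated_text"])
```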
|
chasche/bert-finetuned-ner | chasche | 2024-05-14T20:34:45Z | 117 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"token-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-cased",
"base_model:finetune:google-bert/bert-base-cased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2024-05-06T12:43:03Z | ---
license: apache-2.0
base_model: bert-base-cased
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-finetuned-ner
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0607
- Precision: 0.9335
- Recall: 0.9514
- F1: 0.9423
- Accuracy: 0.9865
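The card omits a usage example; the following is a minimal inference sketch, assuming the fine-tuned checkpoint and tokenizer are published under this repo id (the entity label set is not documented in the card):

```python
from transformers import pipeline

# aggregation_strategy="simple" merges sub-word tokens into whole entity spans.
ner = pipeline(
    "token-classification",
    model="chasche/bert-finetuned-ner",
    aggregation_strategy="simple",
)
print(ner("My name is Wolfgang and I live in Berlin."))
```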
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0752 | 1.0 | 1756 | 0.0733 | 0.8966 | 0.9280 | 0.9120 | 0.9801 |
| 0.0346 | 2.0 | 3512 | 0.0702 | 0.9296 | 0.9445 | 0.9370 | 0.9846 |
| 0.0222 | 3.0 | 5268 | 0.0607 | 0.9335 | 0.9514 | 0.9423 | 0.9865 |
### Framework versions
- Transformers 4.40.1
- Pytorch 2.2.1+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
|
ZcepZtar/DaToSw_V1.1 | ZcepZtar | 2024-05-14T20:29:36Z | 106 | 0 | transformers | [
"transformers",
"safetensors",
"marian",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2024-05-14T20:29:24Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
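Since the template leaves this section blank, the following is only a guess-level sketch: the repo name `DaToSw_V1.1` and the `marian` architecture tag suggest a Danish-to-Swedish MarianMT translation model, but the card confirms none of this.

```python
from transformers import MarianMTModel, MarianTokenizer

model_id = "ZcepZtar/DaToSw_V1.1"
tokenizer = MarianTokenizer.from_pretrained(model_id)
model = MarianMTModel.from_pretrained(model_id)

# Danish input is an assumption based on the repo name ("DaToSw").
batch = tokenizer(["Hvordan har du det i dag?"], return_tensors="pt")
outputs = model.generate(**batch)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True))
```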
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
DestinedAlpha/ayano | DestinedAlpha | 2024-05-14T20:28:47Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2024-05-14T20:28:10Z | ---
license: apache-2.0
---
|
WlappaAI/dracor-ru-small-lora | WlappaAI | 2024-05-14T20:23:21Z | 1 | 0 | peft | [
"peft",
"safetensors",
"mistral",
"generated_from_trainer",
"base_model:WlappaAI/Mistral-7B-wikipedia_ru_pruned-0.1_merged",
"base_model:adapter:WlappaAI/Mistral-7B-wikipedia_ru_pruned-0.1_merged",
"license:apache-2.0",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2024-05-14T20:04:13Z | ---
license: apache-2.0
library_name: peft
tags:
- generated_from_trainer
base_model: WlappaAI/Mistral-7B-wikipedia_ru_pruned-0.1_merged
model-index:
- name: dracor-ru-small-lora
results: []
---
[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.0`
```yaml
base_model: WlappaAI/Mistral-7B-wikipedia_ru_pruned-0.1_merged
model_type: MistralForCausalLM
tokenizer_type: LlamaTokenizer
is_mistral_derived_model: true
load_in_8bit: true
load_in_4bit: false
strict: false
datasets:
- path: ./datasets/ru-dracor
type: completion
field: text
dataset_prepared_path: last_run_prepared
val_set_size: 0.05
output_dir: ./models/output/dracor_ru_lora
adapter: lora
lora_model_dir:
sequence_len: 1024
sample_packing: true
pad_to_sequence_len: true
lora_r: 32
lora_alpha: 16
lora_dropout: 0.05
lora_target_linear: true
lora_fan_in_fan_out:
lora_target_modules:
- gate_proj
- down_proj
- up_proj
- q_proj
- v_proj
- k_proj
- o_proj
wandb_project:
wandb_entity:
wandb_watch:
wandb_name:
wandb_log_model:
gradient_accumulation_steps: 1
micro_batch_size: 6
num_epochs: 1
optimizer: adamw_torch
lr_scheduler: cosine
learning_rate: 0.0002
train_on_inputs: false
group_by_length: false
bf16: auto
fp16:
tf32: false
gradient_checkpointing: true
early_stopping_patience:
resume_from_checkpoint:
local_rank:
logging_steps:
xformers_attention:
flash_attention: true
loss_watchdog_threshold: 5.0
loss_watchdog_patience: 3
warmup_steps: 10
evals_per_epoch: 1
eval_table_size:
eval_max_new_tokens: 128
saves_per_epoch: 1
debug:
deepspeed:
weight_decay: 0.0
fsdp:
fsdp_config:
special_tokens:
```
</details><br>
# dracor-ru-small-lora
This model is a fine-tuned version of [WlappaAI/Mistral-7B-wikipedia_ru_pruned-0.1_merged](https://huggingface.co/WlappaAI/Mistral-7B-wikipedia_ru_pruned-0.1_merged) on the Russian DraCor dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6606
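The card has no usage section; a hedged loading sketch follows. It attaches the LoRA adapter to the base model named above; bfloat16 and `device_map="auto"` are judgment calls, not values taken from the card.

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "WlappaAI/Mistral-7B-wikipedia_ru_pruned-0.1_merged"
adapter_id = "WlappaAI/dracor-ru-small-lora"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.bfloat16, device_map="auto"
)
model = PeftModel.from_pretrained(base, adapter_id)  # attach the LoRA weights
```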
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 6
- eval_batch_size: 6
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.7921 | 1.0 | 1056 | 1.6606 |
### Framework versions
- PEFT 0.10.0
- Transformers 4.40.0.dev0
- Pytorch 2.2.2+cu121
- Datasets 2.18.0
- Tokenizers 0.15.0 |
intelpen/Meta-llama-3-8B-GPTQ-4Bits-wikitext2 | intelpen | 2024-05-14T20:23:08Z | 77 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"gptq",
"region:us"
] | text-generation | 2024-05-14T20:19:47Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
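The template leaves this blank. Judging by the repo name and the `gptq`/`4-bit` tags, this is a 4-bit GPTQ quantization of Meta-Llama-3-8B calibrated on wikitext2, so a hedged sketch would be (GPTQ checkpoints additionally require `optimum` and `auto-gptq` to be installed):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "intelpen/Meta-llama-3-8B-GPTQ-4Bits-wikitext2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
# transformers dispatches to the GPTQ kernels when the checkpoint ships a quantization config.
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer("The capital of France is", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=20)[0]))
```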
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Sirjan/llama3-8B-sft-bnb-4bit-stnaskphy | Sirjan | 2024-05-14T20:11:55Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"base_model:finetune:unsloth/llama-3-8b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-05-14T13:58:37Z | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
base_model: unsloth/llama-3-8b-bnb-4bit
---
# Uploaded model
- **Developed by:** Sirjan
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
benjamin-paine/aniportrait | benjamin-paine | 2024-05-14T20:11:48Z | 44 | 21 | diffusers | [
"diffusers",
"safetensors",
"arxiv:2403.17694",
"arxiv:1706.03762",
"license:apache-2.0",
"diffusers:AniPortraitPipeline",
"region:us"
] | null | 2024-03-28T15:24:05Z | ---
license: apache-2.0
---
This repository contains a pruned and partially reorganized version of [AniPortrait](https://github.com/Zejun-Yang/AniPortrait), with some new features.
```
@misc{wei2024aniportrait,
title={AniPortrait: Audio-Driven Synthesis of Photorealistic Portrait Animations},
author={Huawei Wei and Zejun Yang and Zhisheng Wang},
year={2024},
eprint={2403.17694},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
```
# Added Features
## CPU Offloading
In order to maximize available VRAM, CPU offloading is provided and enabled by default when using the command line.
- In Python, use `pipeline.enable_model_cpu_offload(gpu_id: int=0)` to enable it.
- On the command line, pass `--no-offload` or `-no` to disable it.
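In code, that looks like the following short sketch (`pipeline` is an `AniPortraitPipeline` instance as constructed in the Python section below):

```py
# Keep only the active sub-model on GPU 0; park the rest on the CPU.
pipeline.enable_model_cpu_offload(gpu_id=0)
```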
## Video Reference
In addition to using an image as the reference, this repository supports using a video as the reference, via a modified temporally-aware reference U-Net.
### Automatic Masked Composition
When using a video reference, you can pass `paste_back=True` or `--paste-back/-pb` to automatically re-composite the generated face onto the reference video.
### Automatic Face Cropping
In addition to the above, you can pass `crop_to_face=True` or `--crop-to-face/-ctf` to automatically crop the input video to the detected face region prior to inference, then un-crop and compose the results. This means you can directly use this feature on HD video without needing to pre-process any of the inputs.
### Examples
The following examples were produced using the above-referenced procedure, then interpolated and post-processed using [enfugue](https://github.com/painebenjamin/app.enfugue.ai).
They were made from the source videos at Full HD, using the following command-line format:
```sh
aniportrait video.mp4 --video video.mp4 --audio audio.mp3 --crop-to-face --num-inference-steps 50 -cfg 4.5
```
All videos sourced from [Pexels](https://www.pexels.com/), voices are AI-generated reading [Attention is All You Need (ARXIV:1706.03762)](https://arxiv.org/abs/1706.03762)
<table>
<tr>
<td>
<video controls src="https://cdn-uploads.huggingface.co/production/uploads/64429aaf7feb866811b12f73/nW0kiagVIP2Nr_K_SjJUP.mp4"></video>
</td>
<td>
<video controls src="https://cdn-uploads.huggingface.co/production/uploads/64429aaf7feb866811b12f73/FOj4drfm2OLa-n7JxYyIj.mp4"></video>
</td>
<td>
<video controls src="https://cdn-uploads.huggingface.co/production/uploads/64429aaf7feb866811b12f73/Tu2ac8Cuvaexo_S-kgpPv.mp4"></video>
</td>
</tr>
</table>
# Usage
## Installation
First, install the AniPortrait package into your Python environment. If you're creating a new environment for AniPortrait, be sure to also install the version of torch you want with CUDA support, or else it will run on CPU only.
```sh
pip install git+https://github.com/painebenjamin/aniportrait.git
```
## Command-Line
A command-line utility `aniportrait` is installed with the package.
```sh
Usage: aniportrait [OPTIONS] INPUT_IMAGE_OR_VIDEO
Run AniPortrait on an input image with a video, and/or audio file. When only
a video file is provided, a video-to-video (face reenactment) animation is
performed. When only an audio file is provided, an audio-to-video (lip-sync)
animation is performed. When both a video and audio file are provided, a
video-to-video animation is performed with the audio as guidance for the
face and mouth movements.
Options:
-v, --video FILE Video file to drive the animation.
-a, --audio FILE Audio file to drive the animation.
-fps, --frame-rate INTEGER Video FPS. Also controls the sampling rate
of the audio. Will default to the video FPS
if a video file is provided, or 30 if not.
-cfg, --guidance-scale FLOAT Guidance scale for the diffusion process.
[default: 3.5]
-ns, --num-inference-steps INTEGER
Number of diffusion steps. [default: 20]
-cf, --context-frames INTEGER Number of context frames to use. [default:
16]
-co, --context-overlap INTEGER Number of context frames to overlap.
[default: 4]
-nf, --num-frames INTEGER An explicit number of frames to use. When
not passed, use the length of the audio or
video
-s, --seed INTEGER Random seed.
-w, --width INTEGER Output video width. Defaults to the input
image width.
-h, --height INTEGER Output video height. Defaults to the input
image height.
-m, --model TEXT HuggingFace model name.
-nh, --no-half Do not use half precision.
-no, --no-offload Do not offload to the CPU to preserve GPU
memory.
-g, --gpu-id INTEGER GPU ID to use.
-sf, --model-single-file Download and use a single file instead of a
directory.
-cf, --config-file TEXT Config file to use when using the model-
single-file option. Accepts a path or a
filename in the same directory as the single
file. Will download from the repository
passed in the model option if not provided.
[default: config.json]
-mf, --model-filename TEXT The model file to download when using the
model-single-file option. [default:
aniportrait.safetensors]
-rs, --remote-subfolder TEXT Remote subfolder to download from when using
the model-single-file option.
-cd, --cache-dir DIRECTORY Cache directory to download to. Default uses
the huggingface cache.
-o, --output FILE Output file. [default: output.mp4]
-pb, --paste-back Paste the original background back in.
-pbcf, --paste-back-color-fix [adain|wavelet]
Color fix method to use when pasting back.
[default: wavelet]
-ctf, --crop-to-face Crop the input to the face prior to
execution, then merge the cropped result
with the uncropped image. Implies --paste-
back.
-pop, --pose-output FILE When passed, save the pose image(s) to this
file.
-mop, --mask-output FILE When passed, save the mask image(s) to this
file.
-cop, --combined-output FILE When passed, save the combined image(s) to
this file.
-mb, --mask-blur INTEGER Amount of blur to apply to the mask when
using cropping or pasting. [default: 15]
-md, --mask-dilate INTEGER Amount of dilation to apply to the mask when
using cropping or pasting. [default: 31]
-ms, --mask-slow Use a slower, more accurate mask generation
method.
-lss, --leading-seconds-silence FLOAT
Seconds of silence to add to the beginning
of the audio. [default: 0.0]
-tss, --trailing-seconds-silence FLOAT
Seconds of silence to add to the end of the
audio. [default: 0.0]
--help Show this message and exit.
```
## Python
You can create the pipeline, automatically pulling the weights from this repository, either as individual models:
```py
import torch
from aniportrait import AniPortraitPipeline
pipeline = AniPortraitPipeline.from_pretrained(
"benjamin-paine/aniportrait",
torch_dtype=torch.float16,
variant="fp16",
).to("cuda", dtype=torch.float16)
```
Or, as a single file:
```py
import torch
from aniportrait import AniPortraitPipeline
pipeline = AniPortraitPipeline.from_single_file(
"benjamin-paine/aniportrait",
torch_dtype=torch.float16,
variant="fp16",
).to("cuda", dtype=torch.float16)
```
The `AniPortraitPipeline` is a mega pipeline, capable of instantiating and executing other pipelines. It provides the following functions:
## Workflows
### img2img
```py
pipeline.img2img(
reference_image: PIL.Image.Image,
pose_reference_image: PIL.Image.Image,
num_inference_steps: int,
guidance_scale: float,
eta: float=0.0,
reference_pose_image: Optional[Image.Image]=None,
generation: Optional[Union[torch.Generator, List[torch.Generator]]]=None,
output_type: Optional[str]="pil",
return_dict: bool=True,
callback: Optional[Callable[[int, int, torch.FloatTensor], None]]=None,
callback_steps: Optional[int]=None,
width: Optional[int]=None,
height: Optional[int]=None,
**kwargs: Any
) -> Pose2VideoPipelineOutput
```
Using a reference image (for structure) and a pose reference image (for pose), render an image of the former in the pose of the latter.
- The pose reference image here is an unprocessed image, from which the face pose will be extracted.
- Optionally pass `reference_pose_image` to designate the pose of `reference_image`. When not passed, the pose of `reference_image` is automatically detected.
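A hedged usage sketch (the file names and parameter values are placeholders, and `pipeline` is an `AniPortraitPipeline` as constructed above):

```py
from PIL import Image

reference = Image.open("face.png").convert("RGB")
pose_reference = Image.open("target_pose.png").convert("RGB")

# Render `reference` in the face pose extracted from `pose_reference`.
result = pipeline.img2img(
    reference_image=reference,
    pose_reference_image=pose_reference,
    num_inference_steps=25,
    guidance_scale=3.5,
)
# `result` is a Pose2VideoPipelineOutput wrapping the generated frame.
```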
### vid2vid
```py
pipeline.vid2vid(
reference_image: PIL.Image.Image,
pose_reference_images: List[PIL.Image.Image],
num_inference_steps: int,
guidance_scale: float,
eta: float=0.0,
reference_pose_image: Optional[Image.Image]=None,
generation: Optional[Union[torch.Generator, List[torch.Generator]]]=None,
output_type: Optional[str]="pil",
return_dict: bool=True,
callback: Optional[Callable[[int, int, torch.FloatTensor], None]]=None,
callback_steps: Optional[int]=None,
width: Optional[int]=None,
height: Optional[int]=None,
video_length: Optional[int]=None,
context_schedule: str="uniform",
context_frames: int=16,
context_overlap: int=4,
context_batch_size: int=1,
interpolation_factor: int=1,
use_long_video: bool=True,
**kwargs: Any
) -> Pose2VideoPipelineOutput
```
Using a reference image (for structure) and a sequence of pose reference images (for pose), render a video of the former in the poses of the latter, using context windowing for long-video generation when the poses are longer than 16 frames.
- Optionally pass `use_long_video = false` to disable using the long video pipeline.
- Optionally pass `reference_pose_image` to designate the pose of `reference_image`. When not passed, the pose of `reference_image` is automatically detected.
- Optionally pass `video_length` to use this many frames. Default is the same as the length of the pose reference images.
### audio2vid
```py
pipeline.audio2vid(
audio: str,
reference_image: PIL.Image.Image,
num_inference_steps: int,
guidance_scale: float,
fps: int=30,
eta: float=0.0,
reference_pose_image: Optional[Image.Image]=None,
pose_reference_images: Optional[List[PIL.Image.Image]]=None,
generation: Optional[Union[torch.Generator, List[torch.Generator]]]=None,
output_type: Optional[str]="pil",
return_dict: bool=True,
callback: Optional[Callable[[int, int, torch.FloatTensor], None]]=None,
callback_steps: Optional[int]=None,
width: Optional[int]=None,
height: Optional[int]=None,
video_length: Optional[int]=None,
context_schedule: str="uniform",
context_frames: int=16,
context_overlap: int=4,
context_batch_size: int=1,
interpolation_factor: int=1,
use_long_video: bool=True,
pose_filename: Optional[str]=None,
leading_seconds_silence: float=0.0,
trailing_seconds_silence: float=0.0,
**kwargs: Any
) -> Pose2VideoPipelineOutput
```
Using an audio file, draw `fps` face pose images per second for the duration of the audio. Then, using those face pose images, render a video.
- Optionally include a list of images to extract the poses from prior to merging with audio-generated poses (in essence, pass a video here to control non-speech motion). The default is a moderately active loop of head movement.
- Optionally pass width/height to modify the size. Defaults to reference image size.
- Optionally pass `use_long_video = false` to disable using the long video pipeline.
- Optionally pass `reference_pose_image` to designate the pose of `reference_image`. When not passed, the pose of `reference_image` is automatically detected.
- Optionally pass `video_length` to use this many frames. Default is the same as the length of the pose reference images or the length of the audio frames (when translated to the proper FPS) - whichever is shorter.
- Optionally pass `leading_seconds_silence` and/or `trailing_seconds_silence` to add silent frame(s) to the beginning and/or end of the audio. This will be adjusted for your passed or detected frame rate.
- Optionally pass `pose_filename`, `mask_filename` and/or `combined_filename` to save the pose, mask and/or combined frames to this video file for debugging.
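For example (a sketch; the paths are placeholders and `pipeline` is constructed as above):

```py
from PIL import Image

reference = Image.open("portrait.png").convert("RGB")

# Lip-sync the portrait to speech.mp3, drawing 30 pose frames per second.
result = pipeline.audio2vid(
    audio="speech.mp3",
    reference_image=reference,
    num_inference_steps=25,
    guidance_scale=3.5,
    fps=30,
)
# `result` is a Pose2VideoPipelineOutput holding the generated frames.
```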
### audiovid2vid
```py
pipeline.audiovid2vid(
audio: str,
reference_image: List[Image.Image],
num_inference_steps: int=25,
guidance_scale: float=3.5,
fps: int=30,
eta: float=0.0,
reference_pose_image: Optional[Image.Image]=None,
pose_reference_images: Optional[List[Image.Image]]=None,
generation: Optional[Union[torch.Generator, List[torch.Generator]]]=None,
output_type: Optional[str]="pil",
return_dict: bool=True,
callback: Optional[Callable[[int, int, torch.FloatTensor], None]]=None,
callback_steps: Optional[int]=None,
context_schedule: str="uniform",
context_frames: int=16,
context_overlap: int=4,
context_batch_size: int=1,
interpolation_factor: int=1,
width: Optional[int]=None,
height: Optional[int]=None,
video_length: Optional[int]=None,
use_long_video: bool=True,
paste_back: bool=True,
paste_back_color_fix: Optional[Literal["wavelet", "adain"]]="wavelet",
crop_to_face: bool=False,
crop_to_face_target_size: Optional[int]=512,
crop_to_face_padding: Optional[int]=64,
mask_filename: Optional[str]=None,
pose_filename: Optional[str]=None,
combined_filename: Optional[str]=None,
mask_dilate: Optional[int]=31,
mask_gaussian_kernel_size: Optional[int]=15,
mask_first_frame: bool=True,
leading_seconds_silence: float=0.0,
trailing_seconds_silence: float=0.0,
**kwargs: Any
) -> Pose2VideoPipelineOutput:
```
Using an audio file, draw `fps` face pose images per second for the duration of the audio. Then, using those face pose images, render a video using a video as a reference.
- Optionally pass width/height to modify the size. Defaults to reference image size.
- Optionally pass `use_long_video = false` to disable using the long video pipeline.
- Optionally pass `video_length` to use this many frames. Default is the same as the length of the pose reference images or the length of the audio frames (when translated to the proper FPS) - whichever is shorter.
- Optionally pass `paste_back = true` to re-composite the output onto the input.
- When using `paste_back`, the face is color-fixed when re-pasting in order to reduce visible difference. The default method is `wavelet`; pass `adain` or `None` for other options.
- Optionally pass `crop_to_face = true` to crop all images to the face region (with padding) prior to diffusion. This implies `paste_back = true`.
- When using `crop_to_face`, the pipeline must first identify where the faces are in a potentially large image; to do this it performs a slow tiled face detection across the whole image. To reduce the time spent generating additional masks, the default behavior is to do this only once, then use the faces from the first frame to know where to detect in subsequent frames. Set `mask_first_frame` to `false` to perform tiled face detection on every frame of the input; this is slower but allows for more variability in the frame.
- Optionally pass `leading_seconds_silence` and/or `trailing_seconds_silence` to add silent frame(s) to the beginning and/or end of the audio. This will be adjusted for your passed or detected frame rate.
- Optionally pass `pose_filename`, `mask_filename` and/or `combined_filename` to save the pose, mask and/or combined frames to this video file for debugging.
## Internals/Helpers
### img2pose
```py
pipeline.img2pose(
reference_image: PIL.Image.Image,
width: Optional[int]=None,
height: Optional[int]=None
) -> PIL.Image.Image
```
Detects face landmarks in an image and draws a face pose image.
- Optionally modify the original width and height.
### vid2pose
```py
pipeline.vid2pose(
reference_image: PIL.Image.Image,
retarget_image: Optional[PIL.Image.Image],
width: Optional[int]=None,
height: Optional[int]=None
) -> List[PIL.Image.Image]
```
Detects face landmarks in a series of images and draws pose images.
- Optionally modify the original width and height.
- Optionally retarget to a different face position, useful for video-to-video tasks.
### audio2pose
```py
pipeline.audio2pose(
audio_path: str,
fps: int=30,
reference_image: Optional[PIL.Image.Image]=None,
pose_reference_images: Optional[List[PIL.Image.Image]]=None,
width: Optional[int]=None,
height: Optional[int]=None
) -> List[PIL.Image.Image]
```
Using an audio file, draw `fps` face pose images per second for the duration of the audio.
- Optionally include a reference image to extract the face shape and initial position from. Default has a generic androgynous face shape.
- Optionally include a list of images to extract the poses from prior to merging with audio-generated poses (in essence, pass a video here to control non-speech motion). The default is a moderately active loop of head movement.
- Optionally pass width/height to modify the size. Defaults to reference image size, then pose image sizes, then 256.
### pose2img
```py
pipeline.pose2img(
reference_image: PIL.Image.Image,
pose_image: PIL.Image.Image,
num_inference_steps: int,
guidance_scale: float,
eta: float=0.0,
reference_pose_image: Optional[Image.Image]=None,
generation: Optional[Union[torch.Generator, List[torch.Generator]]]=None,
output_type: Optional[str]="pil",
return_dict: bool=True,
callback: Optional[Callable[[int, int, torch.FloatTensor], None]]=None,
callback_steps: Optional[int]=None,
width: Optional[int]=None,
height: Optional[int]=None,
**kwargs: Any
) -> Pose2VideoPipelineOutput
```
Using a reference image (for structure) and a pose image (for pose), render an image of the former in the pose of the latter.
- The pose image here is a processed face pose. To pass a non-processed face pose, see `img2img`.
- Optionally pass `reference_pose_image` to designate the pose of `reference_image`. When not passed, the pose of `reference_image` is automatically detected.
### pose2vid
```py
pipeline.pose2vid(
reference_image: PIL.Image.Image,
pose_images: List[PIL.Image.Image],
num_inference_steps: int,
guidance_scale: float,
eta: float=0.0,
reference_pose_image: Optional[Image.Image]=None,
generation: Optional[Union[torch.Generator, List[torch.Generator]]]=None,
output_type: Optional[str]="pil",
return_dict: bool=True,
callback: Optional[Callable[[int, int, torch.FloatTensor], None]]=None,
callback_steps: Optional[int]=None,
width: Optional[int]=None,
height: Optional[int]=None,
video_length: Optional[int]=None,
**kwargs: Any
) -> Pose2VideoPipelineOutput
```
Using a reference image (for structure) and pose images (for pose), render a video of the former in the poses of the latter.
- The pose images here are processed face poses. To use non-processed face poses, see `vid2vid`.
- Optionally pass `reference_pose_image` to designate the pose of `reference_image`. When not passed, the pose of `reference_image` is automatically detected.
- Optionally pass `video_length` to use this many frames. Default is the same as the length of the pose images.
### pose2vid_long
```py
pipeline.pose2vid_long(
reference_image: PIL.Image.Image,
pose_images: List[PIL.Image.Image],
num_inference_steps: int,
guidance_scale: float,
eta: float=0.0,
reference_pose_image: Optional[Image.Image]=None,
generation: Optional[Union[torch.Generator, List[torch.Generator]]]=None,
output_type: Optional[str]="pil",
return_dict: bool=True,
callback: Optional[Callable[[int, int, torch.FloatTensor], None]]=None,
callback_steps: Optional[int]=None,
width: Optional[int]=None,
height: Optional[int]=None,
video_length: Optional[int]=None,
context_schedule: str="uniform",
context_frames: int=16,
context_overlap: int=4,
context_batch_size: int=1,
interpolation_factor: int=1,
**kwargs: Any
) -> Pose2VideoPipelineOutput
```
Using a reference image (for structure) and pose images (for pose), render a video of the former in the poses of the latter, using context windowing for long-video generation.
- The pose images here are processed face poses. To use non-processed face poses, see `vid2vid`.
- Optionally pass `reference_pose_image` to designate the pose of `reference_image`. When not passed, the pose of `reference_image` is automatically detected.
- Optionally pass `video_length` to use this many frames. Default is the same as the length of the pose images.
|
abbenedek/whisper-tiny.en-finetuned | abbenedek | 2024-05-14T20:11:02Z | 93 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"base_model:openai/whisper-tiny.en",
"base_model:finetune:openai/whisper-tiny.en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2024-05-14T13:18:38Z | ---
license: apache-2.0
tags:
- generated_from_trainer
base_model: openai/whisper-tiny.en
metrics:
- wer
model-index:
- name: whisper-tiny.en-finetuned
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-tiny.en-finetuned
This model is a fine-tuned version of [openai/whisper-tiny.en](https://huggingface.co/openai/whisper-tiny.en) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4890
- Wer: 0.8750
- Cer: 0.4454
- Ser: 0.055
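No usage section is provided; a minimal transcription sketch (the audio path is a placeholder; the model is English-only, following its `whisper-tiny.en` base):

```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="abbenedek/whisper-tiny.en-finetuned",
)
print(asr("sample.wav")["text"])
```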
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-06
- train_batch_size: 32
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 20
- training_steps: 220
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer | Cer | Ser |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|:-----:|
| 6.4419 | 0.4 | 10 | 6.4480 | 45.25 | 16.3573 | 1.0 |
| 6.2706 | 0.8 | 20 | 5.9268 | 39.375 | 14.1797 | 1.0 |
| 5.3444 | 1.2 | 30 | 4.8418 | 19.875 | 7.3002 | 1.0 |
| 4.3269 | 1.6 | 40 | 3.9103 | 9.375 | 3.8357 | 1.0 |
| 3.6103 | 2.0 | 50 | 3.3228 | 7.0000 | 3.2170 | 1.0 |
| 3.0331 | 2.4 | 60 | 2.7725 | 5.375 | 2.3509 | 1.0 |
| 2.5529 | 2.8 | 70 | 2.3913 | 4.75 | 1.8312 | 1.0 |
| 2.2359 | 3.2 | 80 | 2.1217 | 3.375 | 1.4353 | 1.0 |
| 1.9777 | 3.6 | 90 | 1.8790 | 3.0 | 1.3116 | 1.0 |
| 1.7863 | 4.0 | 100 | 1.7402 | 2.875 | 1.0393 | 0.98 |
| 1.6854 | 4.4 | 110 | 1.6736 | 2.625 | 0.9651 | 0.925 |
| 1.6297 | 4.8 | 120 | 1.6290 | 2.625 | 0.9404 | 0.765 |
| 1.5974 | 5.2 | 130 | 1.5959 | 2.5 | 0.9651 | 0.57 |
| 1.5673 | 5.6 | 140 | 1.5702 | 1.5 | 0.6434 | 0.385 |
| 1.5397 | 6.0 | 150 | 1.5497 | 1.25 | 0.5692 | 0.27 |
| 1.524 | 6.4 | 160 | 1.5333 | 1.125 | 0.5444 | 0.22 |
| 1.509 | 6.8 | 170 | 1.5199 | 0.8750 | 0.4454 | 0.155 |
| 1.4986 | 7.2 | 180 | 1.5091 | 0.8750 | 0.4454 | 0.095 |
| 1.4872 | 7.6 | 190 | 1.5011 | 0.8750 | 0.4454 | 0.075 |
| 1.4844 | 8.0 | 200 | 1.4950 | 0.8750 | 0.4454 | 0.075 |
| 1.4743 | 8.4 | 210 | 1.4910 | 0.8750 | 0.4454 | 0.065 |
| 1.4746 | 8.8 | 220 | 1.4890 | 0.8750 | 0.4454 | 0.055 |
### Framework versions
- Transformers 4.39.3
- Pytorch 2.2.2+cu121
- Datasets 2.14.5
- Tokenizers 0.15.2
|
vincentoh/llama3_70b_no_robot_fsdp_qlora | vincentoh | 2024-05-14T20:11:01Z | 6 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-14T17:20:00Z | ---
license: apache-2.0
---
```
# Llama 3 70B fine-tuned on the no_robot dataset with FSDP + QLoRA on an 8-GPU cluster.
# Training code: git clone https://github.com/bigsnarfdude/fsdp-qlora
import transformers
import torch

model_id = "vincentoh/llama3_70b_no_robot_fsdp_qlora"
pipeline = transformers.pipeline(
    "text-generation",
    model=model_id,
    model_kwargs={"torch_dtype": torch.bfloat16},
    device_map="auto",
)
pipeline("Why is the sky blue?")
``` |
tsavage68/Transaminitis_L3_1000steps_1e6rate_05beta_CSFTDPO | tsavage68 | 2024-05-14T20:08:43Z | 4 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"trl",
"dpo",
"generated_from_trainer",
"conversational",
"base_model:tsavage68/Transaminitis_L3_1000rate_1e7_SFT",
"base_model:finetune:tsavage68/Transaminitis_L3_1000rate_1e7_SFT",
"license:llama3",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-14T20:02:30Z | ---
license: llama3
base_model: tsavage68/Transaminitis_L3_1000rate_1e7_SFT
tags:
- trl
- dpo
- generated_from_trainer
model-index:
- name: Transaminitis_L3_1000steps_1e6rate_05beta_CSFTDPO
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Transaminitis_L3_1000steps_1e6rate_05beta_CSFTDPO
This model is a fine-tuned version of [tsavage68/Transaminitis_L3_1000rate_1e7_SFT](https://huggingface.co/tsavage68/Transaminitis_L3_1000rate_1e7_SFT) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0000
- Rewards/chosen: 5.4974
- Rewards/rejected: -7.9781
- Rewards/accuracies: 1.0
- Rewards/margins: 13.4754
- Logps/rejected: -34.5108
- Logps/chosen: -7.5395
- Logits/rejected: -0.9869
- Logits/chosen: -0.9672
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-06
- train_batch_size: 2
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- training_steps: 1000
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen |
|:-------------:|:-----:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:|
| 0.7055 | 0.2 | 25 | 0.6877 | -0.5880 | -0.6760 | 0.5400 | 0.0880 | -19.9066 | -19.7102 | -1.0695 | -1.0683 |
| 0.687 | 0.4 | 50 | 0.5510 | 3.8005 | 3.0720 | 0.6000 | 0.7284 | -12.4107 | -10.9333 | -1.0238 | -1.0226 |
| 0.4531 | 0.6 | 75 | 0.1502 | 4.9713 | -4.7070 | 0.9800 | 9.6782 | -27.9686 | -8.5917 | -1.0329 | -1.0318 |
| 0.55 | 0.8 | 100 | 0.0005 | 4.5705 | -5.2643 | 1.0 | 9.8348 | -29.0833 | -9.3932 | -0.9625 | -0.9598 |
| 0.0 | 1.0 | 125 | 0.0000 | 4.8044 | -5.8180 | 1.0 | 10.6224 | -30.1907 | -8.9255 | -0.9703 | -0.9647 |
| 0.1601 | 1.2 | 150 | 0.0000 | 5.0462 | -5.7018 | 1.0 | 10.7480 | -29.9584 | -8.4419 | -0.9626 | -0.9538 |
| 0.0 | 1.4 | 175 | 0.0000 | 5.1329 | -6.1463 | 1.0 | 11.2792 | -30.8473 | -8.2684 | -0.9678 | -0.9570 |
| 0.0 | 1.6 | 200 | 0.0000 | 5.1779 | -6.3913 | 1.0 | 11.5693 | -31.3374 | -8.1783 | -0.9704 | -0.9584 |
| 0.0 | 1.8 | 225 | 0.0000 | 5.2127 | -6.6164 | 1.0 | 11.8290 | -31.7874 | -8.1089 | -0.9734 | -0.9603 |
| 0.0 | 2.0 | 250 | 0.0000 | 5.2438 | -6.7691 | 1.0 | 12.0129 | -32.0928 | -8.0465 | -0.9748 | -0.9610 |
| 0.0 | 2.2 | 275 | 0.0000 | 5.2700 | -6.9223 | 1.0 | 12.1924 | -32.3994 | -7.9942 | -0.9773 | -0.9626 |
| 0.0 | 2.4 | 300 | 0.0000 | 5.3046 | -7.0559 | 1.0 | 12.3605 | -32.6664 | -7.9249 | -0.9774 | -0.9620 |
| 0.0 | 2.6 | 325 | 0.0000 | 5.3317 | -7.1477 | 1.0 | 12.4794 | -32.8500 | -7.8708 | -0.9791 | -0.9634 |
| 0.0 | 2.8 | 350 | 0.0000 | 5.3455 | -7.2371 | 1.0 | 12.5826 | -33.0289 | -7.8432 | -0.9799 | -0.9635 |
| 0.0 | 3.0 | 375 | 0.0000 | 5.3670 | -7.3218 | 1.0 | 12.6888 | -33.1983 | -7.8002 | -0.9807 | -0.9638 |
| 0.0 | 3.2 | 400 | 0.0000 | 5.3762 | -7.4179 | 1.0 | 12.7941 | -33.3904 | -7.7818 | -0.9806 | -0.9632 |
| 0.0 | 3.4 | 425 | 0.0000 | 5.4071 | -7.4781 | 1.0 | 12.8852 | -33.5110 | -7.7200 | -0.9825 | -0.9651 |
| 0.0 | 3.6 | 450 | 0.0000 | 5.4188 | -7.5434 | 1.0 | 12.9622 | -33.6416 | -7.6966 | -0.9835 | -0.9655 |
| 0.0 | 3.8 | 475 | 0.0000 | 5.4263 | -7.6044 | 1.0 | 13.0307 | -33.7634 | -7.6816 | -0.9837 | -0.9655 |
| 0.0 | 4.0 | 500 | 0.0000 | 5.4259 | -7.6600 | 1.0 | 13.0859 | -33.8747 | -7.6824 | -0.9841 | -0.9656 |
| 0.0 | 4.2 | 525 | 0.0000 | 5.4441 | -7.7026 | 1.0 | 13.1467 | -33.9600 | -7.6460 | -0.9840 | -0.9652 |
| 0.0 | 4.4 | 550 | 0.0000 | 5.4553 | -7.7278 | 1.0 | 13.1832 | -34.0104 | -7.6235 | -0.9854 | -0.9666 |
| 0.0 | 4.6 | 575 | 0.0000 | 5.4630 | -7.7823 | 1.0 | 13.2453 | -34.1192 | -7.6081 | -0.9852 | -0.9662 |
| 0.0 | 4.8 | 600 | 0.0000 | 5.4630 | -7.8281 | 1.0 | 13.2911 | -34.2109 | -7.6082 | -0.9865 | -0.9673 |
| 0.0 | 5.0 | 625 | 0.0000 | 5.4702 | -7.8531 | 1.0 | 13.3233 | -34.2609 | -7.5939 | -0.9865 | -0.9672 |
| 0.0 | 5.2 | 650 | 0.0000 | 5.4827 | -7.8764 | 1.0 | 13.3591 | -34.3075 | -7.5687 | -0.9853 | -0.9659 |
| 0.0 | 5.4 | 675 | 0.0000 | 5.4842 | -7.9006 | 1.0 | 13.3848 | -34.3559 | -7.5659 | -0.9859 | -0.9665 |
| 0.0 | 5.6 | 700 | 0.0000 | 5.4900 | -7.9155 | 1.0 | 13.4055 | -34.3857 | -7.5543 | -0.9864 | -0.9669 |
| 0.0 | 5.8 | 725 | 0.0000 | 5.4865 | -7.9426 | 1.0 | 13.4291 | -34.4398 | -7.5612 | -0.9860 | -0.9664 |
| 0.0 | 6.0 | 750 | 0.0000 | 5.4953 | -7.9503 | 1.0 | 13.4455 | -34.4552 | -7.5437 | -0.9859 | -0.9663 |
| 0.0 | 6.2 | 775 | 0.0000 | 5.4917 | -7.9644 | 1.0 | 13.4561 | -34.4836 | -7.5509 | -0.9859 | -0.9663 |
| 0.0 | 6.4 | 800 | 0.0000 | 5.5003 | -7.9640 | 1.0 | 13.4642 | -34.4826 | -7.5337 | -0.9853 | -0.9657 |
| 0.0 | 6.6 | 825 | 0.0000 | 5.4953 | -7.9776 | 1.0 | 13.4729 | -34.5099 | -7.5436 | -0.9867 | -0.9670 |
| 0.0 | 6.8 | 850 | 0.0000 | 5.4915 | -7.9747 | 1.0 | 13.4662 | -34.5041 | -7.5513 | -0.9870 | -0.9673 |
| 0.0 | 7.0 | 875 | 0.0000 | 5.4933 | -7.9815 | 1.0 | 13.4748 | -34.5177 | -7.5476 | -0.9870 | -0.9675 |
| 0.0 | 7.2 | 900 | 0.0000 | 5.4929 | -7.9862 | 1.0 | 13.4790 | -34.5270 | -7.5485 | -0.9873 | -0.9675 |
| 0.0 | 7.4 | 925 | 0.0000 | 5.4931 | -7.9774 | 1.0 | 13.4705 | -34.5095 | -7.5480 | -0.9870 | -0.9673 |
| 0.0 | 7.6 | 950 | 0.0000 | 5.4967 | -7.9805 | 1.0 | 13.4772 | -34.5156 | -7.5408 | -0.9869 | -0.9672 |
| 0.0 | 7.8 | 975 | 0.0000 | 5.4974 | -7.9781 | 1.0 | 13.4754 | -34.5108 | -7.5395 | -0.9869 | -0.9672 |
| 0.0 | 8.0 | 1000 | 0.0000 | 5.4974 | -7.9781 | 1.0 | 13.4754 | -34.5108 | -7.5395 | -0.9869 | -0.9672 |
### Framework versions
- Transformers 4.40.2
- Pytorch 2.0.0+cu117
- Datasets 2.19.1
- Tokenizers 0.19.1
|
baebee/12352132ganaco | baebee | 2024-05-14T20:08:16Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-05-14T20:08:08Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
snigdhachandan/ganeet-V6 | snigdhachandan | 2024-05-14T19:57:09Z | 6 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"WizardLMTeam/WizardMath-7B-V1.1",
"meta-math/MetaMath-Mistral-7B",
"base_model:WizardLMTeam/WizardMath-7B-V1.1",
"base_model:merge:WizardLMTeam/WizardMath-7B-V1.1",
"base_model:meta-math/MetaMath-Mistral-7B",
"base_model:merge:meta-math/MetaMath-Mistral-7B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-14T19:54:40Z | ---
tags:
- merge
- mergekit
- lazymergekit
- WizardLMTeam/WizardMath-7B-V1.1
- meta-math/MetaMath-Mistral-7B
base_model:
- WizardLMTeam/WizardMath-7B-V1.1
- meta-math/MetaMath-Mistral-7B
---
# ganeet-V6
ganeet-V6 is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [WizardLMTeam/WizardMath-7B-V1.1](https://huggingface.co/WizardLMTeam/WizardMath-7B-V1.1)
* [meta-math/MetaMath-Mistral-7B](https://huggingface.co/meta-math/MetaMath-Mistral-7B)
## 🧩 Configuration
```yaml
models:
- model: WizardLMTeam/WizardMath-7B-V1.1
parameters:
density: 0.5 # fraction of weights in differences from the base model to retain
weight: # weight gradient
- filter: mlp
value: 0.5
- value: 0
- model: upaya07/Arithmo2-Mistral-7B
- model: meta-math/MetaMath-Mistral-7B
parameters:
density: 0.5
weight: 0.5
merge_method: ties
base_model: upaya07/Arithmo2-Mistral-7B
parameters:
normalize: true
int8_mask: true
dtype: float16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "snigdhachandan/ganeet-V6"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` |
Abin7/6-lang | Abin7 | 2024-05-14T19:56:25Z | 6 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"arxiv:2306.01708",
"base_model:Abin7/3-lang",
"base_model:merge:Abin7/3-lang",
"base_model:Abin7/finnish-mal-spanish",
"base_model:merge:Abin7/finnish-mal-spanish",
"base_model:NousResearch/Llama-2-7b-hf",
"base_model:merge:NousResearch/Llama-2-7b-hf",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-14T19:53:35Z | ---
base_model:
- NousResearch/Llama-2-7b-hf
- Abin7/3-lang
- Abin7/finnish-mal-spanish
library_name: transformers
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [TIES](https://arxiv.org/abs/2306.01708) merge method using [NousResearch/Llama-2-7b-hf](https://huggingface.co/NousResearch/Llama-2-7b-hf) as a base.
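For intuition about what TIES does per-tensor, here is a toy, self-contained sketch (my simplification of the paper's trim / elect-sign / disjoint-merge steps, not mergekit's actual implementation; `density` plays the same role as in the config below):

```python
import torch

def ties_merge_tensor(base, finetuned_list, density=0.5):
    """Toy TIES on one tensor: trim, elect sign, disjoint-mean the deltas."""
    deltas = [ft - base for ft in finetuned_list]             # task vectors
    trimmed = []
    for d in deltas:
        k = max(1, int(density * d.numel()))                  # keep top-k magnitudes
        thresh = d.abs().flatten().topk(k).values[-1]
        trimmed.append(torch.where(d.abs() >= thresh, d, torch.zeros_like(d)))
    elected = torch.sign(torch.stack(trimmed).sum(dim=0))     # mass-weighted sign vote
    agree = [torch.where(torch.sign(t) == elected, t, torch.zeros_like(t)) for t in trimmed]
    count = torch.stack([(a != 0).float() for a in agree]).sum(dim=0).clamp(min=1)
    return base + torch.stack(agree).sum(dim=0) / count       # mean of agreeing deltas
```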
### Models Merged
The following models were included in the merge:
* [Abin7/3-lang](https://huggingface.co/Abin7/3-lang)
* [Abin7/finnish-mal-spanish](https://huggingface.co/Abin7/finnish-mal-spanish)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: Abin7/3-lang
parameters:
density: 0.5
weight: 0.5
- model: Abin7/finnish-mal-spanish
parameters:
density: 0.5
weight: 0.5
merge_method: ties
base_model: NousResearch/Llama-2-7b-hf
parameters:
normalize: false
int8_mask: true
dtype: float16
```
|
MartialTerran/GPTs_by_MLP-to-KAN-Transform | MartialTerran | 2024-05-14T19:54:59Z | 0 | 4 | null | [
"region:us"
] | null | 2024-05-14T18:00:46Z | [Re-written by Google Gemini 1.5 Pro (AIstudio) (while I watched Google I/O 2024 KeyNote Speeches)]:
KAN-based Compression of Pretrained GPT Models
This project explores the potential of compressing pretrained GPT models like phi-1.5 ( https://huggingface.co/microsoft/phi-1_5 ) by replacing their Multilayer Perceptrons (MLPs) with equivalent Kolmogorov-Arnold (KAN) networks.
This idea stems from discussions like this Reddit thread (https://www.reddit.com/r/MachineLearning/comments/1clcu5i/d_kolmogorovarnold_network_is_just_an_mlp/ )
and explorations like this Colab notebook (https://colab.research.google.com/drive/1v3AHz5J3gk-vu4biESubJdOsUheycJNz#scrollTo=-KtMq2NYAy57 ), which suggest that KANs can be represented as MLPs with specific structures.
Hypothesis: KAN implementations can potentially achieve a smaller parameter count compared to MLPs, especially when leveraging efficient analog circuits for approximating KAN activation functions (sine, cosine, ln, log).
Motivation:
Reducing the parameter count of large language models like GPT can lead to significant memory savings and potentially faster inference.
Analog circuits, particularly those specialized for functions like sine and cosine (e.g., Mythic and IBM analog MatMul circuits), offer a promising avenue for low-power computation of KAN activations.
Roadmap:
Reverse KAN-to-MLP Algorithm: Develop a script that analyzes the weights and biases of a pretrained MLP and approximates them using a combination of KAN activation functions. This process can be seen as analogous to Fourier Transform Analysis, where a signal is decomposed into its constituent frequencies (represented by sine and cosine waves in this context); a minimal sketch of this decomposition step follows this roadmap.
Analog Circuit Integration: Design and implement analog/mixed-signal circuits to efficiently compute the KAN activation functions (sine, cosine, ln, log) from the input values. This step aims to leverage the inherent energy efficiency of analog computation.
Layer-by-Layer Substitution: Replace MLPs within the pretrained GPT model with the equivalent KAN networks, starting with one layer at a time.
Evaluation and Fine-tuning: Thoroughly evaluate the performance of the compressed GPT model and fine-tune it [or use adapters or low-rank parameter-efficient fine tuning PEFT] to recover any potential performance loss due to the approximation.
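To make the Fourier-analysis analogy in step 1 concrete, here is a minimal, hypothetical sketch (my own illustration, not one of the project's scripts): it fits each row of a pretrained layer's weight matrix onto a small sine/cosine basis by least squares and reports the reconstruction error. The choice of basis over the weight-row index and the `n_freqs` value are assumptions.

```python
import torch

def sinusoid_fit(weight: torch.Tensor, n_freqs: int = 8) -> torch.Tensor:
    """Least-squares fit of each weight row onto a sine/cosine basis."""
    out_f, in_f = weight.shape
    t = torch.linspace(0.0, 1.0, in_f)
    cols = []
    for k in range(1, n_freqs + 1):
        cols.append(torch.sin(2 * torch.pi * k * t))
        cols.append(torch.cos(2 * torch.pi * k * t))
    basis = torch.stack(cols, dim=1)                        # (in_f, 2 * n_freqs)
    coeffs = torch.linalg.lstsq(basis, weight.T).solution   # (2 * n_freqs, out_f)
    return (basis @ coeffs).T                               # reconstruction, (out_f, in_f)

# Reconstruction error on a random "pretrained" layer:
w = torch.randn(16, 64)
print(torch.nn.functional.mse_loss(sinusoid_fit(w, n_freqs=16), w))
```

A full reverse algorithm would also need to handle biases and pick the basis per activation, which is exactly what this first roadmap step targets.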
Preliminary Work:
Initial Python scripts have been developed to demonstrate the conversion between KAN and MLP layers:
[KAN is just MLP.ipynb](KAN is just MLP.ipynb - Colab.pdf)
[Python script converting KAN to MLP](code snippet for converting KAN to MLP)
Future Directions:
Develop the reverse KAN-to-MLP algorithm and implement it in Python.
Explore and compare various analog circuit designs for efficient KAN activation computation.
Conduct extensive experiments to measure the compression ratio, performance, and energy efficiency of KAN-based GPT models.
Contribution: Google Gemini 1.5 Pro has been instrumental in developing the initial draft Python scripts.
[My hand-typed draft, with the prompt: "Rewrite and correct this model card in huggingface:"]
Based on the paper at https://colab.research.google.com/drive/1v3AHz5J3gk-vu4biESubJdOsUheycJNz#scrollTo=-KtMq2NYAy57
and the discussion at https://www.reddit.com/r/MachineLearning/comments/1clcu5i/d_kolmogorovarnold_network_is_just_an_mlp/
I propose further compressing GPTs (such as https://huggingface.co/microsoft/phi-1_5 ) by converting their pretrained MLPs (multilayer perceptrons) into equivalent KAN networks
based on an algorithm that reads the weights and biases in the pretrained MLP and approximates the MLP by use of analog functions
within a KAN network (the replacement layer). Supposedly, and intuitively, a KAN implementation can have fewer variable parameters than an MLP.
But this assumes that there is an efficient manner of generating the output of the KAN activation functions (sine, cosine, ln, log).
I will assume that the KAN functions can be selected based on their being implemented using analog circuits (see, e.g., Mythic and IBM analog MatMul circuits).
Here is a roadmap for further compressing pretrained GPTs including MLPs (multilayer perceptrons):
Write a reverse KAN-to-MLP algorithm script. (This will be similar to a Fourier Transform Analysis: given an MLP having a set of weights and biases, derive a set of functions (e.g., sine waves and cosine waves, known to KAN networks) that represents the original MLP parameters.)
Use analog/mixed-signal circuits to convert level inputs to the corresponding KAN sine(x) and cosine(x) values, to reduce power consumption.
Input the resulting values to the next GPT level, and repeat the MLP-to-KAN approximation.
Google Gemini 1.5 Pro helped me develop the following draft Python scripts to perform this layer-by-layer substitution/compression:
KAN is just MLP.ipynb - Colab.pdf
Write a python script that converts KAN to MLPs based on this tutorial
Model
```python
import torch
import torch.nn as nn

class KANLayer(nn.Module):
    def __init__(self, in_features, out_features, grid_size):
        super(KANLayer, self).__init__()
        self.in_features = in_features
        self.out_features = out_features
        self.grid_size = grid_size
        self.linear = nn.Linear(in_features * grid_size, out_features)
        self.shifts = torch.linspace(-1, 1, grid_size).reshape(1, grid_size, 1)

    def forward(self, x):
        repeated = x.unsqueeze(1).repeat(1, self.grid_size, 1)
        shifted = repeated + self.shifts
        intermediate = torch.cat([shifted[:, :1, :],
                                  torch.relu(shifted[:, 1:, :])], dim=1).flatten(1)
        outputs = self.linear(intermediate)
        return outputs

class KANtoMLP(nn.Module):
    def __init__(self, in_features, hidden_features, out_features, grid_size):
        super(KANtoMLP, self).__init__()
        # Input expansion is implicitly handled in the first KANLayer
        self.layers = nn.ModuleList([
            KANLayer(in_features, hidden_features, grid_size),
            KANLayer(hidden_features, out_features, grid_size)
        ])

    def forward(self, x):
        for layer in self.layers:
            x = layer(x)
        return x

# Example usage
in_features = 10
hidden_features = 20
out_features = 5
grid_size = 3

kan_to_mlp = KANtoMLP(in_features, hidden_features, out_features, grid_size)

# Input data
batch_size = 16
input_data = torch.randn(batch_size, in_features)

# Forward pass
output = kan_to_mlp(input_data)

# Print output shape
print(output.shape)
```
Explanation:
KANLayer Class:
This class implements a single layer of the KAN network, converting it to an equivalent MLP structure.
__init__: Initializes the linear layer (self.linear) and the shifts (self.shifts) for the piecewise linear function.
forward:
Expands the input (repeated) and adds shifts (shifted).
Applies ReLU to shifted values (except the first one) and concatenates with the first shifted value.
Flattens the intermediate result and passes it through the linear layer.
KANtoMLP Class:
This class represents the overall network, stacking multiple KANLayer instances.
__init__: Initializes a list of KANLayer objects (self.layers).
forward: Iterates through the layers, applying each one to the input sequentially.
Example Usage:
Creates an instance of KANtoMLP with specified input, hidden, output features, and grid size.
Generates random input data.
Performs a forward pass through the network.
Prints the shape of the output tensor.
This script demonstrates the conversion of a two-layer KAN network to its MLP equivalent. The number of layers and their configurations can be adjusted to match the desired architecture.
User
Now, write a Python Script to convert the weights and biases in a MLP in one level of a pretrained GPT model into an equivalent KAN layer,
for functioning substitution of the KAN layer into the pretrained GPT model.
Model
```python
import torch
import torch.nn as nn

def mlp_to_kan_layer(mlp_layer, grid_size):
    """
    Converts a single MLP layer (Linear-ReLU-Linear) to an equivalent KAN layer.

    Args:
        mlp_layer: The MLP layer to convert (nn.Sequential containing Linear-ReLU-Linear).
        grid_size: The desired grid size for the piecewise linear function in the KAN layer.

    Returns:
        A KANLayer object with weights and biases initialized from the MLP layer.
    """
    linear1, relu, linear2 = mlp_layer[0], mlp_layer[1], mlp_layer[2]
    in_features = linear1.in_features
    out_features = linear2.out_features

    # Initialize KAN layer
    kan_layer = KANLayer(in_features, out_features, grid_size)

    # Transfer weights and biases
    with torch.no_grad():
        # Linear1 weights are split across the grid points in KAN
        kan_layer.linear.weight.data[:, :in_features] = linear1.weight.data
        for i in range(1, grid_size):
            kan_layer.linear.weight.data[:, i * in_features:(i + 1) * in_features] = linear1.weight.data
        # Linear2 bias is transferred directly to the KAN layer
        kan_layer.linear.bias.data = linear2.bias.data

    return kan_layer

class KANLayer(nn.Module):
    ...  # (Same as in previous response)

# Example Usage:
# Assuming 'gpt_model' is your pretrained GPT model
# Select the MLP layer to convert (e.g., from a specific transformer block)
mlp_layer = gpt_model.transformer.h[0].mlp  # Replace with the actual path

# Convert to KAN layer
grid_size = 5  # Choose your grid size
kan_layer = mlp_to_kan_layer(mlp_layer, grid_size)

# Substitute the MLP layer with the KAN layer
gpt_model.transformer.h[0].mlp = kan_layer
```
Explanation:
mlp_to_kan_layer Function:
Takes an mlp_layer (assumed to be a nn.Sequential with Linear-ReLU-Linear) and grid_size as input.
Extracts the individual linear layers and ReLU activation from mlp_layer.
Creates a KANLayer instance with the appropriate input and output dimensions.
Weight Transfer:
The weights of the first linear layer (linear1) in the MLP are split and copied to the kan_layer.linear.weight across the grid points.
(Note: as drafted, the code never copies the weights of the second linear layer (linear2) into the KAN layer; only its bias is transferred, as described next.)
Bias Transfer:
The bias of the second linear layer (linear2) is copied to kan_layer.linear.bias.
Returns the initialized kan_layer.
Example Usage:
Assumes you have a pretrained GPT model (gpt_model).
You need to identify the specific MLP layer you want to replace. The example shows a placeholder path; you should replace it with the actual path to your target MLP layer within the GPT model.
The mlp_to_kan_layer function is called to convert the chosen MLP layer to a KAN layer.
Finally, the original MLP layer within the GPT model is replaced with the newly created kan_layer.
Important Considerations:
Grid Size: The choice of grid_size will influence the representational capacity of the KAN layer.
Model Behavior: After replacing the MLP layer with a KAN layer, you should carefully evaluate the performance of the modified GPT model to ensure that the substitution does not negatively impact its behavior. You may need to fine-tune the model after the modification.
########### EXAMPLE IMPLEMENTATION FOR GPT-2 ##################
Based on Quantization-Aware pretraining methods, write a python script that performs updating of weights and biases in a GPT-2 124M model
with backpropagation, with the additional constraint that weight and bias updates are steered not only by gradient descent but
also in a manner that tends to reduce the MLP-to-KAN conversion error. Thus, perform KAN-aware pretraining, and simultaneously generate a parameter-efficient PeFT adapter to correct the MLP-to-KAN error.
Model
```python
import torch
import torch.nn as nn
from transformers import GPT2LMHeadModel, GPT2Tokenizer

# Assuming you have the mlp_to_kan_layer function defined
# from previous responses

# Load pretrained GPT-2 model and tokenizer
model_name = 'gpt2'
model = GPT2LMHeadModel.from_pretrained(model_name)
tokenizer = GPT2Tokenizer.from_pretrained(model_name)

# Define KAN grid size
grid_size = 5

# Create a list to store PeFT adapters for each layer
peft_adapters = []

# Modify GPT-2 layers for KAN-aware pretraining
for i, layer in enumerate(model.transformer.h):
    # Convert MLP to KAN
    kan_layer = mlp_to_kan_layer(layer.mlp, grid_size)
    layer.mlp = kan_layer

    # Create a PeFT adapter for this layer
    adapter = nn.Sequential(
        nn.Linear(kan_layer.out_features, kan_layer.out_features),
        nn.ReLU()
    )
    peft_adapters.append(adapter)

    # Add adapter to the model (you might need to adjust
    # how you integrate it into the forward pass)
    layer.mlp = nn.Sequential(kan_layer, adapter)

# Define optimizer and loss function
optimizer = torch.optim.Adam(model.parameters(), lr=1e-5)
loss_fn = nn.CrossEntropyLoss()

# KAN-Aware Pretraining Loop
def train(model, optimizer, loss_fn, epochs=10):
    for epoch in range(epochs):
        model.train()
        for batch in train_dataloader:  # Assuming you have a train_dataloader
            input_ids = batch['input_ids'].to(device)
            attention_mask = batch['attention_mask'].to(device)
            labels = batch['labels'].to(device)

            optimizer.zero_grad()

            # Forward pass through the modified GPT-2
            outputs = model(input_ids=input_ids, attention_mask=attention_mask, labels=labels)
            loss = outputs.loss

            # Backward pass and calculate gradients
            loss.backward()

            # Custom gradient modification for KAN-awareness:
            # 1. Calculate MLP-to-KAN conversion error (You need to define this function)
            #    conversion_error = calculate_mlp_to_kan_error(kan_layer, original_mlp_layer)
            # 2. Add a term to the gradients of the KAN layer weights
            #    that minimizes conversion_error

            # Update model parameters
            optimizer.step()

# Training the model
train(model, optimizer, loss_fn)

# Save the fine-tuned model with PeFT adapters
model.save_pretrained("gpt2_kan_pretrained")
torch.save(peft_adapters, "peft_adapters.pt")
```
Explanation:
Model Loading and Modification:
Loads a pretrained GPT-2 model.
Iterates through the transformer layers, converting each MLP to a KAN layer using the mlp_to_kan_layer function.
Creates a PeFT adapter (a small neural network) for each layer to correct for potential MLP-to-KAN conversion errors.
Integrates the KAN layers and adapters into the GPT-2 model.
Training Loop:
Implements a standard training loop with a defined loss function and optimizer.
Key Addition: In the backward pass, you need to calculate the MLP-to-KAN conversion error. This will involve comparing the output of the original MLP layer with the output of the KAN layer using the same input.
Modify the gradients of the KAN layer weights by adding a term that encourages minimizing the conversion error. The exact implementation of this gradient modification will depend on how you define the conversion error and the desired balance between gradient descent and error minimization.
PeFT Adapter Training:
The PeFT adapters are trained simultaneously with the KAN layers during the backpropagation process. Their role is to learn and correct any discrepancies between the original MLP behavior and the KAN layer's output.
Saving the Model:
Saves the fine-tuned model, which now incorporates the KAN layers and PeFT adapters, allowing for efficient and accurate inference.
Crucial Implementation Details:
calculate_mlp_to_kan_error: You need to define a function to measure the difference between the outputs of the original MLP layer and the KAN layer for a given input. The choice of error metric (e.g., MSE, KL-divergence) will impact the training process; a minimal MSE-based sketch follows this list.
Gradient Modification: The exact way you modify the gradients to incorporate the KAN conversion error will be crucial to the success of the KAN-aware pretraining. You need to balance the influence of the original gradient descent with the pressure to minimize conversion error.
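A minimal MSE-based sketch of calculate_mlp_to_kan_error (assuming the Linear-ReLU-Linear nn.Sequential layout used by mlp_to_kan_layer above; the random-probe distribution and sample count are arbitrary choices):

```python
import torch

def calculate_mlp_to_kan_error(kan_layer, original_mlp_layer, n_samples=256):
    """MSE between the original MLP and its KAN replacement on random probes."""
    in_features = original_mlp_layer[0].in_features  # assumes Sequential(Linear, ReLU, Linear)
    x = torch.randn(n_samples, in_features)
    with torch.no_grad():                            # MLP output is a fixed target
        target = original_mlp_layer(x)
    return torch.nn.functional.mse_loss(kan_layer(x), target)
```

The simplest way to realize the gradient modification is then to add this term, scaled by a coefficient, to outputs.loss before loss.backward(), so the conversion error steers the KAN weights alongside the task gradient.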
This script outlines the general structure and key considerations for KAN-aware pretraining. The specific implementations of error calculation and gradient modification will require careful design and experimentation to achieve optimal results.
---
license: Experimental and Academic Implementations are allowed. Commercial implementations and uses are subject to Royalty and Patent Rights of the Inventor Martial Terran.
License and Restrictions on Use
This project and the associated code, algorithms, and concepts are subject to the following terms and conditions:
1. Experimental and Academic Use:
You are granted a non-exclusive, royalty-free license to use, modify, and distribute this project solely for experimental and academic purposes. This includes, but is not limited to:
Research and development.
Educational activities.
Personal, non-commercial projects.
2. Commercial Use:
Any commercial use of this project, including but not limited to:
Incorporation into commercial products or services.
Distribution or sale for profit.
Use in any revenue-generating activity.
is strictly prohibited without the express written permission of the inventor, Martial Terran, and is subject to royalty payments and the enforcement of all applicable patent rights held by Martial Terran.
3. Patent Rights:
Martial Terran retains all patent rights, including but not limited to any patents issued or pending, relating to the concepts, algorithms, and implementations described in this project.
4. No Warranty:
This project is provided "as is" without warranty of any kind, express or implied, including but not limited to the warranties of merchantability, fitness for a particular purpose, and non-infringement. In no event shall the inventor be liable for any claim, damages, or other liability arising from, out of, or in connection with this project.
5. Contact for Commercial Licensing:
To inquire about commercial licensing opportunities, please contact Martial Terran at [[email protected]].
6. Attribution:
All use of this project must include appropriate attribution to the inventor, Martial Terran.
7. Governing Law:
This agreement shall be governed by and construed in accordance with the laws of [New York].
---
|
JoseBambora/mistral_retrained | JoseBambora | 2024-05-14T19:49:05Z | 0 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"generated_from_trainer",
"base_model:mistralai/Mistral-7B-Instruct-v0.2",
"base_model:adapter:mistralai/Mistral-7B-Instruct-v0.2",
"license:apache-2.0",
"region:us"
] | null | 2024-04-12T22:43:22Z | ---
license: apache-2.0
library_name: peft
tags:
- generated_from_trainer
base_model: mistralai/Mistral-7B-Instruct-v0.2
model-index:
- name: mistral_retrained
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mistral_retrained
This model is a fine-tuned version of [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.1674
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 4
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.0199 | 1.0 | 70 | 2.1161 |
| 1.9803 | 2.0 | 140 | 2.1242 |
| 1.8834 | 3.0 | 210 | 2.1538 |
| 1.8275 | 4.0 | 280 | 2.1674 |
### Framework versions
- PEFT 0.7.1
- Transformers 4.36.1
- Pytorch 2.0.0+cu117
- Datasets 2.18.0
- Tokenizers 0.15.2 |
FallenMerick/Iced-Lemon-Cookie-7B | FallenMerick | 2024-05-14T19:43:28Z | 10 | 1 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"roleplay",
"conversational",
"en",
"arxiv:2306.01708",
"base_model:KatyTheCutie/LemonadeRP-4.5.3",
"base_model:merge:KatyTheCutie/LemonadeRP-4.5.3",
"base_model:SanjiWatsuki/Kunoichi-DPO-v2-7B",
"base_model:merge:SanjiWatsuki/Kunoichi-DPO-v2-7B",
"base_model:Undi95/BigL-7B",
"base_model:merge:Undi95/BigL-7B",
"base_model:icefog72/IceLemonTeaRP-32k-7b",
"base_model:merge:icefog72/IceLemonTeaRP-32k-7b",
"base_model:saishf/Multi-Verse-RP-7B",
"base_model:merge:saishf/Multi-Verse-RP-7B",
"license:cc-by-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-09T01:19:07Z | ---
license: cc-by-4.0
language:
- en
base_model:
- Undi95/BigL-7B
- saishf/Multi-Verse-RP-7B
- KatyTheCutie/LemonadeRP-4.5.3
- icefog72/IceLemonTeaRP-32k-7b
- SanjiWatsuki/Kunoichi-DPO-v2-7B
library_name: transformers
tags:
- mergekit
- merge
- mistral
- text-generation
- roleplay
---
# Iced Lemon Cookie
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
GGUF quants: https://huggingface.co/FaradayDotDev/Iced-Lemon-Cookie-7B-GGUF
## Merge Details
### Merge Method
This model was merged using the [TIES](https://arxiv.org/abs/2306.01708) merge method using [saishf/Multi-Verse-RP-7B](https://huggingface.co/saishf/Multi-Verse-RP-7B) as a base.
### Models Merged
The following models were included in the merge:
* [Undi95/BigL-7B](https://huggingface.co/Undi95/BigL-7B)
* [KatyTheCutie/LemonadeRP-4.5.3](https://huggingface.co/KatyTheCutie/LemonadeRP-4.5.3)
* [icefog72/IceLemonTeaRP-32k-7b](https://huggingface.co/icefog72/IceLemonTeaRP-32k-7b)
* [SanjiWatsuki/Kunoichi-DPO-v2-7B](https://huggingface.co/SanjiWatsuki/Kunoichi-DPO-v2-7B)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: icefog72/IceLemonTeaRP-32k-7b
parameters:
density: 1.0
weight: 1.0
- model: Undi95/BigL-7B
parameters:
density: 0.4
weight: 1.0
- model: SanjiWatsuki/Kunoichi-DPO-v2-7B
parameters:
density: 0.6
weight: 1.0
- model: KatyTheCutie/LemonadeRP-4.5.3
parameters:
density: 0.8
weight: 1.0
merge_method: ties
base_model: saishf/Multi-Verse-RP-7B
parameters:
normalize: true
dtype: float16
```
|
DUAL-GPO/phi-2-gpo-newSFT-b0.001-renew-30k-i1 | DUAL-GPO | 2024-05-14T19:37:32Z | 3 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"phi",
"alignment-handbook",
"generated_from_trainer",
"trl",
"dpo",
"custom_code",
"dataset:HuggingFaceH4/ultrafeedback_binarized",
"base_model:DUAL-GPO/phi-2-gpo-new-i0",
"base_model:adapter:DUAL-GPO/phi-2-gpo-new-i0",
"license:mit",
"region:us"
] | null | 2024-05-14T11:52:29Z | ---
license: mit
library_name: peft
tags:
- alignment-handbook
- generated_from_trainer
- trl
- dpo
base_model: DUAL-GPO/phi-2-gpo-new-i0
datasets:
- HuggingFaceH4/ultrafeedback_binarized
model-index:
- name: phi-2-gpo-newSFT-b0.001-renew-30k-i1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# phi-2-gpo-newSFT-b0.001-renew-30k-i1
This model is a fine-tuned version of [DUAL-GPO/phi-2-gpo-new-i0](https://huggingface.co/DUAL-GPO/phi-2-gpo-new-i0) on the HuggingFaceH4/ultrafeedback_binarized dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- distributed_type: multi-GPU
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
### Framework versions
- PEFT 0.7.1
- Transformers 4.36.2
- Pytorch 2.1.2
- Datasets 2.14.6
- Tokenizers 0.15.2 |
fine-tuned/dutch-legal-c | fine-tuned | 2024-05-14T19:34:33Z | 5 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"bert",
"feature-extraction",
"sentence-similarity",
"mteb",
"Law",
"Legal",
"Documents",
"Youth",
"Environment",
"custom_code",
"en",
"dataset:fine-tuned/dutch-legal-c",
"dataset:allenai/c4",
"license:apache-2.0",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | feature-extraction | 2024-05-14T14:07:49Z | ---
license: apache-2.0
datasets:
- fine-tuned/dutch-legal-c
- allenai/c4
language:
- en
pipeline_tag: feature-extraction
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- mteb
- Law
- Legal
- Documents
- Youth
- Environment
---
This model is a fine-tuned version of [**jinaai/jina-embeddings-v2-base-en**](https://huggingface.co/jinaai/jina-embeddings-v2-base-en) designed for the following use case:
Legal document search
## How to Use
This model can be easily integrated into your NLP pipeline for tasks such as text classification, sentiment analysis, entity recognition, and more. Here's a simple example to get you started:
```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.util import cos_sim
model = SentenceTransformer(
'fine-tuned/dutch-legal-c',
trust_remote_code=True
)
embeddings = model.encode([
'first text to embed',
'second text to embed'
])
print(cos_sim(embeddings[0], embeddings[1]))
```
|
Abin7/finnish-mal-spanish | Abin7 | 2024-05-14T19:32:33Z | 5 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"arxiv:2306.01708",
"base_model:Finnish-NLP/llama-7b-finnish-instruct-v0.2",
"base_model:merge:Finnish-NLP/llama-7b-finnish-instruct-v0.2",
"base_model:NousResearch/Llama-2-7b-hf",
"base_model:merge:NousResearch/Llama-2-7b-hf",
"base_model:abhinand/malayalam-llama-7b-instruct-v0.1",
"base_model:merge:abhinand/malayalam-llama-7b-instruct-v0.1",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-14T19:28:50Z | ---
base_model:
- kukedlc/llama-7b-spanish
- abhinand/malayalam-llama-7b-instruct-v0.1
- Finnish-NLP/llama-7b-finnish-instruct-v0.2
- NousResearch/Llama-2-7b-hf
library_name: transformers
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [TIES](https://arxiv.org/abs/2306.01708) merge method using [NousResearch/Llama-2-7b-hf](https://huggingface.co/NousResearch/Llama-2-7b-hf) as a base.
### Models Merged
The following models were included in the merge:
* [kukedlc/llama-7b-spanish](https://huggingface.co/kukedlc/llama-7b-spanish)
* [abhinand/malayalam-llama-7b-instruct-v0.1](https://huggingface.co/abhinand/malayalam-llama-7b-instruct-v0.1)
* [Finnish-NLP/llama-7b-finnish-instruct-v0.2](https://huggingface.co/Finnish-NLP/llama-7b-finnish-instruct-v0.2)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: abhinand/malayalam-llama-7b-instruct-v0.1
parameters:
density: 0.5
weight: 0.5
- model: Finnish-NLP/llama-7b-finnish-instruct-v0.2
parameters:
density: 0.5
weight: 0.5
- model: kukedlc/llama-7b-spanish
parameters:
density: 0.5
weight: 0.5
merge_method: ties
base_model: NousResearch/Llama-2-7b-hf
parameters:
normalize: false
int8_mask: true
dtype: float16
```
|
OsherElhadad/ppo-local1-PandaReachDense-v3 | OsherElhadad | 2024-05-14T19:31:12Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"PandaReachDense-v3",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2024-05-14T18:42:26Z | ---
library_name: stable-baselines3
tags:
- PandaReachDense-v3
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v3
type: PandaReachDense-v3
metrics:
- type: mean_reward
value: -0.19 +/- 0.12
name: mean_reward
verified: false
---
# **PPO** Agent playing **PandaReachDense-v3**
This is a trained model of a **PPO** agent playing **PandaReachDense-v3**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename below is an assumption; check this repo's file list):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Filename is an assumption -- check this repo's file list.
checkpoint = load_from_hub("OsherElhadad/ppo-local1-PandaReachDense-v3", "ppo-PandaReachDense-v3.zip")
model = PPO.load(checkpoint)
```
|
jancuhel/google-siglip-large-patch16-384-img-text-relevancy | jancuhel | 2024-05-14T19:28:11Z | 95 | 1 | transformers | [
"transformers",
"safetensors",
"siglip",
"zero-shot-image-classification",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | zero-shot-image-classification | 2024-05-13T20:19:50Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
paulo037/stable-code-instruct-3b-spider2-3500-steps | paulo037 | 2024-05-14T19:26:16Z | 4 | 0 | transformers | [
"transformers",
"safetensors",
"stablelm",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-14T19:16:58Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
newsletter/Starling-LM-7B-beta-Q6_K-GGUF | newsletter | 2024-05-14T19:23:46Z | 9 | 0 | transformers | [
"transformers",
"gguf",
"reward model",
"RLHF",
"RLAIF",
"llama-cpp",
"gguf-my-repo",
"en",
"dataset:berkeley-nest/Nectar",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-05-14T19:23:31Z | ---
language:
- en
license: apache-2.0
library_name: transformers
tags:
- reward model
- RLHF
- RLAIF
- llama-cpp
- gguf-my-repo
datasets:
- berkeley-nest/Nectar
---
# newsletter/Starling-LM-7B-beta-Q6_K-GGUF
This model was converted to GGUF format from [`Nexusflow/Starling-LM-7B-beta`](https://huggingface.co/Nexusflow/Starling-LM-7B-beta) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Nexusflow/Starling-LM-7B-beta) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew.
```bash
brew install ggerganov/ggerganov/llama.cpp
```
Invoke the llama.cpp server or the CLI.
CLI:
```bash
llama-cli --hf-repo newsletter/Starling-LM-7B-beta-Q6_K-GGUF --model starling-lm-7b-beta.Q6_K.gguf -p "The meaning to life and the universe is"
```
Server:
```bash
llama-server --hf-repo newsletter/Starling-LM-7B-beta-Q6_K-GGUF --model starling-lm-7b-beta.Q6_K.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
```
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m starling-lm-7b-beta.Q6_K.gguf -n 128
```
|
mradermacher/ContextClassy-Merged-V1-GGUF | mradermacher | 2024-05-14T19:22:35Z | 19 | 0 | transformers | [
"transformers",
"gguf",
"LeetCode",
"Programming",
"Coding",
"en",
"base_model:Usaid/ContextClassy-Merged-V1",
"base_model:quantized:Usaid/ContextClassy-Merged-V1",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-04-17T17:51:32Z | ---
base_model: Usaid/ContextClassy-Merged-V1
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- LeetCode
- Programming
- Coding
---
## About
<!-- ### quantize_version: 1 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/Usaid/ContextClassy-Merged-V1
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
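For a quick programmatic smoke test, a minimal llama-cpp-python sketch (the filename matches the Q4_K_M entry in the table below; the prompt and context size are just illustrative):

```python
from llama_cpp import Llama

# Assumes the Q4_K_M file from this repo has been downloaded locally.
llm = Llama(model_path="ContextClassy-Merged-V1.Q4_K_M.gguf", n_ctx=2048)
out = llm("Explain what a hash map is in two sentences.", max_tokens=128)
print(out["choices"][0]["text"])
```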
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/ContextClassy-Merged-V1-GGUF/resolve/main/ContextClassy-Merged-V1.Q2_K.gguf) | Q2_K | 2.8 | |
| [GGUF](https://huggingface.co/mradermacher/ContextClassy-Merged-V1-GGUF/resolve/main/ContextClassy-Merged-V1.IQ3_XS.gguf) | IQ3_XS | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/ContextClassy-Merged-V1-GGUF/resolve/main/ContextClassy-Merged-V1.Q3_K_S.gguf) | Q3_K_S | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/ContextClassy-Merged-V1-GGUF/resolve/main/ContextClassy-Merged-V1.IQ3_S.gguf) | IQ3_S | 3.3 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/ContextClassy-Merged-V1-GGUF/resolve/main/ContextClassy-Merged-V1.IQ3_M.gguf) | IQ3_M | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/ContextClassy-Merged-V1-GGUF/resolve/main/ContextClassy-Merged-V1.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/ContextClassy-Merged-V1-GGUF/resolve/main/ContextClassy-Merged-V1.Q3_K_L.gguf) | Q3_K_L | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/ContextClassy-Merged-V1-GGUF/resolve/main/ContextClassy-Merged-V1.IQ4_XS.gguf) | IQ4_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/ContextClassy-Merged-V1-GGUF/resolve/main/ContextClassy-Merged-V1.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/ContextClassy-Merged-V1-GGUF/resolve/main/ContextClassy-Merged-V1.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/ContextClassy-Merged-V1-GGUF/resolve/main/ContextClassy-Merged-V1.Q5_K_S.gguf) | Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/ContextClassy-Merged-V1-GGUF/resolve/main/ContextClassy-Merged-V1.Q5_K_M.gguf) | Q5_K_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/ContextClassy-Merged-V1-GGUF/resolve/main/ContextClassy-Merged-V1.Q6_K.gguf) | Q6_K | 6.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/ContextClassy-Merged-V1-GGUF/resolve/main/ContextClassy-Merged-V1.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
saransh03sharma/mintrec2-mistral-2-7b-200-10 | saransh03sharma | 2024-05-14T19:21:37Z | 4 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-14T19:15:47Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
mradermacher/Boreas-7B-chat-GGUF | mradermacher | 2024-05-14T19:19:45Z | 94 | 0 | transformers | [
"transformers",
"gguf",
"finetuned",
"en",
"dataset:yhavinga/mc4_nl_cleaned",
"dataset:yhavinga/nedd_wiki_news",
"dataset:teknium/OpenHermes-2.5",
"dataset:euirim/goodwiki",
"dataset:philschmid/flanv2",
"base_model:yhavinga/Boreas-7B-chat",
"base_model:quantized:yhavinga/Boreas-7B-chat",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-05-14T18:53:11Z | ---
base_model: yhavinga/Boreas-7B-chat
datasets:
- yhavinga/mc4_nl_cleaned
- yhavinga/nedd_wiki_news
- teknium/OpenHermes-2.5
- euirim/goodwiki
- philschmid/flanv2
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- finetuned
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/yhavinga/Boreas-7B-chat
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Boreas-7B-chat-GGUF/resolve/main/Boreas-7B-chat.Q2_K.gguf) | Q2_K | 2.8 | |
| [GGUF](https://huggingface.co/mradermacher/Boreas-7B-chat-GGUF/resolve/main/Boreas-7B-chat.IQ3_XS.gguf) | IQ3_XS | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/Boreas-7B-chat-GGUF/resolve/main/Boreas-7B-chat.Q3_K_S.gguf) | Q3_K_S | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/Boreas-7B-chat-GGUF/resolve/main/Boreas-7B-chat.IQ3_S.gguf) | IQ3_S | 3.3 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Boreas-7B-chat-GGUF/resolve/main/Boreas-7B-chat.IQ3_M.gguf) | IQ3_M | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/Boreas-7B-chat-GGUF/resolve/main/Boreas-7B-chat.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Boreas-7B-chat-GGUF/resolve/main/Boreas-7B-chat.Q3_K_L.gguf) | Q3_K_L | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/Boreas-7B-chat-GGUF/resolve/main/Boreas-7B-chat.IQ4_XS.gguf) | IQ4_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/Boreas-7B-chat-GGUF/resolve/main/Boreas-7B-chat.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Boreas-7B-chat-GGUF/resolve/main/Boreas-7B-chat.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Boreas-7B-chat-GGUF/resolve/main/Boreas-7B-chat.Q5_K_S.gguf) | Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/Boreas-7B-chat-GGUF/resolve/main/Boreas-7B-chat.Q5_K_M.gguf) | Q5_K_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/Boreas-7B-chat-GGUF/resolve/main/Boreas-7B-chat.Q6_K.gguf) | Q6_K | 6.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Boreas-7B-chat-GGUF/resolve/main/Boreas-7B-chat.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Boreas-7B-chat-GGUF/resolve/main/Boreas-7B-chat.f16.gguf) | f16 | 14.6 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
ramirces/mbart-neutralization | ramirces | 2024-05-14T19:18:33Z | 105 | 0 | transformers | [
"transformers",
"safetensors",
"mbart",
"text2text-generation",
"simplification",
"generated_from_trainer",
"base_model:facebook/mbart-large-50",
"base_model:finetune:facebook/mbart-large-50",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2024-04-24T05:41:28Z | ---
license: mit
base_model: facebook/mbart-large-50
tags:
- simplification
- generated_from_trainer
metrics:
- bleu
model-index:
- name: mbart-neutralization
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mbart-neutralization
This model is a fine-tuned version of [facebook/mbart-large-50](https://huggingface.co/facebook/mbart-large-50) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 3.0459
- Bleu: 8.565
- Gen Len: 20.9268
## Model description
More information needed
## Intended uses & limitations
More information needed
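In the absence of author-provided details, a hedged inference sketch with the standard transformers seq2seq API might look like the following. The model id is this repo's; the input sentence and generation settings are illustrative, and mBART-50 checkpoints often expect explicit source/target language codes, which this fine-tune may or may not require.

```python
# A sketch, not the authors' recipe: loads this checkpoint with the
# generic seq2seq classes; input text and generation settings are
# illustrative. mBART-50 tokenizers often need src_lang / forced BOS
# language ids -- check the tokenizer config before relying on this.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("ramirces/mbart-neutralization")
model = AutoModelForSeq2SeqLM.from_pretrained("ramirces/mbart-neutralization")

inputs = tokenizer("Your input sentence here.", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64, num_beams=4)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```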
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5.6e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
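For reference, these values map onto the standard transformers training API roughly as shown below. This is a sketch, not the authors' actual script: `output_dir` is an assumption, and the Adam betas/epsilon listed above are the library defaults.

```python
# Sketch of how the listed hyperparameters translate into transformers'
# Seq2SeqTrainingArguments; output_dir is assumed, not stated in the card.
from transformers import Seq2SeqTrainingArguments

args = Seq2SeqTrainingArguments(
    output_dir="mbart-neutralization",  # assumption
    learning_rate=5.6e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=2,
)
```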
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|
| No log | 1.0 | 16 | 3.7800 | 7.2466 | 18.8049 |
| No log | 2.0 | 32 | 3.0459 | 8.565 | 20.9268 |
### Framework versions
- Transformers 4.40.2
- Pytorch 2.2.1+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
|
mradermacher/Llama3-Aloe-8B-Alpha-GGUF | mradermacher | 2024-05-14T19:17:19Z | 113 | 2 | transformers | [
"transformers",
"gguf",
"biology",
"medical",
"en",
"dataset:argilla/dpo-mix-7k",
"dataset:nvidia/HelpSteer",
"dataset:jondurbin/airoboros-3.2",
"dataset:hkust-nlp/deita-10k-v0",
"dataset:LDJnr/Capybara",
"dataset:HPAI-BSC/CareQA",
"dataset:GBaker/MedQA-USMLE-4-options",
"dataset:lukaemon/mmlu",
"dataset:bigbio/pubmed_qa",
"dataset:openlifescienceai/medmcqa",
"dataset:bigbio/med_qa",
"dataset:HPAI-BSC/better-safe-than-sorry",
"dataset:HPAI-BSC/pubmedqa-cot",
"dataset:HPAI-BSC/medmcqa-cot",
"dataset:HPAI-BSC/medqa-cot",
"base_model:HPAI-BSC/Llama3-Aloe-8B-Alpha",
"base_model:quantized:HPAI-BSC/Llama3-Aloe-8B-Alpha",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-04-26T20:49:47Z | ---
base_model: HPAI-BSC/Llama3-Aloe-8B-Alpha
datasets:
- argilla/dpo-mix-7k
- nvidia/HelpSteer
- jondurbin/airoboros-3.2
- hkust-nlp/deita-10k-v0
- LDJnr/Capybara
- HPAI-BSC/CareQA
- GBaker/MedQA-USMLE-4-options
- lukaemon/mmlu
- bigbio/pubmed_qa
- openlifescienceai/medmcqa
- bigbio/med_qa
- HPAI-BSC/better-safe-than-sorry
- HPAI-BSC/pubmedqa-cot
- HPAI-BSC/medmcqa-cot
- HPAI-BSC/medqa-cot
language:
- en
library_name: transformers
license: cc-by-nc-4.0
quantized_by: mradermacher
tags:
- biology
- medical
---
## About
<!-- ### quantize_version: 1 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/HPAI-BSC/Llama3-Aloe-8B-Alpha
<!-- provided-files -->
weighted/imatrix quants do not seem to be available (from me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
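To fetch a single quant file programmatically rather than through the browser, a minimal sketch with `huggingface_hub` (assumed installed; repo and file names are taken from the table below) looks like this:

```python
# Minimal sketch: download one quant file from this repo into the local
# Hugging Face cache; the returned path can be passed to any GGUF runtime.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="mradermacher/Llama3-Aloe-8B-Alpha-GGUF",
    filename="Llama3-Aloe-8B-Alpha.Q4_K_M.gguf",  # any file from the table
)
print(path)
```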
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable to similar-sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Llama3-Aloe-8B-Alpha-GGUF/resolve/main/Llama3-Aloe-8B-Alpha.Q2_K.gguf) | Q2_K | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/Llama3-Aloe-8B-Alpha-GGUF/resolve/main/Llama3-Aloe-8B-Alpha.IQ3_XS.gguf) | IQ3_XS | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/Llama3-Aloe-8B-Alpha-GGUF/resolve/main/Llama3-Aloe-8B-Alpha.Q3_K_S.gguf) | Q3_K_S | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/Llama3-Aloe-8B-Alpha-GGUF/resolve/main/Llama3-Aloe-8B-Alpha.IQ3_S.gguf) | IQ3_S | 3.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Llama3-Aloe-8B-Alpha-GGUF/resolve/main/Llama3-Aloe-8B-Alpha.IQ3_M.gguf) | IQ3_M | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/Llama3-Aloe-8B-Alpha-GGUF/resolve/main/Llama3-Aloe-8B-Alpha.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Llama3-Aloe-8B-Alpha-GGUF/resolve/main/Llama3-Aloe-8B-Alpha.Q3_K_L.gguf) | Q3_K_L | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/Llama3-Aloe-8B-Alpha-GGUF/resolve/main/Llama3-Aloe-8B-Alpha.IQ4_XS.gguf) | IQ4_XS | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/Llama3-Aloe-8B-Alpha-GGUF/resolve/main/Llama3-Aloe-8B-Alpha.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Llama3-Aloe-8B-Alpha-GGUF/resolve/main/Llama3-Aloe-8B-Alpha.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Llama3-Aloe-8B-Alpha-GGUF/resolve/main/Llama3-Aloe-8B-Alpha.Q5_K_S.gguf) | Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/Llama3-Aloe-8B-Alpha-GGUF/resolve/main/Llama3-Aloe-8B-Alpha.Q5_K_M.gguf) | Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/Llama3-Aloe-8B-Alpha-GGUF/resolve/main/Llama3-Aloe-8B-Alpha.Q6_K.gguf) | Q6_K | 6.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Llama3-Aloe-8B-Alpha-GGUF/resolve/main/Llama3-Aloe-8B-Alpha.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Llama3-Aloe-8B-Alpha-GGUF/resolve/main/Llama3-Aloe-8B-Alpha.f16.gguf) | f16 | 16.2 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|