Column schema for this model-card dump (value ranges over all rows):

| Column | Type | Values |
|:--|:--|:--|
| modelId | string | lengths 5–139 |
| author | string | lengths 2–42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 to 2025-08-02 18:27:42 |
| downloads | int64 | 0 to 223M |
| likes | int64 | 0 to 11.7k |
| library_name | string | 549 classes |
| tags | list | lengths 1–4.05k |
| pipeline_tag | string | 55 classes |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 to 2025-08-02 18:24:50 |
| card | string | lengths 11–1.01M |
dada22231/b7854b60-d08f-4bc6-91ef-cbc321950656 | dada22231 | last_modified: 2024-12-14T18:03:03Z | downloads: 7 | likes: 0 | library_name: peft | pipeline_tag: null | createdAt: 2024-12-14T17:22:40Z | tags: [peft, safetensors, gemma, axolotl, generated_from_trainer, base_model:unsloth/codegemma-7b, base_model:adapter:unsloth/codegemma-7b, license:apache-2.0, region:us]
---
library_name: peft
license: apache-2.0
base_model: unsloth/codegemma-7b
tags:
- axolotl
- generated_from_trainer
model-index:
- name: b7854b60-d08f-4bc6-91ef-cbc321950656
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/codegemma-7b
bf16: auto
chat_template: llama3
cosine_min_lr_ratio: 0.1
data_processes: 4
dataset_prepared_path: null
datasets:
- data_files:
- 550eb38e31c50429_train_data.json
ds_type: json
format: custom
num_proc: 4
path: /workspace/input_data/550eb38e31c50429_train_data.json
streaming: true
type:
field_input: context
field_instruction: instruction
field_output: response
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device_map:
lm_head: 3
model.embed_tokens: 0
model.layers.0: 0
model.layers.1: 0
model.layers.10: 3
model.layers.11: 3
model.layers.2: 0
model.layers.3: 1
model.layers.4: 1
model.layers.5: 1
model.layers.6: 2
model.layers.7: 2
model.layers.8: 2
model.layers.9: 3
model.norm: 3
do_eval: true
early_stopping_patience: 1
eval_batch_size: 1
eval_sample_packing: false
eval_steps: 25
evaluation_strategy: steps
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 32
gradient_checkpointing: true
group_by_length: true
hub_model_id: dada22231/b7854b60-d08f-4bc6-91ef-cbc321950656
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 64
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 32
lora_target_linear: true
lora_target_modules:
- q_proj
- v_proj
lr_scheduler: cosine
max_grad_norm: 0.3
max_memory:
0: 60GB
1: 70GB
2: 70GB
3: 70GB
cpu: 96GB
max_steps: 50
micro_batch_size: 1
mixed_precision: bf16
mlflow_experiment_name: /tmp/550eb38e31c50429_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 3
optim_args:
adam_beta1: 0.9
adam_beta2: 0.95
adam_epsilon: 1e-5
optimizer: adamw_torch
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 25
save_strategy: steps
sequence_len: 2048
strict: false
tf32: false
tokenizer_type: AutoTokenizer
torch_compile: false
torch_dtype: bfloat16
train_on_inputs: false
trust_remote_code: true
use_cache: false
val_set_size: 50
wandb_entity: null
wandb_mode: online
wandb_name: b7854b60-d08f-4bc6-91ef-cbc321950656
wandb_project: Public_TuningSN
wandb_runid: b7854b60-d08f-4bc6-91ef-cbc321950656
warmup_ratio: 0.05
weight_decay: 0.01
xformers_attention: null
```
</details><br>
# b7854b60-d08f-4bc6-91ef-cbc321950656
This model is a fine-tuned version of [unsloth/codegemma-7b](https://huggingface.co/unsloth/codegemma-7b) on an unnamed dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7798
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 32
- total_train_batch_size: 128
- total_eval_batch_size: 4
- optimizer: AdamW (torch) with betas=(0.9, 0.95) and epsilon=1e-05, set via optimizer_args (overriding the defaults betas=(0.9, 0.999) and epsilon=1e-08)
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 2
- training_steps: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.8104 | 0.0030 | 1 | 3.5199 |
| 1.0253 | 0.0748 | 25 | 0.7736 |
| 0.9271 | 0.1496 | 50 | 0.7798 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
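### How to load this adapter (sketch)
As a quick smoke test, the sketch below loads the LoRA adapter on top of its base model with PEFT. The repository ids come from this card; the dtype, prompt, and generation settings are illustrative assumptions rather than documented usage.
```python
# Minimal sketch (assumptions noted above): load the adapter on top of the base model.
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained(
    "unsloth/codegemma-7b",      # base model named in the config above
    torch_dtype=torch.bfloat16,  # matches torch_dtype in the config; adjust to your hardware
    device_map="auto",
)
model = PeftModel.from_pretrained(base, "dada22231/b7854b60-d08f-4bc6-91ef-cbc321950656")
tokenizer = AutoTokenizer.from_pretrained("unsloth/codegemma-7b")

# Illustrative prompt only; the card does not document an intended prompt format.
inputs = tokenizer("Write a Python function that reverses a string.", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```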
myselfrew/llama3_prompt_baseline_learn_from_70b_data_n4_filter_2e6_bz32_pack8192_2epoch | myselfrew | last_modified: 2024-12-14T17:59:44Z | downloads: 5 | likes: 0 | library_name: transformers | pipeline_tag: text-generation | createdAt: 2024-12-14T17:56:55Z | tags: [transformers, safetensors, llama, text-generation, conversational, arxiv:1910.09700, autotrain_compatible, text-generation-inference, endpoints_compatible, region:us]
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
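The card leaves this section blank; as a stopgap, here is a minimal sketch based only on the repo metadata (a 🤗 transformers Llama checkpoint tagged text-generation). The prompt and generation settings are illustrative assumptions.
```python
# Minimal sketch (not from the card): load this repo as a causal LM with transformers.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "myselfrew/llama3_prompt_baseline_learn_from_70b_data_n4_filter_2e6_bz32_pack8192_2epoch"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Illustrative prompt; the intended chat/prompt template is not documented here.
inputs = tokenizer("Hello, how are you?", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```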
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
myselfrew/llama3_8b_learn_from_70b_data_n4_filter_2e6_bz32_pack8192_3epoch | myselfrew | last_modified: 2024-12-14T17:58:21Z | downloads: 5 | likes: 0 | library_name: transformers | pipeline_tag: text-generation | createdAt: 2024-12-14T17:55:37Z | tags: [transformers, safetensors, llama, text-generation, conversational, arxiv:1910.09700, autotrain_compatible, text-generation-inference, endpoints_compatible, region:us]
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
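This section is also blank in the card; a minimal sketch based only on the repo metadata (a 🤗 transformers Llama checkpoint tagged text-generation) follows, this time via the pipeline API. The prompt and generation settings are illustrative assumptions.
```python
# Minimal sketch (not from the card): run this repo with the text-generation pipeline.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="myselfrew/llama3_8b_learn_from_70b_data_n4_filter_2e6_bz32_pack8192_3epoch",
    device_map="auto",  # assumption: shard/load weights across available devices
)
print(generator("Hello, how are you?", max_new_tokens=64)[0]["generated_text"])
```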
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
Noureddinesa/Invoices_NomicV1.5_2 | Noureddinesa | last_modified: 2024-12-14T17:51:50Z | downloads: 5 | likes: 1 | library_name: sentence-transformers | pipeline_tag: sentence-similarity | createdAt: 2024-12-14T17:51:13Z | tags: [sentence-transformers, safetensors, nomic_bert, sentence-similarity, feature-extraction, generated_from_trainer, dataset_size:1151, loss:MultipleNegativesRankingLoss, custom_code, dataset:Noureddinesa/Invoices_embedding_3, arxiv:1908.10084, arxiv:1705.00652, base_model:nomic-ai/nomic-embed-text-v1.5, base_model:finetune:nomic-ai/nomic-embed-text-v1.5, model-index, autotrain_compatible, text-embeddings-inference, endpoints_compatible, region:us]
---
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:1151
- loss:MultipleNegativesRankingLoss
base_model: nomic-ai/nomic-embed-text-v1.5
widget:
- source_sentence: Une société d'importation de meubles paie 5 000 dirhams pour le
transport de marchandises importées par conteneur depuis le port de Tanger vers
son entrepôt à Rabat.
sentences:
- 'Les transports regroupent les frais liés au déplacement du personnel et au transport
des marchandises lors des achats et des ventes. Ces coûts s''ajoutent aux frais
d''acquisition d''immobilisations si le transport est lié à leur achat.
1. Frais de taxi pour des employés se rendant à une réunion. 2. Coût du transport
de marchandises depuis un entrepôt jusqu''à un client. 3. Dépenses de livraison
pour des produits achetés en gros. 4. Frais de transport pour un salon professionnel.
5. Coût d''envoi d''échantillons à des clients potentiels. 6. Remboursement des
frais de transport pour des déplacements professionnels. 7. Paiement pour le transport
de matériel lors d''un déménagement de bureau. 8. Coût de livraison d''une commande
en ligne. 9. Frais de transport pour des produits retournés par des clients. 10.
Dépenses liées au transport de fournitures de bureau. 11. Coût de transport pour
des marchandises importées. 12. Remboursement des frais de carburant pour des
trajets professionnels. 13. Paiement pour le transport de produits périssables
nécessitant une livraison rapide. 14. Frais de transport pour une exposition commerciale.
15. Coût du transport de personnel pour un voyage d''affaires.'
- 'Les terrains aménagés représentent des parcelles de terrain qui ont été préparées
pour des constructions ou d''autres utilisations. Cela inclut les travaux de nivellement,
de drainage ou d''infrastructure nécessaires avant le début d''un projet.
1. Un terrain de sport prêt à être utilisé après des travaux de nivellement et
de semis de gazon.
2. Un terrain résidentiel sur lequel des routes et des services publics ont été
installés.
3. Un espace commercial où les fondations ont été creusées et les accès routiers
sont en place.
4. Un parc public avec des sentiers et des aires de jeux, prêt à accueillir des
visiteurs.
5. Un terrain industriel qui a été préparé avec des accès pour les camions et
des installations électriques.
6. Une parcelle de terrain agricole où le drainage et les clôtures ont été mis
en place.
7. Un site de construction pour un immeuble d''habitation avec des infrastructures
de base installées.
8. Un terrain réhabilité pour être utilisé comme espace vert après des travaux
de nettoyage.
9. Un lotissement où les routes ont été pavées et les services d''eau et d''électricité
sont disponibles.
10. Un terrain pour un centre communautaire qui a été aménagé avec des parkings
et des accès piétonniers.
11. Une zone de loisirs où des sentiers de randonnée et des aires de pique-nique
ont été aménagés.
12. Un site touristique préparé avec des installations sanitaires et des points
d''accès.
13. Un terrain à bâtir sur lequel les anciens bâtiments ont été démolis et nettoyés.
14. Un espace pour un festival où des infrastructures temporaires comme des scènes
et des stands ont été mises en place.
15. Un terrain de camping où des emplacements ont été définis et des commodités
ont été installées.'
- 'L''achat de marchandises du groupe B désigne l''acquisition de biens destinés
à la revente, qui appartiennent à une catégorie spécifique de produits. Ces marchandises
sont généralement stockées avant d''être vendues à des clients.
1. Acheter des vêtements pour une boutique de mode. 2. Acquérir des livres pour
une librairie. 3. Commander des meubles pour un magasin de décoration. 4. Acheter
des jouets pour un magasin de jouets. 5. Se procurer des appareils électroniques
pour un revendeur. 6. Acquérir des produits alimentaires pour un supermarché.
7. Commander des articles de sport pour un magasin spécialisé. 8. Acheter des
cosmétiques pour une parfumerie. 9. Se procurer des fournitures de bureau pour
un commerce. 10. Acquérir des accessoires pour un magasin de téléphones. 11. Acheter
des produits de jardinage pour un centre de jardinage. 12. Commander des pièces
de rechange pour une entreprise de mécanique. 13. Se procurer des instruments
de musique pour un magasin de musique. 14. Acquérir des articles de bricolage
pour une quincaillerie. 15. Acheter des équipements de fitness pour un magasin
de sport.'
- source_sentence: Un terrain à bâtir a subi des travaux de démolition pour enlever
les anciens bâtiments. Le site a été nettoyé et est maintenant prêt pour la construction
de nouvelles structures, attirant les investisseurs intéressés.
sentences:
- 'La variation des stocks de marchandises représente la différence entre le stock
de marchandises au début et à la fin d''une période. Cela permet d''évaluer si
les stocks ont augmenté ou diminué au cours de cette période.
1. Une boutique a un stock initial de 100 t-shirts et un stock final de 80 t-shirts.
La variation est de -20 t-shirts.
2. Un supermarché commence avec 500 paquets de pâtes et finit avec 600. La variation
est de +100 paquets.
3. Un magasin de chaussures a 200 paires au début et 250 à la fin. La variation
est de +50 paires.
4. Une librairie démarre avec 300 livres et termine avec 250. La variation est
de -50 livres.
5. Une entreprise de décoration a 150 articles au début et 120 à la fin. La variation
est de -30 articles.
6. Un magasin de jouets commence avec 400 jouets et termine avec 500. La variation
est de +100 jouets.
7. Un restaurant a un stock de 200 bouteilles de vin au début et 150 à la fin.
La variation est de -50 bouteilles.
8. Une boulangerie commence avec 1000 pains et termine avec 900. La variation
est de -100 pains.
9. Un magasin de vêtements a 500 articles en stock au début et 550 à la fin. La
variation est de +50 articles.
10. Un garage automobile a 60 pneus au début et 50 à la fin. La variation est
de -10 pneus.
11. Une épicerie a un stock initial de 250 boîtes de conserve et finit avec 300.
La variation est de +50 boîtes.
12. Un magasin de meubles commence avec 80 meubles et termine avec 70. La variation
est de -10 meubles.
13. Une entreprise de cosmétiques débute avec 300 produits et finit avec 400.
La variation est de +100 produits.
14. Un magasin de sport a 100 ballons au début et 90 à la fin. La variation est
de -10 ballons.
15. Une bijouterie commence avec 200 bijoux et termine avec 250. La variation
est de +50 bijoux.'
- 'Les terrains aménagés représentent des parcelles de terrain qui ont été préparées
pour des constructions ou d''autres utilisations. Cela inclut les travaux de nivellement,
de drainage ou d''infrastructure nécessaires avant le début d''un projet.
1. Un terrain de sport prêt à être utilisé après des travaux de nivellement et
de semis de gazon.
2. Un terrain résidentiel sur lequel des routes et des services publics ont été
installés.
3. Un espace commercial où les fondations ont été creusées et les accès routiers
sont en place.
4. Un parc public avec des sentiers et des aires de jeux, prêt à accueillir des
visiteurs.
5. Un terrain industriel qui a été préparé avec des accès pour les camions et
des installations électriques.
6. Une parcelle de terrain agricole où le drainage et les clôtures ont été mis
en place.
7. Un site de construction pour un immeuble d''habitation avec des infrastructures
de base installées.
8. Un terrain réhabilité pour être utilisé comme espace vert après des travaux
de nettoyage.
9. Un lotissement où les routes ont été pavées et les services d''eau et d''électricité
sont disponibles.
10. Un terrain pour un centre communautaire qui a été aménagé avec des parkings
et des accès piétonniers.
11. Une zone de loisirs où des sentiers de randonnée et des aires de pique-nique
ont été aménagés.
12. Un site touristique préparé avec des installations sanitaires et des points
d''accès.
13. Un terrain à bâtir sur lequel les anciens bâtiments ont été démolis et nettoyés.
14. Un espace pour un festival où des infrastructures temporaires comme des scènes
et des stands ont été mises en place.
15. Un terrain de camping où des emplacements ont été définis et des commodités
ont été installées.'
- 'Les terrains nus désignent des parcelles de terre qui ne possèdent aucune construction.
Ils sont évalués en fonction de leur valeur d''acquisition.
1. Un terrain vierge acheté pour construire une maison. 2. Un parcelle de terre
non aménagée destinée à l''agriculture. 3. Un terrain nu en zone industrielle
prêt à accueillir des usines. 4. Une surface de terrain dans une zone résidentielle,
sans aucun bâtiment. 5. Un terrain dans une zone touristique, où aucun bâtiment
n''est encore érigé. 6. Un terrain situé à la périphérie d''une ville, sans construction.
7. Une parcelle de terre achetée pour y installer un centre commercial. 8. Un
terrain en zone rurale, sans aucune infrastructure. 9. Un terrain nu utilisé pour
des activités de loisirs comme le camping. 10. Un terrain à bâtir acheté par un
promoteur immobilier. 11. Un terrain en friche qui n''a jamais été construit.
12. Une terre destinée à la vente, sans aucune construction. 13. Un terrain de
sport non aménagé, comme un champ de football. 14. Un terrain nu dans une réserve
naturelle. 15. Un terrain à l''état brut, prêt à être développé.'
- source_sentence: Un entrepôt de distribution achète des réservoirs de stockage pour
liquides, d'une valeur de 30,000 dirhams, afin de mieux gérer les stocks de produits
chimiques et respecter les normes de sécurité.
sentences:
- 'Ce compte enregistre des installations techniques, matériels et outillages qui
ne sont pas classés dans d''autres catégories spécifiques.
1. Systèmes de chauffage et de climatisation dans un bâtiment.
2. Équipements de sécurité incendie comme les alarmes et les extincteurs.
3. Machines à café dans une salle de repos d''entreprise.
4. Systèmes de ventilation dans un atelier.
5. Éclairage industriel dans une usine.
6. Réservoirs de stockage pour liquides dans un entrepôt.
7. Équipements de laboratoire pour des tests scientifiques.
8. Outils de jardinage pour l''entretien des espaces verts.
9. Appareils de nettoyage industriel comme des nettoyeurs haute pression.
10. Équipements de télécommunication dans un bureau.
11. Installations de plomberie dans un bâtiment commercial.
12. Systèmes de contrôle d''accès pour sécurité des locaux.
13. Équipements de montage pour la production en usine.
14. Matériel d''impression pour les services de reprographie.
15. Outils de maintenance pour les réparations d''équipement.'
- 'La variation des stocks de marchandises représente la différence entre le stock
de marchandises au début et à la fin d''une période. Cela permet d''évaluer si
les stocks ont augmenté ou diminué au cours de cette période.
1. Une boutique a un stock initial de 100 t-shirts et un stock final de 80 t-shirts.
La variation est de -20 t-shirts.
2. Un supermarché commence avec 500 paquets de pâtes et finit avec 600. La variation
est de +100 paquets.
3. Un magasin de chaussures a 200 paires au début et 250 à la fin. La variation
est de +50 paires.
4. Une librairie démarre avec 300 livres et termine avec 250. La variation est
de -50 livres.
5. Une entreprise de décoration a 150 articles au début et 120 à la fin. La variation
est de -30 articles.
6. Un magasin de jouets commence avec 400 jouets et termine avec 500. La variation
est de +100 jouets.
7. Un restaurant a un stock de 200 bouteilles de vin au début et 150 à la fin.
La variation est de -50 bouteilles.
8. Une boulangerie commence avec 1000 pains et termine avec 900. La variation
est de -100 pains.
9. Un magasin de vêtements a 500 articles en stock au début et 550 à la fin. La
variation est de +50 articles.
10. Un garage automobile a 60 pneus au début et 50 à la fin. La variation est
de -10 pneus.
11. Une épicerie a un stock initial de 250 boîtes de conserve et finit avec 300.
La variation est de +50 boîtes.
12. Un magasin de meubles commence avec 80 meubles et termine avec 70. La variation
est de -10 meubles.
13. Une entreprise de cosmétiques débute avec 300 produits et finit avec 400.
La variation est de +100 produits.
14. Un magasin de sport a 100 ballons au début et 90 à la fin. La variation est
de -10 ballons.
15. Une bijouterie commence avec 200 bijoux et termine avec 250. La variation
est de +50 bijoux.'
- 'Les redevances pour brevets, marques et droits similaires sont des paiements
effectués par une entreprise pour utiliser des inventions, des marques ou d''autres
droits qui ne lui appartiennent pas. Cela inclut également les frais pour les
mises à jour de logiciels nécessaires à l''exploitation de l''entreprise.
1. Une entreprise de technologie paie des redevances pour utiliser un logiciel
protégé par un brevet. 2. Une marque de vêtements verse des redevances à un designer
pour l''utilisation de son logo. 3. Un fabricant de médicaments paie des droits
pour exploiter un brevet sur un nouveau traitement. 4. Une société de production
utilise une musique sous licence et paie des redevances à l''artiste. 5. Une entreprise
de jeux vidéo achète des droits pour utiliser un personnage emblématique d''un
film. 6. Un restaurant utilise une recette protégée et verse des frais au créateur
de celle-ci. 7. Un éditeur de livres paie des redevances pour utiliser une œuvre
protégée dans une anthologie. 8. Une société de publicité utilise une image protégée
et paie des droits au photographe. 9. Une compagnie de télécommunications paie
des redevances pour utiliser une technologie brevetée d''un concurrent. 10. Un
développeur d''applications paie pour intégrer une API protégée dans son logiciel.
11. Une entreprise de cosmétiques verse des redevances pour utiliser une formule
de produit brevetée. 12. Un producteur de films paie pour les droits d''adaptation
d''un roman à succès. 13. Une start-up utilise un logo d''une autre entreprise
sous licence et paie des frais en conséquence. 14. Un distributeur de jeux de
société verse des redevances pour utiliser un jeu protégé. 15. Un constructeur
automobile paie des droits pour utiliser un design de voiture protégé.'
- source_sentence: Un espace pour un marché hebdomadaire a été préparé avec des allées
et des installations pour les vendeurs, rendant le terrain prêt à accueillir des
commerçants et des visiteurs chaque semaine.
sentences:
- 'La variation des stocks de matières et fournitures représente la différence entre
le stock de départ et le stock de fin d''un exercice comptable. Elle permet de
mesurer l''augmentation ou la diminution des matières et fournitures utilisées
durant cette période.
1. Une entreprise commence l''année avec 1000 unités de matières premières et
finit avec 800, indiquant une diminution de 200 unités. 2. Un restaurant débute
avec 150 kg de légumes et termine avec 200 kg, montrant une augmentation de 50
kg. 3. Une usine de textile commence avec 300 mètres de tissu et finit avec 150
mètres, ce qui représente une diminution de 150 mètres. 4. Un magasin de bricolage
commence avec 500 rouleaux de papier peint et termine l''année avec 600, soit
une augmentation de 100 rouleaux. 5. Une société de construction débute avec 2000
clous et termine avec 1500, indiquant une diminution de 500 clous. 6. Un distributeur
de fournitures de bureau commence avec 300 paquets de papier et finit avec 350,
ce qui représente une augmentation de 50 paquets. 7. Un fabricant d''emballages
débute avec 1000 boîtes et finit avec 900, indiquant une diminution de 100 boîtes.
8. Une imprimerie commence l''année avec 2500 feuilles de papier et finit avec
3000 feuilles, montrant une augmentation de 500 feuilles. 9. Un atelier de fabrication
de meubles commence avec 800 planches de bois et termine avec 600, représentant
une diminution de 200 planches. 10. Une entreprise de produits électroniques débute
avec 700 composants et finit avec 800, indiquant une augmentation de 100 composants.
11. Un laboratoire commence avec 50 flacons de produits chimiques et termine avec
40, ce qui représente une diminution de 10 flacons. 12. Une société de nettoyage
commence avec 200 litres de produits et finit avec 250 litres, montrant une augmentation
de 50 litres. 13. Une pépinière débute avec 300 plants et termine avec 250, indiquant
une diminution de 50 plants. 14. Un fleuriste commence l''année avec 100 bouquets
de fleurs et termine avec 120, représentant une augmentation de 20 bouquets. 15.
Une brasserie débute avec 2000 litres de bière en stock et termine avec 1800 litres,
indiquant une diminution de 200 litres.'
- 'Les rabais, remises et ristournes sont des réductions accordées sur le prix d''achat
de marchandises, permettant d''économiser de l''argent lors de l''achat.
1. Un magasin offre un rabais de 20% sur une paire de chaussures à 100€, donc
le client paie 80€. 2. Lors d''une promotion, un livre coûtant 15€ bénéficie d''une
remise de 3€, le client le paie 12€. 3. Un fournisseur accorde une ristourne de
5% sur une commande de 1 000€, ce qui réduit le coût à 950€. 4. Un supermarché
applique une remise de 10% sur un panier de courses de 50€, le total s''élève
à 45€. 5. Un client fidèle reçoit un rabais de 10€ sur son prochain achat après
avoir dépensé 100€ dans une boutique. 6. Une entreprise achète des fournitures
de bureau et reçoit un rabais de 15% pour une commande supérieure à 200€. 7. Un
client achète une télévision à 800€ avec une remise de 100€, le prix final est
de 700€. 8. En fin de saison, un magasin de vêtements propose des remises allant
jusqu''à 50% sur les articles non vendus. 9. Un restaurant offre une remise de
20% sur le total de l''addition pour les groupes de plus de 10 personnes. 10.
Lors d''un salon, une entreprise accorde un rabais de 30% sur ses produits aux
clients qui s''inscrivent à sa newsletter. 11. Une boutique en ligne propose une
ristourne de 5€ sur une commande de 50€ ou plus. 12. Un grossiste offre une remise
de 10% aux clients qui paient comptant. 13. Un distributeur accorde un rabais
de 15% sur les produits en promotion pour attirer plus de clients. 14. Pendant
les soldes, un article à 200€ peut bénéficier d''une réduction de 40%, le vendant
à 160€. 15. Un club de loisirs offre une remise de 25% pour les nouveaux membres
sur leur première inscription.'
- 'Les terrains aménagés représentent des parcelles de terrain qui ont été préparées
pour des constructions ou d''autres utilisations. Cela inclut les travaux de nivellement,
de drainage ou d''infrastructure nécessaires avant le début d''un projet.
1. Un terrain de sport prêt à être utilisé après des travaux de nivellement et
de semis de gazon.
2. Un terrain résidentiel sur lequel des routes et des services publics ont été
installés.
3. Un espace commercial où les fondations ont été creusées et les accès routiers
sont en place.
4. Un parc public avec des sentiers et des aires de jeux, prêt à accueillir des
visiteurs.
5. Un terrain industriel qui a été préparé avec des accès pour les camions et
des installations électriques.
6. Une parcelle de terrain agricole où le drainage et les clôtures ont été mis
en place.
7. Un site de construction pour un immeuble d''habitation avec des infrastructures
de base installées.
8. Un terrain réhabilité pour être utilisé comme espace vert après des travaux
de nettoyage.
9. Un lotissement où les routes ont été pavées et les services d''eau et d''électricité
sont disponibles.
10. Un terrain pour un centre communautaire qui a été aménagé avec des parkings
et des accès piétonniers.
11. Une zone de loisirs où des sentiers de randonnée et des aires de pique-nique
ont été aménagés.
12. Un site touristique préparé avec des installations sanitaires et des points
d''accès.
13. Un terrain à bâtir sur lequel les anciens bâtiments ont été démolis et nettoyés.
14. Un espace pour un festival où des infrastructures temporaires comme des scènes
et des stands ont été mises en place.
15. Un terrain de camping où des emplacements ont été définis et des commodités
ont été installées.'
- source_sentence: Une société de téléphonie mobile, réalisant que ses anciens modèles
de téléphones ne se vendent plus, décide de provisionner 500 000 dirhams sur un
total de 3 millions de dirhams pour ces modèles obsolètes.
sentences:
- 'Les autres terrains désignent des parcelles de terrain qui ne sont pas classées
dans les catégories spécifiques mentionnées précédemment.
1. Un terrain agricole non cultivé. 2. Une parcelle de forêt. 3. Un terrain vacant
en milieu urbain. 4. Un terrain destiné à un futur développement immobilier. 5.
Un terrain de loisir comme un parc public. 6. Un terrain industriel non utilisé.
7. Un terrain de stationnement. 8. Un terrain sur lequel se trouve un ancien bâtiment
démoli. 9. Un terrain situé en zone inondable. 10. Un terrain attribué à des projets
communautaires. 11. Un terrain utilisé pour des événements temporaires (foires,
festivals). 12. Un terrain de camping. 13. Un terrain de golf. 14. Un terrain
en friche. 15. Un terrain de sport (stade, terrain de basket).'
- 'Le compte de provisions pour dépréciation des immobilisations enregistre les
pertes de valeur potentielles des biens durables de l''entreprise, qu''ils soient
matériels (comme des machines) ou immatériels (comme des logiciels).
1. Une entreprise constate que l''ordinateur utilisé depuis plusieurs années perd
de sa valeur et crée une provision pour cette dépréciation. 2. Une société immobilière
doit ajuster la valeur de ses bâtiments en raison d''une baisse du marché immobilier.
3. Un studio de design évalue la perte de valeur de ses équipements créatifs après
plusieurs années d''utilisation. 4. Une entreprise de transport met une provision
pour la dépréciation de ses camions vieillissants. 5. Un éditeur de logiciels
ajuste la valeur de sa propriété intellectuelle en raison de l''émergence de nouvelles
technologies. 6. Un constructeur automobile constate que certains modèles ne se
vendent plus bien et prépare une provision pour leur dépréciation. 7. Un restaurant
ajuste la valeur de son mobilier ancien qui a perdu de son attrait. 8. Une société
de production audiovisuelle prend en compte la dépréciation de ses équipements
de tournage. 9. Un cabinet médical observe que son matériel médical devient obsolète
et crée une provision en conséquence. 10. Une entreprise de construction ajuste
la valeur de ses machines après un certain temps d''utilisation. 11. Un musée
doit établir une provision pour la dépréciation de ses œuvres d''art moins prisées.
12. Une société de télécommunications évalue la baisse de valeur de ses antennes
anciennes. 13. Un club de sport met à jour la valeur de ses installations vieilles
de plusieurs décennies. 14. Un opérateur de location de voitures doit créer une
provision pour la dépréciation de son parc automobile. 15. Une entreprise de nettoyage
évalue la perte de valeur de ses équipements de nettoyage avec le temps.'
- 'Le matériel de transport désigne tous les véhicules et équipements utilisés pour
déplacer des personnes ou des marchandises, que ce soit par voie terrestre, aérienne
ou maritime. Cela inclut les moyens de transport affectés au tourisme ou à l''usage
du personnel d''une entreprise.
1. Un bus utilisé pour transporter des employés au travail. 2. Un camion de livraison
pour acheminer des marchandises. 3. Une voiture de société mise à disposition
d''un salarié. 4. Un bateau de croisière pour le tourisme. 5. Un avion de ligne
pour le transport de passagers. 6. Un train utilisé pour le transport de marchandises.
7. Un vélo de fonction pour les déplacements professionnels. 8. Un fourgon utilisé
pour des services de dépannage. 9. Un hélicoptère pour des missions d''urgence
ou de transport de personnes. 10. Un tramway utilisé pour les transports en commun.
11. Un ferry reliant deux rives pour le transport de véhicules. 12. Un autocar
pour des excursions touristiques. 13. Un taxi pour le transport de personnes.
14. Un véhicule utilitaire léger (VUL) pour des travaux sur site. 15. Un scooter
utilisé pour des livraisons rapides.'
datasets:
- Noureddinesa/Invoices_embedding_3
pipeline_tag: sentence-similarity
library_name: sentence-transformers
metrics:
- cosine_accuracy
model-index:
- name: SentenceTransformer based on nomic-ai/nomic-embed-text-v1.5
results:
- task:
type: triplet
name: Triplet
dataset:
name: all nli test
type: all-nli-test
metrics:
- type: cosine_accuracy
value: 1.0
name: Cosine Accuracy
---
# SentenceTransformer based on nomic-ai/nomic-embed-text-v1.5
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [nomic-ai/nomic-embed-text-v1.5](https://huggingface.co/nomic-ai/nomic-embed-text-v1.5) on the [invoices_embedding_3](https://huggingface.co/datasets/Noureddinesa/Invoices_embedding_3) dataset. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [nomic-ai/nomic-embed-text-v1.5](https://huggingface.co/nomic-ai/nomic-embed-text-v1.5) <!-- at revision d802ae16c9caed4d197895d27c6d529434cd8c6d -->
- **Maximum Sequence Length:** 8192 tokens
- **Output Dimensionality:** 768 dimensions
- **Similarity Function:** Cosine Similarity
- **Training Dataset:**
- [invoices_embedding_3](https://huggingface.co/datasets/Noureddinesa/Invoices_embedding_3)
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 8192, 'do_lower_case': False}) with Transformer model: NomicBertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("Noureddinesa/Invoices_NomicV1.5_2")
# Run inference
sentences = [
'Une société de téléphonie mobile, réalisant que ses anciens modèles de téléphones ne se vendent plus, décide de provisionner 500 000 dirhams sur un total de 3 millions de dirhams pour ces modèles obsolètes.',
"Le compte de provisions pour dépréciation des immobilisations enregistre les pertes de valeur potentielles des biens durables de l'entreprise, qu'ils soient matériels (comme des machines) ou immatériels (comme des logiciels).\n\n1. Une entreprise constate que l'ordinateur utilisé depuis plusieurs années perd de sa valeur et crée une provision pour cette dépréciation. 2. Une société immobilière doit ajuster la valeur de ses bâtiments en raison d'une baisse du marché immobilier. 3. Un studio de design évalue la perte de valeur de ses équipements créatifs après plusieurs années d'utilisation. 4. Une entreprise de transport met une provision pour la dépréciation de ses camions vieillissants. 5. Un éditeur de logiciels ajuste la valeur de sa propriété intellectuelle en raison de l'émergence de nouvelles technologies. 6. Un constructeur automobile constate que certains modèles ne se vendent plus bien et prépare une provision pour leur dépréciation. 7. Un restaurant ajuste la valeur de son mobilier ancien qui a perdu de son attrait. 8. Une société de production audiovisuelle prend en compte la dépréciation de ses équipements de tournage. 9. Un cabinet médical observe que son matériel médical devient obsolète et crée une provision en conséquence. 10. Une entreprise de construction ajuste la valeur de ses machines après un certain temps d'utilisation. 11. Un musée doit établir une provision pour la dépréciation de ses œuvres d'art moins prisées. 12. Une société de télécommunications évalue la baisse de valeur de ses antennes anciennes. 13. Un club de sport met à jour la valeur de ses installations vieilles de plusieurs décennies. 14. Un opérateur de location de voitures doit créer une provision pour la dépréciation de son parc automobile. 15. Une entreprise de nettoyage évalue la perte de valeur de ses équipements de nettoyage avec le temps.",
"Le matériel de transport désigne tous les véhicules et équipements utilisés pour déplacer des personnes ou des marchandises, que ce soit par voie terrestre, aérienne ou maritime. Cela inclut les moyens de transport affectés au tourisme ou à l'usage du personnel d'une entreprise.\n\n1. Un bus utilisé pour transporter des employés au travail. 2. Un camion de livraison pour acheminer des marchandises. 3. Une voiture de société mise à disposition d'un salarié. 4. Un bateau de croisière pour le tourisme. 5. Un avion de ligne pour le transport de passagers. 6. Un train utilisé pour le transport de marchandises. 7. Un vélo de fonction pour les déplacements professionnels. 8. Un fourgon utilisé pour des services de dépannage. 9. Un hélicoptère pour des missions d'urgence ou de transport de personnes. 10. Un tramway utilisé pour les transports en commun. 11. Un ferry reliant deux rives pour le transport de véhicules. 12. Un autocar pour des excursions touristiques. 13. Un taxi pour le transport de personnes. 14. Un véhicule utilitaire léger (VUL) pour des travaux sur site. 15. Un scooter utilisé pour des livraisons rapides.",
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Triplet
* Dataset: `all-nli-test`
* Evaluated with [<code>TripletEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.TripletEvaluator)
| Metric | Value |
|:--------------------|:--------|
| **cosine_accuracy** | **1.0** |
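To run the same kind of check yourself, here is a minimal sketch of a TripletEvaluator call; the triplet strings below are placeholders, not the evaluation data behind the reported 1.0 accuracy.
```python
# Minimal sketch: score the model with TripletEvaluator on placeholder triplets.
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import TripletEvaluator

# The custom_code tag suggests trust_remote_code=True may be needed for NomicBERT.
model = SentenceTransformer("Noureddinesa/Invoices_NomicV1.5_2", trust_remote_code=True)

# Hypothetical triplets in the style of the widget examples above.
anchors = ["Une société paie 5 000 dirhams pour le transport de marchandises importées."]
positives = ["Les transports regroupent les frais liés au déplacement du personnel et au transport des marchandises."]
negatives = ["Les terrains aménagés représentent des parcelles de terrain préparées pour des constructions."]

evaluator = TripletEvaluator(anchors=anchors, positives=positives, negatives=negatives, name="invoices-demo")
print(evaluator(model))  # evaluation scores, e.g. a cosine-accuracy value
```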
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### invoices_embedding_3
* Dataset: [invoices_embedding_3](https://huggingface.co/datasets/Noureddinesa/Invoices_embedding_3) at [16dc23e](https://huggingface.co/datasets/Noureddinesa/Invoices_embedding_3/tree/16dc23eadb0daa82573a6dc1a2c4321fa9bc727e)
* Size: 1,151 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 32 tokens</li><li>mean: 64.89 tokens</li><li>max: 120 tokens</li></ul> | <ul><li>min: 217 tokens</li><li>mean: 417.0 tokens</li><li>max: 648 tokens</li></ul> | <ul><li>min: 217 tokens</li><li>mean: 415.76 tokens</li><li>max: 655 tokens</li></ul> |
* Samples:
| anchor | positive | negative |
|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| <code>Une collectivité locale verse un acompte de 1 000 000 MAD pour un projet de construction de routes, afin de débuter les travaux d'infrastructure. Cet acompte est inscrit dans les comptes comme une avance sur immobilisations corporelles.</code> | <code>Les avances et acomptes sur immobilisations corporelles représentent des paiements anticipés effectués pour des biens durables, comme des équipements ou des bâtiments, avant leur réception.<br><br>1. Paiement d'un acompte pour l'achat d'une machine de production. 2. Versement d'une avance pour la construction d'un nouveau bâtiment. 3. Acompte payé pour un véhicule utilitaire. 4. Avance versée pour des travaux de rénovation d'un local commercial. 5. Paiement anticipé pour l'achat de matériel informatique. 6. Acompte pour une commande de mobilier de bureau. 7. Versement d'une avance pour une installation de panneaux solaires. 8. Paiement d'acompte pour des équipements de sécurité. 9. Avance versée pour la commande de matériel de laboratoire. 10. Acompte pour l'achat de machines agricoles. 11. Paiement anticipé pour des équipements sportifs. 12. Versement d'une avance pour des travaux d'aménagement paysager. 13. Acompte pour l'achat de matériel médical. 14. Paiement d'une avance pour des instal...</code> | <code>Les achats de matières et fournitures consommables concernent l'acquisition de biens qui sont utilisés ou consommés dans le cadre d'activités professionnelles. Cela inclut des produits qui ne sont pas destinés à être revendus mais à soutenir l'exploitation d'une entreprise.<br><br>1. Achat de papier pour imprimante pour le bureau. 2. Achat de produits de nettoyage pour entretenir les locaux. 3. Achat de vis et boulons pour des réparations en atelier. 4. Achat de produits alimentaires pour la cantine d'entreprise. 5. Achat de fournitures médicales pour un cabinet de santé. 6. Achat de matériel de jardinage pour l'entretien d'espaces verts. 7. Achat de matériel informatique (souris, claviers) pour les employés. 8. Achat de peinture pour rafraîchir les bureaux. 9. Achat de vêtements de travail pour les employés. 10. Achat de fournitures scolaires pour une école. 11. Achat de matériel de sécurité (casques, gants) pour un chantier. 12. Achat de récipients pour stocker des produits chimiques. 13. ...</code> |
| <code>Une société de sécurité engage un service de transport pour déplacer ses agents vers un événement spécial, avec des frais de 1 000 dirhams pour le transport aller-retour.</code> | <code>Les transports regroupent les frais liés au déplacement du personnel et au transport des marchandises lors des achats et des ventes. Ces coûts s'ajoutent aux frais d'acquisition d'immobilisations si le transport est lié à leur achat.<br><br>1. Frais de taxi pour des employés se rendant à une réunion. 2. Coût du transport de marchandises depuis un entrepôt jusqu'à un client. 3. Dépenses de livraison pour des produits achetés en gros. 4. Frais de transport pour un salon professionnel. 5. Coût d'envoi d'échantillons à des clients potentiels. 6. Remboursement des frais de transport pour des déplacements professionnels. 7. Paiement pour le transport de matériel lors d'un déménagement de bureau. 8. Coût de livraison d'une commande en ligne. 9. Frais de transport pour des produits retournés par des clients. 10. Dépenses liées au transport de fournitures de bureau. 11. Coût de transport pour des marchandises importées. 12. Remboursement des frais de carburant pour des trajets professionnels. 13. Pai...</code> | <code>Les redevances de crédit-bail sont les paiements effectués par une entreprise pour louer des biens matériels, comme des équipements ou des meubles, via un contrat de leasing. Ce contrat permet à l'entreprise de louer un bien avec la possibilité de l'acheter à la fin de la période de location. Les paiements sont enregistrés comme des charges et peuvent inclure la TVA récupérable.<br><br>1. Une entreprise loue des photocopieurs pour son bureau et paie chaque mois une redevance. 2. Une société de construction prend en location des machines pour un projet et paye des redevances mensuelles. 3. Un restaurant loue du mobilier de salle à manger sous un contrat de leasing. 4. Une clinique loue des équipements médicaux avec une option d'achat à la fin du contrat. 5. Un gymnase loue des appareils de fitness pour une durée déterminée. 6. Une entreprise de transport loue des camions pour ses opérations logistiques. 7. Une école loue des ordinateurs pour ses élèves avec une possibilité d'achat à la fin de...</code> |
| <code>Lors de l'importation de boissons gazeuses, l'entreprise AC doit payer des droits d'accise de 2 000 dirhams, qui seront comptabilisés comme impôts indirects.</code> | <code>Les impôts et taxes indirects sont des prélèvements que l'on paie lors de l'achat de biens ou de services, sans qu'ils soient directement inclus dans le prix. Ils peuvent inclure des droits de douane, des taxes sur la valeur ajoutée (TVA) ou d'autres charges qui s'ajoutent au coût initial.<br><br>1. Lors de l'importation d'un produit, le droit de douane appliqué en plus du prix d'achat. 2. La TVA ajoutée à l'achat d'un vêtement dans un magasin. 3. Les taxes sur les carburants lors du remplissage d'un réservoir de voiture. 4. Les droits d'accise sur l'achat d'alcool ou de tabac dans un commerce. 5. Les frais de transport international qui incluent des taxes de passage. 6. Les tarifs d'importation sur des produits électroniques. 7. Les taxes sur les services de télécommunication comme la téléphonie mobile. 8. Les droits sur les produits alimentaires importés. 9. Les taxes environnementales sur les emballages de produits. 10. Les frais de douane pour l'importation de meubles. 11. Les taxes sur ...</code> | <code>Le mobilier de bureau désigne l'ensemble des meubles utilisés dans un espace de travail, tels que les bureaux, chaises, tables et rangements, qui contribuent à l'organisation et au confort des employés.<br><br>1. Un bureau en bois massif dans un cabinet d'architecte. 2. Des chaises ergonomiques dans une salle de réunion. 3. Un espace de travail avec des tables modulables dans une start-up. 4. Des étagères pour ranger des dossiers dans un bureau administratif. 5. Un comptoir d'accueil dans une entreprise. 6. Des fauteuils confortables dans une salle d'attente. 7. Un bureau debout pour favoriser une meilleure posture. 8. Des meubles de rangement pour les fournitures de bureau. 9. Une table de conférence pour les réunions d'équipe. 10. Un bureau partagé dans un espace de coworking. 11. Des casiers pour les effets personnels des employés. 12. Un meuble TV dans une salle de pause. 13. Des panneaux de séparation pour créer des espaces privés. 14. Des meubles de rangement pour l'équipement informat...</code> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
### Evaluation Dataset
#### invoices_embedding_3
* Dataset: [invoices_embedding_3](https://huggingface.co/datasets/Noureddinesa/Invoices_embedding_3) at [16dc23e](https://huggingface.co/datasets/Noureddinesa/Invoices_embedding_3/tree/16dc23eadb0daa82573a6dc1a2c4321fa9bc727e)
* Size: 164 evaluation samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 164 samples:
| | anchor | positive | negative |
|:--------|:------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 30 tokens</li><li>mean: 64.45 tokens</li><li>max: 123 tokens</li></ul> | <ul><li>min: 217 tokens</li><li>mean: 427.87 tokens</li><li>max: 648 tokens</li></ul> | <ul><li>min: 229 tokens</li><li>mean: 420.05 tokens</li><li>max: 655 tokens</li></ul> |
* Samples:
| anchor | positive | negative |
|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| <code>Une société de télécommunications paie un acompte de 300 000 MAD pour l'achat de nouveaux équipements de réseau. Cet acompte est essentiel pour le développement de l'infrastructure et est comptabilisé comme une avance sur immobilisations corporelles.</code> | <code>Les avances et acomptes sur immobilisations corporelles représentent des paiements anticipés effectués pour des biens durables, comme des équipements ou des bâtiments, avant leur réception.<br><br>1. Paiement d'un acompte pour l'achat d'une machine de production. 2. Versement d'une avance pour la construction d'un nouveau bâtiment. 3. Acompte payé pour un véhicule utilitaire. 4. Avance versée pour des travaux de rénovation d'un local commercial. 5. Paiement anticipé pour l'achat de matériel informatique. 6. Acompte pour une commande de mobilier de bureau. 7. Versement d'une avance pour une installation de panneaux solaires. 8. Paiement d'acompte pour des équipements de sécurité. 9. Avance versée pour la commande de matériel de laboratoire. 10. Acompte pour l'achat de machines agricoles. 11. Paiement anticipé pour des équipements sportifs. 12. Versement d'une avance pour des travaux d'aménagement paysager. 13. Acompte pour l'achat de matériel médical. 14. Paiement d'une avance pour des instal...</code> | <code>Les immobilisations corporelles en cours de matériel de transport représentent les dépenses engagées pour la fabrication ou l'acquisition de véhicules et équipements de transport que l'entreprise utilise pour ses activités. Ce compte reflète les coûts accumulés jusqu'à ce que le matériel soit prêt à être utilisé.<br><br>1. Coûts de fabrication d'un nouveau camion pour la livraison de produits. 2. Frais liés à l'assemblage d'un véhicule utilitaire. 3. Dépenses pour l'achat de pièces détachées pour un bus en construction. 4. Salaires des ouvriers travaillant sur un projet de fabrication de motos. 5. Coûts de recherche et développement pour un nouveau modèle de voiture. 6. Charges de location d'un espace de travail pour le montage de matériel de transport. 7. Dépenses d'outillage nécessaire à la production d'un véhicule. 8. Coûts de transport des matériaux nécessaires à la fabrication d'un véhicule. 9. Dépenses liées à la formation des employés sur un nouveau type de transport. 10. Coûts de cer...</code> |
| <code>La société E a acheté des petits outils nécessaires pour des réparations dans ses locaux, totalisant 600 dirhams, sans gestion de stock, payé par chèque.</code> | <code>Les achats non stockés de matières et de fournitures concernent les biens et services que l'entreprise utilise directement sans les conserver en stock, comme l'eau, l'électricité et d'autres fournitures jugées non nécessaires à stocker.<br><br>1. Achat d'eau pour les besoins d'une cantine d'entreprise. 2. Facture d'électricité pour le fonctionnement des bureaux. 3. Achat de papier et fournitures de bureau pour des projets ponctuels. 4. Achat de services de nettoyage pour les locaux de l'entreprise. 5. Paiement d'un abonnement à un service de cloud pour le stockage de données. 6. Achat de carburant pour les véhicules de l'entreprise. 7. Coût des services de télécommunication pour les employés. 8. Achat de petits outils utilisés lors de réparations, sans gestion de stock. 9. Frais d'entretien d'équipements sans pièces de rechange stockées. 10. Achat de matériel de sécurité pour un événement spécifique. 11. Coût de la publicité sur les réseaux sociaux. 12. Paiement pour des services de conseil ...</code> | <code>Le matériel de bureau désigne l'ensemble des équipements utilisés dans un bureau pour faciliter le travail administratif et organisationnel.<br><br>1. Une photocopieuse utilisée pour reproduire des documents. 2. Un ordinateur personnel pour gérer des fichiers et communiquer par email. 3. Une machine à écrire pour rédiger des lettres. 4. Un scanner pour numériser des documents. 5. Des chaises ergonomiques pour le confort des employés. 6. Un bureau pour travailler. 7. Des fournitures de papeterie comme des stylos et des blocs-notes. 8. Un projecteur pour faire des présentations. 9. Un tableau blanc pour brainstormer des idées. 10. Un fax pour envoyer des documents rapidement. 11. Des classeurs pour organiser les papiers. 12. Un téléphone pour la communication interne et externe. 13. Une imprimante pour produire des copies physiques de documents. 14. Un agenda pour planifier des réunions et des tâches. 15. Des câbles et accessoires pour connecter les appareils électroniques.</code> |
| <code>'Services Juridiques' a payé 12 000 dirhams pour des conseils juridiques avant l'achat d'un local commercial, ajoutant ce montant aux frais d'acquisition qui s'élèvent à 1,2 million de dirhams au total dans les comptes.</code> | <code>Les frais d'acquisition des immobilisations sont les coûts liés à l'achat d'actifs durables, comme les bâtiments, les machines ou les véhicules, incluant les frais de notaire, les commissions et autres dépenses nécessaires pour finaliser l'achat.<br><br>1. Les frais de notaire lors de l'achat d'un bâtiment commercial. 2. Les commissions versées à un agent immobilier pour l'achat d'un terrain. 3. Les honoraires d'un expert pour évaluer une machine avant son achat. 4. Les frais de transport pour livrer un équipement industriel. 5. Les frais d'inscription au registre foncier après l'achat d'un bien immobilier. 6. Les coûts de réparation nécessaires avant de mettre en service un nouvel équipement. 7. Les frais de courtage pour l'acquisition d'actions d'une société. 8. Les taxes de transfert de propriété lors de l'achat d'un véhicule. 9. Les frais de consultation pour des conseils juridiques sur un achat immobilier. 10. Les coûts d'audit pour vérifier la conformité des actifs avant l'acquisition....</code> | <code>Les rabais, remises et ristournes sont des réductions accordées lors de l'achat de biens ou de services. Ils permettent d'obtenir un prix plus bas sur les produits achetés.<br><br>1. Un magasin offre une remise de 20% sur un lot de peinture acheté pour des travaux de rénovation. <br>2. Lors d'une vente promotionnelle, un client reçoit un rabais de 15€ sur un meuble en bois. <br>3. Un fournisseur de matières premières accorde une ristourne de 5% sur les achats dépassant 1000€. <br>4. Un restaurant propose un rabais de 10% sur les commandes à emporter pendant le mois de janvier. <br>5. Une entreprise de vêtements offre une remise de 30% sur les articles de saison. <br>6. Lors d'un salon professionnel, un exposant propose une ristourne de 10% aux entreprises qui commandent plusieurs produits. <br>7. Un grossiste accorde une remise de 50€ sur l'achat de 1000€ de produits alimentaires. <br>8. Un client régulier reçoit un rabais de fidélité de 15% sur ses prochaines commandes. <br>9. Une librairie offre une risto...</code> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
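For reference, this loss can be instantiated in sentence-transformers roughly as follows — a sketch only, using a placeholder base checkpoint rather than the one actually trained here; the identical configuration applies to both the training and evaluation datasets above:

```python
from sentence_transformers import SentenceTransformer, losses, util

# Placeholder base checkpoint for illustration; the actual model differs.
model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")

# Mirrors the JSON configuration above: scale=20.0, cosine similarity.
loss = losses.MultipleNegativesRankingLoss(
    model,
    scale=20.0,
    similarity_fct=util.cos_sim,
)
```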
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: steps
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `warmup_ratio`: 0.1
- `fp16`: True
- `batch_sampler`: no_duplicates
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 5e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 3
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: True
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: False
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: proportional
</details>
### Training Logs
| Epoch | Step | Training Loss | Validation Loss | all-nli-test_cosine_accuracy |
|:------:|:----:|:-------------:|:---------------:|:----------------------------:|
| 1.375 | 100 | 0.5114 | 0.2881 | - |
| 2.75 | 200 | 0.0156 | 0.2060 | - |
| 2.9722 | 216 | - | - | 1.0 |
### Framework Versions
- Python: 3.10.12
- Sentence Transformers: 3.3.1
- Transformers: 4.46.3
- PyTorch: 2.5.1+cu121
- Accelerate: 1.1.1
- Datasets: 3.2.0
- Tokenizers: 0.20.3
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
myselfrew/llama3_8b_self_gen_data_n40_filter_2e6_bz32_pack8192_2epoch
|
myselfrew
| 2024-12-14T17:48:30Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-12-14T17:45:44Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
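The card leaves this section blank; the following is only a generic causal-LM sketch inferred from the repo's `llama` / `text-generation` tags, not a documented interface:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Generic usage sketch; nothing below beyond the repo ID comes from the card.
model_id = "myselfrew/llama3_8b_self_gen_data_n40_filter_2e6_bz32_pack8192_2epoch"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer("The capital of France is", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```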
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
mradermacher/L3.1-70B-Luminea-i1-GGUF
|
mradermacher
| 2024-12-14T17:46:43Z | 42 | 0 |
transformers
|
[
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:Arkhiveus/L3.1-70B-Luminea",
"base_model:quantized:Arkhiveus/L3.1-70B-Luminea",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2024-12-14T08:57:03Z |
---
base_model: Arkhiveus/L3.1-70B-Luminea
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/Arkhiveus/L3.1-70B-Luminea
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/L3.1-70B-Luminea-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
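For this repo specifically, the only multi-part file is the i1-Q6_K quant listed below, and rejoining it is plain byte concatenation — a sketch:

```bash
# Rejoin the two-part i1-Q6_K download into a single GGUF file.
cat L3.1-70B-Luminea.i1-Q6_K.gguf.part1of2 \
    L3.1-70B-Luminea.i1-Q6_K.gguf.part2of2 \
    > L3.1-70B-Luminea.i1-Q6_K.gguf
```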
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/L3.1-70B-Luminea-i1-GGUF/resolve/main/L3.1-70B-Luminea.i1-IQ1_S.gguf) | i1-IQ1_S | 15.4 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/L3.1-70B-Luminea-i1-GGUF/resolve/main/L3.1-70B-Luminea.i1-IQ1_M.gguf) | i1-IQ1_M | 16.9 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/L3.1-70B-Luminea-i1-GGUF/resolve/main/L3.1-70B-Luminea.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 19.2 | |
| [GGUF](https://huggingface.co/mradermacher/L3.1-70B-Luminea-i1-GGUF/resolve/main/L3.1-70B-Luminea.i1-IQ2_XS.gguf) | i1-IQ2_XS | 21.2 | |
| [GGUF](https://huggingface.co/mradermacher/L3.1-70B-Luminea-i1-GGUF/resolve/main/L3.1-70B-Luminea.i1-IQ2_S.gguf) | i1-IQ2_S | 22.3 | |
| [GGUF](https://huggingface.co/mradermacher/L3.1-70B-Luminea-i1-GGUF/resolve/main/L3.1-70B-Luminea.i1-IQ2_M.gguf) | i1-IQ2_M | 24.2 | |
| [GGUF](https://huggingface.co/mradermacher/L3.1-70B-Luminea-i1-GGUF/resolve/main/L3.1-70B-Luminea.i1-Q2_K_S.gguf) | i1-Q2_K_S | 24.6 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/L3.1-70B-Luminea-i1-GGUF/resolve/main/L3.1-70B-Luminea.i1-Q2_K.gguf) | i1-Q2_K | 26.5 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/L3.1-70B-Luminea-i1-GGUF/resolve/main/L3.1-70B-Luminea.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 27.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/L3.1-70B-Luminea-i1-GGUF/resolve/main/L3.1-70B-Luminea.i1-IQ3_XS.gguf) | i1-IQ3_XS | 29.4 | |
| [GGUF](https://huggingface.co/mradermacher/L3.1-70B-Luminea-i1-GGUF/resolve/main/L3.1-70B-Luminea.i1-IQ3_S.gguf) | i1-IQ3_S | 31.0 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/L3.1-70B-Luminea-i1-GGUF/resolve/main/L3.1-70B-Luminea.i1-Q3_K_S.gguf) | i1-Q3_K_S | 31.0 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/L3.1-70B-Luminea-i1-GGUF/resolve/main/L3.1-70B-Luminea.i1-IQ3_M.gguf) | i1-IQ3_M | 32.0 | |
| [GGUF](https://huggingface.co/mradermacher/L3.1-70B-Luminea-i1-GGUF/resolve/main/L3.1-70B-Luminea.i1-Q3_K_M.gguf) | i1-Q3_K_M | 34.4 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/L3.1-70B-Luminea-i1-GGUF/resolve/main/L3.1-70B-Luminea.i1-Q3_K_L.gguf) | i1-Q3_K_L | 37.2 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/L3.1-70B-Luminea-i1-GGUF/resolve/main/L3.1-70B-Luminea.i1-IQ4_XS.gguf) | i1-IQ4_XS | 38.0 | |
| [GGUF](https://huggingface.co/mradermacher/L3.1-70B-Luminea-i1-GGUF/resolve/main/L3.1-70B-Luminea.i1-Q4_0.gguf) | i1-Q4_0 | 40.2 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/L3.1-70B-Luminea-i1-GGUF/resolve/main/L3.1-70B-Luminea.i1-Q4_K_S.gguf) | i1-Q4_K_S | 40.4 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/L3.1-70B-Luminea-i1-GGUF/resolve/main/L3.1-70B-Luminea.i1-Q4_K_M.gguf) | i1-Q4_K_M | 42.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/L3.1-70B-Luminea-i1-GGUF/resolve/main/L3.1-70B-Luminea.i1-Q5_K_S.gguf) | i1-Q5_K_S | 48.8 | |
| [GGUF](https://huggingface.co/mradermacher/L3.1-70B-Luminea-i1-GGUF/resolve/main/L3.1-70B-Luminea.i1-Q5_K_M.gguf) | i1-Q5_K_M | 50.0 | |
| [PART 1](https://huggingface.co/mradermacher/L3.1-70B-Luminea-i1-GGUF/resolve/main/L3.1-70B-Luminea.i1-Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/L3.1-70B-Luminea-i1-GGUF/resolve/main/L3.1-70B-Luminea.i1-Q6_K.gguf.part2of2) | i1-Q6_K | 58.0 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
spoorthij27/t5-small-finetuned-cnn-news
|
spoorthij27
| 2024-12-14T17:44:11Z | 118 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"summarization",
"generated_from_trainer",
"base_model:google-t5/t5-small",
"base_model:finetune:google-t5/t5-small",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
summarization
| 2024-12-14T15:48:53Z |
---
library_name: transformers
license: apache-2.0
base_model: t5-small
tags:
- summarization
- generated_from_trainer
model-index:
- name: t5-small-finetuned-cnn-news
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-cnn-news
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.2973
## Model description
More information needed
## Intended uses & limitations
More information needed
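Until the card is filled in, here is a minimal usage sketch, assuming the standard `t5-small` task-prefix configuration ("summarize: ") is unchanged in this fine-tune:

```python
from transformers import pipeline

# The pipeline applies t5-small's "summarize: " prefix automatically
# via the model config (assumed unchanged here).
summarizer = pipeline(
    "summarization", model="spoorthij27/t5-small-finetuned-cnn-news"
)
article = "Replace this placeholder string with the news article to summarize."
print(summarizer(article, max_length=80, min_length=20)[0]["summary_text"])
```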
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.00056
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08 (no additional optimizer arguments)
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.5503 | 1.0 | 718 | 2.2792 |
| 1.7482 | 2.0 | 1436 | 2.2259 |
| 1.5977 | 3.0 | 2154 | 2.2442 |
| 1.4859 | 4.0 | 2872 | 2.2820 |
| 1.4016 | 5.0 | 3590 | 2.2973 |
### Framework versions
- Transformers 4.46.3
- Pytorch 2.5.1+cu121
- Datasets 3.2.0
- Tokenizers 0.20.3
|
CheeLi03/whisper-base-en-puct-5k
|
CheeLi03
| 2024-12-14T17:42:42Z | 78 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"hf-asr-leaderboard",
"generated_from_trainer",
"en",
"dataset:fleurs",
"base_model:openai/whisper-base",
"base_model:finetune:openai/whisper-base",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2024-12-14T15:17:02Z |
---
base_model: openai/whisper-base
datasets:
- fleurs
language:
- en
library_name: transformers
license: apache-2.0
metrics:
- wer
tags:
- hf-asr-leaderboard
- generated_from_trainer
model-index:
- name: Whisper Base English Punctuation 5k - Chee Li
results:
- task:
type: automatic-speech-recognition
name: Automatic Speech Recognition
dataset:
name: Google Fleurs
type: fleurs
config: en_us
split: None
args: 'config: en split: test'
metrics:
- type: wer
value: 19.829988851727983
name: Wer
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Base English Punctuation 5k - Chee Li
This model is a fine-tuned version of [openai/whisper-base](https://huggingface.co/openai/whisper-base) on the Google Fleurs dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6360
- Wer: 19.8300
## Model description
More information needed
## Intended uses & limitations
More information needed
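A minimal usage sketch, assuming the standard Whisper processor files are pushed alongside the weights:

```python
from transformers import pipeline

# "sample.wav" is a placeholder audio file, not part of this repo.
asr = pipeline(
    "automatic-speech-recognition",
    model="CheeLi03/whisper-base-en-puct-5k",
)
print(asr("sample.wav")["text"])
```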
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08 (no additional optimizer arguments)
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 5000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-------:|:----:|:---------------:|:-------:|
| 0.0204 | 5.3191 | 1000 | 0.4849 | 18.1368 |
| 0.0018 | 10.6383 | 2000 | 0.5678 | 18.4225 |
| 0.0009 | 15.9574 | 3000 | 0.6035 | 19.2795 |
| 0.0006 | 21.2766 | 4000 | 0.6268 | 19.6210 |
| 0.0005 | 26.5957 | 5000 | 0.6360 | 19.8300 |
### Framework versions
- Transformers 4.46.2
- Pytorch 2.3.1+cu121
- Datasets 2.20.0
- Tokenizers 0.20.3
|
CheeLi03/whisper-tiny-ar-puct-5k
|
CheeLi03
| 2024-12-14T17:39:20Z | 79 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"hf-asr-leaderboard",
"generated_from_trainer",
"ar",
"dataset:fleurs",
"base_model:openai/whisper-base",
"base_model:finetune:openai/whisper-base",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2024-12-14T15:06:48Z |
---
base_model: openai/whisper-base
datasets:
- fleurs
language:
- ar
library_name: transformers
license: apache-2.0
metrics:
- wer
tags:
- hf-asr-leaderboard
- generated_from_trainer
model-index:
- name: Whisper Base Arabic - Chee Li
results:
- task:
type: automatic-speech-recognition
name: Automatic Speech Recognition
dataset:
name: Google Fleurs
type: fleurs
config: ar_eg
split: None
args: 'config: ar split: test'
metrics:
- type: wer
value: 41.818636022982766
name: Wer
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Base Arabic - Chee Li
This model is a fine-tuned version of [openai/whisper-base](https://huggingface.co/openai/whisper-base) on the Google Fleurs dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8130
- Wer: 41.8186
## Model description
More information needed
## Intended uses & limitations
More information needed
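A minimal usage sketch; pinning the language and task avoids Whisper's language auto-detection on short clips (assumed helpful, not verified for this checkpoint):

```python
from transformers import pipeline

# "clip.wav" is a placeholder audio file, not part of this repo.
asr = pipeline(
    "automatic-speech-recognition",
    model="CheeLi03/whisper-tiny-ar-puct-5k",
    generate_kwargs={"language": "arabic", "task": "transcribe"},
)
print(asr("clip.wav")["text"])
```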
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08 (no additional optimizer arguments)
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 5000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-------:|:----:|:---------------:|:-------:|
| 0.1475 | 6.6667 | 1000 | 0.5516 | 41.1441 |
| 0.0072 | 13.3333 | 2000 | 0.6801 | 40.6570 |
| 0.0023 | 20.0 | 3000 | 0.7548 | 40.9443 |
| 0.0013 | 26.6667 | 4000 | 0.7970 | 41.4439 |
| 0.0009 | 33.3333 | 5000 | 0.8130 | 41.8186 |
### Framework versions
- Transformers 4.46.2
- Pytorch 2.3.1+cu121
- Datasets 2.20.0
- Tokenizers 0.20.1
|
xfcxcxcdfdfd/1-bit
|
xfcxcxcdfdfd
| 2024-12-14T17:39:07Z | 9 | 0 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"llama",
"text-generation",
"conversational",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-12-13T15:54:14Z |
---
license: apache-2.0
library_name: transformers
pipeline_tag: text-generation
---
|
ahmedheakl/qwen2.5_1.5b_500k_16kcw_2ep
|
ahmedheakl
| 2024-12-14T17:34:58Z | 171 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"llama-factory",
"full",
"generated_from_trainer",
"conversational",
"base_model:Qwen/Qwen2.5-Coder-1.5B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-Coder-1.5B-Instruct",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-12-13T07:41:49Z |
---
library_name: transformers
license: apache-2.0
base_model: Qwen/Qwen2.5-Coder-1.5B-Instruct
tags:
- llama-factory
- full
- generated_from_trainer
model-index:
- name: qwen2.5_1.5b_500k_16kcw_2ep
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# qwen2.5_1.5b_500k_16kcw_2ep
This model is a fine-tuned version of [Qwen/Qwen2.5-Coder-1.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-Coder-1.5B-Instruct) on the anghabench dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0008
## Model description
More information needed
## Intended uses & limitations
More information needed
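A minimal usage sketch, assuming the fine-tune keeps the base Qwen2.5-Coder chat template; the exact prompt format used for the anghabench data is not documented here:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ahmedheakl/qwen2.5_1.5b_500k_16kcw_2ep"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Illustrative prompt only; the task-specific input format may differ.
messages = [{"role": "user", "content": "Write a C function that reverses a string."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```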
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- total_eval_batch_size: 4
- optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08 (no additional optimizer arguments)
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 2.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:------:|:---------------:|
| 0.0013 | 0.9981 | 61000 | 0.0013 |
| 0.0008 | 1.9962 | 122000 | 0.0008 |
### Framework versions
- Transformers 4.46.1
- Pytorch 2.5.1+cu124
- Datasets 3.1.0
- Tokenizers 0.20.3
|
Haesteining/Phi314Bv4
|
Haesteining
| 2024-12-14T17:32:45Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"phi3",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-12-14T17:28:36Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
ayymen/Coqui-TTS-Vits-Multispeaker
|
ayymen
| 2024-12-14T17:27:40Z | 16 | 1 | null |
[
"tensorboard",
"text-to-speech",
"zgh",
"shi",
"rif",
"region:us"
] |
text-to-speech
| 2024-12-08T01:43:38Z |
---
language:
- zgh
- shi
- rif
pipeline_tag: text-to-speech
---
|
mradermacher/NorskGPT-Llama-7B-v0.1-GGUF
|
mradermacher
| 2024-12-14T17:24:57Z | 119 | 0 |
transformers
|
[
"transformers",
"gguf",
"mistral",
"instruct",
"finetune",
"no",
"base_model:bineric/NorskGPT-Llama-7B-v0.1",
"base_model:quantized:bineric/NorskGPT-Llama-7B-v0.1",
"license:cc-by-nc-sa-4.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-12-14T16:34:33Z |
---
base_model: bineric/NorskGPT-Llama-7B-v0.1
language:
- no
library_name: transformers
license: cc-by-nc-sa-4.0
quantized_by: mradermacher
tags:
- mistral
- instruct
- finetune
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/bineric/NorskGPT-Llama-7B-v0.1
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/NorskGPT-Llama-7B-v0.1-GGUF/resolve/main/NorskGPT-Llama-7B-v0.1.Q2_K.gguf) | Q2_K | 2.6 | |
| [GGUF](https://huggingface.co/mradermacher/NorskGPT-Llama-7B-v0.1-GGUF/resolve/main/NorskGPT-Llama-7B-v0.1.Q3_K_S.gguf) | Q3_K_S | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/NorskGPT-Llama-7B-v0.1-GGUF/resolve/main/NorskGPT-Llama-7B-v0.1.Q3_K_M.gguf) | Q3_K_M | 3.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/NorskGPT-Llama-7B-v0.1-GGUF/resolve/main/NorskGPT-Llama-7B-v0.1.Q3_K_L.gguf) | Q3_K_L | 3.7 | |
| [GGUF](https://huggingface.co/mradermacher/NorskGPT-Llama-7B-v0.1-GGUF/resolve/main/NorskGPT-Llama-7B-v0.1.IQ4_XS.gguf) | IQ4_XS | 3.7 | |
| [GGUF](https://huggingface.co/mradermacher/NorskGPT-Llama-7B-v0.1-GGUF/resolve/main/NorskGPT-Llama-7B-v0.1.Q4_K_S.gguf) | Q4_K_S | 4.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/NorskGPT-Llama-7B-v0.1-GGUF/resolve/main/NorskGPT-Llama-7B-v0.1.Q4_K_M.gguf) | Q4_K_M | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/NorskGPT-Llama-7B-v0.1-GGUF/resolve/main/NorskGPT-Llama-7B-v0.1.Q5_K_S.gguf) | Q5_K_S | 4.8 | |
| [GGUF](https://huggingface.co/mradermacher/NorskGPT-Llama-7B-v0.1-GGUF/resolve/main/NorskGPT-Llama-7B-v0.1.Q5_K_M.gguf) | Q5_K_M | 4.9 | |
| [GGUF](https://huggingface.co/mradermacher/NorskGPT-Llama-7B-v0.1-GGUF/resolve/main/NorskGPT-Llama-7B-v0.1.Q6_K.gguf) | Q6_K | 5.6 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/NorskGPT-Llama-7B-v0.1-GGUF/resolve/main/NorskGPT-Llama-7B-v0.1.Q8_0.gguf) | Q8_0 | 7.3 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/NorskGPT-Llama-7B-v0.1-GGUF/resolve/main/NorskGPT-Llama-7B-v0.1.f16.gguf) | f16 | 13.6 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
zelk12/MT1-Gen4-gemma-2-9B-Q6_K-GGUF
|
zelk12
| 2024-12-14T17:13:34Z | 9 | 0 |
transformers
|
[
"transformers",
"gguf",
"mergekit",
"merge",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"base_model:zelk12/MT1-Gen4-gemma-2-9B",
"base_model:quantized:zelk12/MT1-Gen4-gemma-2-9B",
"license:gemma",
"endpoints_compatible",
"region:us",
"conversational"
] |
text-generation
| 2024-12-14T17:13:03Z |
---
base_model: zelk12/MT1-Gen4-gemma-2-9B
library_name: transformers
tags:
- mergekit
- merge
- llama-cpp
- gguf-my-repo
license: gemma
pipeline_tag: text-generation
---
# zelk12/MT1-Gen4-gemma-2-9B-Q6_K-GGUF
This model was converted to GGUF format from [`zelk12/MT1-Gen4-gemma-2-9B`](https://huggingface.co/zelk12/MT1-Gen4-gemma-2-9B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/zelk12/MT1-Gen4-gemma-2-9B) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo zelk12/MT1-Gen4-gemma-2-9B-Q6_K-GGUF --hf-file mt1-gen4-gemma-2-9b-q6_k.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo zelk12/MT1-Gen4-gemma-2-9B-Q6_K-GGUF --hf-file mt1-gen4-gemma-2-9b-q6_k.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with other hardware-specific flags (e.g. `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo zelk12/MT1-Gen4-gemma-2-9B-Q6_K-GGUF --hf-file mt1-gen4-gemma-2-9b-q6_k.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo zelk12/MT1-Gen4-gemma-2-9B-Q6_K-GGUF --hf-file mt1-gen4-gemma-2-9b-q6_k.gguf -c 2048
```
|
kxbrow9/eharrisflux
|
kxbrow9
| 2024-12-14T17:10:58Z | 25 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"flux",
"lora",
"template:sd-lora",
"fluxgym",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2024-12-14T17:10:51Z |
---
tags:
- text-to-image
- flux
- lora
- diffusers
- template:sd-lora
- fluxgym
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: EHarrisFLUX
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
---
# EHarrisFLUX
A Flux LoRA trained on a local computer with [Fluxgym](https://github.com/cocktailpeanut/fluxgym)
<Gallery />
## Trigger words
You should use `EHarrisFLUX` to trigger the image generation.
## Download model and use it with ComfyUI, AUTOMATIC1111, SD.Next, Invoke AI, Forge, etc.
Weights for this model are available in Safetensors format.
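For diffusers users, a minimal sketch assuming the standard FLUX.1-dev LoRA-loading flow; the step count and prompt are illustrative, and FLUX.1-dev's VRAM requirements are substantial:

```python
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
)
pipe.load_lora_weights("kxbrow9/eharrisflux")
pipe.to("cuda")

# The prompt must include the trigger word EHarrisFLUX (see above).
image = pipe(
    "EHarrisFLUX portrait, studio lighting",
    num_inference_steps=28,
).images[0]
image.save("eharrisflux.png")
```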
|
mradermacher/Breeze-Petro-7B-Instruct-v1-i1-GGUF
|
mradermacher
| 2024-12-14T17:02:38Z | 23 | 0 |
transformers
|
[
"transformers",
"gguf",
"chemistry",
"code",
"text-generation-inference",
"en",
"zh",
"base_model:PetroGPT/Breeze-Petro-7B-Instruct-v1",
"base_model:quantized:PetroGPT/Breeze-Petro-7B-Instruct-v1",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2024-12-14T14:21:49Z |
---
base_model: PetroGPT/Breeze-Petro-7B-Instruct-v1
language:
- en
- zh
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- chemistry
- code
- text-generation-inference
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/PetroGPT/Breeze-Petro-7B-Instruct-v1
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Breeze-Petro-7B-Instruct-v1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Breeze-Petro-7B-Instruct-v1-i1-GGUF/resolve/main/Breeze-Petro-7B-Instruct-v1.i1-IQ1_S.gguf) | i1-IQ1_S | 1.8 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Breeze-Petro-7B-Instruct-v1-i1-GGUF/resolve/main/Breeze-Petro-7B-Instruct-v1.i1-IQ1_M.gguf) | i1-IQ1_M | 2.0 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Breeze-Petro-7B-Instruct-v1-i1-GGUF/resolve/main/Breeze-Petro-7B-Instruct-v1.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.2 | |
| [GGUF](https://huggingface.co/mradermacher/Breeze-Petro-7B-Instruct-v1-i1-GGUF/resolve/main/Breeze-Petro-7B-Instruct-v1.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.4 | |
| [GGUF](https://huggingface.co/mradermacher/Breeze-Petro-7B-Instruct-v1-i1-GGUF/resolve/main/Breeze-Petro-7B-Instruct-v1.i1-IQ2_S.gguf) | i1-IQ2_S | 2.5 | |
| [GGUF](https://huggingface.co/mradermacher/Breeze-Petro-7B-Instruct-v1-i1-GGUF/resolve/main/Breeze-Petro-7B-Instruct-v1.i1-IQ2_M.gguf) | i1-IQ2_M | 2.7 | |
| [GGUF](https://huggingface.co/mradermacher/Breeze-Petro-7B-Instruct-v1-i1-GGUF/resolve/main/Breeze-Petro-7B-Instruct-v1.i1-Q2_K_S.gguf) | i1-Q2_K_S | 2.8 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/Breeze-Petro-7B-Instruct-v1-i1-GGUF/resolve/main/Breeze-Petro-7B-Instruct-v1.i1-Q2_K.gguf) | i1-Q2_K | 3.0 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Breeze-Petro-7B-Instruct-v1-i1-GGUF/resolve/main/Breeze-Petro-7B-Instruct-v1.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 3.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Breeze-Petro-7B-Instruct-v1-i1-GGUF/resolve/main/Breeze-Petro-7B-Instruct-v1.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/Breeze-Petro-7B-Instruct-v1-i1-GGUF/resolve/main/Breeze-Petro-7B-Instruct-v1.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.4 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Breeze-Petro-7B-Instruct-v1-i1-GGUF/resolve/main/Breeze-Petro-7B-Instruct-v1.i1-IQ3_S.gguf) | i1-IQ3_S | 3.4 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Breeze-Petro-7B-Instruct-v1-i1-GGUF/resolve/main/Breeze-Petro-7B-Instruct-v1.i1-IQ3_M.gguf) | i1-IQ3_M | 3.5 | |
| [GGUF](https://huggingface.co/mradermacher/Breeze-Petro-7B-Instruct-v1-i1-GGUF/resolve/main/Breeze-Petro-7B-Instruct-v1.i1-Q3_K_M.gguf) | i1-Q3_K_M | 3.8 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Breeze-Petro-7B-Instruct-v1-i1-GGUF/resolve/main/Breeze-Petro-7B-Instruct-v1.i1-Q3_K_L.gguf) | i1-Q3_K_L | 4.1 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Breeze-Petro-7B-Instruct-v1-i1-GGUF/resolve/main/Breeze-Petro-7B-Instruct-v1.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/Breeze-Petro-7B-Instruct-v1-i1-GGUF/resolve/main/Breeze-Petro-7B-Instruct-v1.i1-Q4_0.gguf) | i1-Q4_0 | 4.4 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Breeze-Petro-7B-Instruct-v1-i1-GGUF/resolve/main/Breeze-Petro-7B-Instruct-v1.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.4 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Breeze-Petro-7B-Instruct-v1-i1-GGUF/resolve/main/Breeze-Petro-7B-Instruct-v1.i1-Q4_K_M.gguf) | i1-Q4_K_M | 4.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Breeze-Petro-7B-Instruct-v1-i1-GGUF/resolve/main/Breeze-Petro-7B-Instruct-v1.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.3 | |
| [GGUF](https://huggingface.co/mradermacher/Breeze-Petro-7B-Instruct-v1-i1-GGUF/resolve/main/Breeze-Petro-7B-Instruct-v1.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/Breeze-Petro-7B-Instruct-v1-i1-GGUF/resolve/main/Breeze-Petro-7B-Instruct-v1.i1-Q6_K.gguf) | i1-Q6_K | 6.2 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
mradermacher/Breeze-Petro-7B-Instruct-v1-GGUF
|
mradermacher
| 2024-12-14T17:00:10Z | 13 | 0 |
transformers
|
[
"transformers",
"gguf",
"chemistry",
"code",
"text-generation-inference",
"en",
"zh",
"base_model:PetroGPT/Breeze-Petro-7B-Instruct-v1",
"base_model:quantized:PetroGPT/Breeze-Petro-7B-Instruct-v1",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-12-14T13:40:10Z |
---
base_model: PetroGPT/Breeze-Petro-7B-Instruct-v1
language:
- en
- zh
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- chemistry
- code
- text-generation-inference
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/PetroGPT/Breeze-Petro-7B-Instruct-v1
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Breeze-Petro-7B-Instruct-v1-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Breeze-Petro-7B-Instruct-v1-GGUF/resolve/main/Breeze-Petro-7B-Instruct-v1.Q2_K.gguf) | Q2_K | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/Breeze-Petro-7B-Instruct-v1-GGUF/resolve/main/Breeze-Petro-7B-Instruct-v1.Q3_K_S.gguf) | Q3_K_S | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/Breeze-Petro-7B-Instruct-v1-GGUF/resolve/main/Breeze-Petro-7B-Instruct-v1.Q3_K_M.gguf) | Q3_K_M | 3.8 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Breeze-Petro-7B-Instruct-v1-GGUF/resolve/main/Breeze-Petro-7B-Instruct-v1.Q3_K_L.gguf) | Q3_K_L | 4.1 | |
| [GGUF](https://huggingface.co/mradermacher/Breeze-Petro-7B-Instruct-v1-GGUF/resolve/main/Breeze-Petro-7B-Instruct-v1.IQ4_XS.gguf) | IQ4_XS | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/Breeze-Petro-7B-Instruct-v1-GGUF/resolve/main/Breeze-Petro-7B-Instruct-v1.Q4_K_S.gguf) | Q4_K_S | 4.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Breeze-Petro-7B-Instruct-v1-GGUF/resolve/main/Breeze-Petro-7B-Instruct-v1.Q4_K_M.gguf) | Q4_K_M | 4.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Breeze-Petro-7B-Instruct-v1-GGUF/resolve/main/Breeze-Petro-7B-Instruct-v1.Q5_K_S.gguf) | Q5_K_S | 5.3 | |
| [GGUF](https://huggingface.co/mradermacher/Breeze-Petro-7B-Instruct-v1-GGUF/resolve/main/Breeze-Petro-7B-Instruct-v1.Q5_K_M.gguf) | Q5_K_M | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/Breeze-Petro-7B-Instruct-v1-GGUF/resolve/main/Breeze-Petro-7B-Instruct-v1.Q6_K.gguf) | Q6_K | 6.2 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Breeze-Petro-7B-Instruct-v1-GGUF/resolve/main/Breeze-Petro-7B-Instruct-v1.Q8_0.gguf) | Q8_0 | 8.1 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Breeze-Petro-7B-Instruct-v1-GGUF/resolve/main/Breeze-Petro-7B-Instruct-v1.f16.gguf) | f16 | 15.1 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
reasonwang/ToolGen-Qwen2.5-14B
|
reasonwang
| 2024-12-14T16:57:20Z | 6 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-12-13T20:46:58Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
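No snippet was provided by the author; as a stopgap, here is a minimal sketch for loading a Qwen2-architecture chat model with 🤗 transformers (the prompt and generation settings are illustrative assumptions, not the author's recommended usage):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "reasonwang/ToolGen-Qwen2.5-14B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Format the conversation with the model's own chat template
messages = [{"role": "user", "content": "Which tools would you call to check tomorrow's weather?"}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)

outputs = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```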
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
reasonwang/ToolGen-Qwen2.5-7B
|
reasonwang
| 2024-12-14T16:49:05Z | 6 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-12-13T19:54:30Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
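As with the 14B sibling, no snippet is given; here is a minimal pipeline-based sketch (chat-style usage is inferred from the `conversational` tag and is an assumption):
```python
from transformers import pipeline

# The text-generation pipeline accepts chat messages directly in recent
# transformers releases and returns the extended conversation.
chat = pipeline("text-generation", model="reasonwang/ToolGen-Qwen2.5-7B", device_map="auto")
messages = [{"role": "user", "content": "List three tools useful for web search."}]
result = chat(messages, max_new_tokens=128)
print(result[0]["generated_text"][-1]["content"])
```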
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
mel762/char_teacher
|
mel762
| 2024-12-14T16:47:11Z | 8 | 0 |
diffusers
|
[
"diffusers",
"flux",
"text-to-image",
"lora",
"fal",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2024-12-14T16:47:06Z |
---
tags:
- flux
- text-to-image
- lora
- diffusers
- fal
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: char_teacher
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
---
# char_teacher
<Gallery />
## Model description
char_teacher - a middle-aged Caucasian male teacher (early 40s), dark navy blue dress shirt, short side-parted dark hair with slight graying at temples, rectangular wire-rimmed silver glasses, pale skin, clean-shaven
## Trigger words
You should use `char_teacher` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/mel762/char_teacher/tree/main) them in the Files & versions tab.
## Training at fal.ai
Training was done using [fal.ai/models/fal-ai/flux-lora-fast-training](https://fal.ai/models/fal-ai/flux-lora-fast-training).
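## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
No usage example is included in this card; below is a minimal, unofficial sketch following the recipe used by similar FLUX LoRA cards (`weight_name` is a guess; check the Files & versions tab for the actual filename):
```py
from diffusers import AutoPipelineForText2Image
import torch

pipeline = AutoPipelineForText2Image.from_pretrained(
    'black-forest-labs/FLUX.1-dev', torch_dtype=torch.bfloat16
).to('cuda')
# weight_name is an assumption; verify against the repo's file listing
pipeline.load_lora_weights('mel762/char_teacher', weight_name='lora.safetensors')
# Include the trigger word `char_teacher` in the prompt
image = pipeline('char_teacher explaining an equation at a whiteboard').images[0]
image.save('teacher.png')
```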
|
redrix/nepoticide-12B-Unslop-Unleashed-Mell-RPMax
|
redrix
| 2024-12-14T16:44:05Z | 13 | 2 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"conversational",
"arxiv:2403.19522",
"base_model:ArliAI/Mistral-Nemo-12B-ArliAI-RPMax-v1.2",
"base_model:merge:ArliAI/Mistral-Nemo-12B-ArliAI-RPMax-v1.2",
"base_model:MarinaraSpaghetti/NemoMix-Unleashed-12B",
"base_model:merge:MarinaraSpaghetti/NemoMix-Unleashed-12B",
"base_model:TheDrummer/UnslopNemo-12B-v4.1",
"base_model:merge:TheDrummer/UnslopNemo-12B-v4.1",
"base_model:inflatebot/MN-12B-Mag-Mell-R1",
"base_model:merge:inflatebot/MN-12B-Mag-Mell-R1",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-12-08T14:28:08Z |
---
base_model:
- MarinaraSpaghetti/NemoMix-Unleashed-12B
- ArliAI/Mistral-Nemo-12B-ArliAI-RPMax-v1.2
- TheDrummer/UnslopNemo-12B-v4.1
- inflatebot/MN-12B-Mag-Mell-R1
library_name: transformers
tags:
- mergekit
- merge
license: apache-2.0
new_version: redrix/nepoticide-12B-Unslop-Unleashed-Mell-RPMax-v2
---
# <span style="color:red">This Model is most likely broken.</span>
- [This Discussion](https://huggingface.co/redrix/nepoticide-12B-Unslop-Unleashed-Mell-RPMax/discussions/1) shows there's a token leak. I forgot to specify a union tokenizer, although I don't know whether that's the exact cause.
- I've released v2 here: [redrix/nepoticide-12B-Unslop-Unleashed-Mell-RPMax-v2](https://huggingface.co/redrix/nepoticide-12B-Unslop-Unleashed-Mell-RPMax-v2)
- Proper README with info in the card of [v2](https://huggingface.co/redrix/nepoticide-12B-Unslop-Unleashed-Mell-RPMax-v2).
- This version will be left up for archival purposes, but it may be deleted if it becomes obtrusive.
# nepoticide-12B-Unslop-Unleashed-Mell-RPMax
This is my third model. Please use [redrix/nepoticide-12B-Unslop-Unleashed-Mell-RPMax-v2](https://huggingface.co/redrix/nepoticide-12B-Unslop-Unleashed-Mell-RPMax-v2).
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [Model Stock](https://arxiv.org/abs/2403.19522) merge method using [TheDrummer/UnslopNemo-12B-v4.1](https://huggingface.co/TheDrummer/UnslopNemo-12B-v4.1) as a base.
### Models Merged
The following models were included in the merge:
* [MarinaraSpaghetti/NemoMix-Unleashed-12B](https://huggingface.co/MarinaraSpaghetti/NemoMix-Unleashed-12B)
* [ArliAI/Mistral-Nemo-12B-ArliAI-RPMax-v1.2](https://huggingface.co/ArliAI/Mistral-Nemo-12B-ArliAI-RPMax-v1.2)
* [inflatebot/MN-12B-Mag-Mell-R1](https://huggingface.co/inflatebot/MN-12B-Mag-Mell-R1)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: MarinaraSpaghetti/NemoMix-Unleashed-12B
- model: inflatebot/MN-12B-Mag-Mell-R1
- model: ArliAI/Mistral-Nemo-12B-ArliAI-RPMax-v1.2
- model: TheDrummer/UnslopNemo-12B-v4.1
base_model: TheDrummer/UnslopNemo-12B-v4.1
merge_method: model_stock
dtype: bfloat16
tokenizer_source: "inflatebot/MN-12B-Mag-Mell-R1"
chat_template: "chatml"
```
|
tanhasuffer/myface
|
tanhasuffer
| 2024-12-14T16:43:11Z | 22 | 1 |
diffusers
|
[
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2024-12-14T15:59:48Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: subject
---
# Myface
<Gallery />
Trained on Replicate using:
https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `subject` to trigger the image generation.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch

pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('tanhasuffer/myface', weight_name='lora.safetensors')
# Remember to include the trigger word `subject` in your prompt
image = pipeline('a portrait photo of subject').images[0]
image.save('my_image.png')
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
|
fitrahprakon09/mistral-model-python-codegent
|
fitrahprakon09
| 2024-12-14T16:39:08Z | 78 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-12-12T18:44:32Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
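No snippet is provided; judging by the repository name this appears to be a Mistral-based Python code-generation model, so here is a minimal, hedged loading sketch (the chat-template usage and prompt are assumptions):
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "fitrahprakon09/mistral-model-python-codegent"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

messages = [{"role": "user", "content": "Write a Python function that reverses a string."}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
outputs = model.generate(inputs, max_new_tokens=200)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```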
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
seregadgl/bge_v4_rev3
|
seregadgl
| 2024-12-14T16:31:52Z | 121 | 0 |
transformers
|
[
"transformers",
"safetensors",
"xlm-roberta",
"text-classification",
"cross-encoder",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-12-14T16:30:38Z |
---
library_name: transformers
tags:
- cross-encoder
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
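No snippet is provided; based on the `cross-encoder` tag, here is a minimal relevance-scoring sketch (that the model scores (query, passage) pairs and that higher logits mean higher relevance are assumptions inferred from the tag, not confirmed by the author):
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "seregadgl/bge_v4_rev3"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id).eval()

pairs = [
    ("what is a panda?", "The giant panda is a bear species endemic to China."),
    ("what is a panda?", "Paris is the capital of France."),
]
inputs = tokenizer([q for q, _ in pairs], [p for _, p in pairs],
                   padding=True, truncation=True, return_tensors="pt")
with torch.no_grad():
    scores = model(**inputs).logits.squeeze(-1)
print(scores)  # assumed: higher score = more relevant
```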
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
atelierai-me/d21a2db9-73c2-4321-b912-a89f3d78d91c
|
atelierai-me
| 2024-12-14T16:31:11Z | 6 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"flux",
"lora",
"template:sd-lora",
"ai-toolkit",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2024-12-14T16:31:08Z |
---
tags:
- text-to-image
- flux
- lora
- diffusers
- template:sd-lora
- ai-toolkit
widget:
- text: cowboy wearing a denim jacket, atelierai_sks_768
output:
url: samples/1734193864941__000002000_0.jpg
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: atelierai_sks_768
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
---
# d21a2db9-73c2-4321-b912-a89f3d78d91c
Model trained with [AI Toolkit by Ostris](https://github.com/ostris/ai-toolkit)
<Gallery />
## Trigger words
You should use `atelierai_sks_768` to trigger the image generation.
## Download model and use it with ComfyUI, AUTOMATIC1111, SD.Next, Invoke AI, etc.
Weights for this model are available in Safetensors format.
[Download](/atelierai-me/d21a2db9-73c2-4321-b912-a89f3d78d91c/tree/main) them in the Files & versions tab.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.bfloat16).to('cuda')
pipeline.load_lora_weights('atelierai-me/d21a2db9-73c2-4321-b912-a89f3d78d91c', weight_name='d21a2db9-73c2-4321-b912-a89f3d78d91c.safetensors')
image = pipeline('cowboy wearing a denim jacket, atelierai_sks_768').images[0]
image.save("my_image.png")
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
|
SedatAl/Rusty-Metal-Flux-Lora
|
SedatAl
| 2024-12-14T16:19:33Z | 27 | 1 |
diffusers
|
[
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"region:us"
] |
text-to-image
| 2024-12-14T15:47:09Z |
---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: >-
texture a close up of a rusty metal surface with a lot of rust on it. The
surface is covered in a variety of colors, ranging from yellow to brown, and
the rust is visible on the surface
output:
url: images/example_hyfg8uvf3.png
- text: >-
texture a close up of a rusty metal surface with a lot of rust on it. The
metal is a deep gree and red color, indicating that it has been exposed to
the elements for some time
output:
url: images/example_u3u7rf22z.png
- text: >-
texture rusty metal surface with a lot of rust on it. The surface is covered
in a variety of colors, including green, and blue
output:
url: images/example_4jh9sxx39.png
- text: texture green rusty metal surface with red dots
output:
url: images/example_ly8g5ys07.png
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: texture
---
# Rusty Metal Flux Lora
<Gallery />
## Trigger words
You should use `texture` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/SedatAl/Rusty-Metal-Flux-Lora/tree/main) them in the Files & versions tab.
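## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
For convenience, a minimal, unofficial sketch (`weight_name` is an assumption; check the Files & versions tab):
```py
from diffusers import AutoPipelineForText2Image
import torch

pipe = AutoPipelineForText2Image.from_pretrained(
    'black-forest-labs/FLUX.1-dev', torch_dtype=torch.bfloat16
).to('cuda')
pipe.load_lora_weights('SedatAl/Rusty-Metal-Flux-Lora', weight_name='lora.safetensors')
# `texture` is the trigger word for this LoRA
image = pipe('texture rusty metal surface with green and blue patina').images[0]
image.save('rusty_metal.png')
```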
|
im-24-shevchenko/results
|
im-24-shevchenko
| 2024-12-14T16:14:31Z | 105 | 0 |
transformers
|
[
"transformers",
"safetensors",
"roberta",
"text-classification",
"generated_from_trainer",
"base_model:FacebookAI/roberta-base",
"base_model:finetune:FacebookAI/roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-12-13T16:56:56Z |
---
library_name: transformers
license: mit
base_model: roberta-base
tags:
- generated_from_trainer
model-index:
- name: results
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# results
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2551
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: adamw_torch (betas=(0.9, 0.999), epsilon=1e-08, no additional optimizer arguments)
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.4195 | 1.0 | 125 | 0.5445 |
| 0.3163 | 2.0 | 250 | 0.3163 |
| 0.3961 | 3.0 | 375 | 0.2551 |
### Framework versions
- Transformers 4.46.3
- Pytorch 2.5.1+cu121
- Datasets 3.2.0
- Tokenizers 0.20.3
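## Inference example
Since the task and label names are not documented, here is only a generic, hedged inference sketch (the input text is a placeholder, and the returned labels will be whatever ids the training run produced):
```python
from transformers import pipeline

clf = pipeline("text-classification", model="im-24-shevchenko/results")
# Returns LABEL_0/LABEL_1-style ids unless id2label was set during training
print(clf("This movie was surprisingly good."))
```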
|
henrik-zeng/detr-finetuned-balloon-v2
|
henrik-zeng
| 2024-12-14T16:11:12Z | 191 | 0 |
transformers
|
[
"transformers",
"safetensors",
"detr",
"object-detection",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] |
object-detection
| 2024-12-14T16:11:07Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
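In lieu of the missing snippet, here is a standard DETR object-detection sketch (that the model detects balloons is inferred from the repository name; verify the classes via `model.config.id2label`, and the image path is a placeholder):
```python
import torch
from PIL import Image
from transformers import AutoImageProcessor, AutoModelForObjectDetection

model_id = "henrik-zeng/detr-finetuned-balloon-v2"
processor = AutoImageProcessor.from_pretrained(model_id)
model = AutoModelForObjectDetection.from_pretrained(model_id)

image = Image.open("balloons.jpg")  # placeholder path
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Keep detections above 0.7 confidence, rescaled to the original image size
target_sizes = torch.tensor([image.size[::-1]])
results = processor.post_process_object_detection(outputs, threshold=0.7, target_sizes=target_sizes)[0]
for score, label, box in zip(results["scores"], results["labels"], results["boxes"]):
    print(model.config.id2label[label.item()], round(score.item(), 3), box.tolist())
```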
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
PrunaAI/appvoid-arco-exp-27-bnb-8bit-smashed
|
PrunaAI
| 2024-12-14T16:08:08Z | 6 | 0 | null |
[
"safetensors",
"llama",
"pruna-ai",
"base_model:appvoid/arco-exp-27",
"base_model:quantized:appvoid/arco-exp-27",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2024-12-14T16:07:15Z |
---
thumbnail: "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg"
base_model: appvoid/arco-exp-27
metrics:
- memory_disk
- memory_inference
- inference_latency
- inference_throughput
- inference_CO2_emissions
- inference_energy_consumption
tags:
- pruna-ai
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<a href="https://docs.pruna.ai/en/latest/setup/pip.html" target="_blank" rel="noopener noreferrer">
<img src="https://imgur.com/rVAgqMY.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</a>
</div>
<!-- header end -->
[](https://twitter.com/PrunaAI)
[](https://github.com/PrunaAI)
[](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following)
[](https://discord.gg/rskEr4BZJx)
# Simply make AI models cheaper, smaller, faster, and greener!
- Give a thumbs up if you like this model!
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- Read the documentation to learn more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/)
- Join Pruna AI community on Discord [here](https://discord.gg/CP4VSgck) to share feedback/suggestions or get help.
## Results

**Frequently Asked Questions**
- ***How does the compression work?*** The model is compressed with llm-int8.
- ***How does the model quality change?*** The quality of the model output might vary compared to the base model.
- ***How is the model efficiency evaluated?*** These results were obtained with the configuration described in `model/smash_config.json`, after a hardware warmup. The smashed model is compared directly to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend running the benchmarks directly in your use-case conditions to find out whether the smashed model benefits you.
- ***What is the model format?*** We use safetensors.
- ***What calibration data has been used?*** If needed by the compression method, we used WikiText as the calibration data.
- ***What is the naming convention for Pruna Huggingface models?*** We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model.
- ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- ***What are "first" metrics?*** Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due cuda overheads.
- ***What are "Sync" and "Async" metrics?*** "Sync" metrics are obtained by syncing all GPU processes and stop measurement when all of them are executed. "Async" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend to test the efficiency gains directly in your use-cases.
## Setup
You can run the smashed model with these steps:
0. Check that the requirements from the original repo appvoid/arco-exp-27 are installed. In particular, check the python, cuda, and transformers versions.
1. Make sure that you have installed quantization related packages.
```bash
pip install transformers accelerate 'bitsandbytes>0.37.0'
```
2. Load & run the model.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the 8-bit smashed model alongside the original model's tokenizer
model = AutoModelForCausalLM.from_pretrained("PrunaAI/appvoid-arco-exp-27-bnb-8bit-smashed", trust_remote_code=True, device_map='auto')
tokenizer = AutoTokenizer.from_pretrained("appvoid/arco-exp-27")

input_ids = tokenizer("What is the color of prunes?", return_tensors='pt').to(model.device)["input_ids"]
outputs = model.generate(input_ids, max_new_tokens=216)
print(tokenizer.decode(outputs[0]))
```
## Configurations
The configuration info is in `smash_config.json`.
## Credits & License
The license of the smashed model follows the license of the original model. Please check the license of the original model, appvoid/arco-exp-27, which provided the base model, before using this one. The license of the `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on PyPI.
## Want to compress other models?
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Do it by yourself [here](https://docs.pruna.ai/en/latest/setup/pip.html).
|
appvoid/arco-exp-31
|
appvoid
| 2024-12-14T16:08:05Z | 155 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"arxiv:2403.19522",
"base_model:appvoid/arco-exp-12",
"base_model:merge:appvoid/arco-exp-12",
"base_model:appvoid/text-arco",
"base_model:merge:appvoid/text-arco",
"base_model:h2oai/h2o-danube3-500m-base",
"base_model:merge:h2oai/h2o-danube3-500m-base",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-12-14T16:07:34Z |
---
base_model:
- appvoid/arco-exp-12
- appvoid/text-arco
- h2oai/h2o-danube3-500m-base
library_name: transformers
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [Model Stock](https://arxiv.org/abs/2403.19522) merge method using [h2oai/h2o-danube3-500m-base](https://huggingface.co/h2oai/h2o-danube3-500m-base) as a base.
### Models Merged
The following models were included in the merge:
* [appvoid/arco-exp-12](https://huggingface.co/appvoid/arco-exp-12)
* [appvoid/text-arco](https://huggingface.co/appvoid/text-arco)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: appvoid/arco-exp-12
- model: appvoid/text-arco
merge_method: model_stock
base_model: h2oai/h2o-danube3-500m-base
normalize: false
int8_mask: true
dtype: float16
```
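Since the recipe above is complete, the merge can in principle be reproduced locally; here is a hedged sketch driving the `mergekit-yaml` CLI from Python (the paths are placeholders, and only common flags are shown):
```python
# Sketch: save the YAML above as config.yaml, install mergekit
# (pip install mergekit), then invoke its CLI.
import subprocess

subprocess.run(
    ["mergekit-yaml", "config.yaml", "./arco-exp-31-merged", "--copy-tokenizer"],
    check=True,
)
```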
|
PrunaAI/appvoid-arco-exp-29-bnb-8bit-smashed
|
PrunaAI
| 2024-12-14T16:07:55Z | 5 | 0 | null |
[
"safetensors",
"llama",
"pruna-ai",
"base_model:appvoid/arco-exp-29",
"base_model:quantized:appvoid/arco-exp-29",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2024-12-14T16:07:10Z |
---
thumbnail: "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg"
base_model: appvoid/arco-exp-29
metrics:
- memory_disk
- memory_inference
- inference_latency
- inference_throughput
- inference_CO2_emissions
- inference_energy_consumption
tags:
- pruna-ai
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<a href="https://docs.pruna.ai/en/latest/setup/pip.html" target="_blank" rel="noopener noreferrer">
<img src="https://imgur.com/rVAgqMY.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</a>
</div>
<!-- header end -->
[](https://twitter.com/PrunaAI)
[](https://github.com/PrunaAI)
[](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following)
[](https://discord.gg/rskEr4BZJx)
# Simply make AI models cheaper, smaller, faster, and greener!
- Give a thumbs up if you like this model!
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- Read the documentation to learn more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/)
- Join Pruna AI community on Discord [here](https://discord.gg/CP4VSgck) to share feedback/suggestions or get help.
## Results

**Frequently Asked Questions**
- ***How does the compression work?*** The model is compressed with llm-int8.
- ***How does the model quality change?*** The quality of the model output might vary compared to the base model.
- ***How is the model efficiency evaluated?*** These results were obtained with the configuration described in `model/smash_config.json`, after a hardware warmup. The smashed model is compared directly to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend running the benchmarks directly in your use-case conditions to find out whether the smashed model benefits you.
- ***What is the model format?*** We use safetensors.
- ***What calibration data has been used?*** If needed by the compression method, we used WikiText as the calibration data.
- ***What is the naming convention for Pruna Huggingface models?*** We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model.
- ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- ***What are "first" metrics?*** Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due cuda overheads.
- ***What are "Sync" and "Async" metrics?*** "Sync" metrics are obtained by syncing all GPU processes and stop measurement when all of them are executed. "Async" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend to test the efficiency gains directly in your use-cases.
## Setup
You can run the smashed model with these steps:
0. Check that the requirements from the original repo appvoid/arco-exp-29 are installed. In particular, check the python, cuda, and transformers versions.
1. Make sure that you have installed quantization related packages.
```bash
pip install transformers accelerate 'bitsandbytes>0.37.0'
```
2. Load & run the model.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the 8-bit smashed model alongside the original model's tokenizer
model = AutoModelForCausalLM.from_pretrained("PrunaAI/appvoid-arco-exp-29-bnb-8bit-smashed", trust_remote_code=True, device_map='auto')
tokenizer = AutoTokenizer.from_pretrained("appvoid/arco-exp-29")

input_ids = tokenizer("What is the color of prunes?", return_tensors='pt').to(model.device)["input_ids"]
outputs = model.generate(input_ids, max_new_tokens=216)
print(tokenizer.decode(outputs[0]))
```
## Configurations
The configuration info is in `smash_config.json`.
## Credits & License
The license of the smashed model follows the license of the original model. Please check the license of the original model, appvoid/arco-exp-29, which provided the base model, before using this one. The license of the `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on PyPI.
## Want to compress other models?
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Do it by yourself [here](https://docs.pruna.ai/en/latest/setup/pip.html).
|
rnjs1992/active-llm-winner-mean_margin_illegal20241214_032053
|
rnjs1992
| 2024-12-14T16:05:57Z | 105 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-12-14T16:03:22Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
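No snippet is provided; the repository name suggests a margin-based winner/preference classifier built on a Llama backbone, but that is a guess. Below is only a generic sequence-classification sketch:
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "rnjs1992/active-llm-winner-mean_margin_illegal20241214_032053"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

inputs = tokenizer("Example response to score.", return_tensors="pt").to(model.device)
with torch.no_grad():
    logits = model(**inputs).logits
print(logits)  # label meanings are undocumented
```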
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
rnjs1992/active-llm-winner-min_margin5000020241214_002341
|
rnjs1992
| 2024-12-14T16:02:53Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-12-14T12:51:33Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
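Pending the authors' own snippet, a minimal hedged sketch for loading the checkpoint, assuming it follows the standard 🤗 sequence-classification layout its tags (`llama`, `text-classification`) indicate; treat it as illustrative, not the authors' documented usage:

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Repo id taken from this row's metadata; the classification head and
# label mapping are unverified assumptions.
repo = "rnjs1992/active-llm-winner-min_margin5000020241214_002341"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForSequenceClassification.from_pretrained(repo)

inputs = tokenizer("example input text", return_tensors="pt")
logits = model(**inputs).logits
print(logits.argmax(dim=-1))  # predicted class index
```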
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
appvoid/arco-exp-30
|
appvoid
| 2024-12-14T16:02:33Z | 154 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"conversational",
"arxiv:2403.19522",
"base_model:appvoid/arco-2-reasoning-20k",
"base_model:merge:appvoid/arco-2-reasoning-20k",
"base_model:appvoid/arco-exp-12",
"base_model:merge:appvoid/arco-exp-12",
"base_model:appvoid/text-arco",
"base_model:merge:appvoid/text-arco",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-12-14T16:02:00Z |
---
base_model:
- appvoid/arco-exp-12
- appvoid/arco-2-reasoning-20k
- appvoid/text-arco
library_name: transformers
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [Model Stock](https://arxiv.org/abs/2403.19522) merge method using [appvoid/arco-2-reasoning-20k](https://huggingface.co/appvoid/arco-2-reasoning-20k) as a base.
### Models Merged
The following models were included in the merge:
* [appvoid/arco-exp-12](https://huggingface.co/appvoid/arco-exp-12)
* [appvoid/text-arco](https://huggingface.co/appvoid/text-arco)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: appvoid/arco-exp-12
- model: appvoid/text-arco
merge_method: model_stock
base_model: appvoid/arco-2-reasoning-20k
normalize: false
int8_mask: true
dtype: float16
```
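To reproduce a merge like this, the configuration above is passed to mergekit's CLI; a minimal sketch, assuming mergekit is installed from PyPI and the YAML is saved as `config.yaml` (the output path is a placeholder):

```bash
pip install mergekit
# writes the merged checkpoint (weights, config, tokenizer) to ./merged-model
mergekit-yaml config.yaml ./merged-model
```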
|
Azeuss/tourist
|
Azeuss
| 2024-12-14T16:00:30Z | 168 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-12-14T15:21:50Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
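In lieu of the authors' snippet, a minimal hedged sketch, assuming the checkpoint works with the standard text-generation pipeline its tags advertise (chat template and generation settings are unverified):

```python
from transformers import pipeline

# Repo id taken from this row's metadata; adjust device/dtype for your hardware.
pipe = pipeline("text-generation", model="Azeuss/tourist")
print(pipe("Hello, how are you?", max_new_tokens=50)[0]["generated_text"])
```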
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
rnjs1992/active-llm-winner-mean_margin50000_iter220241214_031408
|
rnjs1992
| 2024-12-14T15:58:45Z | 104 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-12-14T15:55:01Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
appvoid/arco-exp-29
|
appvoid
| 2024-12-14T15:58:42Z | 154 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"arxiv:2403.19522",
"base_model:appvoid/arco",
"base_model:merge:appvoid/arco",
"base_model:appvoid/arco-exp-12",
"base_model:merge:appvoid/arco-exp-12",
"base_model:appvoid/text-arco",
"base_model:merge:appvoid/text-arco",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-12-14T15:58:11Z |
---
base_model:
- appvoid/arco
- appvoid/text-arco
- appvoid/arco-exp-12
library_name: transformers
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [Model Stock](https://arxiv.org/abs/2403.19522) merge method using [appvoid/arco](https://huggingface.co/appvoid/arco) as a base.
### Models Merged
The following models were included in the merge:
* [appvoid/text-arco](https://huggingface.co/appvoid/text-arco)
* [appvoid/arco-exp-12](https://huggingface.co/appvoid/arco-exp-12)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: appvoid/arco-exp-12
- model: appvoid/text-arco
merge_method: model_stock
base_model: appvoid/arco
normalize: false
int8_mask: true
dtype: float16
```
|
appvoid/arco-exp-28
|
appvoid
| 2024-12-14T15:57:40Z | 154 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"conversational",
"arxiv:2403.19522",
"base_model:appvoid/arco",
"base_model:merge:appvoid/arco",
"base_model:appvoid/arco-exp-12",
"base_model:merge:appvoid/arco-exp-12",
"base_model:appvoid/text-arco",
"base_model:merge:appvoid/text-arco",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-12-14T15:56:58Z |
---
base_model:
- appvoid/text-arco
- appvoid/arco-exp-12
- appvoid/arco
library_name: transformers
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [Model Stock](https://arxiv.org/abs/2403.19522) merge method using [appvoid/arco-exp-12](https://huggingface.co/appvoid/arco-exp-12) as a base.
### Models Merged
The following models were included in the merge:
* [appvoid/text-arco](https://huggingface.co/appvoid/text-arco)
* [appvoid/arco](https://huggingface.co/appvoid/arco)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: appvoid/arco
- model: appvoid/text-arco
merge_method: model_stock
base_model: appvoid/arco-exp-12
normalize: false
int8_mask: true
dtype: float16
```
|
dilarayavuz/twitter-synbkd-p10-bert-uncased
|
dilarayavuz
| 2024-12-14T15:57:04Z | 19 | 0 | null |
[
"tensorboard",
"safetensors",
"bert",
"autotrain",
"text-classification",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"region:us"
] |
text-classification
| 2024-12-14T15:48:13Z |
---
tags:
- autotrain
- text-classification
base_model: google-bert/bert-base-uncased
widget:
- text: "I love AutoTrain"
---
# Model Trained Using AutoTrain
- Problem type: Text Classification
## Validation Metrics
loss: 0.15405049920082092
f1: 0.9420693630512057
precision: 0.950746558076401
recall: 0.9335491241431836
auc: 0.9845271068119602
accuracy: 0.9499002991026919
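Since AutoTrain classification checkpoints load with the standard pipeline, a minimal hedged inference sketch (the label names depend on the unshown training config):

```python
from transformers import pipeline

# BERT-base classifier fine-tuned via AutoTrain; repo id from this row's metadata
clf = pipeline("text-classification", model="dilarayavuz/twitter-synbkd-p10-bert-uncased")
print(clf("I love AutoTrain"))  # e.g. [{'label': ..., 'score': ...}]
```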
|
appvoid/arco-exp-27
|
appvoid
| 2024-12-14T15:56:48Z | 153 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"conversational",
"arxiv:2403.19522",
"base_model:appvoid/arco-2",
"base_model:merge:appvoid/arco-2",
"base_model:appvoid/arco-exp-12",
"base_model:merge:appvoid/arco-exp-12",
"base_model:appvoid/text-arco",
"base_model:merge:appvoid/text-arco",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-12-14T15:55:56Z |
---
base_model:
- appvoid/arco-2
- appvoid/arco-exp-12
- appvoid/text-arco
library_name: transformers
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [Model Stock](https://arxiv.org/abs/2403.19522) merge method using [appvoid/arco-exp-12](https://huggingface.co/appvoid/arco-exp-12) as a base.
### Models Merged
The following models were included in the merge:
* [appvoid/arco-2](https://huggingface.co/appvoid/arco-2)
* [appvoid/text-arco](https://huggingface.co/appvoid/text-arco)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: appvoid/arco-2
- model: appvoid/text-arco
merge_method: model_stock
base_model: appvoid/arco-exp-12
normalize: false
int8_mask: true
dtype: float16
```
|
appvoid/arco-exp-25
|
appvoid
| 2024-12-14T15:54:17Z | 151 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"arxiv:2403.19522",
"base_model:appvoid/arco",
"base_model:merge:appvoid/arco",
"base_model:appvoid/arco-2",
"base_model:merge:appvoid/arco-2",
"base_model:appvoid/arco-2-openhermes",
"base_model:merge:appvoid/arco-2-openhermes",
"base_model:appvoid/arco-2-reasoning-20k",
"base_model:merge:appvoid/arco-2-reasoning-20k",
"base_model:appvoid/massive",
"base_model:merge:appvoid/massive",
"base_model:appvoid/palmer-004-turbo",
"base_model:merge:appvoid/palmer-004-turbo",
"base_model:appvoid/text-arco",
"base_model:merge:appvoid/text-arco",
"base_model:h2oai/h2o-danube3-500m-base",
"base_model:merge:h2oai/h2o-danube3-500m-base",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-12-14T15:52:29Z |
---
base_model:
- appvoid/palmer-004-turbo
- appvoid/arco
- appvoid/arco-2-openhermes
- appvoid/arco-2-reasoning-20k
- appvoid/massive
- appvoid/arco-2
- appvoid/text-arco
- h2oai/h2o-danube3-500m-base
library_name: transformers
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [Model Stock](https://arxiv.org/abs/2403.19522) merge method using [h2oai/h2o-danube3-500m-base](https://huggingface.co/h2oai/h2o-danube3-500m-base) as a base.
### Models Merged
The following models were included in the merge:
* [appvoid/palmer-004-turbo](https://huggingface.co/appvoid/palmer-004-turbo)
* [appvoid/arco](https://huggingface.co/appvoid/arco)
* [appvoid/arco-2-openhermes](https://huggingface.co/appvoid/arco-2-openhermes)
* [appvoid/arco-2-reasoning-20k](https://huggingface.co/appvoid/arco-2-reasoning-20k)
* [appvoid/massive](https://huggingface.co/appvoid/massive)
* [appvoid/arco-2](https://huggingface.co/appvoid/arco-2)
* [appvoid/text-arco](https://huggingface.co/appvoid/text-arco)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: appvoid/arco-2-openhermes
- model: appvoid/arco-2-reasoning-20k
- model: appvoid/arco-2
- model: appvoid/text-arco
- model: appvoid/arco
- model: appvoid/palmer-004-turbo
- model: appvoid/massive
merge_method: model_stock
base_model: h2oai/h2o-danube3-500m-base
normalize: true
int8_mask: true
dtype: float16
```
|
Mamadou2727/Embedding_Approach_Sentiment_distilbert
|
Mamadou2727
| 2024-12-14T15:47:54Z | 106 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-12-14T15:47:44Z |
---
library_name: transformers
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: Embedding_Approach_Sentiment_distilbert
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Embedding_Approach_Sentiment_distilbert
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1754
- F1: 0.9394
- Acc: 0.9395
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 5
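For readers reproducing this setup, a sketch of the equivalent `TrainingArguments` under the standard Trainer API (the output path is a placeholder; only the hyperparameters listed above are grounded):

```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="Embedding_Approach_Sentiment_distilbert",  # placeholder
    learning_rate=2e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    optim="adamw_torch",
    lr_scheduler_type="linear",
    num_train_epochs=5,
)
```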
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 | Acc |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|
| 0.148 | 1.0 | 500 | 0.1942 | 0.9330 | 0.9325 |
| 0.1195 | 2.0 | 1000 | 0.1734 | 0.9354 | 0.9360 |
| 0.0896 | 3.0 | 1500 | 0.1574 | 0.9312 | 0.9310 |
| 0.0671 | 4.0 | 2000 | 0.1661 | 0.9366 | 0.9365 |
| 0.0465 | 5.0 | 2500 | 0.1754 | 0.9394 | 0.9395 |
### Framework versions
- Transformers 4.47.0
- Pytorch 2.4.0
- Datasets 3.2.0
- Tokenizers 0.21.0
|
dilarayavuz/twitter-stylebkd-p10-bert-uncased
|
dilarayavuz
| 2024-12-14T15:46:00Z | 19 | 0 | null |
[
"tensorboard",
"safetensors",
"bert",
"autotrain",
"text-classification",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"region:us"
] |
text-classification
| 2024-12-14T15:37:08Z |
---
tags:
- autotrain
- text-classification
base_model: google-bert/bert-base-uncased
widget:
- text: "I love AutoTrain"
---
# Model Trained Using AutoTrain
- Problem type: Text Classification
## Validation Metrics
loss: 0.19777758419513702
f1: 0.9195028680688336
precision: 0.9235644324947186
recall: 0.9154768703597944
auc: 0.9721580156705242
accuracy: 0.9300432037221669
|
appvoid/arco-exp-23
|
appvoid
| 2024-12-14T15:43:25Z | 155 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"arxiv:2403.19522",
"base_model:appvoid/arco-2",
"base_model:merge:appvoid/arco-2",
"base_model:appvoid/text-arco",
"base_model:merge:appvoid/text-arco",
"base_model:h2oai/h2o-danube3-500m-base",
"base_model:merge:h2oai/h2o-danube3-500m-base",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-12-14T15:42:24Z |
---
base_model:
- h2oai/h2o-danube3-500m-base
- appvoid/arco-2
- appvoid/text-arco
library_name: transformers
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [Model Stock](https://arxiv.org/abs/2403.19522) merge method using [h2oai/h2o-danube3-500m-base](https://huggingface.co/h2oai/h2o-danube3-500m-base) as a base.
### Models Merged
The following models were included in the merge:
* [appvoid/arco-2](https://huggingface.co/appvoid/arco-2)
* [appvoid/text-arco](https://huggingface.co/appvoid/text-arco)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: appvoid/arco-2
- model: appvoid/text-arco
merge_method: model_stock
base_model: h2oai/h2o-danube3-500m-base
normalize: false
int8_mask: true
dtype: float16
```
|
VERSIL91/04b10336-4481-4126-b9be-9298eea781e2
|
VERSIL91
| 2024-12-14T15:42:27Z | 5 | 0 |
peft
|
[
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:Qwen/Qwen2.5-3B",
"base_model:adapter:Qwen/Qwen2.5-3B",
"license:other",
"region:us"
] | null | 2024-12-14T09:42:29Z |
---
library_name: peft
license: other
base_model: Qwen/Qwen2.5-3B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 04b10336-4481-4126-b9be-9298eea781e2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
accelerate_config:
dynamo_backend: inductor
mixed_precision: bf16
num_machines: 1
num_processes: auto
use_cpu: false
adapter: lora
base_model: Qwen/Qwen2.5-3B
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- a7912e45a35e592a_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/a7912e45a35e592a_train_data.json
type:
field_instruction: prompt
field_output: completion
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device_map: auto
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 16
gradient_checkpointing: true
group_by_length: false
hub_model_id: VERSIL91/04b10336-4481-4126-b9be-9298eea781e2
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
local_rank: null
logging_steps: 1
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lora_target_modules:
- q_proj
- v_proj
lr_scheduler: cosine
max_memory:
0: 70GiB
max_steps: 50
micro_batch_size: 2
mlflow_experiment_name: /tmp/a7912e45a35e592a_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
quantization_config:
llm_int8_enable_fp32_cpu_offload: true
load_in_8bit: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 4056
strict: false
tf32: false
tokenizer_type: AutoTokenizer
torch_compile: true
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 04b10336-4481-4126-b9be-9298eea781e2
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 04b10336-4481-4126-b9be-9298eea781e2
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 04b10336-4481-4126-b9be-9298eea781e2
This model is a fine-tuned version of [Qwen/Qwen2.5-3B](https://huggingface.co/Qwen/Qwen2.5-3B) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6794
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 32
- optimizer: adamw_bnb_8bit (OptimizerNames.ADAMW_BNB) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.6473 | 0.0000 | 1 | 1.3409 |
| 1.1967 | 0.0004 | 13 | 1.0586 |
| 0.5687 | 0.0009 | 26 | 0.7590 |
| 0.1759 | 0.0013 | 39 | 0.6794 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
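Because this repo stores a LoRA adapter (PEFT) rather than full weights, loading requires attaching the adapter to the base model; a minimal sketch, assuming the standard peft API (dtype and device handling omitted):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Base model and adapter ids from this card's config/frontmatter
base = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2.5-3B")
model = PeftModel.from_pretrained(base, "VERSIL91/04b10336-4481-4126-b9be-9298eea781e2")
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-3B")
```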
|
appvoid/arco-exp-22
|
appvoid
| 2024-12-14T15:42:20Z | 153 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"conversational",
"arxiv:2403.19522",
"base_model:appvoid/arco",
"base_model:merge:appvoid/arco",
"base_model:appvoid/arco-2",
"base_model:merge:appvoid/arco-2",
"base_model:appvoid/text-arco",
"base_model:merge:appvoid/text-arco",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-12-14T15:41:29Z |
---
base_model:
- appvoid/arco-2
- appvoid/text-arco
- appvoid/arco
library_name: transformers
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [Model Stock](https://arxiv.org/abs/2403.19522) merge method using [appvoid/arco-2](https://huggingface.co/appvoid/arco-2) as a base.
### Models Merged
The following models were included in the merge:
* [appvoid/text-arco](https://huggingface.co/appvoid/text-arco)
* [appvoid/arco](https://huggingface.co/appvoid/arco)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: appvoid/arco
- model: appvoid/text-arco
merge_method: model_stock
base_model: appvoid/arco-2
normalize: false
int8_mask: true
dtype: float16
```
|
appvoid/arco-exp-21
|
appvoid
| 2024-12-14T15:41:09Z | 153 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"arxiv:2403.19522",
"base_model:appvoid/arco",
"base_model:merge:appvoid/arco",
"base_model:appvoid/arco-2",
"base_model:merge:appvoid/arco-2",
"base_model:appvoid/text-arco",
"base_model:merge:appvoid/text-arco",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-12-14T15:40:17Z |
---
base_model:
- appvoid/arco-2
- appvoid/arco
- appvoid/text-arco
library_name: transformers
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [Model Stock](https://arxiv.org/abs/2403.19522) merge method using [appvoid/text-arco](https://huggingface.co/appvoid/text-arco) as a base.
### Models Merged
The following models were included in the merge:
* [appvoid/arco-2](https://huggingface.co/appvoid/arco-2)
* [appvoid/arco](https://huggingface.co/appvoid/arco)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: appvoid/arco
- model: appvoid/arco-2
merge_method: model_stock
base_model: appvoid/text-arco
normalize: false
int8_mask: true
dtype: float16
```
|
mradermacher/Llama-3-SauerkrautLM-70b-Instruct-i1-GGUF
|
mradermacher
| 2024-12-14T15:38:03Z | 78 | 0 |
transformers
|
[
"transformers",
"gguf",
"dpo",
"de",
"en",
"base_model:VAGOsolutions/Llama-3-SauerkrautLM-70b-Instruct",
"base_model:quantized:VAGOsolutions/Llama-3-SauerkrautLM-70b-Instruct",
"license:other",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2024-12-14T10:11:43Z |
---
base_model: VAGOsolutions/Llama-3-SauerkrautLM-70b-Instruct
extra_gated_button_content: Submit
extra_gated_fields:
Affiliation: text
? By clicking Submit below I accept the terms of the license and acknowledge that
the information I provide will be collected stored processed and shared in accordance
with the Meta Privacy Policy
: checkbox
Country: country
Date of birth: date_picker
First Name: text
Last Name: text
geo: ip_location
extra_gated_prompt: "### META LLAMA 3 COMMUNITY LICENSE AGREEMENT\nMeta Llama 3 Version
Release Date: April 18, 2024\n\"Agreement\" means the terms and conditions for use,
reproduction, distribution and modification of the Llama Materials set forth herein.\n\"Documentation\"
means the specifications, manuals and documentation accompanying Meta Llama 3 distributed
by Meta at https://llama.meta.com/get-started/.\n\"Licensee\" or \"you\" means you,
or your employer or any other person or entity (if you are entering into this Agreement
on such person or entity’s behalf), of the age required under applicable laws, rules
or regulations to provide legal consent and that has legal authority to bind your
employer or such other person or entity if you are entering in this Agreement on
their behalf.\n\"Meta Llama 3\" means the foundational large language models and
software and algorithms, including machine-learning model code, trained model weights,
inference-enabling code, training-enabling code, fine-tuning enabling code and other
elements of the foregoing distributed by Meta at https://llama.meta.com/llama-downloads.\n\"Llama
Materials\" means, collectively, Meta’s proprietary Meta Llama 3 and Documentation
(and any portion thereof) made available under this Agreement.\n\"Meta\" or \"we\"
means Meta Platforms Ireland Limited (if you are located in or, if you are an entity,
your principal place of business is in the EEA or Switzerland) and Meta Platforms,
Inc. (if you are located outside of the EEA or Switzerland).\n \n1. License Rights
and Redistribution.\na. Grant of Rights. You are granted a non-exclusive, worldwide,
non-transferable and royalty-free limited license under Meta’s intellectual property
or other rights owned by Meta embodied in the Llama Materials to use, reproduce,
distribute, copy, create derivative works of, and make modifications to the Llama
Materials.\nb. Redistribution and Use.\ni. If you distribute or make available the
Llama Materials (or any derivative works thereof), or a product or service that
uses any of them, including another AI model, you shall (A) provide a copy of this
Agreement with any such Llama Materials; and (B) prominently display “Built with
Meta Llama 3” on a related website, user interface, blogpost, about page, or product
documentation. If you use the Llama Materials to create, train, fine tune, or otherwise
improve an AI model, which is distributed or made available, you shall also include
“Llama 3” at the beginning of any such AI model name.\nii. If you receive Llama
Materials, or any derivative works thereof, from a Licensee as part of an integrated
end user product, then Section 2 of this Agreement will not apply to you.\niii.
You must retain in all copies of the Llama Materials that you distribute the following
attribution notice within a “Notice” text file distributed as a part of such copies:
“Meta Llama 3 is licensed under the Meta Llama 3 Community License, Copyright ©
Meta Platforms, Inc. All Rights Reserved.”\niv. Your use of the Llama Materials
must comply with applicable laws and regulations (including trade compliance laws
and regulations) and adhere to the Acceptable Use Policy for the Llama Materials
(available at https://llama.meta.com/llama3/use-policy), which is hereby incorporated
by reference into this Agreement.\nv. You will not use the Llama Materials or any
output or results of the Llama Materials to improve any other large language model
(excluding Meta Llama 3 or derivative works thereof).\n2. Additional Commercial
Terms. If, on the Meta Llama 3 version release date, the monthly active users of
the products or services made available by or for Licensee, or Licensee’s affiliates,
is greater than 700 million monthly active users in the preceding calendar month,
you must request a license from Meta, which Meta may grant to you in its sole discretion,
and you are not authorized to exercise any of the rights under this Agreement unless
or until Meta otherwise expressly grants you such rights.\n3. Disclaimer of Warranty.
UNLESS REQUIRED BY APPLICABLE LAW, THE LLAMA MATERIALS AND ANY OUTPUT AND RESULTS
THEREFROM ARE PROVIDED ON AN “AS IS” BASIS, WITHOUT WARRANTIES OF ANY KIND, AND
META DISCLAIMS ALL WARRANTIES OF ANY KIND, BOTH EXPRESS AND IMPLIED, INCLUDING,
WITHOUT LIMITATION, ANY WARRANTIES OF TITLE, NON-INFRINGEMENT, MERCHANTABILITY,
OR FITNESS FOR A PARTICULAR PURPOSE. YOU ARE SOLELY RESPONSIBLE FOR DETERMINING
THE APPROPRIATENESS OF USING OR REDISTRIBUTING THE LLAMA MATERIALS AND ASSUME ANY
RISKS ASSOCIATED WITH YOUR USE OF THE LLAMA MATERIALS AND ANY OUTPUT AND RESULTS.\n4.
Limitation of Liability. IN NO EVENT WILL META OR ITS AFFILIATES BE LIABLE UNDER
ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, TORT, NEGLIGENCE, PRODUCTS LIABILITY,
OR OTHERWISE, ARISING OUT OF THIS AGREEMENT, FOR ANY LOST PROFITS OR ANY INDIRECT,
SPECIAL, CONSEQUENTIAL, INCIDENTAL, EXEMPLARY OR PUNITIVE DAMAGES, EVEN IF META
OR ITS AFFILIATES HAVE BEEN ADVISED OF THE POSSIBILITY OF ANY OF THE FOREGOING.\n5.
Intellectual Property.\na. No trademark licenses are granted under this Agreement,
and in connection with the Llama Materials, neither Meta nor Licensee may use any
name or mark owned by or associated with the other or any of its affiliates, except
as required for reasonable and customary use in describing and redistributing the
Llama Materials or as set forth in this Section 5(a). Meta hereby grants you a license
to use “Llama 3” (the “Mark”) solely as required to comply with the last sentence
of Section 1.b.i. You will comply with Meta’s brand guidelines (currently accessible
at https://about.meta.com/brand/resources/meta/company-brand/ ). All goodwill arising
out of your use of the Mark will inure to the benefit of Meta.\nb. Subject to Meta’s
ownership of Llama Materials and derivatives made by or for Meta, with respect to
any derivative works and modifications of the Llama Materials that are made by you,
as between you and Meta, you are and will be the owner of such derivative works
and modifications.\nc. If you institute litigation or other proceedings against
Meta or any entity (including a cross-claim or counterclaim in a lawsuit) alleging
that the Llama Materials or Meta Llama 3 outputs or results, or any portion of any
of the foregoing, constitutes infringement of intellectual property or other rights
owned or licensable by you, then any licenses granted to you under this Agreement
shall terminate as of the date such litigation or claim is filed or instituted.
You will indemnify and hold harmless Meta from and against any claim by any third
party arising out of or related to your use or distribution of the Llama Materials.\n6.
Term and Termination. The term of this Agreement will commence upon your acceptance
of this Agreement or access to the Llama Materials and will continue in full force
and effect until terminated in accordance with the terms and conditions herein.
Meta may terminate this Agreement if you are in breach of any term or condition
of this Agreement. Upon termination of this Agreement, you shall delete and cease
use of the Llama Materials. Sections 3, 4 and 7 shall survive the termination of
this Agreement.\n7. Governing Law and Jurisdiction. This Agreement will be governed
and construed under the laws of the State of California without regard to choice
of law principles, and the UN Convention on Contracts for the International Sale
of Goods does not apply to this Agreement. The courts of California shall have exclusive
jurisdiction of any dispute arising out of this Agreement.\n### Meta Llama 3 Acceptable
Use Policy\nMeta is committed to promoting safe and fair use of its tools and features,
including Meta Llama 3. If you access or use Meta Llama 3, you agree to this Acceptable
Use Policy (“Policy”). The most recent copy of this policy can be found at [https://llama.meta.com/llama3/use-policy](https://llama.meta.com/llama3/use-policy)\n####
Prohibited Uses\nWe want everyone to use Meta Llama 3 safely and responsibly. You
agree you will not use, or allow others to use, Meta Llama 3 to: 1. Violate the
law or others’ rights, including to:\n 1. Engage in, promote, generate, contribute
to, encourage, plan, incite, or further illegal or unlawful activity or content,
such as:\n 1. Violence or terrorism\n 2. Exploitation or harm to children,
including the solicitation, creation, acquisition, or dissemination of child exploitative
content or failure to report Child Sexual Abuse Material\n 3. Human trafficking,
exploitation, and sexual violence\n 4. The illegal distribution of information
or materials to minors, including obscene materials, or failure to employ legally
required age-gating in connection with such information or materials.\n 5.
Sexual solicitation\n 6. Any other criminal activity\n 2. Engage in, promote,
incite, or facilitate the harassment, abuse, threatening, or bullying of individuals
or groups of individuals\n 3. Engage in, promote, incite, or facilitate discrimination
or other unlawful or harmful conduct in the provision of employment, employment
benefits, credit, housing, other economic benefits, or other essential goods and
services\n 4. Engage in the unauthorized or unlicensed practice of any profession
including, but not limited to, financial, legal, medical/health, or related professional
practices\n 5. Collect, process, disclose, generate, or infer health, demographic,
or other sensitive personal or private information about individuals without rights
and consents required by applicable laws\n 6. Engage in or facilitate any action
or generate any content that infringes, misappropriates, or otherwise violates any
third-party rights, including the outputs or results of any products or services
using the Llama Materials\n 7. Create, generate, or facilitate the creation of
malicious code, malware, computer viruses or do anything else that could disable,
overburden, interfere with or impair the proper working, integrity, operation or
appearance of a website or computer system\n2. Engage in, promote, incite, facilitate,
or assist in the planning or development of activities that present a risk of death
or bodily harm to individuals, including use of Meta Llama 3 related to the following:\n
\ 1. Military, warfare, nuclear industries or applications, espionage, use for
materials or activities that are subject to the International Traffic Arms Regulations
(ITAR) maintained by the United States Department of State\n 2. Guns and illegal
weapons (including weapon development)\n 3. Illegal drugs and regulated/controlled
substances\n 4. Operation of critical infrastructure, transportation technologies,
or heavy machinery\n 5. Self-harm or harm to others, including suicide, cutting,
and eating disorders\n 6. Any content intended to incite or promote violence,
abuse, or any infliction of bodily harm to an individual\n3. Intentionally deceive
or mislead others, including use of Meta Llama 3 related to the following:\n 1.
Generating, promoting, or furthering fraud or the creation or promotion of disinformation\n
\ 2. Generating, promoting, or furthering defamatory content, including the creation
of defamatory statements, images, or other content\n 3. Generating, promoting,
or further distributing spam\n 4. Impersonating another individual without consent,
authorization, or legal right\n 5. Representing that the use of Meta Llama 3
or outputs are human-generated\n 6. Generating or facilitating false online engagement,
including fake reviews and other means of fake online engagement\n4. Fail to appropriately
disclose to end users any known dangers of your AI system\nPlease report any violation
of this Policy, software “bug,” or other problems that could lead to a violation
of this Policy through one of the following means:\n * Reporting issues with
the model: [https://github.com/meta-llama/llama3](https://github.com/meta-llama/llama3)\n
\ * Reporting risky content generated by the model:\n developers.facebook.com/llama_output_feedback\n
\ * Reporting bugs and security concerns: facebook.com/whitehat/info\n * Reporting
violations of the Acceptable Use Policy or unlicensed uses of Meta Llama 3: [email protected]"
language:
- de
- en
library_name: transformers
license: other
license_link: LICENSE
license_name: llama3
quantized_by: mradermacher
tags:
- dpo
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/VAGOsolutions/Llama-3-SauerkrautLM-70b-Instruct
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Llama-3-SauerkrautLM-70b-Instruct-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar-sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Llama-3-SauerkrautLM-70b-Instruct-i1-GGUF/resolve/main/Llama-3-SauerkrautLM-70b-Instruct.i1-IQ1_S.gguf) | i1-IQ1_S | 15.4 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-SauerkrautLM-70b-Instruct-i1-GGUF/resolve/main/Llama-3-SauerkrautLM-70b-Instruct.i1-IQ1_M.gguf) | i1-IQ1_M | 16.9 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-SauerkrautLM-70b-Instruct-i1-GGUF/resolve/main/Llama-3-SauerkrautLM-70b-Instruct.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 19.2 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-SauerkrautLM-70b-Instruct-i1-GGUF/resolve/main/Llama-3-SauerkrautLM-70b-Instruct.i1-IQ2_XS.gguf) | i1-IQ2_XS | 21.2 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-SauerkrautLM-70b-Instruct-i1-GGUF/resolve/main/Llama-3-SauerkrautLM-70b-Instruct.i1-IQ2_S.gguf) | i1-IQ2_S | 22.3 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-SauerkrautLM-70b-Instruct-i1-GGUF/resolve/main/Llama-3-SauerkrautLM-70b-Instruct.i1-IQ2_M.gguf) | i1-IQ2_M | 24.2 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-SauerkrautLM-70b-Instruct-i1-GGUF/resolve/main/Llama-3-SauerkrautLM-70b-Instruct.i1-Q2_K_S.gguf) | i1-Q2_K_S | 24.6 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-SauerkrautLM-70b-Instruct-i1-GGUF/resolve/main/Llama-3-SauerkrautLM-70b-Instruct.i1-Q2_K.gguf) | i1-Q2_K | 26.5 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-SauerkrautLM-70b-Instruct-i1-GGUF/resolve/main/Llama-3-SauerkrautLM-70b-Instruct.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 27.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-SauerkrautLM-70b-Instruct-i1-GGUF/resolve/main/Llama-3-SauerkrautLM-70b-Instruct.i1-IQ3_XS.gguf) | i1-IQ3_XS | 29.4 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-SauerkrautLM-70b-Instruct-i1-GGUF/resolve/main/Llama-3-SauerkrautLM-70b-Instruct.i1-IQ3_S.gguf) | i1-IQ3_S | 31.0 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-SauerkrautLM-70b-Instruct-i1-GGUF/resolve/main/Llama-3-SauerkrautLM-70b-Instruct.i1-Q3_K_S.gguf) | i1-Q3_K_S | 31.0 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-SauerkrautLM-70b-Instruct-i1-GGUF/resolve/main/Llama-3-SauerkrautLM-70b-Instruct.i1-IQ3_M.gguf) | i1-IQ3_M | 32.0 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-SauerkrautLM-70b-Instruct-i1-GGUF/resolve/main/Llama-3-SauerkrautLM-70b-Instruct.i1-Q3_K_M.gguf) | i1-Q3_K_M | 34.4 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-SauerkrautLM-70b-Instruct-i1-GGUF/resolve/main/Llama-3-SauerkrautLM-70b-Instruct.i1-Q3_K_L.gguf) | i1-Q3_K_L | 37.2 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-SauerkrautLM-70b-Instruct-i1-GGUF/resolve/main/Llama-3-SauerkrautLM-70b-Instruct.i1-IQ4_XS.gguf) | i1-IQ4_XS | 38.0 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-SauerkrautLM-70b-Instruct-i1-GGUF/resolve/main/Llama-3-SauerkrautLM-70b-Instruct.i1-Q4_0.gguf) | i1-Q4_0 | 40.2 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-SauerkrautLM-70b-Instruct-i1-GGUF/resolve/main/Llama-3-SauerkrautLM-70b-Instruct.i1-Q4_K_S.gguf) | i1-Q4_K_S | 40.4 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-SauerkrautLM-70b-Instruct-i1-GGUF/resolve/main/Llama-3-SauerkrautLM-70b-Instruct.i1-Q4_K_M.gguf) | i1-Q4_K_M | 42.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-SauerkrautLM-70b-Instruct-i1-GGUF/resolve/main/Llama-3-SauerkrautLM-70b-Instruct.i1-Q5_K_S.gguf) | i1-Q5_K_S | 48.8 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-SauerkrautLM-70b-Instruct-i1-GGUF/resolve/main/Llama-3-SauerkrautLM-70b-Instruct.i1-Q5_K_M.gguf) | i1-Q5_K_M | 50.0 | |
| [PART 1](https://huggingface.co/mradermacher/Llama-3-SauerkrautLM-70b-Instruct-i1-GGUF/resolve/main/Llama-3-SauerkrautLM-70b-Instruct.i1-Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Llama-3-SauerkrautLM-70b-Instruct-i1-GGUF/resolve/main/Llama-3-SauerkrautLM-70b-Instruct.i1-Q6_K.gguf.part2of2) | i1-Q6_K | 58.0 | practically like static Q6_K |
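For split quants like the Q6_K above, the parts are plain byte-level splits; as the Usage note says, they must be concatenated in order into a single file before loading. A minimal sketch using the file names from the table:

```bash
# concatenate the parts in order to reassemble the full GGUF file
cat Llama-3-SauerkrautLM-70b-Instruct.i1-Q6_K.gguf.part1of2 \
    Llama-3-SauerkrautLM-70b-Instruct.i1-Q6_K.gguf.part2of2 \
    > Llama-3-SauerkrautLM-70b-Instruct.i1-Q6_K.gguf
```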
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
wcyat/CantoneseLLMChat-v1.0-7B-Q5_K_M-GGUF
|
wcyat
| 2024-12-14T15:34:01Z | 7 | 0 |
transformers
|
[
"transformers",
"gguf",
"llama-factory",
"full",
"generated_from_trainer",
"llama-cpp",
"gguf-my-repo",
"base_model:hon9kon9ize/CantoneseLLMChat-v1.0-7B",
"base_model:quantized:hon9kon9ize/CantoneseLLMChat-v1.0-7B",
"license:other",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-12-14T15:33:38Z |
---
license: other
library_name: transformers
tags:
- llama-factory
- full
- generated_from_trainer
- llama-cpp
- gguf-my-repo
base_model: hon9kon9ize/CantoneseLLMChat-v1.0-7B
model-index:
- name: CantoneseLLMChat-v1.0-7B
results: []
---
# wcyat/CantoneseLLMChat-v1.0-7B-Q5_K_M-GGUF
This model was converted to GGUF format from [`hon9kon9ize/CantoneseLLMChat-v1.0-7B`](https://huggingface.co/hon9kon9ize/CantoneseLLMChat-v1.0-7B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/hon9kon9ize/CantoneseLLMChat-v1.0-7B) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo wcyat/CantoneseLLMChat-v1.0-7B-Q5_K_M-GGUF --hf-file cantonesellmchat-v1.0-7b-q5_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo wcyat/CantoneseLLMChat-v1.0-7B-Q5_K_M-GGUF --hf-file cantonesellmchat-v1.0-7b-q5_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (for example, `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo wcyat/CantoneseLLMChat-v1.0-7B-Q5_K_M-GGUF --hf-file cantonesellmchat-v1.0-7b-q5_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo wcyat/CantoneseLLMChat-v1.0-7B-Q5_K_M-GGUF --hf-file cantonesellmchat-v1.0-7b-q5_k_m.gguf -c 2048
```
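Alternatively, a minimal Python sketch using the third-party `llama-cpp-python` bindings (recent versions provide `Llama.from_pretrained`, which also needs `huggingface_hub`; install with `pip install llama-cpp-python huggingface_hub`):
```python
from llama_cpp import Llama

# pull the quant straight from the Hub and load it
llm = Llama.from_pretrained(
    repo_id="wcyat/CantoneseLLMChat-v1.0-7B-Q5_K_M-GGUF",
    filename="cantonesellmchat-v1.0-7b-q5_k_m.gguf",
    n_ctx=2048,  # matches the -c 2048 used in the server example above
)
out = llm("The meaning to life and the universe is", max_tokens=64)
print(out["choices"][0]["text"])
```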
|
mradermacher/Xwin-Math-13B-V1.0-i1-GGUF
|
mradermacher
| 2024-12-14T15:33:47Z | 22 | 0 |
transformers
|
[
"transformers",
"gguf",
"en",
"base_model:Xwin-LM/Xwin-Math-13B-V1.0",
"base_model:quantized:Xwin-LM/Xwin-Math-13B-V1.0",
"license:llama2",
"endpoints_compatible",
"region:us",
"imatrix"
] | null | 2024-12-14T11:41:51Z |
---
base_model: Xwin-LM/Xwin-Math-13B-V1.0
language:
- en
library_name: transformers
license: llama2
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/Xwin-LM/Xwin-Math-13B-V1.0
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Xwin-Math-13B-V1.0-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Xwin-Math-13B-V1.0-i1-GGUF/resolve/main/Xwin-Math-13B-V1.0.i1-IQ1_S.gguf) | i1-IQ1_S | 3.0 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Xwin-Math-13B-V1.0-i1-GGUF/resolve/main/Xwin-Math-13B-V1.0.i1-IQ1_M.gguf) | i1-IQ1_M | 3.2 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Xwin-Math-13B-V1.0-i1-GGUF/resolve/main/Xwin-Math-13B-V1.0.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/Xwin-Math-13B-V1.0-i1-GGUF/resolve/main/Xwin-Math-13B-V1.0.i1-IQ2_XS.gguf) | i1-IQ2_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/Xwin-Math-13B-V1.0-i1-GGUF/resolve/main/Xwin-Math-13B-V1.0.i1-IQ2_S.gguf) | i1-IQ2_S | 4.3 | |
| [GGUF](https://huggingface.co/mradermacher/Xwin-Math-13B-V1.0-i1-GGUF/resolve/main/Xwin-Math-13B-V1.0.i1-Q2_K_S.gguf) | i1-Q2_K_S | 4.5 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/Xwin-Math-13B-V1.0-i1-GGUF/resolve/main/Xwin-Math-13B-V1.0.i1-IQ2_M.gguf) | i1-IQ2_M | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/Xwin-Math-13B-V1.0-i1-GGUF/resolve/main/Xwin-Math-13B-V1.0.i1-Q2_K.gguf) | i1-Q2_K | 5.0 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Xwin-Math-13B-V1.0-i1-GGUF/resolve/main/Xwin-Math-13B-V1.0.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 5.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Xwin-Math-13B-V1.0-i1-GGUF/resolve/main/Xwin-Math-13B-V1.0.i1-IQ3_XS.gguf) | i1-IQ3_XS | 5.5 | |
| [GGUF](https://huggingface.co/mradermacher/Xwin-Math-13B-V1.0-i1-GGUF/resolve/main/Xwin-Math-13B-V1.0.i1-IQ3_S.gguf) | i1-IQ3_S | 5.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Xwin-Math-13B-V1.0-i1-GGUF/resolve/main/Xwin-Math-13B-V1.0.i1-Q3_K_S.gguf) | i1-Q3_K_S | 5.8 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Xwin-Math-13B-V1.0-i1-GGUF/resolve/main/Xwin-Math-13B-V1.0.i1-IQ3_M.gguf) | i1-IQ3_M | 6.1 | |
| [GGUF](https://huggingface.co/mradermacher/Xwin-Math-13B-V1.0-i1-GGUF/resolve/main/Xwin-Math-13B-V1.0.i1-Q3_K_M.gguf) | i1-Q3_K_M | 6.4 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Xwin-Math-13B-V1.0-i1-GGUF/resolve/main/Xwin-Math-13B-V1.0.i1-Q3_K_L.gguf) | i1-Q3_K_L | 7.0 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Xwin-Math-13B-V1.0-i1-GGUF/resolve/main/Xwin-Math-13B-V1.0.i1-IQ4_XS.gguf) | i1-IQ4_XS | 7.1 | |
| [GGUF](https://huggingface.co/mradermacher/Xwin-Math-13B-V1.0-i1-GGUF/resolve/main/Xwin-Math-13B-V1.0.i1-Q4_0.gguf) | i1-Q4_0 | 7.5 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Xwin-Math-13B-V1.0-i1-GGUF/resolve/main/Xwin-Math-13B-V1.0.i1-Q4_K_S.gguf) | i1-Q4_K_S | 7.5 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Xwin-Math-13B-V1.0-i1-GGUF/resolve/main/Xwin-Math-13B-V1.0.i1-Q4_K_M.gguf) | i1-Q4_K_M | 8.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Xwin-Math-13B-V1.0-i1-GGUF/resolve/main/Xwin-Math-13B-V1.0.i1-Q5_K_S.gguf) | i1-Q5_K_S | 9.1 | |
| [GGUF](https://huggingface.co/mradermacher/Xwin-Math-13B-V1.0-i1-GGUF/resolve/main/Xwin-Math-13B-V1.0.i1-Q5_K_M.gguf) | i1-Q5_K_M | 9.3 | |
| [GGUF](https://huggingface.co/mradermacher/Xwin-Math-13B-V1.0-i1-GGUF/resolve/main/Xwin-Math-13B-V1.0.i1-Q6_K.gguf) | i1-Q6_K | 10.8 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
bartowski/openbuddy-llama3.3-70b-v24.1-131k-GGUF
|
bartowski
| 2024-12-14T15:32:04Z | 1,141 | 0 | null |
[
"gguf",
"llama-3.3",
"text-generation",
"zh",
"en",
"fr",
"de",
"ja",
"ko",
"it",
"fi",
"base_model:OpenBuddy/openbuddy-llama3.3-70b-v24.1-131k",
"base_model:quantized:OpenBuddy/openbuddy-llama3.3-70b-v24.1-131k",
"license:llama3.3",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] |
text-generation
| 2024-12-14T11:24:49Z |
---
quantized_by: bartowski
pipeline_tag: text-generation
tags:
- llama-3.3
language:
- zh
- en
- fr
- de
- ja
- ko
- it
- fi
license: llama3.3
base_model: OpenBuddy/openbuddy-llama3.3-70b-v24.1-131k
---
## Llamacpp imatrix Quantizations of openbuddy-llama3.3-70b-v24.1-131k
Using <a href="https://github.com/ggerganov/llama.cpp/">llama.cpp</a> release <a href="https://github.com/ggerganov/llama.cpp/releases/tag/b4273">b4273</a> for quantization.
Original model: https://huggingface.co/OpenBuddy/openbuddy-llama3.3-70b-v24.1-131k
All quants made using imatrix option with dataset from [here](https://gist.github.com/bartowski1182/eb213dccb3571f863da82e99418f81e8)
Run them in [LM Studio](https://lmstudio.ai/)
## Prompt format
```
<|role|>system<|says|>{system_prompt}<|end|>
<|role|>user<|says|>{prompt}<|end|>
<|role|>assistant<|says|>
```
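A minimal Python sketch that fills this template in (the system prompt and user message are placeholders):
```python
# assemble the OpenBuddy chat template shown above
system_prompt = "You are a helpful assistant."
prompt = "Hello!"

text = (
    f"<|role|>system<|says|>{system_prompt}<|end|>\n"
    f"<|role|>user<|says|>{prompt}<|end|>\n"
    f"<|role|>assistant<|says|>"
)
print(text)
```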
## Download a file (not the whole branch) from below:
| Filename | Quant type | File Size | Split | Description |
| -------- | ---------- | --------- | ----- | ----------- |
| [openbuddy-llama3.3-70b-v24.1-131k-Q8_0.gguf](https://huggingface.co/bartowski/openbuddy-llama3.3-70b-v24.1-131k-GGUF/tree/main/openbuddy-llama3.3-70b-v24.1-131k-Q8_0) | Q8_0 | 74.98GB | true | Extremely high quality, generally unneeded but max available quant. |
| [openbuddy-llama3.3-70b-v24.1-131k-Q6_K.gguf](https://huggingface.co/bartowski/openbuddy-llama3.3-70b-v24.1-131k-GGUF/tree/main/openbuddy-llama3.3-70b-v24.1-131k-Q6_K) | Q6_K | 57.89GB | true | Very high quality, near perfect, *recommended*. |
| [openbuddy-llama3.3-70b-v24.1-131k-Q5_K_M.gguf](https://huggingface.co/bartowski/openbuddy-llama3.3-70b-v24.1-131k-GGUF/tree/main/openbuddy-llama3.3-70b-v24.1-131k-Q5_K_M) | Q5_K_M | 49.95GB | true | High quality, *recommended*. |
| [openbuddy-llama3.3-70b-v24.1-131k-Q5_K_S.gguf](https://huggingface.co/bartowski/openbuddy-llama3.3-70b-v24.1-131k-GGUF/blob/main/openbuddy-llama3.3-70b-v24.1-131k-Q5_K_S.gguf) | Q5_K_S | 48.66GB | false | High quality, *recommended*. |
| [openbuddy-llama3.3-70b-v24.1-131k-Q4_K_M.gguf](https://huggingface.co/bartowski/openbuddy-llama3.3-70b-v24.1-131k-GGUF/blob/main/openbuddy-llama3.3-70b-v24.1-131k-Q4_K_M.gguf) | Q4_K_M | 42.52GB | false | Good quality, default size for most use cases, *recommended*. |
| [openbuddy-llama3.3-70b-v24.1-131k-Q4_K_S.gguf](https://huggingface.co/bartowski/openbuddy-llama3.3-70b-v24.1-131k-GGUF/blob/main/openbuddy-llama3.3-70b-v24.1-131k-Q4_K_S.gguf) | Q4_K_S | 40.35GB | false | Slightly lower quality with more space savings, *recommended*. |
| [openbuddy-llama3.3-70b-v24.1-131k-Q4_0.gguf](https://huggingface.co/bartowski/openbuddy-llama3.3-70b-v24.1-131k-GGUF/blob/main/openbuddy-llama3.3-70b-v24.1-131k-Q4_0.gguf) | Q4_0 | 40.12GB | false | Legacy format, offers online repacking for ARM CPU inference. |
| [openbuddy-llama3.3-70b-v24.1-131k-IQ4_NL.gguf](https://huggingface.co/bartowski/openbuddy-llama3.3-70b-v24.1-131k-GGUF/blob/main/openbuddy-llama3.3-70b-v24.1-131k-IQ4_NL.gguf) | IQ4_NL | 40.05GB | false | Similar to IQ4_XS, but slightly larger. Offers online repacking for ARM CPU inference. |
| [openbuddy-llama3.3-70b-v24.1-131k-Q4_0_8_8.gguf](https://huggingface.co/bartowski/openbuddy-llama3.3-70b-v24.1-131k-GGUF/blob/main/openbuddy-llama3.3-70b-v24.1-131k-Q4_0_8_8.gguf) | Q4_0_8_8 | 39.97GB | false | Optimized for ARM and AVX inference. Requires 'sve' support for ARM (see details below). *Don't use on Mac*. |
| [openbuddy-llama3.3-70b-v24.1-131k-Q4_0_4_8.gguf](https://huggingface.co/bartowski/openbuddy-llama3.3-70b-v24.1-131k-GGUF/blob/main/openbuddy-llama3.3-70b-v24.1-131k-Q4_0_4_8.gguf) | Q4_0_4_8 | 39.97GB | false | Optimized for ARM inference. Requires 'i8mm' support (see details below). *Don't use on Mac*. |
| [openbuddy-llama3.3-70b-v24.1-131k-Q4_0_4_4.gguf](https://huggingface.co/bartowski/openbuddy-llama3.3-70b-v24.1-131k-GGUF/blob/main/openbuddy-llama3.3-70b-v24.1-131k-Q4_0_4_4.gguf) | Q4_0_4_4 | 39.97GB | false | Optimized for ARM inference. Should work well on all ARM chips, not for use with GPUs. *Don't use on Mac*. |
| [openbuddy-llama3.3-70b-v24.1-131k-Q3_K_XL.gguf](https://huggingface.co/bartowski/openbuddy-llama3.3-70b-v24.1-131k-GGUF/blob/main/openbuddy-llama3.3-70b-v24.1-131k-Q3_K_XL.gguf) | Q3_K_XL | 38.06GB | false | Uses Q8_0 for embed and output weights. Lower quality but usable, good for low RAM availability. |
| [openbuddy-llama3.3-70b-v24.1-131k-IQ4_XS.gguf](https://huggingface.co/bartowski/openbuddy-llama3.3-70b-v24.1-131k-GGUF/blob/main/openbuddy-llama3.3-70b-v24.1-131k-IQ4_XS.gguf) | IQ4_XS | 37.90GB | false | Decent quality, smaller than Q4_K_S with similar performance, *recommended*. |
| [openbuddy-llama3.3-70b-v24.1-131k-Q3_K_L.gguf](https://huggingface.co/bartowski/openbuddy-llama3.3-70b-v24.1-131k-GGUF/blob/main/openbuddy-llama3.3-70b-v24.1-131k-Q3_K_L.gguf) | Q3_K_L | 37.14GB | false | Lower quality but usable, good for low RAM availability. |
| [openbuddy-llama3.3-70b-v24.1-131k-Q3_K_M.gguf](https://huggingface.co/bartowski/openbuddy-llama3.3-70b-v24.1-131k-GGUF/blob/main/openbuddy-llama3.3-70b-v24.1-131k-Q3_K_M.gguf) | Q3_K_M | 34.27GB | false | Low quality. |
| [openbuddy-llama3.3-70b-v24.1-131k-IQ3_M.gguf](https://huggingface.co/bartowski/openbuddy-llama3.3-70b-v24.1-131k-GGUF/blob/main/openbuddy-llama3.3-70b-v24.1-131k-IQ3_M.gguf) | IQ3_M | 31.94GB | false | Medium-low quality, new method with decent performance comparable to Q3_K_M. |
| [openbuddy-llama3.3-70b-v24.1-131k-Q3_K_S.gguf](https://huggingface.co/bartowski/openbuddy-llama3.3-70b-v24.1-131k-GGUF/blob/main/openbuddy-llama3.3-70b-v24.1-131k-Q3_K_S.gguf) | Q3_K_S | 30.91GB | false | Low quality, not recommended. |
| [openbuddy-llama3.3-70b-v24.1-131k-IQ3_XS.gguf](https://huggingface.co/bartowski/openbuddy-llama3.3-70b-v24.1-131k-GGUF/blob/main/openbuddy-llama3.3-70b-v24.1-131k-IQ3_XS.gguf) | IQ3_XS | 29.31GB | false | Lower quality, new method with decent performance, slightly better than Q3_K_S. |
| [openbuddy-llama3.3-70b-v24.1-131k-Q2_K_L.gguf](https://huggingface.co/bartowski/openbuddy-llama3.3-70b-v24.1-131k-GGUF/blob/main/openbuddy-llama3.3-70b-v24.1-131k-Q2_K_L.gguf) | Q2_K_L | 27.40GB | false | Uses Q8_0 for embed and output weights. Very low quality but surprisingly usable. |
| [openbuddy-llama3.3-70b-v24.1-131k-Q2_K.gguf](https://huggingface.co/bartowski/openbuddy-llama3.3-70b-v24.1-131k-GGUF/blob/main/openbuddy-llama3.3-70b-v24.1-131k-Q2_K.gguf) | Q2_K | 26.38GB | false | Very low quality but surprisingly usable. |
| [openbuddy-llama3.3-70b-v24.1-131k-IQ2_M.gguf](https://huggingface.co/bartowski/openbuddy-llama3.3-70b-v24.1-131k-GGUF/blob/main/openbuddy-llama3.3-70b-v24.1-131k-IQ2_M.gguf) | IQ2_M | 24.12GB | false | Relatively low quality, uses SOTA techniques to be surprisingly usable. |
| [openbuddy-llama3.3-70b-v24.1-131k-IQ2_S.gguf](https://huggingface.co/bartowski/openbuddy-llama3.3-70b-v24.1-131k-GGUF/blob/main/openbuddy-llama3.3-70b-v24.1-131k-IQ2_S.gguf) | IQ2_S | 22.24GB | false | Low quality, uses SOTA techniques to be usable. |
| [openbuddy-llama3.3-70b-v24.1-131k-IQ2_XS.gguf](https://huggingface.co/bartowski/openbuddy-llama3.3-70b-v24.1-131k-GGUF/blob/main/openbuddy-llama3.3-70b-v24.1-131k-IQ2_XS.gguf) | IQ2_XS | 21.14GB | false | Low quality, uses SOTA techniques to be usable. |
| [openbuddy-llama3.3-70b-v24.1-131k-IQ2_XXS.gguf](https://huggingface.co/bartowski/openbuddy-llama3.3-70b-v24.1-131k-GGUF/blob/main/openbuddy-llama3.3-70b-v24.1-131k-IQ2_XXS.gguf) | IQ2_XXS | 19.10GB | false | Very low quality, uses SOTA techniques to be usable. |
## Embed/output weights
Some of these quants (Q3_K_XL, Q4_K_L etc) use the standard quantization method, but with the embeddings and output weights quantized to Q8_0 instead of their usual default.
## Downloading using huggingface-cli
<details>
<summary>Click to view download instructions</summary>
First, make sure you have huggingface-cli installed:
```
pip install -U "huggingface_hub[cli]"
```
Then, you can target the specific file you want:
```
huggingface-cli download bartowski/openbuddy-llama3.3-70b-v24.1-131k-GGUF --include "openbuddy-llama3.3-70b-v24.1-131k-Q4_K_M.gguf" --local-dir ./
```
If the model is bigger than 50GB, it will have been split into multiple files. In order to download them all to a local folder, run:
```
huggingface-cli download bartowski/openbuddy-llama3.3-70b-v24.1-131k-GGUF --include "openbuddy-llama3.3-70b-v24.1-131k-Q8_0/*" --local-dir ./
```
You can either specify a new local-dir (openbuddy-llama3.3-70b-v24.1-131k-Q8_0) or download them all in place (./)
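The same downloads can be scripted with the `huggingface_hub` Python API; a minimal sketch for a single file (for the split Q8_0, `snapshot_download` with `allow_patterns` covers the folder):
```python
from huggingface_hub import hf_hub_download

# fetch one quant file from the repo into the current directory
path = hf_hub_download(
    repo_id="bartowski/openbuddy-llama3.3-70b-v24.1-131k-GGUF",
    filename="openbuddy-llama3.3-70b-v24.1-131k-Q4_K_M.gguf",
    local_dir=".",
)
print(path)  # local path of the downloaded GGUF
```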
</details>
## ARM/AVX information
Previously, you would download Q4_0_4_4/4_8/8_8, and these would have their weights interleaved in memory in order to improve performance on ARM and AVX machines by loading up more data in one pass.
Now, however, there is something called "online repacking" for weights; details are in [this PR](https://github.com/ggerganov/llama.cpp/pull/9921). If you use Q4_0 and your hardware would benefit from repacking weights, it will do so automatically on the fly.
As of llama.cpp build [b4282](https://github.com/ggerganov/llama.cpp/releases/tag/b4282) you will not be able to run the Q4_0_X_X files and will instead need to use Q4_0.
Additionally, if you want slightly better quality, you can use IQ4_NL thanks to [this PR](https://github.com/ggerganov/llama.cpp/pull/10541), which will also repack the weights for ARM, though only the 4_4 layout for now. Loading may be slower, but it will result in an overall speed increase.
<details>
<summary>Click to view Q4_0_X_X information</summary>
These are *NOT* for Metal (Apple) or GPU (nvidia/AMD/intel) offloading, only ARM chips (and certain AVX2/AVX512 CPUs).
If you're using an ARM chip, the Q4_0_X_X quants will have a substantial speedup. Check out Q4_0_4_4 speed comparisons [on the original pull request](https://github.com/ggerganov/llama.cpp/pull/5780#pullrequestreview-21657544660)
To check which one would work best for your ARM chip, you can check [AArch64 SoC features](https://gpages.juszkiewicz.com.pl/arm-socs-table/arm-socs.html) (thanks EloyOn!).
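On a Linux ARM box you can also script that check; a minimal sketch that scans `/proc/cpuinfo` for the flags named above (Linux-only, and flag naming can vary by kernel):
```python
# look for the ARM feature flags mentioned above (Linux-only sketch)
flags = open("/proc/cpuinfo").read().split()
for feature in ("sve", "i8mm"):
    print(feature, "supported" if feature in flags else "not found")
```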
If you're using a CPU that supports AVX2 or AVX512 (typically server CPUs and AMD's latest Zen5 CPUs) and are not offloading to a GPU, the Q4_0_8_8 may offer a nice speedup as well:
<details>
<summary>Click to view benchmarks on an AVX2 system (EPYC7702)</summary>
| model | size | params | backend | threads | test | t/s | % (vs Q4_0) |
| ------------------------------ | ---------: | ---------: | ---------- | ------: | ------------: | -------------------: |-------------: |
| qwen2 3B Q4_0 | 1.70 GiB | 3.09 B | CPU | 64 | pp512 | 204.03 ± 1.03 | 100% |
| qwen2 3B Q4_0 | 1.70 GiB | 3.09 B | CPU | 64 | pp1024 | 282.92 ± 0.19 | 100% |
| qwen2 3B Q4_0 | 1.70 GiB | 3.09 B | CPU | 64 | pp2048 | 259.49 ± 0.44 | 100% |
| qwen2 3B Q4_0 | 1.70 GiB | 3.09 B | CPU | 64 | tg128 | 39.12 ± 0.27 | 100% |
| qwen2 3B Q4_0 | 1.70 GiB | 3.09 B | CPU | 64 | tg256 | 39.31 ± 0.69 | 100% |
| qwen2 3B Q4_0 | 1.70 GiB | 3.09 B | CPU | 64 | tg512 | 40.52 ± 0.03 | 100% |
| qwen2 3B Q4_K_M | 1.79 GiB | 3.09 B | CPU | 64 | pp512 | 301.02 ± 1.74 | 147% |
| qwen2 3B Q4_K_M | 1.79 GiB | 3.09 B | CPU | 64 | pp1024 | 287.23 ± 0.20 | 101% |
| qwen2 3B Q4_K_M | 1.79 GiB | 3.09 B | CPU | 64 | pp2048 | 262.77 ± 1.81 | 101% |
| qwen2 3B Q4_K_M | 1.79 GiB | 3.09 B | CPU | 64 | tg128 | 18.80 ± 0.99 | 48% |
| qwen2 3B Q4_K_M | 1.79 GiB | 3.09 B | CPU | 64 | tg256 | 24.46 ± 3.04 | 83% |
| qwen2 3B Q4_K_M | 1.79 GiB | 3.09 B | CPU | 64 | tg512 | 36.32 ± 3.59 | 90% |
| qwen2 3B Q4_0_8_8 | 1.69 GiB | 3.09 B | CPU | 64 | pp512 | 271.71 ± 3.53 | 133% |
| qwen2 3B Q4_0_8_8 | 1.69 GiB | 3.09 B | CPU | 64 | pp1024 | 279.86 ± 45.63 | 100% |
| qwen2 3B Q4_0_8_8 | 1.69 GiB | 3.09 B | CPU | 64 | pp2048 | 320.77 ± 5.00 | 124% |
| qwen2 3B Q4_0_8_8 | 1.69 GiB | 3.09 B | CPU | 64 | tg128 | 43.51 ± 0.05 | 111% |
| qwen2 3B Q4_0_8_8 | 1.69 GiB | 3.09 B | CPU | 64 | tg256 | 43.35 ± 0.09 | 110% |
| qwen2 3B Q4_0_8_8 | 1.69 GiB | 3.09 B | CPU | 64 | tg512 | 42.60 ± 0.31 | 105% |
Q4_0_8_8 offers a nice bump to prompt processing and a small bump to text generation.
</details>
</details>
## Which file should I choose?
<details>
<summary>Click here for details</summary>
A great write up with charts showing various performances is provided by Artefact2 [here](https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9)
The first thing to figure out is how big a model you can run. To do this, you'll need to figure out how much RAM and/or VRAM you have.
If you want your model running as FAST as possible, you'll want to fit the whole thing on your GPU's VRAM. Aim for a quant with a file size 1-2GB smaller than your GPU's total VRAM.
If you want the absolute maximum quality, add both your system RAM and your GPU's VRAM together, then similarly grab a quant with a file size 1-2GB smaller than that total.
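As a concrete illustration of that sizing rule, here is a small sketch that picks the largest quant fitting a memory budget; the sizes are copied from the table above and the 48 GB budget is an assumption:
```python
# pick the largest quant that fits a memory budget, keeping ~2 GB of headroom
quants = {"Q6_K": 57.89, "Q5_K_M": 49.95, "Q4_K_M": 42.52, "IQ4_XS": 37.90, "IQ3_M": 31.94, "IQ2_M": 24.12}
budget_gb = 48  # assumption: your total VRAM (or VRAM + RAM for max quality)
fits = {name: size for name, size in quants.items() if size <= budget_gb - 2}
print(max(fits, key=fits.get))  # -> Q4_K_M for a 48 GB budget
```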
Next, you'll need to decide if you want to use an 'I-quant' or a 'K-quant'.
If you don't want to think too much, grab one of the K-quants. These are in format 'QX_K_X', like Q5_K_M.
If you want to get more into the weeds, you can check out this extremely useful feature chart:
[llama.cpp feature matrix](https://github.com/ggerganov/llama.cpp/wiki/Feature-matrix)
But basically, if you're aiming for below Q4, and you're running cuBLAS (Nvidia) or rocBLAS (AMD), you should look towards the I-quants. These are in format IQX_X, like IQ3_M. These are newer and offer better performance for their size.
These I-quants can also be used on CPU and Apple Metal, but will be slower than their K-quant equivalent, so speed vs performance is a tradeoff you'll have to decide.
The I-quants are *not* compatible with Vulkan (which also targets AMD cards), so if you have an AMD card, double check whether you're using the rocBLAS build or the Vulkan build. At the time of writing, LM Studio has a preview with ROCm support, and other inference engines have specific builds for ROCm.
</details>
## Credits
Thank you kalomaze and Dampf for assistance in creating the imatrix calibration dataset.
Thank you ZeroWw for the inspiration to experiment with embed/output.
Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski
|
appvoid/arco-exp-18
|
appvoid
| 2024-12-14T15:29:12Z | 154 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"conversational",
"arxiv:2403.19522",
"base_model:appvoid/arco-2",
"base_model:merge:appvoid/arco-2",
"base_model:appvoid/arco-2-reasoning-20k",
"base_model:merge:appvoid/arco-2-reasoning-20k",
"base_model:appvoid/palmer-004-turbo",
"base_model:merge:appvoid/palmer-004-turbo",
"base_model:appvoid/text-arco",
"base_model:merge:appvoid/text-arco",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-12-14T15:28:30Z |
---
base_model:
- appvoid/arco-2-reasoning-20k
- appvoid/palmer-004-turbo
- appvoid/text-arco
- appvoid/arco-2
library_name: transformers
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [Model Stock](https://arxiv.org/abs/2403.19522) merge method using [appvoid/arco-2](https://huggingface.co/appvoid/arco-2) as a base.
### Models Merged
The following models were included in the merge:
* [appvoid/arco-2-reasoning-20k](https://huggingface.co/appvoid/arco-2-reasoning-20k)
* [appvoid/palmer-004-turbo](https://huggingface.co/appvoid/palmer-004-turbo)
* [appvoid/text-arco](https://huggingface.co/appvoid/text-arco)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: appvoid/text-arco
- model: appvoid/arco-2-reasoning-20k
- model: appvoid/palmer-004-turbo
merge_method: model_stock
base_model: appvoid/arco-2
normalize: false
int8_mask: true
dtype: float16
```
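To reproduce a merge like this, besides the `mergekit-yaml` CLI, mergekit exposes a Python entry point; a sketch along the lines of its documented API (names may shift between mergekit versions):
```python
import yaml
from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

# load the YAML shown above, saved locally as merge-config.yml
with open("merge-config.yml", encoding="utf-8") as fp:
    config = MergeConfiguration.model_validate(yaml.safe_load(fp))

run_merge(
    config,
    "./merged-model",                  # output directory for the merged weights
    options=MergeOptions(cuda=False),  # set cuda=True to merge on GPU
)
```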
|
mradermacher/Xwin-Math-13B-V1.0-GGUF
|
mradermacher
| 2024-12-14T15:28:52Z | 12 | 0 |
transformers
|
[
"transformers",
"gguf",
"en",
"base_model:Xwin-LM/Xwin-Math-13B-V1.0",
"base_model:quantized:Xwin-LM/Xwin-Math-13B-V1.0",
"license:llama2",
"endpoints_compatible",
"region:us"
] | null | 2024-12-14T10:53:19Z |
---
base_model: Xwin-LM/Xwin-Math-13B-V1.0
language:
- en
library_name: transformers
license: llama2
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/Xwin-LM/Xwin-Math-13B-V1.0
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Xwin-Math-13B-V1.0-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Xwin-Math-13B-V1.0-GGUF/resolve/main/Xwin-Math-13B-V1.0.Q2_K.gguf) | Q2_K | 5.0 | |
| [GGUF](https://huggingface.co/mradermacher/Xwin-Math-13B-V1.0-GGUF/resolve/main/Xwin-Math-13B-V1.0.Q3_K_S.gguf) | Q3_K_S | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/Xwin-Math-13B-V1.0-GGUF/resolve/main/Xwin-Math-13B-V1.0.Q3_K_M.gguf) | Q3_K_M | 6.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Xwin-Math-13B-V1.0-GGUF/resolve/main/Xwin-Math-13B-V1.0.Q3_K_L.gguf) | Q3_K_L | 7.0 | |
| [GGUF](https://huggingface.co/mradermacher/Xwin-Math-13B-V1.0-GGUF/resolve/main/Xwin-Math-13B-V1.0.IQ4_XS.gguf) | IQ4_XS | 7.1 | |
| [GGUF](https://huggingface.co/mradermacher/Xwin-Math-13B-V1.0-GGUF/resolve/main/Xwin-Math-13B-V1.0.Q4_K_S.gguf) | Q4_K_S | 7.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Xwin-Math-13B-V1.0-GGUF/resolve/main/Xwin-Math-13B-V1.0.Q4_K_M.gguf) | Q4_K_M | 8.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Xwin-Math-13B-V1.0-GGUF/resolve/main/Xwin-Math-13B-V1.0.Q5_K_S.gguf) | Q5_K_S | 9.1 | |
| [GGUF](https://huggingface.co/mradermacher/Xwin-Math-13B-V1.0-GGUF/resolve/main/Xwin-Math-13B-V1.0.Q5_K_M.gguf) | Q5_K_M | 9.3 | |
| [GGUF](https://huggingface.co/mradermacher/Xwin-Math-13B-V1.0-GGUF/resolve/main/Xwin-Math-13B-V1.0.Q6_K.gguf) | Q6_K | 10.8 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Xwin-Math-13B-V1.0-GGUF/resolve/main/Xwin-Math-13B-V1.0.Q8_0.gguf) | Q8_0 | 13.9 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
dada22231/ebfe422c-38d4-40e3-9a02-968b865b24f5
|
dada22231
| 2024-12-14T15:26:28Z | 5 | 0 |
peft
|
[
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:JackFram/llama-160m",
"base_model:adapter:JackFram/llama-160m",
"license:apache-2.0",
"region:us"
] | null | 2024-12-14T15:17:16Z |
---
library_name: peft
license: apache-2.0
base_model: JackFram/llama-160m
tags:
- axolotl
- generated_from_trainer
model-index:
- name: ebfe422c-38d4-40e3-9a02-968b865b24f5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: JackFram/llama-160m
bf16: auto
chat_template: llama3
cosine_min_lr_ratio: 0.1
data_processes: 4
dataset_prepared_path: null
datasets:
- data_files:
- 412bfd271527b67e_train_data.json
ds_type: json
format: custom
num_proc: 4
path: /workspace/input_data/412bfd271527b67e_train_data.json
streaming: true
type:
field_input: ingress
field_instruction: title
field_output: article
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device_map:
? ''
: balanced_low_0
do_eval: true
early_stopping_patience: 1
eval_batch_size: 1
eval_sample_packing: false
eval_steps: 25
evaluation_strategy: steps
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 32
gradient_checkpointing: true
group_by_length: true
hub_model_id: dada22231/ebfe422c-38d4-40e3-9a02-968b865b24f5
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 64
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 32
lora_target_linear: true
lora_target_modules:
- q_proj
- v_proj
lr_scheduler: cosine
max_grad_norm: 0.3
max_memory:
0: 65GB
1: 75GB
2: 75GB
3: 75GB
cpu: 96GB
max_steps: 50
micro_batch_size: 1
mixed_precision: bf16
mlflow_experiment_name: /tmp/412bfd271527b67e_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 3
optim_args:
adam_beta1: 0.9
adam_beta2: 0.95
adam_epsilon: 1e-5
optimizer: adamw_torch
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 25
save_strategy: steps
sequence_len: 2048
special_tokens:
pad_token: </s>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
torch_compile: false
torch_dtype: bfloat16
train_on_inputs: false
trust_remote_code: true
use_cache: false
val_set_size: 50
wandb_entity: null
wandb_mode: online
wandb_name: ebfe422c-38d4-40e3-9a02-968b865b24f5
wandb_project: Public_TuningSN
wandb_runid: ebfe422c-38d4-40e3-9a02-968b865b24f5
warmup_ratio: 0.05
weight_decay: 0.01
xformers_attention: null
```
</details><br>
# ebfe422c-38d4-40e3-9a02-968b865b24f5
This model is a fine-tuned version of [JackFram/llama-160m](https://huggingface.co/JackFram/llama-160m) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 4.4408
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 32
- total_train_batch_size: 128
- total_eval_batch_size: 4
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=adam_beta1=0.9,adam_beta2=0.95,adam_epsilon=1e-5
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 2
- training_steps: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 5.537 | 0.0007 | 1 | 5.0871 |
| 4.6834 | 0.0171 | 25 | 4.5525 |
| 4.5846 | 0.0342 | 50 | 4.4408 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
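The card stops at the training recipe; as a minimal inference sketch, the LoRA adapter can be attached to its base model with PEFT (untested against this exact checkpoint):
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# load the base model, then layer the adapter weights from this repo on top
base = AutoModelForCausalLM.from_pretrained("JackFram/llama-160m")
model = PeftModel.from_pretrained(base, "dada22231/ebfe422c-38d4-40e3-9a02-968b865b24f5")
tokenizer = AutoTokenizer.from_pretrained("JackFram/llama-160m")

inputs = tokenizer("Hello", return_tensors="pt")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=20)[0]))
```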
|
appvoid/arco-exp-16
|
appvoid
| 2024-12-14T15:25:29Z | 153 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"conversational",
"arxiv:2403.19522",
"base_model:appvoid/arco-2",
"base_model:merge:appvoid/arco-2",
"base_model:appvoid/arco-2-reasoning-20k",
"base_model:merge:appvoid/arco-2-reasoning-20k",
"base_model:appvoid/arco-reflection",
"base_model:merge:appvoid/arco-reflection",
"base_model:appvoid/palmer-004-turbo",
"base_model:merge:appvoid/palmer-004-turbo",
"base_model:appvoid/text-arco",
"base_model:merge:appvoid/text-arco",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-12-14T15:24:34Z |
---
base_model:
- appvoid/arco-reflection
- appvoid/arco-2-reasoning-20k
- appvoid/arco-2
- appvoid/text-arco
- appvoid/palmer-004-turbo
library_name: transformers
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [Model Stock](https://arxiv.org/abs/2403.19522) merge method using [appvoid/arco-2-reasoning-20k](https://huggingface.co/appvoid/arco-2-reasoning-20k) as a base.
### Models Merged
The following models were included in the merge:
* [appvoid/arco-reflection](https://huggingface.co/appvoid/arco-reflection)
* [appvoid/arco-2](https://huggingface.co/appvoid/arco-2)
* [appvoid/text-arco](https://huggingface.co/appvoid/text-arco)
* [appvoid/palmer-004-turbo](https://huggingface.co/appvoid/palmer-004-turbo)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: appvoid/arco-reflection
- model: appvoid/text-arco
- model: appvoid/arco-2
- model: appvoid/palmer-004-turbo
merge_method: model_stock
base_model: appvoid/arco-2-reasoning-20k
normalize: false
int8_mask: true
dtype: float16
```
|
MartinElMolon/comparacion_T5_congelado
|
MartinElMolon
| 2024-12-14T15:24:30Z | 113 | 0 |
transformers
|
[
"transformers",
"safetensors",
"t5",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2024-12-14T12:35:38Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
appvoid/arco-exp-14
|
appvoid
| 2024-12-14T15:22:35Z | 153 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"arxiv:2403.19522",
"base_model:appvoid/arco",
"base_model:merge:appvoid/arco",
"base_model:appvoid/arco-2",
"base_model:merge:appvoid/arco-2",
"base_model:appvoid/arco-2-reasoning-20k",
"base_model:merge:appvoid/arco-2-reasoning-20k",
"base_model:appvoid/arco-reflection",
"base_model:merge:appvoid/arco-reflection",
"base_model:appvoid/palmer-004-turbo",
"base_model:merge:appvoid/palmer-004-turbo",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-12-14T15:21:42Z |
---
base_model:
- appvoid/palmer-004-turbo
- appvoid/arco-reflection
- appvoid/arco-2
- appvoid/arco-2-reasoning-20k
- appvoid/arco
library_name: transformers
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [Model Stock](https://arxiv.org/abs/2403.19522) merge method using [appvoid/arco](https://huggingface.co/appvoid/arco) as a base.
### Models Merged
The following models were included in the merge:
* [appvoid/palmer-004-turbo](https://huggingface.co/appvoid/palmer-004-turbo)
* [appvoid/arco-reflection](https://huggingface.co/appvoid/arco-reflection)
* [appvoid/arco-2](https://huggingface.co/appvoid/arco-2)
* [appvoid/arco-2-reasoning-20k](https://huggingface.co/appvoid/arco-2-reasoning-20k)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: appvoid/arco-reflection
- model: appvoid/arco-2-reasoning-20k
- model: appvoid/arco-2
- model: appvoid/palmer-004-turbo
merge_method: model_stock
base_model: appvoid/arco
normalize: false
int8_mask: true
dtype: float16
```
|
Translation-EnKo/exaone3-instrucTrans-v2-enko-7.8b
|
Translation-EnKo
| 2024-12-14T15:16:41Z | 220 | 5 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"translation",
"enko",
"ko",
"conversational",
"en",
"dataset:nayohan/aihub-en-ko-translation-12m",
"dataset:nayohan/instruction_en_ko_translation_1.4m",
"dataset:Translation-EnKo/trc_uniform_313k_eval_45_filtered",
"arxiv:2408.03541",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-08-24T05:18:06Z |
---
language:
- en
- ko
library_name: transformers
tags:
- translation
- enko
- ko
datasets:
- nayohan/aihub-en-ko-translation-12m
- nayohan/instruction_en_ko_translation_1.4m
- Translation-EnKo/trc_uniform_313k_eval_45_filtered
pipeline_tag: text-generation
metrics:
- sacrebleu
---
# **instructTrans-v2**

# **Introduction**
The **exaone3-instrucTrans-v2-enko-7.8b** model is based on exaone-3-7.8B-it and trained on **English→Korean translation datasets**, built to translate English instruction datasets:
- [nayohan/aihub-en-ko-translation-12m](https://huggingface.co/datasets/nayohan/aihub-en-ko-translation-12m)
- [nayohan/instruction_en_ko_translation_1.4m](https://huggingface.co/datasets/nayohan/instruction_en_ko_translation_1.4m)
- [Translation-EnKo/trc_uniform_313k_eval_45_filtered](https://huggingface.co/datasets/Translation-EnKo/trc_uniform_313k_eval_45_filtered)
### **Generating Text**
This model supports translation from English to Korean. To translate text, use the following Python code:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "Translation-EnKo/exaone3-instrucTrans-v2-enko-7.8b"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
model_name,
device_map="auto",
torch_dtype=torch.bfloat16
)
system_prompt="당신은 번역기 입니다. 영어를 한국어로 번역하세요."
sentence = "The aerospace industry is a flower in the field of technology and science."
conversation = [{'role': 'system', 'content': system_prompt},
{'role': 'user', 'content': sentence}]
inputs = tokenizer.apply_chat_template(
conversation,
tokenize=True,
add_generation_prompt=True,
return_tensors='pt'
).to("cuda")
outputs = model.generate(inputs, max_new_tokens=4096) # Finetuned with length 8192
print(tokenizer.decode(outputs[0][len(inputs[0]):], skip_special_tokens=True))
```
### **inference with vLLM**
<details>
<summary>Click to expand the inference code</summary>
<div markdown="1">
```bash
# Requires at least a 24GB VRAM GPU. With 12GB VRAM, you will need to run in FP8 mode.
python vllm_inference.py -gpu_id 0 -split_idx 0 -split_num 2 -dname "nvidia/HelpSteer" -untrans_col 'helpfulness' 'correctness' 'coherence' 'complexity' 'verbosity' > 0.out
python vllm_inference.py -gpu_id 1 -split_idx 1 -split_num 2 -dname "nvidia/HelpSteer" -untrans_col 'helpfulness' 'correctness' 'coherence' 'complexity' 'verbosity' > 1.out
```
```python
import os
import argparse
import pandas as pd
from tqdm import tqdm
from typing import List, Dict
from datasets import load_dataset, Dataset
from transformers import AutoTokenizer
from vllm import LLM, SamplingParams
# truncate sentences with more than 4096 tokens (keeps dataset sizes comparable)
def truncation_func(sample, column_name):
input_ids = tokenizer(str(sample[column_name]), truncation=True, max_length=4096, add_special_tokens=False).input_ids
output = tokenizer.decode(input_ids)
sample[column_name]=output
return sample
# convert to chat_template
def create_conversation(sample, column_name):
SYSTEM_PROMPT=f"당신은 번역기 입니다. 영어 문장을 한국어로 번역하세요."
messages=[
{"role":"system", "content": SYSTEM_PROMPT},
{"role":"user", "content":sample[column_name]}
]
text=tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
sample[column_name]=text
return sample
def load_dataset_preprocess(dataset_name:str, untranslate_column:List, split_num, split_idx, subset=None, num_proc=128) -> Dataset:
step = 100//split_num # split datasets
if subset:
dataset = load_dataset(dataset_name, subset, split=f'train[{step*split_idx}%:{step*(split_idx+1)}%]')
else:
dataset = load_dataset(dataset_name, split=f'train[{step*split_idx}%:{step*(split_idx+1)}%]')
print(dataset)
original_dataset = dataset # To leave columns untranslated
dataset = dataset.remove_columns(untranslate_column)
for feature in dataset.features:
dataset = dataset.map(lambda x: truncation_func(x,feature), num_proc=num_proc) #
dataset = dataset.map(lambda x: create_conversation(x,feature), batched=False, num_proc=num_proc)
print("filtered_dataset:", dataset)
return dataset, original_dataset
def save_dataset(result_dict:Dict, dataset_name, untranslate_column:List, split_idx, subset:str):
for column in untranslate_column:
result_dict[column] = original_dataset[column]
df = pd.DataFrame(result_dict)
output_file_name = dataset_name.split('/')[-1]
os.makedirs('gen', exist_ok=True)
if subset:
save_path = f"gen/{output_file_name}_{subset}_{split_idx}.jsonl"
else:
save_path = f"gen/{output_file_name}_{split_idx}.jsonl"
df.to_json(save_path, lines=True, orient='records', force_ascii=False)
if __name__=="__main__":
model_name = "Translation-EnKo/exaone3-instrucTrans-v2-enko-7.8b"
tokenizer = AutoTokenizer.from_pretrained(model_name)
parser = argparse.ArgumentParser(description='load dataset name & split size')
parser.add_argument('-dname', type=str, default="Magpie-Align/Magpie-Pro-MT-300K-v0.1")
parser.add_argument('-untrans_col', nargs='+', default=[])
parser.add_argument('-split_num', type=int, default=4)
parser.add_argument('-split_idx', type=int, default=0)
parser.add_argument('-gpu_id', type=int, default=0)
parser.add_argument('-subset', type=str, default=None)
parser.add_argument('-num_proc', type=int, default=128)
args = parser.parse_args()
os.environ["CUDA_VISIBLE_DEVICES"]=str(args.gpu_id)
dataset, original_dataset = load_dataset_preprocess(args.dname,
args.untrans_col,
args.split_num,
args.split_idx,
args.subset,
args.num_proc
)
# define model
sampling_params = SamplingParams(
temperature=0,
max_tokens=8192,
)
llm = LLM(
model=model_name,
tensor_parallel_size=1,
gpu_memory_utilization=0.95,
)
# inference model
result_dict = {}
for feature in tqdm(dataset.features):
print(f"'{feature}' column in progress..")
outputs = llm.generate(dataset[feature], sampling_params)
result_dict[feature]=[output.outputs[0].text for output in outputs]
save_dataset(result_dict, args.dname, args.untrans_col, args.split_idx, args.subset)
print(f"saved to json. column: {feature}")
```
</div>
</details>
<br>
# Result
```
# EVAL_RESULT (2405_KO_NEWS) (max_new_tokens=512)
"en_ref":"This controversy arose around a new advertisement for the latest iPad Pro that Apple released on YouTube on the 7th. The ad shows musical instruments, statues, cameras, and paints being crushed in a press, followed by the appearance of the iPad Pro in their place. It appears to emphasize the new iPad Pro's artificial intelligence features, advanced display, performance, and thickness. Apple mentioned that the newly unveiled iPad Pro is equipped with the latest 'M4' chip and is the thinnest device in Apple's history. The ad faced immediate backlash upon release, as it graphically depicts objects symbolizing creators being crushed. Critics argue that the imagery could be interpreted as technology trampling on human creators. Some have also voiced concerns that it evokes a situation where creators are losing ground due to AI."
"ko_ref":"이번 논란은 애플이 지난 7일 유튜브에 공개한 신형 아이패드 프로 광고를 둘러싸고 불거졌다. 해당 광고 영상은 악기와 조각상, 카메라, 물감 등을 압착기로 짓누른 뒤 그 자리에 아이패드 프로를 등장시키는 내용이었다. 신형 아이패드 프로의 인공지능 기능들과 진화된 디스플레이와 성능, 두께 등을 강조하기 위한 취지로 풀이된다. 애플은 이번에 공개한 아이패드 프로에 신형 ‘M4’ 칩이 탑재되며 두께는 애플의 역대 제품 중 가장 얇다는 설명도 덧붙였다. 광고는 공개 직후 거센 비판에 직면했다. 창작자를 상징하는 물건이 짓눌려지는 과정을 지나치게 적나라하게 묘사한 점이 문제가 됐다. 기술이 인간 창작자를 짓밟는 모습을 묘사한 것으로 해석될 여지가 있다는 문제의식이다. 인공지능(AI)으로 인해 창작자가 설 자리가 줄어드는 상황을 연상시킨다는 목소리도 나왔다."
"exaone3-InstrucTrans-v2":"이번 논란은 애플이 지난 7일 유튜브에 공개한 최신형 아이패드 프로의 새 광고를 둘러싸고 불거졌다. 이 광고는 악기, 조각상, 카메라, 물감 등이 프레스기에 짓눌리는 장면에 이어 그 자리에 아이패드 프로가 등장하는 장면을 보여준다. 새로운 아이패드 프로의 인공지능 기능, 첨단 디스플레이, 성능, 두께를 강조하는 것으로 보인다. 애플은 이번에 공개된 아이패드 프로에 최신 'M4' 칩이 탑재됐으며, 애플 역사상 가장 얇은 두께를 자랑한다고 언급했다. 이 광고는 공개되자마자 크리에이터를 상징하는 사물들이 짓밟히는 장면을 그래픽으로 표현해 즉각적인 반발에 부딪혔다. 비평가들은 이 이미지가 기술이 인간 크리에이터를 짓밟는 것으로 해석될 수 있다고 주장한다. 일부에서는 AI로 인해 크리에이터들이 설 자리를 잃는 상황을 연상시킨다는 우려의 목소리도 나왔다."
"llama3-InstrucTrans":"이번 논란은 애플이 지난 7일 유튜브에 공개한 최신 아이패드 프로 광고를 중심으로 불거졌다. 이 광고는 악기, 조각상, 카메라, 물감 등을 누르기 시작하는 장면과 함께 그 자리에 아이패드 프로가 등장하는 장면을 보여준다. 이는 새로운 아이패드 프로의 인공지능 기능, 고급 디스플레이, 성능, 두께를 강조하는 것으로 보인다. 애플은 이번에 공개한 아이패드 프로에 최신 'M4' 칩이 탑재됐으며, 애플 역사상 가장 얇은 기기라고 언급했다. 이 광고는 출시하자마자 크리에이터를 상징하는 물건이 파쇄되는 장면이 그대로 그려져 논란이 되고 있다. 비평가들은 이 이미지가 기술이 인간 크리에이터를 짓밟는다는 의미로 해석될 수 있다고 주장한다. 또한 AI로 인해 크리에이터들이 밀리고 있다는 상황을 연상시킨다는 우려의 목소리도 나온다."
```
<br>
# **Evaluation Result**
Evaluation was performed on datasets selected to measure English→Korean translation performance.
### **Evaluation Dataset Sources**
- Aihub/FLoRes: [traintogpb/aihub-flores-koen-integrated-sparta-30k](https://huggingface.co/datasets/traintogpb/aihub-flores-koen-integrated-sparta-30k) | (test set 1k)
- iwslt-2023 : [shreevigneshs/iwslt-2023-en-ko-train-val-split-0.1](https://huggingface.co/datasets/shreevigneshs/iwslt-2023-en-ko-train-val-split-0.1) | (f_test 597, if_test 597)
- ko_news_2024: [nayohan/ko_news_eval40](https://huggingface.co/datasets/nayohan/ko_news_eval40) | (40)
### **Evaluation Method**
- Unlike the earlier (HF) evaluation, inference was run with vLLM this time. (Common setting: max_new_tokens=512)
- Detailed evaluation procedures follow the earlier instrucTrans results. [[link]](https://huggingface.co/nayohan/llama3-instrucTrans-enko-8b) A minimal scoring sketch follows this list.
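The card metadata lists sacrebleu as the metric; a minimal scoring sketch (the strings are placeholders, not real outputs):
```python
import sacrebleu  # pip install sacrebleu

hyps = ["model translation for the first sentence"]  # system outputs
refs = [["gold reference for the first sentence"]]   # one reference stream, aligned with hyps
print(sacrebleu.corpus_bleu(hyps, refs).score)
```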
<br>
## **Average**
- With vLLM, scores came out lower overall than with HF.
### Performance Comparison by Model
| Model | AIHub | Flores | IWSLT | News | Average |
|:-------------------------------------------------------------------------------------------|:-------:|:-------:|:------:|:-------:|:-------:|
| **Meta-Llama** | | | | | |
| **meta-llama/Meta-Llama-3-8B-Instruct** | 0.3075 | 0.295 | 2.395 | 0.17 | 0.7919 |
| **nayohan/llama3-8b-it-translation-general-en-ko-1sent** | 15.7875 | 8.09 | 4.445 | 4.68 | 8.2506 |
| **nayohan/llama3-instrucTrans-enko-8b** | 16.3938 | 9.63 | 5.405 | 5.3225 | 9.1878 |
| **nayohan/llama3-8b-it-general-trc313k-enko-8k** | 14.7225 | 10.47 | 4.45 | 7.555 | 9.2994 |
| **Gemma** | | | | | |
| **Translation-EnKo/gemma-2-2b-it-general1.2m-trc313eval45** | 13.7775 | 7.88 | 3.95 | 6.105 | 7.9281 |
| **Translation-EnKo/gemma-2-9b-it-general1.2m-trc313eval45** | 18.9887 | 13.215 | 6.28 | 9.975 | 12.1147 |
| **Translation-EnKo/gukbap-gemma-2-9b-it-general1.2m-trc313eval45** | 18.405 | 12.44 | 6.59 | 9.64 | 11.7688 |
| **EXAONE** | | | | | |
| **CarrotAI/EXAONE-3.0-7.8B-Instruct-Llamafied-8k** | 4.9375 | 4.9 | 1.58 | 8.215 | 4.9081 |
| **Translation-EnKo/exaeon3-translation-general-enko-7.8b (private)** | 17.8275 | 8.56 | 2.72 | 6.31 | 8.8544 |
| **Translation-EnKo/exaone3-instrucTrans-v2-enko-7.8b** | 19.6075 | 13.46 | 7.28 | 11.4425 | **12.9475**|
### Performance Analysis by Training Dataset
| Model | AIHub | Flores | IWSLT | News | Average |
|--------------------------------------------------------------|---------|--------|-------|--------|-------------|
| **Meta-Llama** | | | | | |
| Meta-Llama-3-8B-Instruct | 0.3075 | 0.295 | 2.395 | 0.17 | **0.7919** |
| llama3-8b-it-general1.2m-en-ko-4k | 15.7875 | 8.09 | 4.445 | 4.68 | **8.2506** |
| llama3-8b-it-general1.2m-trc313k-enko-4k | 16.3938 | 9.63 | 5.405 | 5.3225 | **9.1878** |
| llama3-8b-it-general1.2m-trc313k-enko-8k | 14.7225 | 10.47 | 4.45 | 7.555 | **9.2994** |
| **Gemma** | | | | | |
| gemma-2-2b-it-general1.2m-trc313eval45 | 13.7775 | 7.88 | 3.95 | 6.105 | **7.9281** |
| gemma-2-9b-it-general1.2m-trc313eval45 | 18.9887 | 13.215 | 6.28 | 9.975 | **12.1147** |
| gukbap-gemma-2-9b-it-general1.2m-trc313eval45 | 18.405 | 12.44 | 6.59 | 9.64 | **11.7688** |
| **EXAONE** | | | | | |
| EXAONE-3.0-7.8B-Instruct | 4.9375 | 4.9 | 1.58 | 8.215 | **4.9081** |
| EXAONE-3.0-7.8B-Instruct-general12m (private) | 17.8275 | 8.56 | 2.72 | 6.31 | **8.8544** |
| EXAONE-3.0-7.8B-Instruct-general12m-trc1400k-trc313eval45 | 19.6075 | 13.46 | 7.28 | 11.4425| **12.9475** |
### **Citation**
```bibtex
@misc{InstrcTrans-v2,
title={exaone3-instrucTrans-v2-enko-7.8b},
author={Yohan Na, Suzie Oh, Eunji Kim, Mingyou sung},
year={2024},
url={https://huggingface.co/Translation-EnKo/exaone3-instrucTrans-v2-enko-7.8b}
}
```
```bibtex
@misc{llama3modelcard,
title={Llama 3 Model Card},
author={AI@Meta},
year={2024},
url={https://github.com/meta-llama/llama3/blob/main/MODEL_CARD.md}
}
```
```bibtex
@article{exaone-3.0-7.8B-instruct,
title={EXAONE 3.0 7.8B Instruction Tuned Language Model},
author={LG AI Research},
journal={arXiv preprint arXiv:2408.03541},
year={2024}
}
```
```bibtex
@article{gemma_2024,
title={Gemma},
url={https://www.kaggle.com/m/3301},
DOI={10.34740/KAGGLE/M/3301},
publisher={Kaggle},
author={Gemma Team},
year={2024}
}
```
|
PrunaAI/appvoid-arco-exp-09-bnb-8bit-smashed
|
PrunaAI
| 2024-12-14T15:08:48Z | 5 | 0 | null |
[
"safetensors",
"llama",
"pruna-ai",
"base_model:appvoid/arco-exp-09",
"base_model:quantized:appvoid/arco-exp-09",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2024-12-14T15:08:13Z |
---
thumbnail: "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg"
base_model: appvoid/arco-exp-09
metrics:
- memory_disk
- memory_inference
- inference_latency
- inference_throughput
- inference_CO2_emissions
- inference_energy_consumption
tags:
- pruna-ai
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<a href="https://docs.pruna.ai/en/latest/setup/pip.html" target="_blank" rel="noopener noreferrer">
<img src="https://imgur.com/rVAgqMY.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</a>
</div>
<!-- header end -->
[](https://twitter.com/PrunaAI)
[](https://github.com/PrunaAI)
[](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following)
[](https://discord.gg/rskEr4BZJx)
# Simply make AI models cheaper, smaller, faster, and greener!
- Give a thumbs up if you like this model!
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- Read the documentation to learn more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/)
- Join Pruna AI community on Discord [here](https://discord.gg/CP4VSgck) to share feedback/suggestions or get help.
## Results

**Frequently Asked Questions**
- ***How does the compression work?*** The model is compressed with llm-int8.
- ***How does the model quality change?*** The quality of the model output might vary compared to the base model.
- ***How is the model efficiency evaluated?*** These results were obtained with the configuration described in `model/smash_config.json`, after a hardware warmup. The smashed model is compared directly to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend running the models directly in your use-case conditions to know whether the smashed model can benefit you.
- ***What is the model format?*** We use safetensors.
- ***What calibration data has been used?*** If needed by the compression method, we used WikiText as the calibration data.
- ***What is the naming convention for Pruna Huggingface models?*** We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model.
- ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- ***What are "first" metrics?*** Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due cuda overheads.
- ***What are "Sync" and "Async" metrics?*** "Sync" metrics are obtained by syncing all GPU processes and stop measurement when all of them are executed. "Async" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend to test the efficiency gains directly in your use-cases.
## Setup
You can run the smashed model with these steps:
0. Check that the requirements from the original repo appvoid/arco-exp-09 are installed. In particular, check the python, cuda, and transformers versions.
1. Make sure that you have installed quantization related packages.
```bash
pip install transformers accelerate "bitsandbytes>0.37.0"
```
2. Load & run the model.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the pre-quantized (8-bit) smashed model and the original tokenizer.
model = AutoModelForCausalLM.from_pretrained("PrunaAI/appvoid-arco-exp-09-bnb-8bit-smashed", trust_remote_code=True, device_map='auto')
tokenizer = AutoTokenizer.from_pretrained("appvoid/arco-exp-09")

input_ids = tokenizer("What is the color of prunes?", return_tensors='pt').to(model.device)["input_ids"]
outputs = model.generate(input_ids, max_new_tokens=216)
print(tokenizer.decode(outputs[0]))
```
## Configurations
The configuration info is in `smash_config.json`.
## Credits & License
The license of the smashed model follows the license of the original model. Please check the license of the original model appvoid/arco-exp-09, which provided the base model, before using this model. The license of the `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on PyPI.
## Want to compress other models?
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Do it by yourself [here](https://docs.pruna.ai/en/latest/setup/pip.html).
|
dada22231/b2bdce73-b029-4c0d-9cab-4425ac192934
|
dada22231
| 2024-12-14T15:04:15Z | 5 | 0 |
peft
|
[
"peft",
"safetensors",
"gemma",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/gemma-7b-it",
"base_model:adapter:unsloth/gemma-7b-it",
"license:apache-2.0",
"region:us"
] | null | 2024-12-14T14:23:10Z |
---
library_name: peft
license: apache-2.0
base_model: unsloth/gemma-7b-it
tags:
- axolotl
- generated_from_trainer
model-index:
- name: b2bdce73-b029-4c0d-9cab-4425ac192934
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/gemma-7b-it
bf16: auto
chat_template: llama3
cosine_min_lr_ratio: 0.1
data_processes: 4
dataset_prepared_path: null
datasets:
- data_files:
- dc40b002f7ed77e8_train_data.json
ds_type: json
format: custom
num_proc: 4
path: /workspace/input_data/dc40b002f7ed77e8_train_data.json
streaming: true
type:
field_instruction: en
field_output: id
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device_map:
  '': balanced_low_0
do_eval: true
early_stopping_patience: 1
eval_batch_size: 1
eval_sample_packing: false
eval_steps: 25
evaluation_strategy: steps
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 32
gradient_checkpointing: true
group_by_length: true
hub_model_id: dada22231/b2bdce73-b029-4c0d-9cab-4425ac192934
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 64
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 32
lora_target_linear: true
lora_target_modules:
- q_proj
- v_proj
lr_scheduler: cosine
max_grad_norm: 0.3
max_memory:
0: 65GB
1: 75GB
2: 75GB
3: 75GB
cpu: 96GB
max_steps: 50
micro_batch_size: 1
mixed_precision: bf16
mlflow_experiment_name: /tmp/dc40b002f7ed77e8_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 3
optim_args:
adam_beta1: 0.9
adam_beta2: 0.95
adam_epsilon: 1e-5
optimizer: adamw_torch
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 25
save_strategy: steps
sequence_len: 2048
strict: false
tf32: false
tokenizer_type: AutoTokenizer
torch_compile: false
torch_dtype: bfloat16
train_on_inputs: false
trust_remote_code: true
use_cache: false
val_set_size: 50
wandb_entity: null
wandb_mode: online
wandb_name: b2bdce73-b029-4c0d-9cab-4425ac192934
wandb_project: Public_TuningSN
wandb_runid: b2bdce73-b029-4c0d-9cab-4425ac192934
warmup_ratio: 0.05
weight_decay: 0.01
xformers_attention: null
```
</details><br>
# b2bdce73-b029-4c0d-9cab-4425ac192934
This model is a fine-tuned version of [unsloth/gemma-7b-it](https://huggingface.co/unsloth/gemma-7b-it) on the dataset described in the configuration above.
It achieves the following results on the evaluation set:
- Loss: 1.4853
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 32
- total_train_batch_size: 128
- total_eval_batch_size: 4
- optimizer: AdamW (torch) with effective betas=(0.9, 0.95) and epsilon=1e-5 (the defaults betas=(0.9, 0.999) and epsilon=1e-08 were overridden via optimizer_args: adam_beta1=0.9, adam_beta2=0.95, adam_epsilon=1e-5)
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 2
- training_steps: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 3.9133 | 0.0026 | 1 | 6.5665 |
| 1.4544 | 0.0658 | 25 | 1.6196 |
| 1.3318 | 0.1315 | 50 | 1.4853 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
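For reference, here is a minimal sketch of loading this LoRA adapter on top of its base model with `peft` (standard PEFT usage rather than part of the training setup above; the prompt is illustrative):
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the base model, then attach the fine-tuned LoRA adapter from this repo.
base = AutoModelForCausalLM.from_pretrained("unsloth/gemma-7b-it", device_map="auto")
model = PeftModel.from_pretrained(base, "dada22231/b2bdce73-b029-4c0d-9cab-4425ac192934")
tokenizer = AutoTokenizer.from_pretrained("unsloth/gemma-7b-it")

inputs = tokenizer("Translate to Indonesian: good morning", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=32)[0]))
```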
|
seregadgl/bge_v4_rev2
|
seregadgl
| 2024-12-14T15:03:16Z | 119 | 0 |
transformers
|
[
"transformers",
"safetensors",
"xlm-roberta",
"text-classification",
"cross-encoder",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-12-14T15:01:52Z |
---
library_name: transformers
tags:
- cross-encoder
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
seonggyun/dreambooth_metal_nut
|
seonggyun
| 2024-12-14T15:02:46Z | 30 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2024-12-14T15:00:16Z |
---
library_name: diffusers
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🧨 diffusers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
zelk12/MT1-Gen4-MUMA-gemma-2-9B
|
zelk12
| 2024-12-14T14:59:06Z | 7 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gemma2",
"text-generation",
"mergekit",
"merge",
"conversational",
"base_model:zelk12/MT1-Gen4-MA-gemma-2-S5S4-9B",
"base_model:merge:zelk12/MT1-Gen4-MA-gemma-2-S5S4-9B",
"base_model:zelk12/MT1-Gen4-MU-gemma-2-S2S5-9B",
"base_model:merge:zelk12/MT1-Gen4-MU-gemma-2-S2S5-9B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-12-14T14:52:47Z |
---
base_model:
- zelk12/MT1-Gen4-MU-gemma-2-S2S5-9B
- zelk12/MT1-Gen4-MA-gemma-2-S5S4-9B
library_name: transformers
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* [zelk12/MT1-Gen4-MU-gemma-2-S2S5-9B](https://huggingface.co/zelk12/MT1-Gen4-MU-gemma-2-S2S5-9B)
* [zelk12/MT1-Gen4-MA-gemma-2-S5S4-9B](https://huggingface.co/zelk12/MT1-Gen4-MA-gemma-2-S5S4-9B)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: zelk12/MT1-Gen4-MU-gemma-2-S2S5-9B
- model: zelk12/MT1-Gen4-MA-gemma-2-S5S4-9B
merge_method: slerp
base_model: zelk12/MT1-Gen4-MU-gemma-2-S2S5-9B
dtype: bfloat16
parameters:
t: 0.25
```
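To reproduce a merge like this one, the configuration above can be passed to mergekit's command-line tool (a sketch assuming the YAML is saved as `config.yaml`; the output directory name is arbitrary):
```bash
pip install mergekit
mergekit-yaml config.yaml ./MT1-Gen4-MUMA-gemma-2-9B
```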
|
ArkadiusDS/polberta-base-polish-manipulation
|
ArkadiusDS
| 2024-12-14T14:57:40Z | 122 | 0 |
transformers
|
[
"transformers",
"safetensors",
"roberta",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-12-14T14:57:19Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
appvoid/arco-exp-09
|
appvoid
| 2024-12-14T14:57:08Z | 154 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"conversational",
"arxiv:2306.01708",
"base_model:appvoid/arco-2",
"base_model:merge:appvoid/arco-2",
"base_model:appvoid/arco-chat-v0.1",
"base_model:merge:appvoid/arco-chat-v0.1",
"base_model:appvoid/arco-reflection",
"base_model:merge:appvoid/arco-reflection",
"base_model:appvoid/danube-reason-4ep",
"base_model:merge:appvoid/danube-reason-4ep",
"base_model:appvoid/danube-reasoner",
"base_model:merge:appvoid/danube-reasoner",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-12-14T14:52:38Z |
---
base_model:
- appvoid/danube-reason-4ep
- appvoid/arco-chat-v0.1
- appvoid/danube-reasoner
- appvoid/arco-reflection
- appvoid/arco-2
library_name: transformers
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [TIES](https://arxiv.org/abs/2306.01708) merge method using [appvoid/arco-2](https://huggingface.co/appvoid/arco-2) as a base.
### Models Merged
The following models were included in the merge:
* [appvoid/danube-reason-4ep](https://huggingface.co/appvoid/danube-reason-4ep)
* [appvoid/arco-chat-v0.1](https://huggingface.co/appvoid/arco-chat-v0.1)
* [appvoid/danube-reasoner](https://huggingface.co/appvoid/danube-reasoner)
* [appvoid/arco-reflection](https://huggingface.co/appvoid/arco-reflection)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: appvoid/danube-reasoner
parameters:
density: 0.51
weight: 0.4
- model: appvoid/danube-reason-4ep
parameters:
density: 0.51
weight: 0.5
- model: appvoid/arco-chat-v0.1
parameters:
density: 0.51
weight: 0.3
- model: appvoid/arco-reflection
parameters:
density: 0.51
weight: 0.4
merge_method: ties
base_model: appvoid/arco-2
parameters:
normalize: false
int8_mask: true
dtype: float16
```
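Once merged, the result is a regular `transformers` causal language model; a minimal usage sketch (the prompt is illustrative):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("appvoid/arco-exp-09", torch_dtype="auto", device_map="auto")
tokenizer = AutoTokenizer.from_pretrained("appvoid/arco-exp-09")

inputs = tokenizer("The capital of France is", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=16)[0]))
```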
|
appvoid/arco-exp-08
|
appvoid
| 2024-12-14T14:51:33Z | 153 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"conversational",
"arxiv:2306.01708",
"base_model:appvoid/arco-2",
"base_model:merge:appvoid/arco-2",
"base_model:appvoid/arco-chat-v0.1",
"base_model:merge:appvoid/arco-chat-v0.1",
"base_model:appvoid/arco-reflection",
"base_model:merge:appvoid/arco-reflection",
"base_model:appvoid/danube-reason-4ep",
"base_model:merge:appvoid/danube-reason-4ep",
"base_model:appvoid/danube-reasoner",
"base_model:merge:appvoid/danube-reasoner",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-12-14T14:47:02Z |
---
base_model:
- appvoid/danube-reason-4ep
- appvoid/arco-2
- appvoid/danube-reasoner
- appvoid/arco-chat-v0.1
- appvoid/arco-reflection
library_name: transformers
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [TIES](https://arxiv.org/abs/2306.01708) merge method using [appvoid/arco-2](https://huggingface.co/appvoid/arco-2) as a base.
### Models Merged
The following models were included in the merge:
* [appvoid/danube-reason-4ep](https://huggingface.co/appvoid/danube-reason-4ep)
* [appvoid/danube-reasoner](https://huggingface.co/appvoid/danube-reasoner)
* [appvoid/arco-chat-v0.1](https://huggingface.co/appvoid/arco-chat-v0.1)
* [appvoid/arco-reflection](https://huggingface.co/appvoid/arco-reflection)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: appvoid/danube-reasoner
parameters:
density: 0.6
weight: 0.4
- model: appvoid/danube-reason-4ep
parameters:
density: 0.6
weight: 0.4
- model: appvoid/arco-chat-v0.1
parameters:
density: 0.6
weight: 0.3
- model: appvoid/arco-reflection
parameters:
density: 0.6
weight: 0.4
merge_method: ties
base_model: appvoid/arco-2
parameters:
normalize: false
int8_mask: true
dtype: float16
```
|
osiria/bert-italian-uncased-question-answering
|
osiria
| 2024-12-14T14:48:14Z | 189 | 0 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"bert",
"question-answering",
"it",
"dataset:squad_it",
"arxiv:1810.04805",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2023-12-09T11:40:33Z |
---
license: apache-2.0
language:
- it
datasets:
- squad_it
widget:
- text: quale libro fu scritto da alessandro manzoni?
context: alessandro manzoni pubblicò la prima versione de i promessi sposi nel 1827
- text: in quali competizioni gareggia la ferrari?
context: la scuderia ferrari è una squadra corse italiana di formula 1 con sede a maranello
- text: quale sport è riferito alla serie a?
context: il campionato di serie a è la massima divisione professionistica del campionato italiano di calcio maschile
model-index:
- name: osiria/bert-italian-cased-question-answering
results:
- task:
type: question-answering
name: Question Answering
dataset:
name: squad_it
type: squad_it
metrics:
- type: exact-match
value: 0.6560
name: Exact Match
- type: f1
value: 0.7716
name: F1
pipeline_tag: question-answering
---
--------------------------------------------------------------------------------------------------
<body>
<span class="vertical-text" style="background-color:lightgreen;border-radius: 3px;padding: 3px;"> </span>
<br>
<span class="vertical-text" style="background-color:orange;border-radius: 3px;padding: 3px;"> Task: Question Answering</span>
<br>
<span class="vertical-text" style="background-color:lightblue;border-radius: 3px;padding: 3px;"> Model: BERT</span>
<br>
<span class="vertical-text" style="background-color:tomato;border-radius: 3px;padding: 3px;"> Lang: IT</span>
<br>
<span class="vertical-text" style="background-color:lightgrey;border-radius: 3px;padding: 3px;"> Type: Uncased</span>
<br>
<span class="vertical-text" style="background-color:#CF9FFF;border-radius: 3px;padding: 3px;"> </span>
</body>
--------------------------------------------------------------------------------------------------
<h3>Model description</h3>
This is a <b>BERT</b> <b>[1]</b> uncased model for the <b>Italian</b> language, fine-tuned for <b>Extractive Question Answering</b> on the [SQuAD-IT](https://huggingface.co/datasets/squad_it) dataset <b>[2]</b>.
If you are looking for a more accurate (but slightly heavier) model, you can refer to: https://huggingface.co/osiria/deberta-italian-question-answering
<b>update: version 2.0</b>
The 2.0 version further improves performance by exploiting a two-phase fine-tuning strategy: the model is first fine-tuned on the English SQuAD v2 (1 epoch, 20% warmup ratio, and max learning rate of 3e-5), then further fine-tuned on the Italian SQuAD (2 epochs, no warmup, initial learning rate of 3e-5).
To maximize the benefits of the multilingual procedure, [bert-base-multilingual-uncased](https://huggingface.co/bert-base-multilingual-uncased) is used as the pre-trained model. Once the double fine-tuning is completed, the embedding layer is compressed as in [bert-base-italian-uncased](https://huggingface.co/osiria/bert-base-italian-uncased) to obtain a monolingual model size.
<h3>Training and Performances</h3>
The model is trained to perform question answering given a context and a question, under the assumption that the context contains the answer to the question. It has been fine-tuned for Extractive Question Answering on the SQuAD-IT dataset for 2 epochs, with a linearly decaying learning rate starting from 3e-5, a maximum sequence length of 384, and a document stride of 128.
<br>The dataset includes 54,159 training instances and 7,609 test instances.
The performance on the test set is reported in the following table:
| EM | F1 |
| ------ | ------ |
| 65.60 | 77.16 |
Testing notebook: https://huggingface.co/osiria/bert-italian-uncased-question-answering/blob/main/osiria_bert_italian_uncased_qa_evaluation.ipynb
<h3>Quick usage</h3>
```python
from transformers import BertTokenizerFast, BertForQuestionAnswering, pipeline

tokenizer = BertTokenizerFast.from_pretrained("osiria/bert-italian-uncased-question-answering")
model = BertForQuestionAnswering.from_pretrained("osiria/bert-italian-uncased-question-answering")

pipeline_qa = pipeline("question-answering", model=model, tokenizer=tokenizer)
pipeline_qa(context="alessandro manzoni è nato a milano nel 1785", question="dove è nato manzoni?")

# {'score': 0.9905025959014893, 'start': 28, 'end': 34, 'answer': 'milano'}
```
<h3>References</h3>
[1] https://arxiv.org/abs/1810.04805
[2] https://link.springer.com/chapter/10.1007/978-3-030-03840-3_29
<h3>Limitations</h3>
This model was trained on SQuAD-IT, which is mainly a machine-translated version of the original SQuAD v1.1. This means that the quality of the training set is limited by the machine translation.
Moreover, the model is meant to answer questions under the assumption that the required information is actually contained in the given context (the underlying assumption of SQuAD v1.1).
If that assumption is violated, the model will still try to return an answer, which will be incorrect.
<h3>License</h3>
The model is released under the <b>Apache-2.0</b> license.
|
osiria/deberta-base-italian-uncased-ner
|
osiria
| 2024-12-14T14:47:44Z | 158 | 2 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"deberta-v2",
"token-classification",
"it",
"arxiv:2111.09543",
"arxiv:2010.05609",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-05-28T14:11:58Z |
---
license: mit
language:
- it
widget:
- text: "mi chiamo marco rossi, vivo a roma e lavoro per l'agenzia spaziale italiana"
example_title: "Example 1"
---
--------------------------------------------------------------------------------------------------
<body>
<span class="vertical-text" style="background-color:lightgreen;border-radius: 3px;padding: 3px;"> </span>
<br>
<span class="vertical-text" style="background-color:orange;border-radius: 3px;padding: 3px;"> Task: Named Entity Recognition</span>
<br>
<span class="vertical-text" style="background-color:lightblue;border-radius: 3px;padding: 3px;"> Model: DeBERTa</span>
<br>
<span class="vertical-text" style="background-color:tomato;border-radius: 3px;padding: 3px;"> Lang: IT</span>
<br>
<span class="vertical-text" style="background-color:lightgrey;border-radius: 3px;padding: 3px;"> Type: Uncased</span>
<br>
<span class="vertical-text" style="background-color:#CF9FFF;border-radius: 3px;padding: 3px;"> </span>
</body>
--------------------------------------------------------------------------------------------------
<h3>Model description</h3>
This is a <b>DeBERTa</b> <b>[1]</b> uncased model for the <b>Italian</b> language, fine-tuned for <b>Named Entity Recognition</b> (<b>Person</b>, <b>Location</b>, <b>Organization</b> and <b>Miscellanea</b> classes) on the [WikiNER](https://figshare.com/articles/dataset/Learning_multilingual_named_entity_recognition_from_Wikipedia/5462500) dataset <b>[2]</b>, using [mdeberta-v3-base](https://huggingface.co/microsoft/mdeberta-v3-base) as a pre-trained model.
<h3>Training and Performances</h3>
The model is trained to perform entity recognition over 4 classes: <b>PER</b> (persons), <b>LOC</b> (locations), <b>ORG</b> (organizations), <b>MISC</b> (miscellanea, mainly events, products and services). It has been fine-tuned for Named Entity Recognition, using the WikiNER Italian dataset plus an additional custom dataset of manually annotated Wikipedia paragraphs.
The WikiNER dataset has been split into 102,352 training instances and 25,588 test instances, and the model has been trained for 1 epoch with a constant learning rate of 1e-5.
The model was first fine-tuned on WikiNER, then focused on the Italian language and made uncased by modifying the embedding layer (as in [3], computing document-level frequencies over the Wikipedia dataset), and finally fine-tuned on an additional dataset of ~3,500 manually annotated lowercase paragraphs.
<h3>Quick usage</h3>
```python
from transformers import AutoModelForTokenClassification, AutoTokenizer, pipeline
import string

tokenizer = AutoTokenizer.from_pretrained("osiria/deberta-base-italian-uncased-ner")
model = AutoModelForTokenClassification.from_pretrained("osiria/deberta-base-italian-uncased-ner", num_labels = 5)

text = "mi chiamo marco rossi, vivo a roma e lavoro per l'agenzia spaziale italiana nella missione prisma"

# surround punctuation with spaces so the input matches the training format
for p in string.punctuation:
    text = text.replace(p, " " + p + " ")

ner = pipeline("ner", model=model, tokenizer=tokenizer)
ner(text, aggregation_strategy="simple")

# [{'entity_group': 'PER',
#   'score': 0.9929623,
#   'word': 'marco rossi',
#   'start': 9,
#   'end': 21},
#  {'entity_group': 'LOC',
#   'score': 0.9898509,
#   'word': 'roma',
#   'start': 31,
#   'end': 36},
#  {'entity_group': 'ORG',
#   'score': 0.9905911,
#   'word': 'agenzia spaziale italiana',
#   'start': 53,
#   'end': 79},
#  {'entity_group': 'MISC',
#   'score': 0.92474234,
#   'word': 'missione prisma',
#   'start': 85,
#   'end': 101}]
```
<h3>References</h3>
[1] https://arxiv.org/abs/2111.09543
[2] https://www.sciencedirect.com/science/article/pii/S0004370212000276
[3] https://arxiv.org/abs/2010.05609
<h3>Limitations</h3>
This model is mainly trained on Wikipedia, so it is particularly suitable for natively digital text from the world wide web, written in a correct and fluent form (like wikis, web pages, news, etc.). However, it may show limitations on chaotic text containing errors and slang expressions (like social media posts), or on domain-specific text (like medical, financial, or legal content).
<h3>License</h3>
The model is released under the <b>MIT</b> license.
|
appvoid/arco-exp-07
|
appvoid
| 2024-12-14T14:46:44Z | 153 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"arxiv:2306.01708",
"base_model:appvoid/arco-chat-v0.1",
"base_model:merge:appvoid/arco-chat-v0.1",
"base_model:appvoid/arco-reflection",
"base_model:merge:appvoid/arco-reflection",
"base_model:appvoid/cubby-chat",
"base_model:merge:appvoid/cubby-chat",
"base_model:appvoid/danube-reason-4ep",
"base_model:merge:appvoid/danube-reason-4ep",
"base_model:appvoid/danube-reasoner",
"base_model:merge:appvoid/danube-reasoner",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-12-14T14:41:51Z |
---
base_model:
- appvoid/cubby-chat
- appvoid/arco-reflection
- appvoid/danube-reason-4ep
- appvoid/danube-reasoner
- appvoid/arco-chat-v0.1
library_name: transformers
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [TIES](https://arxiv.org/abs/2306.01708) merge method using [appvoid/arco-reflection](https://huggingface.co/appvoid/arco-reflection) as a base.
### Models Merged
The following models were included in the merge:
* [appvoid/cubby-chat](https://huggingface.co/appvoid/cubby-chat)
* [appvoid/danube-reason-4ep](https://huggingface.co/appvoid/danube-reason-4ep)
* [appvoid/danube-reasoner](https://huggingface.co/appvoid/danube-reasoner)
* [appvoid/arco-chat-v0.1](https://huggingface.co/appvoid/arco-chat-v0.1)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: appvoid/danube-reasoner
parameters:
density: 0.6
weight: 0.4
- model: appvoid/danube-reason-4ep
parameters:
density: 0.6
weight: 0.4
- model: appvoid/arco-chat-v0.1
parameters:
density: 0.6
weight: 0.3
- model: appvoid/cubby-chat
parameters:
density: 0.6
weight: 0.4
merge_method: ties
base_model: appvoid/arco-reflection
parameters:
normalize: false
int8_mask: true
dtype: float16
```
|
jonathansuru/fon_discriminator_checkpoint
|
jonathansuru
| 2024-12-14T14:44:39Z | 35 | 0 |
transformers
|
[
"transformers",
"safetensors",
"vits",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-12-14T14:44:21Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
PrunaAI/Translation-EnKo-exaone3-instrucTrans-v2-enko-7.8b-bnb-8bit-smashed
|
PrunaAI
| 2024-12-14T14:43:53Z | 6 | 0 | null |
[
"safetensors",
"llama",
"pruna-ai",
"base_model:Translation-EnKo/exaone3-instrucTrans-v2-enko-7.8b",
"base_model:quantized:Translation-EnKo/exaone3-instrucTrans-v2-enko-7.8b",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2024-12-14T14:33:14Z |
---
thumbnail: "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg"
base_model: Translation-EnKo/exaone3-instrucTrans-v2-enko-7.8b
metrics:
- memory_disk
- memory_inference
- inference_latency
- inference_throughput
- inference_CO2_emissions
- inference_energy_consumption
tags:
- pruna-ai
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<a href="https://docs.pruna.ai/en/latest/setup/pip.html" target="_blank" rel="noopener noreferrer">
<img src="https://imgur.com/rVAgqMY.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</a>
</div>
<!-- header end -->
[](https://twitter.com/PrunaAI)
[](https://github.com/PrunaAI)
[](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following)
[](https://discord.gg/rskEr4BZJx)
# Simply make AI models cheaper, smaller, faster, and greener!
- Give a thumbs up if you like this model!
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- Read the documentation to learn more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/)
- Join Pruna AI community on Discord [here](https://discord.gg/CP4VSgck) to share feedback/suggestions or get help.
## Results

**Frequently Asked Questions**
- ***How does the compression work?*** The model is compressed with llm-int8.
- ***How does the model quality change?*** The quality of the model output might vary compared to the base model.
- ***How is the model efficiency evaluated?*** These results were obtained with the configuration described in `model/smash_config.json`, after a hardware warmup. The smashed model is compared directly to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend running the benchmarks directly under your use-case conditions to see whether the smashed model benefits you.
- ***What is the model format?*** We use safetensors.
- ***What calibration data has been used?*** If needed by the compression method, we used WikiText as the calibration data.
- ***What is the naming convention for Pruna Huggingface models?*** We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory footprint, or inference energy consumption that is less than 90% of the original base model's.
- ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- ***What are "first" metrics?*** Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due cuda overheads.
- ***What are "Sync" and "Async" metrics?*** "Sync" metrics are obtained by syncing all GPU processes and stop measurement when all of them are executed. "Async" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend to test the efficiency gains directly in your use-cases.
## Setup
You can run the smashed model with these steps:
0. Check that the requirements from the original repo Translation-EnKo/exaone3-instrucTrans-v2-enko-7.8b are installed. In particular, check the Python, CUDA, and transformers versions.
1. Make sure that you have installed quantization related packages.
```bash
pip install transformers accelerate "bitsandbytes>0.37.0"
```
2. Load & run the model.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the pre-quantized (8-bit) smashed model and the original tokenizer.
model = AutoModelForCausalLM.from_pretrained("PrunaAI/Translation-EnKo-exaone3-instrucTrans-v2-enko-7.8b-bnb-8bit-smashed", trust_remote_code=True, device_map='auto')
tokenizer = AutoTokenizer.from_pretrained("Translation-EnKo/exaone3-instrucTrans-v2-enko-7.8b")

input_ids = tokenizer("What is the color of prunes?", return_tensors='pt').to(model.device)["input_ids"]
outputs = model.generate(input_ids, max_new_tokens=216)
print(tokenizer.decode(outputs[0]))
```
## Configurations
The configuration info is in `smash_config.json`.
## Credits & License
The license of the smashed model follows the license of the original model. Please check the license of the original model Translation-EnKo/exaone3-instrucTrans-v2-enko-7.8b, which provided the base model, before using this model. The license of the `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on PyPI.
## Want to compress other models?
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Do it by yourself [here](https://docs.pruna.ai/en/latest/setup/pip.html).
|
aehrm/gepabert
|
aehrm
| 2024-12-14T14:42:41Z | 124 | 0 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"bert",
"fill-mask",
"de",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2023-08-13T00:01:39Z |
---
language: de
license: mit
metrics:
- accuracy
model-index:
- name: GePaBERT
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GePaBERT
This model is a fine-tuned version of [deepset/gbert-large](https://huggingface.co/deepset/gbert-large) on a corpus of parliamentary speeches held in the German Bundestag.
It was specifically designed for the KONVENS 2023 shared task on speaker attribution.
It achieves the following results on the evaluation set:
- Loss: 0.7997
- Accuracy: 0.8020
## Training and evaluation data
The corpus of parliamentary speeches covers speeches held in the German Bundestag during the 9th-20th legislative periods, from 1980 to April 2023 (757 MB in total).
The speeches were automatically prepared from the publicly available [plenary protocols](https://www.bundestag.de/services/opendata), using the
extraction pipeline [Open Discourse](https://opendiscourse.de) ([GitHub code](https://github.com/open-discourse/open-discourse)).
Evaluation was done on a randomly-sampled 5% held-out dataset.
### Training hyperparameters
The following hyperparameters were used during training:
- `learning_rate`: 2e-05
- `train_batch_size`: 8
- `optimizer`: Adam with `betas=(0.9,0.999)` and `epsilon=1e-08`
- `lr_scheduler_type`: linear
- `num_epochs`: 5
### Training results
| Training Loss | Epoch | Step | Accuracy | Validation Loss |
|:-------------:|:-----:|:------:|:--------:|:---------------:|
| 1.0697 | 0.1 | 3489 | 0.7697 | 0.9802 |
| 1.0339 | 0.2 | 6978 | 0.7727 | 0.9562 |
| 1.0203 | 0.3 | 10467 | 0.7739 | 0.9463 |
| 1.0215 | 0.4 | 13956 | 0.7743 | 0.9477 |
| 1.0046 | 0.5 | 17445 | 0.7779 | 0.9299 |
| 1.0036 | 0.6 | 20934 | 0.7764 | 0.9372 |
| 1.2439 | 0.7 | 24423 | 0.7352 | 1.2473 |
| 1.4382 | 0.8 | 27912 | 0.6947 | 1.5782 |
| 1.1744 | 0.9 | 31401 | 0.7764 | 0.9360 |
| 0.9718 | 1.0 | 34890 | 0.7799 | 0.9179 |
| 0.9557 | 1.1 | 38379 | 0.7824 | 0.9038 |
| 0.947 | 1.2 | 41868 | 0.7830 | 0.9000 |
| 0.9487 | 1.3 | 45357 | 0.7833 | 0.8982 |
| 0.9457 | 1.4 | 48846 | 0.7851 | 0.8862 |
| 0.9442 | 1.5 | 52335 | 0.7863 | 0.8839 |
| 0.9473 | 1.6 | 55824 | 0.7850 | 0.8855 |
| 0.9388 | 1.7 | 59313 | 0.7865 | 0.8771 |
| 0.9293 | 1.8 | 62802 | 0.7868 | 0.8805 |
| 0.9242 | 1.9 | 66291 | 0.7873 | 0.8738 |
| 0.9241 | 2.0 | 69780 | 0.7872 | 0.8757 |
| 0.9127 | 2.1 | 73269 | 0.7896 | 0.8641 |
| 0.9114 | 2.2 | 76758 | 0.7900 | 0.8627 |
| 0.9095 | 2.3 | 80247 | 0.7913 | 0.8540 |
| 0.9042 | 2.4 | 83736 | 0.7920 | 0.8518 |
| 0.8999 | 2.5 | 87225 | 0.7919 | 0.8514 |
| 0.899 | 2.6 | 90714 | 0.7918 | 0.8543 |
| 0.8945 | 2.7 | 94203 | 0.7935 | 0.8418 |
| 0.8867 | 2.8 | 97692 | 0.7934 | 0.8437 |
| 0.893 | 2.9 | 101181 | 0.7938 | 0.8414 |
| 0.8798 | 3.0 | 104670 | 0.7951 | 0.8359 |
| 0.868 | 3.1 | 108159 | 0.7943 | 0.8375 |
| 0.8736 | 3.2 | 111648 | 0.7956 | 0.8323 |
| 0.8756 | 3.3 | 115137 | 0.7959 | 0.8315 |
| 0.8681 | 3.4 | 118626 | 0.7964 | 0.8258 |
| 0.8726 | 3.5 | 122115 | 0.7966 | 0.8266 |
| 0.8594 | 3.6 | 125604 | 0.7967 | 0.8246 |
| 0.8515 | 3.7 | 129093 | 0.7973 | 0.8227 |
| 0.8568 | 3.8 | 132582 | 0.7979 | 0.8195 |
| 0.8626 | 3.9 | 136071 | 0.7983 | 0.8173 |
| 0.8585 | 4.0 | 139560 | 0.7978 | 0.8190 |
| 0.8497 | 4.1 | 143049 | 0.7991 | 0.8127 |
| 0.8383 | 4.2 | 146538 | 0.7992 | 0.8154 |
| 0.8457 | 4.3 | 150027 | 0.8002 | 0.8080 |
| 0.8353 | 4.4 | 153516 | 0.8005 | 0.8077 |
| 0.8393 | 4.5 | 157005 | 0.8009 | 0.8027 |
| 0.8417 | 4.6 | 160494 | 0.8050 | 0.8007 |
| 0.836 | 4.7 | 163983 | 0.8004 | 0.8017 |
| 0.8317 | 4.8 | 167472 | 0.7993 | 0.8021 |
| 0.832 | 4.9 | 170961 | 0.8011 | 0.8013 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu117
- Datasets 2.13.1
- Tokenizers 0.13.3
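Since GePaBERT is a masked language model, it can be tried directly with the `transformers` fill-mask pipeline; a minimal sketch (the example sentence is our own, not taken from the corpus):
```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="aehrm/gepabert")

# "The Bundestag [MASK] the law." -- a domain-typical masked sentence
for pred in fill_mask("Der Bundestag [MASK] das Gesetz."):
    print(pred["token_str"], round(pred["score"], 3))
```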
|
appvoid/arco-exp-06
|
appvoid
| 2024-12-14T14:40:18Z | 153 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"arxiv:2306.01708",
"base_model:appvoid/arco-2",
"base_model:merge:appvoid/arco-2",
"base_model:appvoid/arco-chat-v0.1",
"base_model:merge:appvoid/arco-chat-v0.1",
"base_model:appvoid/arco-reflection",
"base_model:merge:appvoid/arco-reflection",
"base_model:appvoid/palmer-004-turbo",
"base_model:merge:appvoid/palmer-004-turbo",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-12-14T14:35:49Z |
---
base_model:
- appvoid/arco-chat-v0.1
- appvoid/palmer-004-turbo
- appvoid/arco-2
- appvoid/arco-reflection
library_name: transformers
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [TIES](https://arxiv.org/abs/2306.01708) merge method using [appvoid/arco-reflection](https://huggingface.co/appvoid/arco-reflection) as a base.
### Models Merged
The following models were included in the merge:
* [appvoid/arco-chat-v0.1](https://huggingface.co/appvoid/arco-chat-v0.1)
* [appvoid/palmer-004-turbo](https://huggingface.co/appvoid/palmer-004-turbo)
* [appvoid/arco-2](https://huggingface.co/appvoid/arco-2)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: appvoid/palmer-004-turbo
parameters:
density: 0.6
weight: 0.4
- model: appvoid/arco-2
parameters:
density: 0.6
weight: 0.5
- model: appvoid/arco-chat-v0.1
parameters:
density: 0.6
weight: 0.4
merge_method: ties
base_model: appvoid/arco-reflection
parameters:
normalize: false
int8_mask: true
dtype: float16
```
|
HelpingAI/Cipher-20B
|
HelpingAI
| 2024-12-14T14:37:01Z | 153 | 3 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"HelpingAI",
"Cipher",
"Code Generation",
"Programming",
"AI Assistant",
"conversational",
"en",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-12-14T14:30:12Z |
---
license: other
license_name: helpingai
license_link: https://helpingai.co/license
pipeline_tag: text-generation
language:
- en
tags:
- HelpingAI
- Cipher
- Code Generation
- Programming
- AI Assistant
library_name: transformers
---
<div align="center">
💻 <span style="background: linear-gradient(45deg, #FF6347, #FFD700); -webkit-background-clip: text; -webkit-text-fill-color: transparent;">Cipher-20B</span>
</div>
<div align="center" style="display: flex; justify-content: center; gap: 4px;">
<a href="https://github.com/HelpingAI"><img src="https://img.shields.io/badge/GitHub-Organization-blue.svg" alt="GitHub Organization"></a>
<a href="https://huggingface.co/HelpingAI"><img src="https://img.shields.io/badge/🤗%20Hugging%20Face-Organization-yellow" alt="Hugging Face"></a>
<a href="https://helpingai.co/license"><img src="https://img.shields.io/badge/License-HelpingAI-green.svg" alt="Model License"></a>
<a href="https://github.com/HelpingAI/community/discussions"><img src="https://img.shields.io/badge/Join-Community%20Discussion-blue?style=for-the-badge&logo=github" alt="Join Community Discussion"></a>
</div>
<div align="center">
[📜 License](https://helpingai.co/license) | [🌐 Website](https://helpingai.co)
</div>
<div align="center" style="display: flex; justify-content: center; gap: 4px;">
<img src="https://img.shields.io/badge/Model%20Size-20B-ff6347" alt="Model Size">
<img src="https://img.shields.io/badge/Task-Code%20Generation-blue" alt="Task">
<img src="https://img.shields.io/badge/Deployment-Efficient%20&%20Fast-yellow" alt="Deployment Speed">
</div>
## 🌟 <span style="background: linear-gradient(45deg, #FF6347, #FFD700); -webkit-background-clip: text; -webkit-text-fill-color: transparent;">About Cipher-20B</span>
**Cipher-20B** is a 20-billion-parameter causal language model designed for code generation.
### 💻 <span style="background: linear-gradient(45deg, #FF6347, #FFD700); -webkit-background-clip: text; -webkit-text-fill-color: transparent;">Implementation</span>
### <span style="color: #FF6347;">Using Transformers</span>
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
# Load Cipher-20B
model = AutoModelForCausalLM.from_pretrained("HelpingAI/Cipher-20B")
tokenizer = AutoTokenizer.from_pretrained("HelpingAI/Cipher-20B")
# Example usage
code_task = [
{"role": "system", "content": "You are Cipher"},
{"role": "user", "content": "Write a Python function to calculate the Fibonacci sequence."}
]
inputs = tokenizer.apply_chat_template(
code_task,
add_generation_prompt=True,
return_tensors="pt"
)
outputs = model.generate(
    inputs,
    max_new_tokens=256,
    do_sample=True,  # sampling must be enabled for temperature/top_p to take effect
    temperature=0.7,
    top_p=0.9,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
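In float32 the 20B checkpoint needs on the order of 80 GB of memory, so the plain load above is often impractical; a hedged variant for GPU inference (an assumption, not part of the original card):
```python
import torch
from transformers import AutoModelForCausalLM

# Hedged variant, not from the card: half-precision, device-mapped load
# to fit the 20B checkpoint across available GPU memory.
model = AutoModelForCausalLM.from_pretrained(
    "HelpingAI/Cipher-20B",
    torch_dtype=torch.bfloat16,
    device_map="auto",  # requires the `accelerate` package
)
```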
## ⚙️ <span style="background: linear-gradient(45deg, #FF6347, #FFD700); -webkit-background-clip: text; -webkit-text-fill-color: transparent;">Training Details</span>
### <span style="color: #FF6347;">Training Data</span>
* Trained on a large dataset of code, programming tasks, and technical documentation.
* Fine-tuned across multiple programming languages, including Python, JavaScript, and C++.
### <span style="color: #FFD700;">Capabilities</span>
* Generates code in multiple languages.
* Detects and corrects common coding errors.
* Provides clear explanations of code.
## ⚠️ <span style="background: linear-gradient(45deg, #FF6347, #FFD700); -webkit-background-clip: text; -webkit-text-fill-color: transparent;">Limitations</span>
* May generate verbose code depending on the input.
* Long code generation may exceed token limits.
* Ambiguous instructions can lead to incomplete or incorrect code.
* Prioritizes efficiency in code generation.
### <span style="color: #FF6347;">Safety</span>
* Avoids generating harmful or malicious code.
* Will not assist with illegal or unethical activities.
## 📚 <span style="background: linear-gradient(45deg, #FF6347, #FFD700); -webkit-background-clip: text; -webkit-text-fill-color: transparent;">Citation</span>
```bibtex
@misc{cipher2024,
author = {Abhay Koul},
title = {Cipher-20B: Your Ultimate Code Buddy},
year = {2024},
publisher = {HelpingAI},
journal = {HuggingFace},
howpublished = {\url{https://huggingface.co/HelpingAI/Cipher-20B}}
}
```
*Built with dedication, precision, and passion by HelpingAI*
[Website](https://helpingai.co) • [GitHub](https://github.com/HelpingAI) • [Discord](https://discord.gg/YweJwNqrnH) • [HuggingFace](https://huggingface.co/HelpingAI)
|
duongntd2/erax_sft_rank64_awq4bit
|
duongntd2
| 2024-12-14T14:34:13Z | 64 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2_vl",
"image-text-to-text",
"conversational",
"arxiv:1910.09700",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"awq",
"region:us"
] |
image-text-to-text
| 2024-12-14T13:38:19Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
seregadgl/bge_v4_rev1
|
seregadgl
| 2024-12-14T14:34:10Z | 117 | 0 |
transformers
|
[
"transformers",
"safetensors",
"xlm-roberta",
"text-classification",
"cross-encoder",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-12-14T14:32:32Z |
---
library_name: transformers
tags:
- cross-encoder
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
appvoid/arco-exp-04
|
appvoid
| 2024-12-14T14:27:06Z | 153 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"base_model:appvoid/massive",
"base_model:merge:appvoid/massive",
"base_model:appvoid/text-arco",
"base_model:merge:appvoid/text-arco",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-12-14T14:26:19Z |
---
base_model:
- appvoid/massive
- appvoid/text-arco
library_name: transformers
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* [appvoid/massive](https://huggingface.co/appvoid/massive)
* [appvoid/text-arco](https://huggingface.co/appvoid/text-arco)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: appvoid/text-arco
- model: appvoid/massive
merge_method: slerp
base_model: appvoid/massive
dtype: bfloat16
parameters:
t: [0, 0.5, 1, 0.5, 0] # V shaped curve: massive for input & output, text-arco in the middle layers
```
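For readers unfamiliar with SLERP, a minimal sketch of the spherical interpolation applied between corresponding weight tensors (illustrative only; mergekit's implementation differs in detail). Here `t` follows the per-layer V-shaped schedule above: `t = 0` keeps the base model (`massive`), `t = 1` takes `text-arco`.
```python
# Illustrative SLERP sketch between two weight tensors a and b.
import torch

def slerp(t: float, a: torch.Tensor, b: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    a_n, b_n = a / (a.norm() + eps), b / (b.norm() + eps)
    omega = torch.arccos((a_n * b_n).sum().clamp(-1.0, 1.0))  # angle between tensors
    so = torch.sin(omega)
    if so.abs() < eps:                      # nearly parallel: fall back to lerp
        return (1.0 - t) * a + t * b
    return (torch.sin((1.0 - t) * omega) / so) * a + (torch.sin(t * omega) / so) * b
```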
|
appvoid/arco-exp-03
|
appvoid
| 2024-12-14T14:26:11Z | 153 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"base_model:appvoid/massive",
"base_model:merge:appvoid/massive",
"base_model:appvoid/text-arco",
"base_model:merge:appvoid/text-arco",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-12-14T14:25:21Z |
---
base_model:
- appvoid/text-arco
- appvoid/massive
library_name: transformers
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* [appvoid/text-arco](https://huggingface.co/appvoid/text-arco)
* [appvoid/massive](https://huggingface.co/appvoid/massive)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: appvoid/text-arco
- model: appvoid/massive
merge_method: slerp
base_model: appvoid/text-arco
dtype: bfloat16
parameters:
t: [0, 0.5, 1, 0.5, 0] # V shaped curve: text-arco for input & output, massive in the middle layers
```
|
appvoid/arco-exp-02
|
appvoid
| 2024-12-14T14:24:37Z | 153 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"base_model:appvoid/massive",
"base_model:merge:appvoid/massive",
"base_model:appvoid/text-arco",
"base_model:merge:appvoid/text-arco",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-12-14T14:23:46Z |
---
base_model:
- appvoid/massive
- appvoid/text-arco
library_name: transformers
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* [appvoid/massive](https://huggingface.co/appvoid/massive)
* [appvoid/text-arco](https://huggingface.co/appvoid/text-arco)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
slices:
- sources:
- model: appvoid/text-arco
layer_range: [0, 16]
- model: appvoid/massive
layer_range: [0, 16]
merge_method: slerp
base_model: appvoid/massive
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: float16
```
|
dsakerkwq/b2bdce73-b029-4c0d-9cab-4425ac192934
|
dsakerkwq
| 2024-12-14T14:22:51Z | 5 | 0 |
peft
|
[
"peft",
"safetensors",
"gemma",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/gemma-7b-it",
"base_model:adapter:unsloth/gemma-7b-it",
"license:apache-2.0",
"region:us"
] | null | 2024-12-14T14:04:19Z |
---
library_name: peft
license: apache-2.0
base_model: unsloth/gemma-7b-it
tags:
- axolotl
- generated_from_trainer
model-index:
- name: b2bdce73-b029-4c0d-9cab-4425ac192934
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/gemma-7b-it
bf16: auto
chat_template: llama3
cosine_min_lr_ratio: 0.1
data_processes: 4
dataset_prepared_path: null
datasets:
- data_files:
- dc40b002f7ed77e8_train_data.json
ds_type: json
format: custom
num_proc: 4
path: /workspace/input_data/dc40b002f7ed77e8_train_data.json
streaming: true
type:
field_instruction: en
field_output: id
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device_map: balanced
do_eval: true
early_stopping_patience: 1
eval_batch_size: 1
eval_sample_packing: false
eval_steps: 25
evaluation_strategy: steps
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 16
gradient_checkpointing: true
group_by_length: true
hub_model_id: dsakerkwq/b2bdce73-b029-4c0d-9cab-4425ac192934
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 64
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 32
lora_target_linear: true
lora_target_modules:
- q_proj
- v_proj
lr_scheduler: cosine
max_grad_norm: 1.0
max_memory:
0: 75GB
1: 75GB
2: 75GB
3: 75GB
max_steps: 50
micro_batch_size: 2
mixed_precision: bf16
mlflow_experiment_name: /tmp/dc40b002f7ed77e8_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 3
optim_args:
adam_beta1: 0.9
adam_beta2: 0.95
adam_epsilon: 1e-5
optimizer: adamw_torch
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 25
save_strategy: steps
sequence_len: 2048
strict: false
tf32: false
tokenizer_type: AutoTokenizer
torch_compile: false
train_on_inputs: false
trust_remote_code: true
val_set_size: 50
wandb_entity: null
wandb_mode: online
wandb_name: b2bdce73-b029-4c0d-9cab-4425ac192934
wandb_project: Public_TuningSN
wandb_runid: b2bdce73-b029-4c0d-9cab-4425ac192934
warmup_ratio: 0.04
weight_decay: 0.01
xformers_attention: null
```
</details><br>
# b2bdce73-b029-4c0d-9cab-4425ac192934
This model is a fine-tuned version of [unsloth/gemma-7b-it](https://huggingface.co/unsloth/gemma-7b-it) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4758
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 1
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 16
- total_train_batch_size: 128
- total_eval_batch_size: 4
- optimizer: adamw_torch with optimizer_args adam_beta1=0.9, adam_beta2=0.95, adam_epsilon=1e-5 (overriding the defaults betas=(0.9, 0.999), epsilon=1e-08)
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 2
- training_steps: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 3.9139 | 0.0026 | 1 | 6.5838 |
| 1.4496 | 0.0658 | 25 | 1.6140 |
| 1.3159 | 0.1315 | 50 | 1.4758 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
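The repository contains only the LoRA adapter weights; a minimal sketch for loading them on top of the base model with PEFT (an assumption based on the config above, not code from the card):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Load the base model, then attach this LoRA adapter on top of it.
base = AutoModelForCausalLM.from_pretrained("unsloth/gemma-7b-it")
model = PeftModel.from_pretrained(base, "dsakerkwq/b2bdce73-b029-4c0d-9cab-4425ac192934")
tokenizer = AutoTokenizer.from_pretrained("unsloth/gemma-7b-it")
```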
|
osmanh/blip-model-finetuned
|
osmanh
| 2024-12-14T14:14:48Z | 67 | 0 |
transformers
|
[
"transformers",
"safetensors",
"blip",
"image-text-to-text",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] |
image-text-to-text
| 2024-12-14T14:14:08Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
HilmiEmel/gemma-2b-eksi-fine-tuned
|
HilmiEmel
| 2024-12-14T14:05:19Z | 154 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gemma",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-12-14T14:02:15Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
takanami12/finetuning-sentiment-model-phoBERT
|
takanami12
| 2024-12-14T13:59:44Z | 5 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"roberta",
"text-classification",
"generated_from_trainer",
"base_model:vinai/phobert-base",
"base_model:finetune:vinai/phobert-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-12-13T02:54:55Z |
---
library_name: transformers
license: mit
base_model: vinai/phobert-base
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: finetuning-sentiment-model-phoBERT
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-phoBERT
This model is a fine-tuned version of [vinai/phobert-base](https://huggingface.co/vinai/phobert-base) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2778
- Accuracy: 0.9009
- F1: 0.9039
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08 (no additional optimizer arguments)
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
### Framework versions
- Transformers 4.46.3
- Pytorch 2.4.0
- Datasets 3.1.0
- Tokenizers 0.20.3
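The card ships no usage code; a minimal sketch with the standard 🤗 pipeline (assuming the tokenizer was pushed alongside the model, which the card does not confirm):
```python
from transformers import pipeline

# PhoBERT is a Vietnamese model; it was pretrained on word-segmented text,
# so pre-segmenting inputs (e.g. with VnCoreNLP) may improve results.
classifier = pipeline("text-classification", model="takanami12/finetuning-sentiment-model-phoBERT")
print(classifier("Sản phẩm này rất tốt!"))  # -> [{'label': ..., 'score': ...}]
```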
|
rnjs1992/active-llm-winner-confidence_illegal20241214_012345
|
rnjs1992
| 2024-12-14T13:58:02Z | 104 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-12-14T13:55:50Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
rnjs1992/active-llm-winner-entropy_illegal20241214_011905
|
rnjs1992
| 2024-12-14T13:50:24Z | 103 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-12-14T13:47:57Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
wangphoebe/Brote-IM-XXL
|
wangphoebe
| 2024-12-14T13:49:31Z | 5 | 1 |
transformers
|
[
"transformers",
"pytorch",
"instructblip",
"image-text-to-text",
"arxiv:2402.12195",
"license:mit",
"endpoints_compatible",
"region:us"
] |
image-text-to-text
| 2024-03-18T09:27:53Z |
---
license: mit
---
Model weights for this [GitHub repo](https://github.com/THUNLP-MT/Brote), which focuses on modality isolation issues (image-text isolation and inter-image isolation).
[**🌐 Homepage**](https://thunlp-mt.github.io/Brote/) | [**📖 arXiv**](https://arxiv.org/pdf/2402.12195.pdf)
Detailed instructions are coming soon.
|
calcworks/SmolLM2-FT-MyDataset
|
calcworks
| 2024-12-14T13:45:18Z | 154 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"smol-course",
"module_1",
"trl",
"sft",
"conversational",
"base_model:HuggingFaceTB/SmolLM2-135M",
"base_model:finetune:HuggingFaceTB/SmolLM2-135M",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-12-14T13:45:03Z |
---
base_model: HuggingFaceTB/SmolLM2-135M
library_name: transformers
model_name: SmolLM2-FT-MyDataset
tags:
- generated_from_trainer
- smol-course
- module_1
- trl
- sft
licence: license
---
# Model Card for SmolLM2-FT-MyDataset
This model is a fine-tuned version of [HuggingFaceTB/SmolLM2-135M](https://huggingface.co/HuggingFaceTB/SmolLM2-135M).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="calcworks/SmolLM2-FT-MyDataset", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
### Framework versions
- TRL: 0.12.1
- Transformers: 4.46.3
- Pytorch: 2.5.1
- Datasets: 3.2.0
- Tokenizers: 0.20.3
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
jacobcarajo/Ministral-8B-Instruct-2410-Q5_K_M-GGUF
|
jacobcarajo
| 2024-12-14T13:37:55Z | 1,024 | 1 |
vllm
|
[
"vllm",
"gguf",
"llama-cpp",
"gguf-my-repo",
"en",
"fr",
"de",
"es",
"it",
"pt",
"zh",
"ja",
"ru",
"ko",
"base_model:mistralai/Ministral-8B-Instruct-2410",
"base_model:quantized:mistralai/Ministral-8B-Instruct-2410",
"license:other",
"region:us",
"conversational"
] | null | 2024-12-14T13:37:31Z |
---
language:
- en
- fr
- de
- es
- it
- pt
- zh
- ja
- ru
- ko
license: other
license_name: mrl
inference: false
license_link: https://mistral.ai/licenses/MRL-0.1.md
extra_gated_prompt: '# Mistral AI Research License
If You want to use a Mistral Model, a Derivative or an Output for any purpose that
is not expressly authorized under this Agreement, You must request a license from
Mistral AI, which Mistral AI may grant to You in Mistral AI''s sole discretion.
To discuss such a license, please contact Mistral AI via the website contact form:
https://mistral.ai/contact/
## 1. Scope and acceptance
**1.1. Scope of the Agreement.** This Agreement applies to any use, modification,
or Distribution of any Mistral Model by You, regardless of the source You obtained
a copy of such Mistral Model.
**1.2. Acceptance.** By accessing, using, modifying, Distributing a Mistral Model,
or by creating, using or distributing a Derivative of the Mistral Model, You agree
to be bound by this Agreement.
**1.3. Acceptance on behalf of a third-party.** If You accept this Agreement on
behalf of Your employer or another person or entity, You warrant and represent that
You have the authority to act and accept this Agreement on their behalf. In such
a case, the word "You" in this Agreement will refer to Your employer or such other
person or entity.
## 2. License
**2.1. Grant of rights**. Subject to Section 3 below, Mistral AI hereby grants
You a non-exclusive, royalty-free, worldwide, non-sublicensable, non-transferable,
limited license to use, copy, modify, and Distribute under the conditions provided
in Section 2.2 below, the Mistral Model and any Derivatives made by or for Mistral
AI and to create Derivatives of the Mistral Model.
**2.2. Distribution of Mistral Model and Derivatives made by or for Mistral AI.**
Subject to Section 3 below, You may Distribute copies of the Mistral Model and/or
Derivatives made by or for Mistral AI, under the following conditions: You must
make available a copy of this Agreement to third-party recipients of the Mistral
Models and/or Derivatives made by or for Mistral AI you Distribute, it being specified
that any rights to use the Mistral Models and/or Derivatives made by or for Mistral
AI shall be directly granted by Mistral AI to said third-party recipients pursuant
to the Mistral AI Research License agreement executed between these parties; You
must retain in all copies of the Mistral Models the following attribution notice
within a "Notice" text file distributed as part of such copies: "Licensed by Mistral
AI under the Mistral AI Research License".
**2.3. Distribution of Derivatives made by or for You.** Subject to Section 3 below,
You may Distribute any Derivatives made by or for You under additional or different
terms and conditions, provided that: In any event, the use and modification of Mistral
Model and/or Derivatives made by or for Mistral AI shall remain governed by the
terms and conditions of this Agreement; You include in any such Derivatives made
by or for You prominent notices stating that You modified the concerned Mistral
Model; and Any terms and conditions You impose on any third-party recipients relating
to Derivatives made by or for You shall neither limit such third-party recipients''
use of the Mistral Model or any Derivatives made by or for Mistral AI in accordance
with the Mistral AI Research License nor conflict with any of its terms and conditions.
## 3. Limitations
**3.1. Misrepresentation.** You must not misrepresent or imply, through any means,
that the Derivatives made by or for You and/or any modified version of the Mistral
Model You Distribute under your name and responsibility is an official product of
Mistral AI or has been endorsed, approved or validated by Mistral AI, unless You
are authorized by Us to do so in writing.
**3.2. Usage Limitation.** You shall only use the Mistral Models, Derivatives (whether
or not created by Mistral AI) and Outputs for Research Purposes.
## 4. Intellectual Property
**4.1. Trademarks.** No trademark licenses are granted under this Agreement, and
in connection with the Mistral Models, You may not use any name or mark owned by
or associated with Mistral AI or any of its affiliates, except (i) as required for
reasonable and customary use in describing and Distributing the Mistral Models and
Derivatives made by or for Mistral AI and (ii) for attribution purposes as required
by this Agreement.
**4.2. Outputs.** We claim no ownership rights in and to the Outputs. You are solely
responsible for the Outputs You generate and their subsequent uses in accordance
with this Agreement. Any Outputs shall be subject to the restrictions set out in
Section 3 of this Agreement.
**4.3. Derivatives.** By entering into this Agreement, You accept that any Derivatives
that You may create or that may be created for You shall be subject to the restrictions
set out in Section 3 of this Agreement.
## 5. Liability
**5.1. Limitation of liability.** In no event, unless required by applicable law
(such as deliberate and grossly negligent acts) or agreed to in writing, shall Mistral
AI be liable to You for damages, including any direct, indirect, special, incidental,
or consequential damages of any character arising as a result of this Agreement
or out of the use or inability to use the Mistral Models and Derivatives (including
but not limited to damages for loss of data, loss of goodwill, loss of expected
profit or savings, work stoppage, computer failure or malfunction, or any damage
caused by malware or security breaches), even if Mistral AI has been advised of
the possibility of such damages.
**5.2. Indemnification.** You agree to indemnify and hold harmless Mistral AI from
and against any claims, damages, or losses arising out of or related to Your use
or Distribution of the Mistral Models and Derivatives.
## 6. Warranty
**6.1. Disclaimer.** Unless required by applicable law or prior agreed to by Mistral
AI in writing, Mistral AI provides the Mistral Models and Derivatives on an "AS
IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied,
including, without limitation, any warranties or conditions of TITLE, NON-INFRINGEMENT,
MERCHANTABILITY, or FITNESS FOR A PARTICULAR PURPOSE. Mistral AI does not represent
nor warrant that the Mistral Models and Derivatives will be error-free, meet Your
or any third party''s requirements, be secure or will allow You or any third party
to achieve any kind of result or generate any kind of content. You are solely responsible
for determining the appropriateness of using or Distributing the Mistral Models
and Derivatives and assume any risks associated with Your exercise of rights under
this Agreement.
## 7. Termination
**7.1. Term.** This Agreement is effective as of the date of your acceptance of
this Agreement or access to the concerned Mistral Models or Derivatives and will
continue until terminated in accordance with the following terms.
**7.2. Termination.** Mistral AI may terminate this Agreement at any time if You
are in breach of this Agreement. Upon termination of this Agreement, You must cease
to use all Mistral Models and Derivatives and shall permanently delete any copy
thereof. The following provisions, in their relevant parts, will survive any termination
or expiration of this Agreement, each for the duration necessary to achieve its
own intended purpose (e.g. the liability provision will survive until the end of
the applicable limitation period): Sections 5 (Liability), 6 (Warranty), 7 (Termination)
and 8 (General Provisions).
**7.3. Litigation.** If You initiate any legal action or proceedings against Us
or any other entity (including a cross-claim or counterclaim in a lawsuit), alleging
that the Model or a Derivative, or any part thereof, infringe upon intellectual
property or other rights owned or licensable by You, then any licenses granted to
You under this Agreement will immediately terminate as of the date such legal action
or claim is filed or initiated.
## 8. General provisions
**8.1. Governing laws.** This Agreement will be governed by the laws of France,
without regard to choice of law principles, and the UN Convention on Contracts for
the International Sale of Goods does not apply to this Agreement.
**8.2. Competent jurisdiction.** The courts of Paris shall have exclusive jurisdiction
of any dispute arising out of this Agreement.
**8.3. Severability.** If any provision of this Agreement is held to be invalid,
illegal or unenforceable, the remaining provisions shall be unaffected thereby and
remain valid as if such provision had not been set forth herein.
## 9. Definitions
"Agreement": means this Mistral AI Research License agreement governing the access,
use, and Distribution of the Mistral Models, Derivatives and Outputs.
"Derivative": means any (i) modified version of the Mistral Model (including but
not limited to any customized or fine-tuned version thereof), (ii) work based on
the Mistral Model, or (iii) any other derivative work thereof.
"Distribution", "Distributing", "Distribute" or "Distributed": means supplying,
providing or making available, by any means, a copy of the Mistral Models and/or
the Derivatives as the case may be, subject to Section 3 of this Agreement.
"Mistral AI", "We" or "Us": means Mistral AI, a French société par actions simplifiée
registered in the Paris commercial registry under the number 952 418 325, and having
its registered seat at 15, rue des Halles, 75001 Paris.
"Mistral Model": means the foundational large language model(s), and its elements
which include algorithms, software, instructed checkpoints, parameters, source code
(inference code, evaluation code and, if applicable, fine-tuning code) and any other
elements associated thereto made available by Mistral AI under this Agreement, including,
if any, the technical documentation, manuals and instructions for the use and operation
thereof.
"Research Purposes": means any use of a Mistral Model, Derivative, or Output that
is solely for (a) personal, scientific or academic research, and (b) for non-profit
and non-commercial purposes, and not directly or indirectly connected to any commercial
activities or business operations. For illustration purposes, Research Purposes
does not include (1) any usage of the Mistral Model, Derivative or Output by individuals
or contractors employed in or engaged by companies in the context of (a) their daily
tasks, or (b) any activity (including but not limited to any testing or proof-of-concept)
that is intended to generate revenue, nor (2) any Distribution by a commercial entity
of the Mistral Model, Derivative or Output whether in return for payment or free
of charge, in any medium or form, including but not limited to through a hosted
or managed service (e.g. SaaS, cloud instances, etc.), or behind a software layer.
"Outputs": means any content generated by the operation of the Mistral Models or
the Derivatives from a prompt (i.e., text instructions) provided by users. For
the avoidance of doubt, Outputs do not include any components of a Mistral Model,
such as any fine-tuned versions of the Mistral Models, the weights, or parameters.
"You": means the individual or entity entering into this Agreement with Mistral
AI.
*Mistral AI processes your personal data below to provide the model and enforce
its license. If you are affiliated with a commercial entity, we may also send you
communications about our models. For more information on your rights and data handling,
please see our <a href="https://mistral.ai/terms/">privacy policy</a>.*
extra_gated_fields:
First Name: text
Last Name: text
Country: country
Affiliation: text
Job title: text
I understand that I can only use the model, any derivative versions and their outputs for non-commercial research purposes: checkbox
? I understand that if I am a commercial entity, I am not permitted to use or distribute
the model internally or externally, or expose it in my own offerings without a
commercial license
: checkbox
? I understand that if I upload the model, or any derivative version, on any platform,
I must include the Mistral Research License
: checkbox
? I understand that for commercial use of the model, I can contact Mistral or use
the Mistral AI API on la Plateforme or any of our cloud provider partners
: checkbox
? By clicking Submit below I accept the terms of the license and acknowledge that
  the information I provide will be collected, stored, processed, and shared in accordance
with the Mistral Privacy Policy
: checkbox
geo: ip_location
extra_gated_description: Mistral AI processes your personal data below to provide
the model and enforce its license. If you are affiliated with a commercial entity,
we may also send you communications about our models. For more information on your
rights and data handling, please see our <a href="https://mistral.ai/terms/">privacy
policy</a>.
extra_gated_button_content: Submit
library_name: vllm
base_model: mistralai/Ministral-8B-Instruct-2410
tags:
- llama-cpp
- gguf-my-repo
---
# jacobcarajo/Ministral-8B-Instruct-2410-Q5_K_M-GGUF
This model was converted to GGUF format from [`mistralai/Ministral-8B-Instruct-2410`](https://huggingface.co/mistralai/Ministral-8B-Instruct-2410) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/mistralai/Ministral-8B-Instruct-2410) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux):
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo jacobcarajo/Ministral-8B-Instruct-2410-Q5_K_M-GGUF --hf-file ministral-8b-instruct-2410-q5_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo jacobcarajo/Ministral-8B-Instruct-2410-Q5_K_M-GGUF --hf-file ministral-8b-instruct-2410-q5_k_m.gguf -c 2048
```
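Once the server is running, you can query it over HTTP. A minimal sketch, assuming the server's default port 8080:
```bash
# Hypothetical request against llama-server's completion endpoint
curl http://localhost:8080/completion \
  -H "Content-Type: application/json" \
  -d '{"prompt": "The meaning to life and the universe is", "n_predict": 64}'
```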
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```bash
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (e.g. `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```bash
cd llama.cpp && LLAMA_CURL=1 make
```
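For example, a CUDA-enabled build might look like this (a sketch, assuming an Nvidia GPU with the CUDA toolkit installed):
```bash
cd llama.cpp && LLAMA_CURL=1 LLAMA_CUDA=1 make
```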
Step 3: Run inference through the main binary.
```bash
./llama-cli --hf-repo jacobcarajo/Ministral-8B-Instruct-2410-Q5_K_M-GGUF --hf-file ministral-8b-instruct-2410-q5_k_m.gguf -p "The meaning to life and the universe is"
```
or
```bash
./llama-server --hf-repo jacobcarajo/Ministral-8B-Instruct-2410-Q5_K_M-GGUF --hf-file ministral-8b-instruct-2410-q5_k_m.gguf -c 2048
```
|
jy-hxy/CausalLM-35b-beta-long-Q4_K_M-GGUF
|
jy-hxy
| 2024-12-14T13:29:18Z | 35 | 0 | null |
[
"gguf",
"llama-cpp",
"gguf-my-repo",
"en",
"zh",
"ja",
"de",
"dataset:JosephusCheung/GuanacoDataset",
"dataset:meta-math/MetaMathQA",
"dataset:jondurbin/airoboros-3.1",
"dataset:WizardLM/WizardLM_evol_instruct_V2_196k",
"dataset:RyokoAI/ShareGPT52K",
"dataset:RyokoAI/Fandom23K",
"dataset:milashkaarshif/MoeGirlPedia_wikitext_raw_archive",
"dataset:wikipedia",
"dataset:wiki_lingua",
"dataset:garage-bAInd/Open-Platypus",
"dataset:LDJnr/Puffin",
"dataset:BAAI/COIG",
"dataset:TigerResearch/tigerbot-zhihu-zh-10k",
"dataset:liwu/MNBVC",
"dataset:teknium/openhermes",
"dataset:CausalLM/Refined-Anime-Text",
"dataset:microsoft/orca-math-word-problems-200k",
"dataset:m-a-p/CodeFeedback-Filtered-Instruction",
"base_model:CausalLM/35b-beta-long",
"base_model:quantized:CausalLM/35b-beta-long",
"license:wtfpl",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-12-14T13:27:38Z |
---
license: wtfpl
language:
- en
- zh
- ja
- de
datasets:
- JosephusCheung/GuanacoDataset
- meta-math/MetaMathQA
- jondurbin/airoboros-3.1
- WizardLM/WizardLM_evol_instruct_V2_196k
- RyokoAI/ShareGPT52K
- RyokoAI/Fandom23K
- milashkaarshif/MoeGirlPedia_wikitext_raw_archive
- wikipedia
- wiki_lingua
- garage-bAInd/Open-Platypus
- LDJnr/Puffin
- BAAI/COIG
- TigerResearch/tigerbot-zhihu-zh-10k
- liwu/MNBVC
- teknium/openhermes
- CausalLM/Refined-Anime-Text
- microsoft/orca-math-word-problems-200k
- m-a-p/CodeFeedback-Filtered-Instruction
base_model: CausalLM/35b-beta-long
tags:
- llama-cpp
- gguf-my-repo
---
# jy-hxy/35b-beta-long-Q4_K_M-GGUF
This model was converted to GGUF format from [`CausalLM/35b-beta-long`](https://huggingface.co/CausalLM/35b-beta-long) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/CausalLM/35b-beta-long) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux):
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo jy-hxy/35b-beta-long-Q4_K_M-GGUF --hf-file 35b-beta-long-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo jy-hxy/35b-beta-long-Q4_K_M-GGUF --hf-file 35b-beta-long-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```bash
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (e.g. `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```bash
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```bash
./llama-cli --hf-repo jy-hxy/35b-beta-long-Q4_K_M-GGUF --hf-file 35b-beta-long-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```bash
./llama-server --hf-repo jy-hxy/35b-beta-long-Q4_K_M-GGUF --hf-file 35b-beta-long-q4_k_m.gguf -c 2048
```
|
PrunaAI/aczire-TwinLlama-3.1-8B-bnb-8bit-smashed
|
PrunaAI
| 2024-12-14T13:27:53Z | 5 | 0 | null |
[
"safetensors",
"llama",
"pruna-ai",
"base_model:aczire/TwinLlama-3.1-8B",
"base_model:quantized:aczire/TwinLlama-3.1-8B",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2024-12-14T13:19:07Z |
---
thumbnail: "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg"
base_model: aczire/TwinLlama-3.1-8B
metrics:
- memory_disk
- memory_inference
- inference_latency
- inference_throughput
- inference_CO2_emissions
- inference_energy_consumption
tags:
- pruna-ai
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<a href="https://docs.pruna.ai/en/latest/setup/pip.html" target="_blank" rel="noopener noreferrer">
<img src="https://imgur.com/rVAgqMY.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</a>
</div>
<!-- header end -->
[](https://twitter.com/PrunaAI)
[](https://github.com/PrunaAI)
[](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following)
[](https://discord.gg/rskEr4BZJx)
# Simply make AI models cheaper, smaller, faster, and greener!
- Give a thumbs up if you like this model!
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- Read the documentation to learn more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/)
- Join Pruna AI community on Discord [here](https://discord.gg/CP4VSgck) to share feedback/suggestions or get help.
## Results

**Frequently Asked Questions**
- ***How does the compression work?*** The model is compressed with llm-int8 (see the sketch after this list).
- ***How does the model quality change?*** The quality of the model output might vary compared to the base model.
- ***How is the model efficiency evaluated?*** These results were obtained with the configuration described in `model/smash_config.json`, after a hardware warmup. The smashed model is compared directly to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend running the benchmarks directly in your use-case conditions to find out whether the smashed model benefits you.
- ***What is the model format?*** We use safetensors.
- ***What calibration data has been used?*** If needed by the compression method, we used WikiText as the calibration data.
- ***What is the naming convention for Pruna Huggingface models?*** We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model.
- ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- ***What are "first" metrics?*** Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than subsequent runs due to CUDA overheads.
- ***What are "Sync" and "Async" metrics?*** "Sync" metrics are obtained by syncing all GPU processes and stopping the measurement once all of them have finished. "Async" metrics are obtained without syncing and stop when the model output can be used by the CPU. We provide both since either could be relevant depending on the use-case. We recommend testing the efficiency gains directly in your use-cases.
## Setup
You can run the smashed model with these steps:
0. Check that the requirements of the original repo aczire/TwinLlama-3.1-8B are installed. In particular, check the Python, CUDA, and transformers versions.
1. Make sure that you have installed quantization related packages.
```bash
pip install transformers accelerate 'bitsandbytes>0.37.0'
```
2. Load & run the model.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the 8-bit smashed model; device_map='auto' places it on available devices
model = AutoModelForCausalLM.from_pretrained("PrunaAI/aczire-TwinLlama-3.1-8B-bnb-8bit-smashed", trust_remote_code=True, device_map='auto')
# The tokenizer comes from the original base model
tokenizer = AutoTokenizer.from_pretrained("aczire/TwinLlama-3.1-8B")

# Tokenize a prompt, generate, and decode the result
input_ids = tokenizer("What is the color of prunes?", return_tensors='pt').to(model.device)["input_ids"]
outputs = model.generate(input_ids, max_new_tokens=216)
print(tokenizer.decode(outputs[0]))
```
## Configurations
The configuration info is in `smash_config.json`.
## Credits & License
The license of the smashed model follows the license of the original model. Please check the license of the original model, aczire/TwinLlama-3.1-8B, which provided the base model, before using this model. The license of the `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on PyPI.
## Want to compress other models?
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Do it by yourself [here](https://docs.pruna.ai/en/latest/setup/pip.html).
|
eeeebbb2/0ffdac01-fb3b-4cff-a490-aee966862d58
|
eeeebbb2
| 2024-12-14T13:25:29Z | 5 | 0 |
peft
|
[
"peft",
"safetensors",
"mistral",
"axolotl",
"generated_from_trainer",
"custom_code",
"base_model:NousResearch/Yarn-Mistral-7b-128k",
"base_model:adapter:NousResearch/Yarn-Mistral-7b-128k",
"license:apache-2.0",
"region:us"
] | null | 2024-12-14T12:53:50Z |
---
library_name: peft
license: apache-2.0
base_model: NousResearch/Yarn-Mistral-7b-128k
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 0ffdac01-fb3b-4cff-a490-aee966862d58
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: NousResearch/Yarn-Mistral-7b-128k
bf16: auto
chat_template: llama3
cosine_min_lr_ratio: 0.1
data_processes: 4
dataset_prepared_path: null
datasets:
- data_files:
- 1a9251c7fee34405_train_data.json
ds_type: json
format: custom
num_proc: 4
path: /workspace/input_data/1a9251c7fee34405_train_data.json
streaming: true
type:
field_instruction: question
field_output: answerKey
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device_map: balanced
do_eval: true
early_stopping_patience: 1
eval_batch_size: 1
eval_sample_packing: false
eval_steps: 25
evaluation_strategy: steps
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 16
gradient_checkpointing: true
group_by_length: true
hub_model_id: eeeebbb2/0ffdac01-fb3b-4cff-a490-aee966862d58
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 64
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 32
lora_target_linear: true
lora_target_modules:
- q_proj
- v_proj
lr_scheduler: cosine
max_grad_norm: 1.0
max_memory:
0: 75GB
1: 75GB
2: 75GB
3: 75GB
max_steps: 50
micro_batch_size: 2
mixed_precision: bf16
mlflow_experiment_name: /tmp/1a9251c7fee34405_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 3
optim_args:
adam_beta1: 0.9
adam_beta2: 0.95
adam_epsilon: 1e-5
optimizer: adamw_torch
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 25
save_strategy: steps
sequence_len: 2048
special_tokens:
pad_token: </s>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
torch_compile: false
train_on_inputs: false
trust_remote_code: true
val_set_size: 50
wandb_entity: null
wandb_mode: online
wandb_name: 0ffdac01-fb3b-4cff-a490-aee966862d58
wandb_project: Public_TuningSN
wandb_runid: 0ffdac01-fb3b-4cff-a490-aee966862d58
warmup_ratio: 0.04
weight_decay: 0.01
xformers_attention: null
```
</details><br>
# 0ffdac01-fb3b-4cff-a490-aee966862d58
This model is a fine-tuned version of [NousResearch/Yarn-Mistral-7b-128k](https://huggingface.co/NousResearch/Yarn-Mistral-7b-128k) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6936
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 1
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 16
- total_train_batch_size: 128
- total_eval_batch_size: 4
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=adam_beta1=0.9,adam_beta2=0.95,adam_epsilon=1e-5
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 2
- training_steps: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 134.0684 | 0.0069 | 1 | 8.1369 |
| 11.0432 | 0.1729 | 25 | 0.6891 |
| 10.9277 | 0.3459 | 50 | 0.6936 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
jy-hxy/CausalLM-34b-beta-Q4_K_M-GGUF
|
jy-hxy
| 2024-12-14T12:57:17Z | 50 | 0 | null |
[
"gguf",
"llama-cpp",
"gguf-my-repo",
"base_model:CausalLM/34b-beta",
"base_model:quantized:CausalLM/34b-beta",
"license:gpl-3.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-12-14T12:55:49Z |
---
license: gpl-3.0
base_model: CausalLM/34b-beta
tags:
- llama-cpp
- gguf-my-repo
---
# jy-hxy/34b-beta-Q4_K_M-GGUF
This model was converted to GGUF format from [`CausalLM/34b-beta`](https://huggingface.co/CausalLM/34b-beta) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/CausalLM/34b-beta) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux):
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo jy-hxy/34b-beta-Q4_K_M-GGUF --hf-file 34b-beta-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo jy-hxy/34b-beta-Q4_K_M-GGUF --hf-file 34b-beta-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```bash
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (e.g. `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```bash
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```bash
./llama-cli --hf-repo jy-hxy/34b-beta-Q4_K_M-GGUF --hf-file 34b-beta-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```bash
./llama-server --hf-repo jy-hxy/34b-beta-Q4_K_M-GGUF --hf-file 34b-beta-q4_k_m.gguf -c 2048
```
|
SzilviaB/Qwen-Supernova-14B
|
SzilviaB
| 2024-12-14T12:46:16Z | 13 | 1 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"mergekit",
"merge",
"conversational",
"base_model:Qwen/Qwen2.5-14B",
"base_model:merge:Qwen/Qwen2.5-14B",
"base_model:arcee-ai/SuperNova-Medius",
"base_model:merge:arcee-ai/SuperNova-Medius",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-12-14T12:29:40Z |
---
base_model:
- Qwen/Qwen2.5-14B
- arcee-ai/SuperNova-Medius
library_name: transformers
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
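For reference, spherical linear interpolation between two flattened weight vectors $p$ and $q$ with interpolation factor $t$ is commonly written as

$$\operatorname{slerp}(p, q; t) = \frac{\sin\big((1 - t)\theta\big)}{\sin\theta}\, p + \frac{\sin(t\theta)}{\sin\theta}\, q, \qquad \cos\theta = \frac{p \cdot q}{\lVert p \rVert\,\lVert q \rVert},$$

so $t = 0$ returns the first model's weights and $t = 1$ the second's.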
### Models Merged
The following models were included in the merge:
* [Qwen/Qwen2.5-14B](https://huggingface.co/Qwen/Qwen2.5-14B)
* [arcee-ai/SuperNova-Medius](https://huggingface.co/arcee-ai/SuperNova-Medius)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: Qwen/Qwen2.5-14B
- model: arcee-ai/SuperNova-Medius
merge_method: slerp
base_model: Qwen/Qwen2.5-14B
dtype: bfloat16
parameters:
    t: [0, 0.5, 1, 0.5, 0] # V-shaped curve: Qwen2.5-14B (the base) at the input & output layers, SuperNova-Medius in the middle layers
```
|
mradermacher/FILM-7B-i1-GGUF
|
mradermacher
| 2024-12-14T12:43:39Z | 22 | 1 |
transformers
|
[
"transformers",
"gguf",
"en",
"base_model:In2Training/FILM-7B",
"base_model:quantized:In2Training/FILM-7B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2024-12-14T09:11:31Z |
---
base_model: In2Training/FILM-7B
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/In2Training/FILM-7B
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/FILM-7B-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
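Multi-part quants are plain byte-splits, so they can be joined with `cat` before loading. A minimal sketch with hypothetical part names following the naming used on these repos:
```bash
# Join a split quant into a single GGUF file (part names are illustrative)
cat FILM-7B.i1-Q6_K.gguf.part1of2 FILM-7B.i1-Q6_K.gguf.part2of2 > FILM-7B.i1-Q6_K.gguf
```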
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similarly sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/FILM-7B-i1-GGUF/resolve/main/FILM-7B.i1-IQ1_S.gguf) | i1-IQ1_S | 1.7 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/FILM-7B-i1-GGUF/resolve/main/FILM-7B.i1-IQ1_M.gguf) | i1-IQ1_M | 1.9 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/FILM-7B-i1-GGUF/resolve/main/FILM-7B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.1 | |
| [GGUF](https://huggingface.co/mradermacher/FILM-7B-i1-GGUF/resolve/main/FILM-7B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.3 | |
| [GGUF](https://huggingface.co/mradermacher/FILM-7B-i1-GGUF/resolve/main/FILM-7B.i1-IQ2_S.gguf) | i1-IQ2_S | 2.4 | |
| [GGUF](https://huggingface.co/mradermacher/FILM-7B-i1-GGUF/resolve/main/FILM-7B.i1-IQ2_M.gguf) | i1-IQ2_M | 2.6 | |
| [GGUF](https://huggingface.co/mradermacher/FILM-7B-i1-GGUF/resolve/main/FILM-7B.i1-Q2_K_S.gguf) | i1-Q2_K_S | 2.6 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/FILM-7B-i1-GGUF/resolve/main/FILM-7B.i1-Q2_K.gguf) | i1-Q2_K | 2.8 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/FILM-7B-i1-GGUF/resolve/main/FILM-7B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 2.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/FILM-7B-i1-GGUF/resolve/main/FILM-7B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/FILM-7B-i1-GGUF/resolve/main/FILM-7B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.3 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/FILM-7B-i1-GGUF/resolve/main/FILM-7B.i1-IQ3_S.gguf) | i1-IQ3_S | 3.3 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/FILM-7B-i1-GGUF/resolve/main/FILM-7B.i1-IQ3_M.gguf) | i1-IQ3_M | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/FILM-7B-i1-GGUF/resolve/main/FILM-7B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 3.6 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/FILM-7B-i1-GGUF/resolve/main/FILM-7B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 3.9 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/FILM-7B-i1-GGUF/resolve/main/FILM-7B.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/FILM-7B-i1-GGUF/resolve/main/FILM-7B.i1-Q4_0.gguf) | i1-Q4_0 | 4.2 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/FILM-7B-i1-GGUF/resolve/main/FILM-7B.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.2 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/FILM-7B-i1-GGUF/resolve/main/FILM-7B.i1-Q4_K_M.gguf) | i1-Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/FILM-7B-i1-GGUF/resolve/main/FILM-7B.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/FILM-7B-i1-GGUF/resolve/main/FILM-7B.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/FILM-7B-i1-GGUF/resolve/main/FILM-7B.i1-Q6_K.gguf) | i1-Q6_K | 6.0 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
PrunaAI/aiqwe-krx-llm-competition-bnb-8bit-smashed
|
PrunaAI
| 2024-12-14T12:41:15Z | 5 | 0 | null |
[
"safetensors",
"qwen2",
"pruna-ai",
"base_model:aiqwe/FinShibainu",
"base_model:quantized:aiqwe/FinShibainu",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2024-12-14T12:33:23Z |
---
thumbnail: "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg"
base_model: aiqwe/krx-llm-competition
metrics:
- memory_disk
- memory_inference
- inference_latency
- inference_throughput
- inference_CO2_emissions
- inference_energy_consumption
tags:
- pruna-ai
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<a href="https://docs.pruna.ai/en/latest/setup/pip.html" target="_blank" rel="noopener noreferrer">
<img src="https://imgur.com/rVAgqMY.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</a>
</div>
<!-- header end -->
[](https://twitter.com/PrunaAI)
[](https://github.com/PrunaAI)
[](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following)
[](https://discord.gg/rskEr4BZJx)
# Simply make AI models cheaper, smaller, faster, and greener!
- Give a thumbs up if you like this model!
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- Read the documentation to learn more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/)
- Join Pruna AI community on Discord [here](https://discord.gg/CP4VSgck) to share feedback/suggestions or get help.
## Results

**Frequently Asked Questions**
- ***How does the compression work?*** The model is compressed with llm-int8.
- ***How does the model quality change?*** The quality of the model output might vary compared to the base model.
- ***How is the model efficiency evaluated?*** These results were obtained with the configuration described in `model/smash_config.json`, after a hardware warmup. The smashed model is compared directly to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend running the benchmarks directly in your use-case conditions to find out whether the smashed model benefits you.
- ***What is the model format?*** We use safetensors.
- ***What calibration data has been used?*** If needed by the compression method, we used WikiText as the calibration data.
- ***What is the naming convention for Pruna Huggingface models?*** We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model.
- ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- ***What are "first" metrics?*** Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than subsequent runs due to CUDA overheads.
- ***What are "Sync" and "Async" metrics?*** "Sync" metrics are obtained by syncing all GPU processes and stopping the measurement once all of them have finished. "Async" metrics are obtained without syncing and stop when the model output can be used by the CPU. We provide both since either could be relevant depending on the use-case. We recommend testing the efficiency gains directly in your use-cases (a sync-timing sketch follows the setup snippet below).
## Setup
You can run the smashed model with these steps:
0. Check that the requirements of the original repo aiqwe/krx-llm-competition are installed. In particular, check the Python, CUDA, and transformers versions.
1. Make sure that you have installed quantization related packages.
```bash
pip install transformers accelerate 'bitsandbytes>0.37.0'
```
2. Load & run the model.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the 8-bit smashed model; device_map='auto' places it on available devices
model = AutoModelForCausalLM.from_pretrained("PrunaAI/aiqwe-krx-llm-competition-bnb-8bit-smashed", trust_remote_code=True, device_map='auto')
# The tokenizer comes from the original base model
tokenizer = AutoTokenizer.from_pretrained("aiqwe/krx-llm-competition")

# Tokenize a prompt, generate, and decode the result
input_ids = tokenizer("What is the color of prunes?", return_tensors='pt').to(model.device)["input_ids"]
outputs = model.generate(input_ids, max_new_tokens=216)
print(tokenizer.decode(outputs[0]))
```
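To illustrate the "sync" timing described in the FAQ above, here is a minimal sketch of a sync-style latency measurement, reusing the `model` and `input_ids` from the snippet above (illustrative only, not Pruna's benchmark code):
```python
import time

import torch

start = time.perf_counter()
outputs = model.generate(input_ids, max_new_tokens=216)
torch.cuda.synchronize()  # "sync": stop the clock only after all GPU kernels finish
print(f"sync latency: {time.perf_counter() - start:.3f}s")
```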
## Configurations
The configuration info is in `smash_config.json`.
## Credits & License
The license of the smashed model follows the license of the original model. Please check the license of the original model, aiqwe/krx-llm-competition, which provided the base model, before using this model. The license of the `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on PyPI.
## Want to compress other models?
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Do it by yourself [here](https://docs.pruna.ai/en/latest/setup/pip.html).
|
mradermacher/D_AU-Tiefighter-Giraffe-13B-32k-slerp-i1-GGUF
|
mradermacher
| 2024-12-14T12:38:49Z | 71 | 0 |
transformers
|
[
"transformers",
"gguf",
"merge",
"mergekit",
"lazymergekit",
"KoboldAI/LLaMA2-13B-Tiefighter",
"abacusai/Giraffe-13b-32k-v3",
"en",
"base_model:DavidAU/D_AU-Tiefighter-Giraffe-13B-32k-slerp",
"base_model:quantized:DavidAU/D_AU-Tiefighter-Giraffe-13B-32k-slerp",
"endpoints_compatible",
"region:us",
"imatrix"
] | null | 2024-12-14T02:21:19Z |
---
base_model: DavidAU/D_AU-Tiefighter-Giraffe-13B-32k-slerp
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- merge
- mergekit
- lazymergekit
- KoboldAI/LLaMA2-13B-Tiefighter
- abacusai/Giraffe-13b-32k-v3
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/DavidAU/D_AU-Tiefighter-Giraffe-13B-32k-slerp
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/D_AU-Tiefighter-Giraffe-13B-32k-slerp-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
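To fetch a single quant from this repo, one option is the Hugging Face CLI. A sketch, using a file name from the table below:
```bash
huggingface-cli download mradermacher/D_AU-Tiefighter-Giraffe-13B-32k-slerp-i1-GGUF \
  D_AU-Tiefighter-Giraffe-13B-32k-slerp.i1-Q4_K_M.gguf --local-dir .
```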
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similarly sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/D_AU-Tiefighter-Giraffe-13B-32k-slerp-i1-GGUF/resolve/main/D_AU-Tiefighter-Giraffe-13B-32k-slerp.i1-IQ1_S.gguf) | i1-IQ1_S | 3.0 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/D_AU-Tiefighter-Giraffe-13B-32k-slerp-i1-GGUF/resolve/main/D_AU-Tiefighter-Giraffe-13B-32k-slerp.i1-IQ1_M.gguf) | i1-IQ1_M | 3.2 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/D_AU-Tiefighter-Giraffe-13B-32k-slerp-i1-GGUF/resolve/main/D_AU-Tiefighter-Giraffe-13B-32k-slerp.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/D_AU-Tiefighter-Giraffe-13B-32k-slerp-i1-GGUF/resolve/main/D_AU-Tiefighter-Giraffe-13B-32k-slerp.i1-IQ2_XS.gguf) | i1-IQ2_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/D_AU-Tiefighter-Giraffe-13B-32k-slerp-i1-GGUF/resolve/main/D_AU-Tiefighter-Giraffe-13B-32k-slerp.i1-IQ2_S.gguf) | i1-IQ2_S | 4.3 | |
| [GGUF](https://huggingface.co/mradermacher/D_AU-Tiefighter-Giraffe-13B-32k-slerp-i1-GGUF/resolve/main/D_AU-Tiefighter-Giraffe-13B-32k-slerp.i1-Q2_K_S.gguf) | i1-Q2_K_S | 4.5 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/D_AU-Tiefighter-Giraffe-13B-32k-slerp-i1-GGUF/resolve/main/D_AU-Tiefighter-Giraffe-13B-32k-slerp.i1-IQ2_M.gguf) | i1-IQ2_M | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/D_AU-Tiefighter-Giraffe-13B-32k-slerp-i1-GGUF/resolve/main/D_AU-Tiefighter-Giraffe-13B-32k-slerp.i1-Q2_K.gguf) | i1-Q2_K | 5.0 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/D_AU-Tiefighter-Giraffe-13B-32k-slerp-i1-GGUF/resolve/main/D_AU-Tiefighter-Giraffe-13B-32k-slerp.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 5.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/D_AU-Tiefighter-Giraffe-13B-32k-slerp-i1-GGUF/resolve/main/D_AU-Tiefighter-Giraffe-13B-32k-slerp.i1-IQ3_XS.gguf) | i1-IQ3_XS | 5.5 | |
| [GGUF](https://huggingface.co/mradermacher/D_AU-Tiefighter-Giraffe-13B-32k-slerp-i1-GGUF/resolve/main/D_AU-Tiefighter-Giraffe-13B-32k-slerp.i1-IQ3_S.gguf) | i1-IQ3_S | 5.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/D_AU-Tiefighter-Giraffe-13B-32k-slerp-i1-GGUF/resolve/main/D_AU-Tiefighter-Giraffe-13B-32k-slerp.i1-Q3_K_S.gguf) | i1-Q3_K_S | 5.8 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/D_AU-Tiefighter-Giraffe-13B-32k-slerp-i1-GGUF/resolve/main/D_AU-Tiefighter-Giraffe-13B-32k-slerp.i1-IQ3_M.gguf) | i1-IQ3_M | 6.1 | |
| [GGUF](https://huggingface.co/mradermacher/D_AU-Tiefighter-Giraffe-13B-32k-slerp-i1-GGUF/resolve/main/D_AU-Tiefighter-Giraffe-13B-32k-slerp.i1-Q3_K_M.gguf) | i1-Q3_K_M | 6.4 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/D_AU-Tiefighter-Giraffe-13B-32k-slerp-i1-GGUF/resolve/main/D_AU-Tiefighter-Giraffe-13B-32k-slerp.i1-Q3_K_L.gguf) | i1-Q3_K_L | 7.0 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/D_AU-Tiefighter-Giraffe-13B-32k-slerp-i1-GGUF/resolve/main/D_AU-Tiefighter-Giraffe-13B-32k-slerp.i1-IQ4_XS.gguf) | i1-IQ4_XS | 7.1 | |
| [GGUF](https://huggingface.co/mradermacher/D_AU-Tiefighter-Giraffe-13B-32k-slerp-i1-GGUF/resolve/main/D_AU-Tiefighter-Giraffe-13B-32k-slerp.i1-Q4_0.gguf) | i1-Q4_0 | 7.5 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/D_AU-Tiefighter-Giraffe-13B-32k-slerp-i1-GGUF/resolve/main/D_AU-Tiefighter-Giraffe-13B-32k-slerp.i1-Q4_K_S.gguf) | i1-Q4_K_S | 7.5 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/D_AU-Tiefighter-Giraffe-13B-32k-slerp-i1-GGUF/resolve/main/D_AU-Tiefighter-Giraffe-13B-32k-slerp.i1-Q4_K_M.gguf) | i1-Q4_K_M | 8.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/D_AU-Tiefighter-Giraffe-13B-32k-slerp-i1-GGUF/resolve/main/D_AU-Tiefighter-Giraffe-13B-32k-slerp.i1-Q5_K_S.gguf) | i1-Q5_K_S | 9.1 | |
| [GGUF](https://huggingface.co/mradermacher/D_AU-Tiefighter-Giraffe-13B-32k-slerp-i1-GGUF/resolve/main/D_AU-Tiefighter-Giraffe-13B-32k-slerp.i1-Q5_K_M.gguf) | i1-Q5_K_M | 9.3 | |
| [GGUF](https://huggingface.co/mradermacher/D_AU-Tiefighter-Giraffe-13B-32k-slerp-i1-GGUF/resolve/main/D_AU-Tiefighter-Giraffe-13B-32k-slerp.i1-Q6_K.gguf) | i1-Q6_K | 10.8 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
RichardErkhov/Youlln_-_ECE-PRYMMAL0.5-FT-awq
|
RichardErkhov
| 2024-12-14T12:36:09Z | 5 | 0 | null |
[
"safetensors",
"qwen2",
"4-bit",
"awq",
"region:us"
] | null | 2024-12-14T12:35:30Z |
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
ECE-PRYMMAL0.5-FT - AWQ
- Model creator: https://huggingface.co/Youlln/
- Original model: https://huggingface.co/Youlln/ECE-PRYMMAL0.5-FT/
Original model description:
---
license: apache-2.0
library_name: transformers
base_model:
- Qwen/Qwen2.5-0.5B-Instruct
datasets:
- databricks/databricks-dolly-15k
model-index:
- name: ECE-PRYMMAL0.5-FT
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: IFEval (0-Shot)
type: HuggingFaceH4/ifeval
args:
num_few_shot: 0
metrics:
- type: inst_level_strict_acc and prompt_level_strict_acc
value: 18.51
name: strict accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=Youlln/ECE-PRYMMAL0.5-FT
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: BBH (3-Shot)
type: BBH
args:
num_few_shot: 3
metrics:
- type: acc_norm
value: 5.15
name: normalized accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=Youlln/ECE-PRYMMAL0.5-FT
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MATH Lvl 5 (4-Shot)
type: hendrycks/competition_math
args:
num_few_shot: 4
metrics:
- type: exact_match
value: 0.0
name: exact match
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=Youlln/ECE-PRYMMAL0.5-FT
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GPQA (0-shot)
type: Idavidrein/gpqa
args:
num_few_shot: 0
metrics:
- type: acc_norm
value: 0.78
name: acc_norm
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=Youlln/ECE-PRYMMAL0.5-FT
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MuSR (0-shot)
type: TAUR-Lab/MuSR
args:
num_few_shot: 0
metrics:
- type: acc_norm
value: 1.43
name: acc_norm
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=Youlln/ECE-PRYMMAL0.5-FT
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU-PRO (5-shot)
type: TIGER-Lab/MMLU-Pro
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 5.3
name: accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=Youlln/ECE-PRYMMAL0.5-FT
name: Open LLM Leaderboard
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** Youri LALAIN
- **Finetuned from model:** Qwen/Qwen2.5-0.5B-Instruct
### Training Data
- **Dataset used:** databricks/databricks-dolly-15k
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_Youlln__ECE-PRYMMAL0.5-FT)
| Metric |Value|
|-------------------|----:|
|Avg. | 5.20|
|IFEval (0-Shot) |18.51|
|BBH (3-Shot) | 5.15|
|MATH Lvl 5 (4-Shot)| 0.00|
|GPQA (0-shot) | 0.78|
|MuSR (0-shot) | 1.43|
|MMLU-PRO (5-shot) | 5.30|
|
j30231/Llama-3.3-70B-Instruct_Q2_K.gguf
|
j30231
| 2024-12-14T12:24:41Z | 57 | 0 | null |
[
"gguf",
"en",
"base_model:meta-llama/Llama-3.3-70B-Instruct",
"base_model:quantized:meta-llama/Llama-3.3-70B-Instruct",
"license:llama3.3",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-12-13T17:01:07Z |
---
license: llama3.3
language:
- en
base_model:
- meta-llama/Llama-3.3-70B-Instruct
---
## Quantization: Q2_K (using llama.cpp)
- llm_load_print_meta: model type = 70B
- llm_load_print_meta: model ftype = Q2_K - Medium
- llm_load_print_meta: model params = 70.55 B
- llm_load_print_meta: model size = 24.56 GiB (2.99 BPW)
- llama_model_loader: - type f32: 162 tensors
- llama_model_loader: - type q2_K: 321 tensors
- llama_model_loader: - type q3_K: 160 tensors
- llama_model_loader: - type q5_K: 80 tensors
- llama_model_loader: - type q6_K: 1 tensors
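A mixed-type breakdown like the one above is what llama.cpp's quantizer emits for Q2_K. A hedged sketch of how such a file is typically produced from an f16 GGUF (file names are illustrative):
```bash
# Quantize an f16 GGUF down to Q2_K with llama.cpp's quantize tool
./llama-quantize Llama-3.3-70B-Instruct-f16.gguf Llama-3.3-70B-Instruct_Q2_K.gguf Q2_K
```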
## MMLU Result: 74.89%
Category STEM: 66.09% (18 subjects)
- high_school_chemistry: 64.04%
- high_school_mathematics: 46.67%
- abstract_algebra: 48.00%
- computer_security: 84.00%
- college_computer_science: 61.62%
- college_chemistry: 53.00%
- conceptual_physics: 74.89%
- high_school_statistics: 68.06%
- college_mathematics: 44.00%
- college_biology: 88.19%
- college_physics: 52.94%
- elementary_mathematics: 64.81%
- high_school_biology: 88.71%
- high_school_physics: 57.62%
- machine_learning: 56.25%
- astronomy: 88.16%
- electrical_engineering: 69.66%
- high_school_computer_science: 79.00%
Category humanities: 79.28% (13 subjects)
- world_religions: 84.80%
- high_school_us_history: 89.71%
- moral_disputes: 77.75%
- high_school_world_history: 88.61%
- formal_logic: 62.70%
- international_law: 85.12%
- jurisprudence: 76.85%
- professional_law: 59.58%
- logical_fallacies: 83.44%
- philosophy: 74.28%
- moral_scenarios: 78.66%
- prehistory: 84.26%
- high_school_european_history: 84.85%
Category social sciences: 82.11% (12 subjects)
- high_school_geography: 86.36%
- high_school_psychology: 91.19%
- sociology: 87.56%
- high_school_microeconomics: 86.55%
- professional_psychology: 76.80%
- security_studies: 77.55%
- us_foreign_policy: 91.00%
- public_relations: 70.91%
- high_school_government_and_politics: 93.78%
- econometrics: 61.40%
- human_sexuality: 81.68%
- high_school_macroeconomics: 80.51%
Category other (business, health, misc.): 75.95% (14 subjects)
- virology: 53.61%
- college_medicine: 72.25%
- global_facts: 62.00%
- miscellaneous: 87.36%
- medical_genetics: 84.00%
- human_aging: 78.48%
- nutrition: 83.33%
- marketing: 88.89%
- anatomy: 71.85%
- professional_medicine: 88.24%
- professional_accounting: 56.03%
- management: 82.52%
- clinical_knowledge: 80.75%
- business_ethics: 74.00%
Overall correct rate: 74.89%
Total subjects evaluated: 57
## Perplexity: 6.6865 +/- 0.04336
(using wikitext-2-raw/wiki.test.raw)
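A sketch of reproducing this kind of measurement with llama.cpp's perplexity tool, assuming the wikitext-2 raw test file is available locally:
```bash
./llama-perplexity -m Llama-3.3-70B-Instruct_Q2_K.gguf -f wikitext-2-raw/wiki.test.raw
```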
|
ahmedheakl/qwen2.5-0.5b-anghabench-16kcw-3ep
|
ahmedheakl
| 2024-12-14T12:18:21Z | 157 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"llama-factory",
"full",
"generated_from_trainer",
"conversational",
"base_model:Qwen/Qwen2.5-Coder-0.5B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-Coder-0.5B-Instruct",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-12-13T07:42:00Z |
---
library_name: transformers
license: apache-2.0
base_model: Qwen/Qwen2.5-Coder-0.5B-Instruct
tags:
- llama-factory
- full
- generated_from_trainer
model-index:
- name: qwen2.5-0.5b-anghabench-16kcw-3ep
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# qwen2.5-0.5b-anghabench-16kcw-3ep
This model is a fine-tuned version of [Qwen/Qwen2.5-Coder-0.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-Coder-0.5B-Instruct) on the anghabench dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0009
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- total_eval_batch_size: 4
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:------:|:---------------:|
| 0.0042 | 0.4091 | 25000 | 0.0036 |
| 0.0026 | 0.8181 | 50000 | 0.0023 |
| 0.0025 | 1.2272 | 75000 | 0.0018 |
| 0.0009 | 1.6363 | 100000 | 0.0013 |
| 0.0013 | 2.0453 | 125000 | 0.0010 |
| 0.0012 | 2.4544 | 150000 | 0.0010 |
| 0.0003 | 2.8635 | 175000 | 0.0009 |
### Framework versions
- Transformers 4.46.1
- Pytorch 2.5.1+cu124
- Datasets 3.1.0
- Tokenizers 0.20.3
|