Dataset schema (column name, type, and observed range):

| Column | Type | Observed range |
| ---- | ---- | ---- |
| modelId | string | length 5 – 139 |
| author | string | length 2 – 42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 – 2025-07-14 12:27:51 |
| downloads | int64 | 0 – 223M |
| likes | int64 | 0 – 11.7k |
| library_name | string (categorical) | 520 values |
| tags | list | length 1 – 4.05k |
| pipeline_tag | string (categorical) | 55 values |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 – 2025-07-14 12:25:52 |
| card | string | length 11 – 1.01M |
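Rows of the dump can be checked against this schema. A minimal validation sketch in Python follows; the length and range bounds are taken from the header above, but the helper itself is illustrative and not part of the dataset tooling:

```python
from datetime import datetime

def is_utc_timestamp(v):
    """Accept ISO-8601 strings like '2025-04-03T11:03:08Z'."""
    try:
        datetime.fromisoformat(v.replace("Z", "+00:00"))
        return True
    except (ValueError, AttributeError):
        return False

# One predicate per column; bounds mirror the schema summary above.
SCHEMA = {
    "modelId": lambda v: isinstance(v, str) and 5 <= len(v) <= 139,
    "author": lambda v: isinstance(v, str) and 2 <= len(v) <= 42,
    "last_modified": is_utc_timestamp,
    "downloads": lambda v: isinstance(v, int) and v >= 0,
    "likes": lambda v: isinstance(v, int) and v >= 0,
    "library_name": lambda v: v is None or isinstance(v, str),
    "tags": lambda v: isinstance(v, list) and len(v) >= 1,
    "pipeline_tag": lambda v: v is None or isinstance(v, str),
    "createdAt": is_utc_timestamp,
    "card": lambda v: v is None or isinstance(v, str),
}

def validate(record):
    """Return the names of fields that fail their column check."""
    return [k for k, check in SCHEMA.items() if not check(record.get(k))]
```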
modelId: RichardErkhov/ihughes15234_-_phi35_kp_dpo3epoch_v2_1200-gguf
author: RichardErkhov
last_modified: 2025-04-03T11:03:08Z
downloads: 0
likes: 0
library_name: null
tags: [ "gguf", "endpoints_compatible", "region:us", "conversational" ]
pipeline_tag: null
createdAt: 2025-04-03T10:25:56Z
Quantization made by Richard Erkhov.

[Github](https://github.com/RichardErkhov) | [Discord](https://discord.gg/pvy7H8DZMG) | [Request more models](https://github.com/RichardErkhov/quant_request)

phi35_kp_dpo3epoch_v2_1200 - GGUF

- Model creator: https://huggingface.co/ihughes15234/
- Original model: https://huggingface.co/ihughes15234/phi35_kp_dpo3epoch_v2_1200/

| Name | Quant method | Size |
| ---- | ---- | ---- |
| [phi35_kp_dpo3epoch_v2_1200.Q2_K.gguf](https://huggingface.co/RichardErkhov/ihughes15234_-_phi35_kp_dpo3epoch_v2_1200-gguf/blob/main/phi35_kp_dpo3epoch_v2_1200.Q2_K.gguf) | Q2_K | 1.35GB |
| [phi35_kp_dpo3epoch_v2_1200.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/ihughes15234_-_phi35_kp_dpo3epoch_v2_1200-gguf/blob/main/phi35_kp_dpo3epoch_v2_1200.IQ3_XS.gguf) | IQ3_XS | 1.49GB |
| [phi35_kp_dpo3epoch_v2_1200.IQ3_S.gguf](https://huggingface.co/RichardErkhov/ihughes15234_-_phi35_kp_dpo3epoch_v2_1200-gguf/blob/main/phi35_kp_dpo3epoch_v2_1200.IQ3_S.gguf) | IQ3_S | 1.57GB |
| [phi35_kp_dpo3epoch_v2_1200.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/ihughes15234_-_phi35_kp_dpo3epoch_v2_1200-gguf/blob/main/phi35_kp_dpo3epoch_v2_1200.Q3_K_S.gguf) | Q3_K_S | 1.57GB |
| [phi35_kp_dpo3epoch_v2_1200.IQ3_M.gguf](https://huggingface.co/RichardErkhov/ihughes15234_-_phi35_kp_dpo3epoch_v2_1200-gguf/blob/main/phi35_kp_dpo3epoch_v2_1200.IQ3_M.gguf) | IQ3_M | 1.65GB |
| [phi35_kp_dpo3epoch_v2_1200.Q3_K.gguf](https://huggingface.co/RichardErkhov/ihughes15234_-_phi35_kp_dpo3epoch_v2_1200-gguf/blob/main/phi35_kp_dpo3epoch_v2_1200.Q3_K.gguf) | Q3_K | 1.75GB |
| [phi35_kp_dpo3epoch_v2_1200.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/ihughes15234_-_phi35_kp_dpo3epoch_v2_1200-gguf/blob/main/phi35_kp_dpo3epoch_v2_1200.Q3_K_M.gguf) | Q3_K_M | 1.75GB |
| [phi35_kp_dpo3epoch_v2_1200.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/ihughes15234_-_phi35_kp_dpo3epoch_v2_1200-gguf/blob/main/phi35_kp_dpo3epoch_v2_1200.Q3_K_L.gguf) | Q3_K_L | 1.9GB |
| [phi35_kp_dpo3epoch_v2_1200.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/ihughes15234_-_phi35_kp_dpo3epoch_v2_1200-gguf/blob/main/phi35_kp_dpo3epoch_v2_1200.IQ4_XS.gguf) | IQ4_XS | 1.93GB |
| [phi35_kp_dpo3epoch_v2_1200.Q4_0.gguf](https://huggingface.co/RichardErkhov/ihughes15234_-_phi35_kp_dpo3epoch_v2_1200-gguf/blob/main/phi35_kp_dpo3epoch_v2_1200.Q4_0.gguf) | Q4_0 | 2.03GB |
| [phi35_kp_dpo3epoch_v2_1200.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/ihughes15234_-_phi35_kp_dpo3epoch_v2_1200-gguf/blob/main/phi35_kp_dpo3epoch_v2_1200.IQ4_NL.gguf) | IQ4_NL | 2.04GB |
| [phi35_kp_dpo3epoch_v2_1200.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/ihughes15234_-_phi35_kp_dpo3epoch_v2_1200-gguf/blob/main/phi35_kp_dpo3epoch_v2_1200.Q4_K_S.gguf) | Q4_K_S | 2.04GB |
| [phi35_kp_dpo3epoch_v2_1200.Q4_K.gguf](https://huggingface.co/RichardErkhov/ihughes15234_-_phi35_kp_dpo3epoch_v2_1200-gguf/blob/main/phi35_kp_dpo3epoch_v2_1200.Q4_K.gguf) | Q4_K | 2.16GB |
| [phi35_kp_dpo3epoch_v2_1200.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/ihughes15234_-_phi35_kp_dpo3epoch_v2_1200-gguf/blob/main/phi35_kp_dpo3epoch_v2_1200.Q4_K_M.gguf) | Q4_K_M | 2.16GB |
| [phi35_kp_dpo3epoch_v2_1200.Q4_1.gguf](https://huggingface.co/RichardErkhov/ihughes15234_-_phi35_kp_dpo3epoch_v2_1200-gguf/blob/main/phi35_kp_dpo3epoch_v2_1200.Q4_1.gguf) | Q4_1 | 2.24GB |
| [phi35_kp_dpo3epoch_v2_1200.Q5_0.gguf](https://huggingface.co/RichardErkhov/ihughes15234_-_phi35_kp_dpo3epoch_v2_1200-gguf/blob/main/phi35_kp_dpo3epoch_v2_1200.Q5_0.gguf) | Q5_0 | 2.46GB |
| [phi35_kp_dpo3epoch_v2_1200.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/ihughes15234_-_phi35_kp_dpo3epoch_v2_1200-gguf/blob/main/phi35_kp_dpo3epoch_v2_1200.Q5_K_S.gguf) | Q5_K_S | 2.46GB |
| [phi35_kp_dpo3epoch_v2_1200.Q5_K.gguf](https://huggingface.co/RichardErkhov/ihughes15234_-_phi35_kp_dpo3epoch_v2_1200-gguf/blob/main/phi35_kp_dpo3epoch_v2_1200.Q5_K.gguf) | Q5_K | 2.53GB |
| [phi35_kp_dpo3epoch_v2_1200.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/ihughes15234_-_phi35_kp_dpo3epoch_v2_1200-gguf/blob/main/phi35_kp_dpo3epoch_v2_1200.Q5_K_M.gguf) | Q5_K_M | 2.53GB |
| [phi35_kp_dpo3epoch_v2_1200.Q5_1.gguf](https://huggingface.co/RichardErkhov/ihughes15234_-_phi35_kp_dpo3epoch_v2_1200-gguf/blob/main/phi35_kp_dpo3epoch_v2_1200.Q5_1.gguf) | Q5_1 | 2.68GB |
| [phi35_kp_dpo3epoch_v2_1200.Q6_K.gguf](https://huggingface.co/RichardErkhov/ihughes15234_-_phi35_kp_dpo3epoch_v2_1200-gguf/blob/main/phi35_kp_dpo3epoch_v2_1200.Q6_K.gguf) | Q6_K | 2.92GB |
| [phi35_kp_dpo3epoch_v2_1200.Q8_0.gguf](https://huggingface.co/RichardErkhov/ihughes15234_-_phi35_kp_dpo3epoch_v2_1200-gguf/blob/main/phi35_kp_dpo3epoch_v2_1200.Q8_0.gguf) | Q8_0 | 3.78GB |

Original model description:

---
base_model: ihughes15234/phi_3_5_mini_kp_12k_cfr_sft
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---

# Uploaded model

- **Developed by:** ihughes15234
- **License:** apache-2.0
- **Finetuned from model:** ihughes15234/phi_3_5_mini_kp_12k_cfr_sft

This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.

[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
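The quantization levels in this card trade file size against quality. As a rough aid for choosing one, here is a sketch that picks the largest quant fitting a memory budget; the file sizes are copied from the table above, while the helper name and the headroom rule of thumb are my own, not part of the release:

```python
# GGUF file sizes in GB, copied from the quant table (smallest to largest).
QUANTS = [
    ("Q2_K", 1.35), ("IQ3_XS", 1.49), ("IQ3_S", 1.57), ("Q3_K_S", 1.57),
    ("IQ3_M", 1.65), ("Q3_K", 1.75), ("Q3_K_M", 1.75), ("Q3_K_L", 1.90),
    ("IQ4_XS", 1.93), ("Q4_0", 2.03), ("IQ4_NL", 2.04), ("Q4_K_S", 2.04),
    ("Q4_K", 2.16), ("Q4_K_M", 2.16), ("Q4_1", 2.24), ("Q5_0", 2.46),
    ("Q5_K_S", 2.46), ("Q5_K", 2.53), ("Q5_K_M", 2.53), ("Q5_1", 2.68),
    ("Q6_K", 2.92), ("Q8_0", 3.78),
]

def pick_quant(budget_gb, headroom=1.25):
    """Pick the largest quant whose file, scaled by a headroom factor for
    KV-cache and runtime overhead, still fits in budget_gb.
    Returns None if even the smallest file does not fit."""
    best = None
    for name, size in QUANTS:
        if size * headroom <= budget_gb:
            best = name  # list is sorted ascending, so keep the last fit
    return best
```

On a machine with ample memory this selects Q8_0 (the highest-quality file in the table); on a tight 2 GB budget it falls back to a 3-bit quant.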
modelId: SantiagoSanchezF/trapiche-biome-classifier
author: SantiagoSanchezF
last_modified: 2025-04-03T11:02:36Z
downloads: 7
likes: 0
library_name: null
tags: [ "safetensors", "bert", "biology", "metagenomics", "biome", "environment", "text-classification", "en", "dataset:SantiagoSanchezF/trapiche_training_dataset", "base_model:SantiagoSanchezF/BiomedBERT_mgnify_studies", "base_model:finetune:SantiagoSanchezF/BiomedBERT_mgnify_studies", "license:apache-2.0", "region:us" ]
pipeline_tag: text-classification
createdAt: 2025-04-01T15:47:21Z
---
license: apache-2.0
language:
- en
base_model:
- SantiagoSanchezF/BiomedBERT_mgnify_studies
pipeline_tag: text-classification
tags:
- biology
- metagenomics
- biome
- environment
datasets:
- SantiagoSanchezF/trapiche_training_dataset
---

# Model Card for trapiche-biome-classifier

The model takes textual descriptions of metagenomic studies and assigns one or more biome labels (e.g., soil, freshwater, marine) from a predefined list of environmental categories. Essentially, it reads the text, decides which biomes best match the description, and outputs those as predictions.

## Model Details

### Model Description

A multi-label classifier that predicts the biome of origin of a metagenomics study. Specifically, we fine-tuned a BERT-based model, SantiagoSanchezF/BiomedBERT_mgnify_studies. Our dataset contained textual descriptions of studies along with labels representing different biome categories (53 in total). Because a single study can be associated with multiple biome labels at once, we applied a multi-label approach rather than a standard single-label setup.

The ultimate goal of this model is to facilitate automatic biome classification of metagenomic studies. By providing fast, accurate predictions, it helps researchers and data managers quickly organize new studies into their respective biome categories, streamlining large-scale metagenomics analyses.

- **Developed by:** SantiagoSanchezF
- **Model type:** Text-classification
- **Language(s) (NLP):** English
- **Finetuned from model:** SantiagoSanchezF/BiomedBERT_mgnify_studies

## Training Details

### Training Data

The training data for this model was synthetically generated by prompting a large language model (ChatGPT o1) to produce realistic metagenomic study descriptions for each biome of interest. Distinct project titles and abstracts were created to capture diverse terminology and ecological contexts. Each synthetic record was then assigned an appropriate label reflecting its corresponding biome category. The process, including code and detailed instructions, is publicly available in [Publication].

### Training Procedure

A multi-label classification model was trained to predict the biome of origin for metagenomic samples by fine-tuning a BERT-based architecture. Textual descriptions of metagenomic studies were gathered, and each sample was assigned one or more labels drawn from a set of 53 biome classes defined by the GOLD environmental classification ontology. The maximum sequence length was set to 256 tokens, and all samples were encoded into token IDs, attention masks, and segment embeddings as required by the BERT model.

Fine-tuning was conducted with the Trainer API in the Hugging Face Transformers library, and the model head was configured for multi-label classification using a sigmoid output layer and binary cross-entropy with logits (BCEWithLogitsLoss). Training was executed for 45 epochs with an initial learning rate of 5×10⁻⁵ and a batch size of 8, and optimization was carried out using the AdamW algorithm. Early stopping was enabled, with patience set to 12 epochs of no improvement in macro F2 score on the validation set.

## Citation [optional]

<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
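The multi-label setup the card describes (independent sigmoid probabilities per biome, trained with binary cross-entropy with logits) can be sketched in plain Python. The logits, label names, and 0.5 threshold below are illustrative, not the model's actual values:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def bce_with_logits(logits, targets):
    """Binary cross-entropy with logits for one example, averaged over
    labels; mirrors the behaviour of torch.nn.BCEWithLogitsLoss."""
    total = 0.0
    for z, t in zip(logits, targets):
        p = sigmoid(z)
        total += -(t * math.log(p) + (1 - t) * math.log(1 - p))
    return total / len(logits)

def predict_biomes(logits, labels, threshold=0.5):
    """Multi-label decision rule: every biome whose sigmoid probability
    clears the threshold is predicted, so one study can get several labels."""
    return [name for z, name in zip(logits, labels) if sigmoid(z) >= threshold]
```

This is what distinguishes the setup from single-label softmax classification: each of the 53 biome outputs is thresholded independently, so the predicted label set can have any size.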
modelId: pavi1ee/distilbert-base-uncased-lora-text-classification
author: pavi1ee
last_modified: 2025-04-03T11:01:42Z
downloads: 0
likes: 0
library_name: peft
tags: [ "peft", "safetensors", "generated_from_trainer", "base_model:distilbert/distilbert-base-uncased", "base_model:adapter:distilbert/distilbert-base-uncased", "license:apache-2.0", "region:us" ]
pipeline_tag: null
createdAt: 2025-04-03T11:01:39Z
---
library_name: peft
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-lora-text-classification
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# distilbert-base-uncased-lora-text-classification

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set:
- Loss: 1.0360
- Accuracy: 0.888

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 5

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log        | 1.0   | 250  | 0.6104          | 0.878    |
| 0.1672        | 2.0   | 500  | 0.7393          | 0.89     |
| 0.1672        | 3.0   | 750  | 0.8812          | 0.892    |
| 0.0863        | 4.0   | 1000 | 0.9225          | 0.89     |
| 0.0863        | 5.0   | 1250 | 1.0360          | 0.888    |

### Framework versions

- PEFT 0.14.0
- Transformers 4.50.2
- Pytorch 2.6.0+cu124
- Datasets 3.5.0
- Tokenizers 0.21.1
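For context on what the `peft` adapter in cards like this one actually stores: a LoRA layer keeps the base weight W frozen and learns a low-rank update, so the effective weight is W + (alpha / r) · B · A. A minimal sketch in plain Python, with toy shapes and values that are illustrative and not the DistilBERT adapter itself:

```python
def matmul(a, b):
    """Plain-Python matrix multiply for small illustrative matrices."""
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

def lora_effective_weight(W, A, B, alpha, r):
    """Merge a LoRA update into the frozen base weight:
    W' = W + (alpha / r) * (B @ A), with A of shape (r x d_in)
    and B of shape (d_out x r)."""
    scale = alpha / r
    BA = matmul(B, A)
    return [[W[i][j] + scale * BA[i][j] for j in range(len(W[0]))]
            for i in range(len(W))]
```

Because only A and B are trained, the adapter checkpoint is tiny relative to the base model, which is why these repos ship `safetensors` files of adapter weights rather than full DistilBERT weights.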
modelId: shyamsundar123/distilbert-base-uncased-lora-text-classification
author: shyamsundar123
last_modified: 2025-04-03T10:59:10Z
downloads: 0
likes: 0
library_name: peft
tags: [ "peft", "safetensors", "generated_from_trainer", "base_model:distilbert/distilbert-base-uncased", "base_model:adapter:distilbert/distilbert-base-uncased", "license:apache-2.0", "region:us" ]
pipeline_tag: null
createdAt: 2025-04-03T10:59:06Z
---
library_name: peft
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-lora-text-classification
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# distilbert-base-uncased-lora-text-classification

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set:
- Loss: 0.7153
- Accuracy: 0.884

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 5

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log        | 1.0   | 250  | 0.3856          | 0.871    |
| 0.4207        | 2.0   | 500  | 0.4308          | 0.882    |
| 0.4207        | 3.0   | 750  | 0.6336          | 0.882    |
| 0.1422        | 4.0   | 1000 | 0.6678          | 0.89     |
| 0.1422        | 5.0   | 1250 | 0.7153          | 0.884    |

### Framework versions

- PEFT 0.14.0
- Transformers 4.50.2
- Pytorch 2.6.0+cu124
- Datasets 3.5.0
- Tokenizers 0.21.1
modelId: shyamsundar123/distilbert-base-uncased-lora-IMDB-text-classification-new
author: shyamsundar123
last_modified: 2025-04-03T10:59:01Z
downloads: 0
likes: 0
library_name: transformers
tags: [ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
pipeline_tag: null
createdAt: 2025-04-03T10:58:57Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. 
## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. 
(2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
modelId: Pavithra20/distilbert-base-uncased-lora-text-classification
author: Pavithra20
last_modified: 2025-04-03T10:58:26Z
downloads: 0
likes: 0
library_name: peft
tags: [ "peft", "safetensors", "generated_from_trainer", "base_model:distilbert/distilbert-base-uncased", "base_model:adapter:distilbert/distilbert-base-uncased", "license:apache-2.0", "region:us" ]
pipeline_tag: null
createdAt: 2025-04-03T10:58:23Z
---
library_name: peft
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-lora-text-classification
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# distilbert-base-uncased-lora-text-classification

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set:
- Loss: 0.6079
- Accuracy: 0.887

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 5

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log        | 1.0   | 250  | 0.4406          | 0.851    |
| 0.4655        | 2.0   | 500  | 0.4422          | 0.872    |
| 0.4655        | 3.0   | 750  | 0.6742          | 0.873    |
| 0.1683        | 4.0   | 1000 | 0.5938          | 0.885    |
| 0.1683        | 5.0   | 1250 | 0.6079          | 0.887    |

### Framework versions

- PEFT 0.14.0
- Transformers 4.50.2
- Pytorch 2.6.0+cu124
- Datasets 3.5.0
- Tokenizers 0.21.1
modelId: veni08/distilbert-base-uncased-lora-text-classification
author: veni08
last_modified: 2025-04-03T10:58:26Z
downloads: 0
likes: 0
library_name: peft
tags: [ "peft", "safetensors", "generated_from_trainer", "base_model:distilbert/distilbert-base-uncased", "base_model:adapter:distilbert/distilbert-base-uncased", "license:apache-2.0", "region:us" ]
pipeline_tag: null
createdAt: 2025-04-03T10:58:22Z
---
library_name: peft
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-lora-text-classification
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# distilbert-base-uncased-lora-text-classification

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set:
- Loss: 0.6196
- Accuracy: 0.889

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 5

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log        | 1.0   | 250  | 0.3423          | 0.878    |
| 0.4211        | 2.0   | 500  | 0.4176          | 0.862    |
| 0.4211        | 3.0   | 750  | 0.5769          | 0.892    |
| 0.1527        | 4.0   | 1000 | 0.6162          | 0.888    |
| 0.1527        | 5.0   | 1250 | 0.6196          | 0.889    |

### Framework versions

- PEFT 0.14.0
- Transformers 4.50.2
- Pytorch 2.6.0+cu124
- Datasets 3.5.0
- Tokenizers 0.21.1
modelId: Sharath-2004/distilbert-base-uncased-lora-IMDB-text-classification-new
author: Sharath-2004
last_modified: 2025-04-03T10:56:18Z
downloads: 0
likes: 0
library_name: transformers
tags: [ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
pipeline_tag: null
createdAt: 2025-04-03T10:56:15Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. 
## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. 
(2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
modelId: Kameshw/distilbert-base-uncased-lora-IMDB-text-classification-new
author: Kameshw
last_modified: 2025-04-03T10:55:38Z
downloads: 0
likes: 0
library_name: transformers
tags: [ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
pipeline_tag: null
createdAt: 2025-04-03T10:55:31Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. 
## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
jasonjaybolis/sample
jasonjaybolis
2025-04-03T10:50:23Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2025-04-03T10:50:23Z
--- license: apache-2.0 ---
braindao/DeepSeek-R1-1776-Distill-Qwen-7B-raw
braindao
2025-04-03T10:47:06Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-04-03T10:42:58Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. 
## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
weizhepei/Qwen2.5-3B-WebArena-Lite-SFT-CoT-o3-mini-epoch-3-no-packing
weizhepei
2025-04-03T10:46:38Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "open-r1", "trl", "sft", "conversational", "dataset:weizhepei/webarena-lite-SFT-CoT-o3-mini", "base_model:Qwen/Qwen2.5-3B-Instruct", "base_model:finetune:Qwen/Qwen2.5-3B-Instruct", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-04-03T09:04:43Z
--- base_model: Qwen/Qwen2.5-3B-Instruct datasets: weizhepei/webarena-lite-SFT-CoT-o3-mini library_name: transformers model_name: Qwen2.5-3B-WebArena-Lite-SFT-CoT-o3-mini-epoch-3-no-packing tags: - generated_from_trainer - open-r1 - trl - sft licence: license --- # Model Card for Qwen2.5-3B-WebArena-Lite-SFT-CoT-o3-mini-epoch-3-no-packing This model is a fine-tuned version of [Qwen/Qwen2.5-3B-Instruct](https://huggingface.co/Qwen/Qwen2.5-3B-Instruct) on the [weizhepei/webarena-lite-SFT-CoT-o3-mini](https://huggingface.co/datasets/weizhepei/webarena-lite-SFT-CoT-o3-mini) dataset. It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="weizhepei/Qwen2.5-3B-WebArena-Lite-SFT-CoT-o3-mini-epoch-3-no-packing", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/uva-llm/huggingface/runs/6p1706yk) This model was trained with SFT. ### Framework versions - TRL: 0.16.0.dev0 - Transformers: 4.49.0 - Pytorch: 2.5.1 - Datasets: 3.4.1 - Tokenizers: 0.21.1 ## Citations Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
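Under the hood, the pipeline call in the Quick start above runs the role/content messages through the model's chat template before generation. A minimal pure-Python sketch of that flattening step (simplified ChatML-style formatting; the real Qwen2.5 template also inserts a default system message):

```python
# Rough sketch of the ChatML-style flattening a Qwen2.5 chat template
# applies to the messages list passed to the pipeline above.
# (Simplified assumption for illustration; not the exact template string.)
def to_chatml(messages):
    out = ""
    for m in messages:
        out += f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>\n"
    # End with an open assistant turn so the model generates the reply.
    return out + "<|im_start|>assistant\n"

prompt = to_chatml([{"role": "user", "content": "Which era would you visit?"}])
print(prompt)
```

In practice this is what `tokenizer.apply_chat_template(..., add_generation_prompt=True)` produces, which the `text-generation` pipeline calls for you when given a messages list.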
nadejdatarabukina/s80_1
nadejdatarabukina
2025-04-03T10:46:00Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-04-03T10:40:14Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. 
## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
pasukka/detail-classifier-with-slang-v.1
pasukka
2025-04-03T10:45:41Z
0
0
transformers
[ "transformers", "safetensors", "roberta", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2025-04-03T10:44:47Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. 
## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
vanishingradient/turkish_hate_speech_model
vanishingradient
2025-04-03T10:42:18Z
0
2
transformers
[ "transformers", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-03-27T04:31:25Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. 
## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
JuniperChinenye/WooWoo3
JuniperChinenye
2025-04-03T10:40:24Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-04-03T10:38:23Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. 
## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
paacamo/EleutherAI-pythia-410m-finetuned-nvidia-faq
paacamo
2025-04-03T10:40:15Z
4
0
transformers
[ "transformers", "safetensors", "gpt_neox", "text-generation", "generated_from_trainer", "base_model:EleutherAI/pythia-410m", "base_model:finetune:EleutherAI/pythia-410m", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-03-30T16:04:05Z
--- library_name: transformers license: apache-2.0 base_model: EleutherAI/pythia-410m tags: - generated_from_trainer model-index: - name: EleutherAI-pythia-410m-finetuned-nvidia-faq results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/danielteam/eleutherai-nvidia-faq-fine-tuned/runs/cqo9dgc6) # EleutherAI-pythia-410m-finetuned-nvidia-faq This model is a fine-tuned version of [EleutherAI/pythia-410m](https://huggingface.co/EleutherAI/pythia-410m) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.2026 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 32 - optimizer: Use adagrad and the args are: No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1 - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 0.2307 | 0.2813 | 50 | 0.2261 | | 0.2178 | 0.5626 | 100 | 0.2179 | | 0.213 | 0.8439 | 150 | 0.2135 | | 0.1883 | 1.1294 | 200 | 0.2109 | | 0.2153 | 1.4107 | 250 | 0.2091 | | 0.2183 | 1.6920 | 300 | 0.2075 | | 0.1855 | 1.9733 | 350 | 0.2063 | | 0.1723 | 2.2588 | 400 | 0.2056 | | 0.1971 | 2.5401 | 450 | 0.2050 | | 0.1724 | 2.8214 | 500 | 0.2043 | | 0.1954 | 3.1069 | 550 | 0.2038 | | 0.169 | 3.3882 | 600 | 0.2035 | | 0.1937 | 3.6695 | 650 | 0.2032 | | 
0.1786 | 3.9508 | 700 | 0.2029 | | 0.2031 | 4.2363 | 750 | 0.2028 | | 0.186 | 4.5176 | 800 | 0.2027 | | 0.1797 | 4.7989 | 850 | 0.2026 | ### Framework versions - Transformers 4.47.0 - Pytorch 2.5.1+cu121 - Datasets 3.3.1 - Tokenizers 0.21.0
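The pythia-410m card above lists train_batch_size 8, gradient_accumulation_steps 4, and total_train_batch_size 32. That total is not an independent setting: the HF Trainer derives it from the other two. A minimal sketch of the relationship (the helper name is ours, not part of the card):

```python
# Hypothetical sketch: how the card's total_train_batch_size is derived.
# The values are taken from the hyperparameter list above; the function
# name is ours, not an API of transformers.
def effective_batch_size(per_device_batch: int, grad_accum_steps: int, num_devices: int = 1) -> int:
    """Effective (total) training batch size as reported by the HF Trainer."""
    return per_device_batch * grad_accum_steps * num_devices

total = effective_batch_size(per_device_batch=8, grad_accum_steps=4)
print(total)  # 32, matching the card's total_train_batch_size
```

With gradient accumulation, the optimizer steps once per 4 forward/backward passes, so each update sees 32 examples even though only 8 fit on the device at a time.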
phonemetransformers/GPT2-85M-BPE-TXT
phonemetransformers
2025-04-03T10:40:04Z
4,571
0
null
[ "safetensors", "gpt2", "en", "dataset:phonemetransformers/IPA-BabyLM", "arxiv:2410.22906", "base_model:openai-community/gpt2", "base_model:finetune:openai-community/gpt2", "region:us" ]
null
2024-09-10T15:51:55Z
--- datasets: - phonemetransformers/IPA-BabyLM language: - en base_model: - openai-community/gpt2 --- GPT2 trained on the BabyLM 2024 training set using a BPE tokenizer. Model trained for [From Babble to Words: Pre-Training Language Models on Continuous Streams of Phonemes](https://arxiv.org/abs/2410.22906).
phonemetransformers/GPT2-85M-CHAR-TXT
phonemetransformers
2025-04-03T10:39:35Z
11
0
null
[ "safetensors", "gpt2", "en", "dataset:phonemetransformers/IPA-BabyLM", "arxiv:2410.22906", "base_model:openai-community/gpt2", "base_model:finetune:openai-community/gpt2", "region:us" ]
null
2024-09-10T16:03:29Z
--- datasets: - phonemetransformers/IPA-BabyLM language: - en base_model: - openai-community/gpt2 --- GPT2 trained on the BabyLM 2024 training set using a character-based tokenizer. Model trained for [From Babble to Words: Pre-Training Language Models on Continuous Streams of Phonemes](https://arxiv.org/abs/2410.22906).
Mael7307/Llama-3.2-3B-Instruct_CoT-20steps
Mael7307
2025-04-03T10:39:04Z
0
0
transformers
[ "transformers", "gguf", "llama", "text-generation-inference", "unsloth", "en", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
2025-04-03T10:37:24Z
--- base_model: unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit tags: - text-generation-inference - transformers - unsloth - llama - gguf license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** Mael7307 - **License:** apache-2.0 - **Finetuned from model :** unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
rebangyal/videomae-base-utd
rebangyal
2025-04-03T10:39:03Z
3
0
transformers
[ "transformers", "tensorboard", "safetensors", "videomae", "video-classification", "generated_from_trainer", "base_model:MCG-NJU/videomae-base", "base_model:finetune:MCG-NJU/videomae-base", "license:cc-by-nc-4.0", "endpoints_compatible", "region:us" ]
video-classification
2025-03-31T12:56:27Z
--- library_name: transformers license: cc-by-nc-4.0 base_model: MCG-NJU/videomae-base tags: - generated_from_trainer metrics: - accuracy model-index: - name: videomae-base-utd results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # videomae-base-utd This model is a fine-tuned version of [MCG-NJU/videomae-base](https://huggingface.co/MCG-NJU/videomae-base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.9283 - Accuracy: 0.5938 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 25 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - training_steps: 230 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:------:|:----:|:---------------:|:--------:| | 2.1257 | 0.1043 | 24 | 2.1014 | 0.125 | | 2.0754 | 1.1043 | 48 | 2.1412 | 0.125 | | 2.1316 | 2.1043 | 72 | 2.0882 | 0.125 | | 2.1145 | 3.1043 | 96 | 2.0633 | 0.25 | | 1.8144 | 4.1043 | 120 | 1.9408 | 0.2188 | | 1.854 | 5.1043 | 144 | 1.7875 | 0.2812 | | 1.4693 | 6.1043 | 168 | 1.3643 | 0.5312 | | 1.1089 | 7.1043 | 192 | 1.2239 | 0.5 | | 0.8669 | 8.1043 | 216 | 0.9546 | 0.6562 | | 0.856 | 9.0609 | 230 | 0.9867 | 0.7188 | ### Framework versions - Transformers 4.50.2 - Pytorch 2.6.0+cu124 - Datasets 3.5.0 - Tokenizers 0.21.1
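The videomae-base-utd card above trains with a linear scheduler, warmup_ratio 0.1, and 230 training steps, i.e. the learning rate ramps up over the first 23 steps and then decays linearly to zero. A minimal stand-in for that schedule, mirroring the shape of transformers' `get_linear_schedule_with_warmup` (the function name and exact rounding are our assumptions):

```python
# Hypothetical sketch of the linear-with-warmup schedule implied by the
# card (lr_scheduler_type: linear, lr_scheduler_warmup_ratio: 0.1,
# training_steps: 230, learning_rate: 5e-05). Names are ours.
def linear_warmup_lr(step: int, base_lr: float, total_steps: int, warmup_ratio: float) -> float:
    warmup_steps = int(total_steps * warmup_ratio)  # 23 steps for this run
    if step < warmup_steps:
        # linear ramp from 0 to base_lr over the warmup phase
        return base_lr * step / max(1, warmup_steps)
    # linear decay from base_lr down to 0 at total_steps
    return base_lr * max(0.0, (total_steps - step) / max(1, total_steps - warmup_steps))

lrs = [linear_warmup_lr(s, 5e-5, 230, 0.1) for s in range(231)]
print(max(lrs))  # peaks at the configured 5e-05 at the end of warmup
```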
pqnet/bge-m3-gguf
pqnet
2025-04-03T10:39:01Z
59
0
sentence-transformers
[ "sentence-transformers", "gguf", "feature-extraction", "sentence-similarity", "llama-cpp", "base_model:BAAI/bge-m3", "base_model:quantized:BAAI/bge-m3", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
sentence-similarity
2025-02-26T16:30:17Z
--- pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity - llama-cpp license: mit base_model: BAAI/bge-m3 --- # pqnet/bge-m3-GGUF This is the full f16 weights converted from [`BAAI/bge-m3`](https://huggingface.co/BAAI/bge-m3) without any quantization. Refer to the [original model card](https://huggingface.co/BAAI/bge-m3) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux) ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo pqnet/bge-m3-GGUF --hf-file bge-m3-f16.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo pqnet/bge-m3-GGUF --hf-file bge-m3-f16.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo pqnet/bge-m3-GGUF --hf-file bge-m3-f16.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo pqnet/bge-m3-GGUF --hf-file bge-m3-f16.gguf -c 2048 ```
phonemetransformers/GPT2-85M-CHAR-PHON
phonemetransformers
2025-04-03T10:38:59Z
12
0
null
[ "safetensors", "gpt2", "en", "dataset:phonemetransformers/IPA-BabyLM", "arxiv:2410.22906", "base_model:openai-community/gpt2", "base_model:finetune:openai-community/gpt2", "region:us" ]
null
2024-09-10T16:11:10Z
--- datasets: - phonemetransformers/IPA-BabyLM language: - en base_model: - openai-community/gpt2 --- GPT2 trained on the BabyLM 2024 training set (in IPA) using a character-based tokenizer. Model trained for [From Babble to Words: Pre-Training Language Models on Continuous Streams of Phonemes](https://arxiv.org/abs/2410.22906).
Eckilibrium/w2v-bert-2.0-dysarthric-child-de
Eckilibrium
2025-04-03T10:38:47Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "wav2vec2-bert", "automatic-speech-recognition", "generated_from_trainer", "base_model:facebook/w2v-bert-2.0", "base_model:finetune:facebook/w2v-bert-2.0", "license:mit", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2025-04-03T10:22:52Z
--- library_name: transformers license: mit base_model: facebook/w2v-bert-2.0 tags: - generated_from_trainer metrics: - wer model-index: - name: w2v-bert-2.0-dysarthric-child-de results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # w2v-bert-2.0-dysarthric-child-de This model is a fine-tuned version of [facebook/w2v-bert-2.0](https://huggingface.co/facebook/w2v-bert-2.0) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.6773 - Wer: 1.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 32 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 10 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:------:|:----:|:---------------:|:------:| | No log | 1.0 | 18 | 18.1636 | 1.3004 | | 77.38 | 2.0 | 36 | 10.3225 | 1.0086 | | 39.6562 | 3.0 | 54 | 3.5846 | 1.0 | | 39.6562 | 4.0 | 72 | 3.2769 | 1.0 | | 13.4914 | 5.0 | 90 | 3.1148 | 1.0 | | 12.3627 | 6.0 | 108 | 2.8368 | 1.0 | | 10.6544 | 7.0 | 126 | 2.4545 | 1.0 | | 10.6544 | 8.0 | 144 | 2.0443 | 1.0 | | 8.2123 | 9.0 | 162 | 1.8445 | 1.0 | | 8.2123 | 9.4507 | 170 | 1.6773 | 1.0 | ### Framework versions - Transformers 4.47.1 - Pytorch 2.5.1 - Datasets 2.19.1 - Tokenizers 0.21.0
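The w2v-bert card above reports `Wer: 1.0`, meaning on average every reference word required an edit. Word error rate is word-level Levenshtein distance divided by the reference length; a self-contained sketch (the function name is ours, not part of the card or of the `wer` metric package it used):

```python
# Hypothetical sketch of the WER metric reported in the card above.
# Standard Levenshtein edit distance over word sequences; names are ours.
def word_error_rate(reference: str, hypothesis: str) -> float:
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,         # deletion
                           dp[i][j - 1] + 1,         # insertion
                           dp[i - 1][j - 1] + cost)  # substitution
    return dp[len(ref)][len(hyp)] / max(1, len(ref))

print(word_error_rate("das ist ein test", "das ist ein test"))  # 0.0
```

A WER of 1.0, as in the card, typically means the hypotheses share essentially no aligned words with the references (note WER can also exceed 1.0 when the hypothesis is longer than the reference).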
phonemetransformers/GPT2-85M-BPE-PHON-SPACELESS
phonemetransformers
2025-04-03T10:38:19Z
6
0
null
[ "safetensors", "gpt2", "en", "dataset:phonemetransformers/IPA-BabyLM", "arxiv:2410.22906", "base_model:openai-community/gpt2", "base_model:finetune:openai-community/gpt2", "region:us" ]
null
2024-09-10T16:01:40Z
--- datasets: - phonemetransformers/IPA-BabyLM language: - en base_model: - openai-community/gpt2 --- GPT2 trained on the BabyLM 2024 training set (in IPA) using a BPE tokenizer with word boundaries removed. Model trained for [From Babble to Words: Pre-Training Language Models on Continuous Streams of Phonemes](https://arxiv.org/abs/2410.22906).
phonemetransformers/GPT2-85M-BPE-PHON
phonemetransformers
2025-04-03T10:37:41Z
5
0
null
[ "safetensors", "gpt2", "en", "dataset:phonemetransformers/IPA-BabyLM", "arxiv:2410.22906", "base_model:openai-community/gpt2", "base_model:finetune:openai-community/gpt2", "region:us" ]
null
2024-09-10T15:57:47Z
--- datasets: - phonemetransformers/IPA-BabyLM language: - en base_model: - openai-community/gpt2 --- GPT2 trained on the BabyLM 2024 training set (in IPA) using a BPE tokenizer. Model trained for [From Babble to Words: Pre-Training Language Models on Continuous Streams of Phonemes](https://arxiv.org/abs/2410.22906).
k2-fsa/TTS_eval_models
k2-fsa
2025-04-03T10:37:25Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2025-04-03T10:37:24Z
--- license: apache-2.0 ---
thomas-erhart/simple_triplet__test_2.5-0.5B__2025-03
thomas-erhart
2025-04-03T10:37:18Z
0
0
peft
[ "peft", "safetensors", "llama-factory", "lora", "unsloth", "generated_from_trainer", "base_model:unsloth/Qwen2.5-0.5B", "base_model:adapter:unsloth/Qwen2.5-0.5B", "license:apache-2.0", "region:us" ]
null
2025-04-03T10:35:10Z
--- library_name: peft license: apache-2.0 base_model: unsloth/Qwen2.5-0.5B tags: - llama-factory - lora - unsloth - generated_from_trainer model-index: - name: simple_triplet__test_2.5-0.5B__2025-03 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # simple_triplet__test_2.5-0.5B__2025-03 This model is a fine-tuned version of [Qwen/Qwen2.5-0.5B](https://huggingface.co/Qwen/Qwen2.5-0.5B) on the my_train_dataset dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - gradient_accumulation_steps: 8 - total_train_batch_size: 512 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 15 - mixed_precision_training: Native AMP ### Training results ### Framework versions - PEFT 0.14.0 - Transformers 4.50.0 - Pytorch 2.6.0+cu124 - Datasets 3.4.1 - Tokenizers 0.21.0
bluesky49/sn80_03APR_10_34
bluesky49
2025-04-03T10:35:11Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-04-03T10:34:32Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. 
## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. 
(2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
kk-aivio/c288c14d-8117-4898-a6d5-4bdfde183193
kk-aivio
2025-04-03T10:35:07Z
0
0
peft
[ "peft", "generated_from_trainer", "base_model:fxmarty/tiny-llama-fast-tokenizer", "base_model:adapter:fxmarty/tiny-llama-fast-tokenizer", "region:us" ]
null
2025-04-03T10:35:02Z
--- library_name: peft tags: - generated_from_trainer base_model: fxmarty/tiny-llama-fast-tokenizer model-index: - name: kk-aivio/c288c14d-8117-4898-a6d5-4bdfde183193 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # kk-aivio/c288c14d-8117-4898-a6d5-4bdfde183193 This model was trained from scratch on the None dataset. It achieves the following results on the evaluation set: - Loss: 10.2966 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
adhyandhrobinsanjay/distilbert-base-uncased-lora-text-classification
adhyandhrobinsanjay
2025-04-03T10:34:53Z
0
0
peft
[ "peft", "safetensors", "generated_from_trainer", "base_model:distilbert/distilbert-base-uncased", "base_model:adapter:distilbert/distilbert-base-uncased", "license:apache-2.0", "region:us" ]
null
2025-04-03T10:34:50Z
--- library_name: peft license: apache-2.0 base_model: distilbert-base-uncased tags: - generated_from_trainer metrics: - accuracy model-index: - name: distilbert-base-uncased-lora-text-classification results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-lora-text-classification This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.7475 - Accuracy: {'accuracy': 0.884} ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.001 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:-------------------:| | No log | 1.0 | 250 | 0.3721 | {'accuracy': 0.884} | | 0.4191 | 2.0 | 500 | 0.4261 | {'accuracy': 0.879} | | 0.4191 | 3.0 | 750 | 0.6455 | {'accuracy': 0.876} | | 0.1636 | 4.0 | 1000 | 0.6670 | {'accuracy': 0.89} | | 0.1636 | 5.0 | 1250 | 0.7475 | {'accuracy': 0.884} | ### Framework versions - PEFT 0.14.0 - Transformers 4.50.2 - Pytorch 2.6.0+cu124 - Datasets 3.5.0 - Tokenizers 0.21.1
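The LoRA text-classification cards above report accuracy as a dict, e.g. `{'accuracy': 0.884}`. That shape suggests a `compute_metrics` function returning the output of the `evaluate` library's accuracy metric, which the Trainer then logs verbatim. A minimal pure-Python stand-in for that metric (the function name is ours, an assumption about the training setup, not something stated in the card):

```python
# Hypothetical sketch of the metric shape behind the card's
# "{'accuracy': 0.884}" entries: a compute_metrics-style function that
# returns a dict, logged as-is by the Trainer. Names are ours.
def compute_accuracy(predictions, references) -> dict:
    correct = sum(p == r for p, r in zip(predictions, references))
    return {"accuracy": correct / len(references)}

print(compute_accuracy([1, 0, 1, 1], [1, 0, 0, 1]))  # {'accuracy': 0.75}
```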
arul6969/distilbert-base-uncased-lora-text-classification
arul6969
2025-04-03T10:33:28Z
0
0
peft
[ "peft", "safetensors", "generated_from_trainer", "base_model:distilbert/distilbert-base-uncased", "base_model:adapter:distilbert/distilbert-base-uncased", "license:apache-2.0", "region:us" ]
null
2025-04-03T10:33:26Z
--- library_name: peft license: apache-2.0 base_model: distilbert-base-uncased tags: - generated_from_trainer metrics: - accuracy model-index: - name: distilbert-base-uncased-lora-text-classification results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-lora-text-classification This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.6242 - Accuracy: {'accuracy': 0.895} ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.001 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:-------------------:| | No log | 1.0 | 250 | 0.3959 | {'accuracy': 0.878} | | 0.4251 | 2.0 | 500 | 0.3656 | {'accuracy': 0.894} | | 0.4251 | 3.0 | 750 | 0.4834 | {'accuracy': 0.899} | | 0.1503 | 4.0 | 1000 | 0.6062 | {'accuracy': 0.888} | | 0.1503 | 5.0 | 1250 | 0.6242 | {'accuracy': 0.895} | ### Framework versions - PEFT 0.14.0 - Transformers 4.50.2 - Pytorch 2.6.0+cu124 - Datasets 3.5.0 - Tokenizers 0.21.1
Alfa2166/distilbert-base-uncased-lora-text-classification
Alfa2166
2025-04-03T10:33:19Z
0
0
peft
[ "peft", "safetensors", "generated_from_trainer", "base_model:distilbert/distilbert-base-uncased", "base_model:adapter:distilbert/distilbert-base-uncased", "license:apache-2.0", "region:us" ]
null
2025-04-03T10:33:16Z
--- library_name: peft license: apache-2.0 base_model: distilbert-base-uncased tags: - generated_from_trainer metrics: - accuracy model-index: - name: distilbert-base-uncased-lora-text-classification results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-lora-text-classification This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.6274 - Accuracy: {'accuracy': 0.898} ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.001 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:-------------------:| | No log | 1.0 | 250 | 0.5209 | {'accuracy': 0.854} | | 0.4334 | 2.0 | 500 | 0.4871 | {'accuracy': 0.871} | | 0.4334 | 3.0 | 750 | 0.4843 | {'accuracy': 0.892} | | 0.1658 | 4.0 | 1000 | 0.6047 | {'accuracy': 0.893} | | 0.1658 | 5.0 | 1250 | 0.6274 | {'accuracy': 0.898} | ### Framework versions - PEFT 0.14.0 - Transformers 4.50.2 - Pytorch 2.6.0+cu124 - Datasets 3.5.0 - Tokenizers 0.21.1
Kanishma/distilbert-base-uncased-lora-IMDB-text-classification-new
Kanishma
2025-04-03T10:32:46Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-04-03T10:32:42Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. 
## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. 
(2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
rasikarg/distilbert-base-uncased-lora-IMDB-text-classification-new
rasikarg
2025-04-03T10:32:40Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-04-03T10:32:38Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. 
## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. 
(2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
arul6969/distilbert-base-uncased-lora-IMDB-text-classification-new
arul6969
2025-04-03T10:32:29Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-04-03T10:32:26Z
sahrishkhan/edos-deberta-7-b-model
sahrishkhan
2025-04-03T10:32:02Z
0
0
transformers
[ "transformers", "safetensors", "deberta-v2", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2025-04-03T10:30:57Z
shandilyan06/distilbert-base-uncased-lora-IMDB-text-classification-new
shandilyan06
2025-04-03T10:31:40Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-04-03T10:31:37Z
hadi-ibra/q-FrozenLake-v1-4x4-noSlippery
hadi-ibra
2025-04-03T10:31:20Z
0
0
null
[ "FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2025-04-03T10:31:17Z
--- tags: - FrozenLake-v1-4x4-no_slippery - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-FrozenLake-v1-4x4-noSlippery results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: FrozenLake-v1-4x4-no_slippery type: FrozenLake-v1-4x4-no_slippery metrics: - type: mean_reward value: 1.00 +/- 0.00 name: mean_reward verified: false --- # **Q-Learning** Agent playing **FrozenLake-v1** This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**. ## Usage ```python model = load_from_hub(repo_id="hadi-ibra/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
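Once loaded, the Q-table drives a greedy policy. A minimal sketch, assuming `model["qtable"]` is a per-state list of action values (as produced by the Deep RL course utilities; the toy table below is purely illustrative):

```python
# Greedy action selection from a loaded Q-table.
# Assumes the Q-table is indexable as qtable[state][action].

def greedy_action(qtable, state):
    """Pick the action with the highest Q-value for the given state."""
    row = qtable[state]
    return max(range(len(row)), key=lambda a: row[a])

# Toy 2-state, 2-action table for illustration only:
toy_qtable = [[0.1, 0.9],
              [0.7, 0.2]]
print(greedy_action(toy_qtable, 0))  # -> 1
print(greedy_action(toy_qtable, 1))  # -> 0
```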
JacksonBrune/f2286e03-fbb4-412b-bab1-ba59eeac8337
JacksonBrune
2025-04-03T10:31:19Z
0
0
peft
[ "peft", "generated_from_trainer", "base_model:unsloth/tinyllama-chat", "base_model:adapter:unsloth/tinyllama-chat", "region:us" ]
null
2025-04-03T10:30:57Z
--- library_name: peft tags: - generated_from_trainer base_model: unsloth/tinyllama-chat model-index: - name: JacksonBrune/f2286e03-fbb4-412b-bab1-ba59eeac8337 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # JacksonBrune/f2286e03-fbb4-412b-bab1-ba59eeac8337 This model was trained from scratch on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.6295 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
Aygun/llama-query-expansion-finetuned
Aygun
2025-04-03T10:29:35Z
0
0
transformers
[ "transformers", "safetensors", "endpoints_compatible", "region:us" ]
null
2025-03-24T14:19:09Z
--- library_name: transformers tags: [] --- # Model Card for Llama Query Expansion Fine-Tuned This model is a fine-tuned version of [meta-llama/Llama-3.2-1B-Instruct](https://huggingface.co/meta-llama/Llama-3.2-1B-Instruct) that has been optimized for query expansion and optimization tasks. It is designed to improve search query performance in multimedia applications by generating expanded or reformulated queries from a given input. ## Model Details ### Model Description This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** Aygün Varol - **Funded by :** Ministry of National Education of the Republic of Türkiye and by the Jane and Aatos Erkko Foundation EVIL-AI project - **Shared by :** Aygün Varol - **Model type:** Causal Language Model / Instruction-Tuned LM - **Language(s) (NLP):** English - **License:** MIT - **Finetuned from model :** [meta-llama/Llama-3.2-1B-Instruct](https://huggingface.co/meta-llama/Llama-3.2-1B-Instruct) ### Model Sources - **Repository:** - **Paper :** - - **Demo :** - ## Uses ### Direct Use This model can be used to optimize and expand user queries to improve search performance. It is particularly useful in systems where query understanding and expansion can enhance retrieval accuracy. ### Downstream Use The fine-tuned model can be integrated into larger systems, for example: - In research settings to study query reformulation techniques. ### Out-of-Scope Use - The model is not designed for general-purpose text generation outside of query optimization. - It may not perform well on queries in languages other than English. - It is not intended for applications where absolute factual correctness is critical. ## Bias, Risks, and Limitations - **Bias:** The model may reflect biases present in the training data. Users should be cautious of potential overgeneralizations or biased query expansions. 
- **Risks:** Generated query expansions may sometimes include irrelevant or redundant information. It is recommended to review outputs before deploying them in high-stakes applications. - **Limitations:** - The model's performance may degrade on queries that differ significantly from those seen during fine-tuning. - It might generate multiple variations when a single concise output is preferable. ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. It is recommended to implement post-processing steps to filter or verify the generated queries before using them in production. ## How to Get Started with the Model To use the model, install the `transformers` library and load the model using the code below: ```python from transformers import AutoModelForCausalLM, AutoTokenizer model = AutoModelForCausalLM.from_pretrained("Aygun/llama-query-expansion-finetuned") tokenizer = AutoTokenizer.from_pretrained("Aygun/llama-query-expansion-finetuned") prompt = "Generate an optimized version of this query: healthy breakfast ideas" inputs = tokenizer(prompt, return_tensors="pt") outputs = model.generate(**inputs, max_new_tokens=50) optimized_query = tokenizer.decode(outputs[0], skip_special_tokens=True) print(optimized_query) ``` ## Training Details ### Training Data The model was fine-tuned on the [s-emanuilov/query-expansion](https://huggingface.co/datasets/s-emanuilov/query-expansion) dataset available on Hugging Face. This dataset consists of query-expansion pairs where each sample includes: - **query:** The original user query. - **expansions:** A list of expanded versions of the query. This dataset was curated to reflect realistic search queries and their corresponding expansions, making it well-suited for training models aimed at query optimization. ### Training Procedure The model was fine-tuned using the LoRA (Low-Rank Adaptation) technique. 
#### Preprocessing Data was preprocessed to create prompt–completion pairs where: Prompt: "Generate expanded versions of this query: <query>\n\nExpanded queries:" Completion: A formatted list of expanded queries. ### Training Hyperparameters - Base Model: meta-llama/Llama-3.2-1B-Instruct - LoRA Rank: 16 - lora_alpha: 32 - Target Modules: ["q_proj", "k_proj", "v_proj", "o_proj"] - LoRA Dropout: 0.05 - Number of Epochs: 3 - Per Device Batch Size: 2 - Gradient Accumulation Steps: 4 - Learning Rate: 2e-4 - Warmup Steps: 100 - Mixed Precision: Enabled (fp16) ## Citation BibTeX: ``` @misc{llama_query_expansion_finetuned, title={Llama Query Expansion Fine-Tuned}, author={Aygün Varol}, note={Fine-tuned version of meta-llama/Llama-3.2-1B-Instruct using LoRA for query expansion.}, year={2025}} ``` APA: Aygün Varol (2025). Llama Query Expansion Fine-Tuned (Fine-tuned version of meta-llama/Llama-3.2-1B-Instruct using LoRA for query expansion). Retrieved from Hugging Face Hub.
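The preprocessing step above can be sketched as a small helper. The prompt template is taken verbatim from the card; the dash-list layout of the completion is an assumption, since the card only says "a formatted list of expanded queries":

```python
def build_example(query, expansions):
    """Build one prompt/completion training pair in the format described above."""
    # Prompt template as given in the card.
    prompt = f"Generate expanded versions of this query: {query}\n\nExpanded queries:"
    # Completion layout is an illustrative assumption (dash-prefixed lines).
    completion = "\n".join(f"- {e}" for e in expansions)
    return {"prompt": prompt, "completion": completion}

example = build_example(
    "healthy breakfast ideas",
    ["quick healthy breakfast recipes", "nutritious morning meals"],
)
print(example["prompt"])
print(example["completion"])
```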
DLUF/ghibli
DLUF
2025-04-03T10:29:29Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2025-04-03T09:37:36Z
--- license: apache-2.0 ---
SubramanianGPH/distilbert-base-uncased-lora-IMDB-text-classification-new
SubramanianGPH
2025-04-03T10:29:26Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-04-03T10:29:23Z
pqnet/bge-reranker-v2-m3-Q8_0-GGUF
pqnet
2025-04-03T10:29:14Z
11
0
sentence-transformers
[ "sentence-transformers", "gguf", "transformers", "text-embeddings-inference", "llama-cpp", "gguf-my-repo", "text-ranking", "multilingual", "base_model:BAAI/bge-reranker-v2-m3", "base_model:quantized:BAAI/bge-reranker-v2-m3", "license:apache-2.0", "endpoints_compatible", "region:us", "feature-extraction" ]
text-ranking
2025-02-26T16:46:25Z
--- license: apache-2.0 pipeline_tag: text-ranking tags: - transformers - sentence-transformers - text-embeddings-inference - llama-cpp - gguf-my-repo language: - multilingual base_model: BAAI/bge-reranker-v2-m3 library_name: sentence-transformers --- # pqnet/bge-reranker-v2-m3-Q8_0-GGUF This model was converted to GGUF format from [`BAAI/bge-reranker-v2-m3`](https://huggingface.co/BAAI/bge-reranker-v2-m3) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/BAAI/bge-reranker-v2-m3) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux): ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo pqnet/bge-reranker-v2-m3-Q8_0-GGUF --hf-file bge-reranker-v2-m3-q8_0.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo pqnet/bge-reranker-v2-m3-Q8_0-GGUF --hf-file bge-reranker-v2-m3-q8_0.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g. `LLAMA_CUDA=1` for Nvidia GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo pqnet/bge-reranker-v2-m3-Q8_0-GGUF --hf-file bge-reranker-v2-m3-q8_0.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo pqnet/bge-reranker-v2-m3-Q8_0-GGUF --hf-file bge-reranker-v2-m3-q8_0.gguf -c 2048 ```
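Since this checkpoint is a reranker rather than a text-generation model, the more natural way to serve it is llama.cpp's reranking mode. The flag and endpoint below (`--reranking`, `/v1/rerank`, Jina-style request fields) match recent llama.cpp builds but may differ in older ones — check `llama-server --help` for your version:

```shell
# Start the server in reranking mode with this GGUF checkpoint.
llama-server --hf-repo pqnet/bge-reranker-v2-m3-Q8_0-GGUF \
  --hf-file bge-reranker-v2-m3-q8_0.gguf --reranking --port 8080 &

# Score candidate documents against a query; each result carries a
# relevance_score (higher = better match).
curl http://localhost:8080/v1/rerank -H "Content-Type: application/json" -d '{
  "query": "what is a panda?",
  "documents": [
    "The giant panda is a bear species endemic to China.",
    "Paris is the capital of France."
  ]
}'
```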
alkahfi123/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-huge_fierce_penguin
alkahfi123
2025-04-03T10:27:08Z
1
1
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am huge fierce penguin", "trl", "conversational", "arxiv:2402.03300", "base_model:Gensyn/Qwen2.5-0.5B-Instruct", "base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-04-01T19:04:56Z
--- base_model: Gensyn/Qwen2.5-0.5B-Instruct library_name: transformers model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-huge_fierce_penguin tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am huge fierce penguin - trl licence: license --- # Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-huge_fierce_penguin This model is a fine-tuned version of [Gensyn/Qwen2.5-0.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-0.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="alkahfi123/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-huge_fierce_penguin", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.15.2 - Transformers: 4.50.3 - Pytorch: 2.5.1 - Datasets: 3.5.0 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. 
Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
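GRPO, the method cited above, replaces PPO's learned value baseline with a group-relative one: each completion's reward is normalized against the mean and standard deviation of the other completions sampled for the same prompt. A minimal, dependency-free sketch of that advantage computation (illustrative only, not the TRL implementation):

```python
import math

def grpo_advantages(group_rewards, eps=1e-8):
    """Normalize each reward against its own sampling group's statistics."""
    n = len(group_rewards)
    mean = sum(group_rewards) / n
    std = math.sqrt(sum((r - mean) ** 2 for r in group_rewards) / n)
    return [(r - mean) / (std + eps) for r in group_rewards]

# Four completions sampled for one prompt, each scored by a reward function:
advantages = grpo_advantages([1.0, 0.0, 0.5, 0.5])
# The best completion gets a positive advantage, the worst a negative one,
# and the group's advantages sum to (approximately) zero.
```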
jpark677/internvl2-8b-mathvista-lora-ep-2-waa-false
jpark677
2025-04-03T10:25:58Z
0
0
null
[ "safetensors", "internvl_chat", "custom_code", "region:us" ]
null
2025-04-03T10:23:57Z
# internvl2-8b-mathvista-2 This repository contains the internvl2-8b-mathvista-2 model.
sahrishkhan/edos-deberta-3-b-model
sahrishkhan
2025-04-03T10:25:11Z
0
0
transformers
[ "transformers", "safetensors", "deberta-v2", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2025-04-03T10:24:10Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. 
## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. 
(2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
prithivMLmods/Safe-or-Unsafe-Content
prithivMLmods
2025-04-03T10:24:17Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2025-04-03T10:24:17Z
--- license: apache-2.0 ---
codermert/malikaa2_fluxx
codermert
2025-04-03T10:23:53Z
0
1
diffusers
[ "diffusers", "flux", "lora", "replicate", "text-to-image", "en", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
2025-04-03T09:29:54Z
--- license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md language: - en tags: - flux - diffusers - lora - replicate base_model: "black-forest-labs/FLUX.1-dev" pipeline_tag: text-to-image # widget: # - text: >- # prompt # output: # url: https://... instance_prompt: TOK --- # Malikaa2_Fluxx <Gallery /> ## About this LoRA This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI. It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train ## Trigger words You should use `TOK` to trigger the image generation. ## Run this LoRA with an API using Replicate ```py import replicate input = { "prompt": "TOK", "lora_weights": "https://huggingface.co/codermert/malikaa2_fluxx/resolve/main/lora.safetensors" } output = replicate.run( "black-forest-labs/flux-dev-lora", input=input ) for index, item in enumerate(output): with open(f"output_{index}.webp", "wb") as file: file.write(item.read()) ``` ## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers) ```py from diffusers import AutoPipelineForText2Image import torch pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda') pipeline.load_lora_weights('codermert/malikaa2_fluxx', weight_name='lora.safetensors') image = pipeline('TOK').images[0] ``` For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters) ## Training details - Steps: 3500 - Learning rate: 0.0004 - LoRA rank: 16 ## Contribute your own examples You can use the [community tab](https://huggingface.co/codermert/malikaa2_fluxx/discussions) to add images that show off
what you’ve made with this LoRA.
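For readers wondering what the "LoRA rank: 16" above implies for adapter size: a rank-r LoRA on a d_out × d_in weight matrix trains two low-rank factors totalling r·(d_out + d_in) parameters instead of the full d_out·d_in. The 3072×3072 dimension below is hypothetical, chosen only to illustrate the arithmetic:

```python
def lora_param_count(d_out, d_in, rank):
    """Trainable parameters in one LoRA pair: B (d_out x rank) + A (rank x d_in)."""
    return rank * (d_out + d_in)

full_matrix = 3072 * 3072                   # 9,437,184 weights, all frozen
adapter = lora_param_count(3072, 3072, 16)  # 98,304 trainable weights
fraction_trained = adapter / full_matrix    # ~1% of the full matrix
```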
JuniperChinenye/WooWoo1
JuniperChinenye
2025-04-03T10:22:07Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-04-03T10:17:31Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. 
## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. 
(2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Monadillo/Reinforce-Pixelcopter-PLE-v0
Monadillo
2025-04-03T10:20:15Z
0
0
null
[ "Pixelcopter-PLE-v0", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
2025-04-02T13:47:55Z
--- tags: - Pixelcopter-PLE-v0 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: Reinforce-Pixelcopter-PLE-v0 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Pixelcopter-PLE-v0 type: Pixelcopter-PLE-v0 metrics: - type: mean_reward value: 30.00 +/- 24.27 name: mean_reward verified: false --- # **Reinforce** Agent playing **Pixelcopter-PLE-v0** This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** . To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
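The Reinforce algorithm this agent uses can be sketched end to end on a toy stand-in for Pixelcopter — a two-armed bandit — so the example needs nothing beyond the standard library. All hyperparameters here are illustrative, not the ones used for this checkpoint:

```python
import math
import random

def softmax(logits):
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def train_reinforce(episodes=2000, lr=0.1, seed=0):
    """REINFORCE with a moving-average baseline on a two-armed bandit."""
    rng = random.Random(seed)
    win_prob = [0.2, 0.8]     # arm 1 is the better action
    logits = [0.0, 0.0]       # policy parameters
    baseline = 0.0
    for _ in range(episodes):
        probs = softmax(logits)
        action = 0 if rng.random() < probs[0] else 1
        reward = 1.0 if rng.random() < win_prob[action] else 0.0
        advantage = reward - baseline
        baseline += 0.01 * (reward - baseline)
        # Gradient of log pi(action) w.r.t. the logits: one_hot(action) - probs
        for i in range(len(logits)):
            grad = (1.0 if i == action else 0.0) - probs[i]
            logits[i] += lr * advantage * grad
    return softmax(logits)

final_policy = train_reinforce()
```

With the better arm paying off 80% of the time versus 20%, the learned policy concentrates most of its probability on arm 1.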
xw17/gemma-2-2b-it_finetuned_2_def_lora3
xw17
2025-04-03T10:20:08Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-04-03T10:20:01Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. 
## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. 
(2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
PrunaAI/HuggingFaceH4-zephyr-7b-alpha-HQQ-4bit-smashed
PrunaAI
2025-04-03T10:19:54Z
0
0
null
[ "mistral", "pruna-ai", "hqq", "region:us" ]
null
2025-04-03T10:14:14Z
--- thumbnail: "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg" base_model: ORIGINAL_REPO_NAME metrics: - memory_disk - memory_inference - inference_latency - inference_throughput - inference_CO2_emissions - inference_energy_consumption tags: - pruna-ai --- <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <a href="https://www.pruna.ai/" target="_blank" rel="noopener noreferrer"> <img src="https://i.imgur.com/eDAlcgk.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </a> </div> <!-- header end --> [![Twitter](https://img.shields.io/twitter/follow/PrunaAI?style=social)](https://twitter.com/PrunaAI) [![GitHub](https://img.shields.io/github/followers/PrunaAI?label=Follow%20%40PrunaAI&style=social)](https://github.com/PrunaAI) [![LinkedIn](https://img.shields.io/badge/LinkedIn-Connect-blue)](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following) [![Discord](https://img.shields.io/badge/Discord-Join%20Us-blue?style=social&logo=discord)](https://discord.gg/rskEr4BZJx) # Simply make AI models cheaper, smaller, faster, and greener! - Give a thumbs up if you like this model! - Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact). - Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai). - Read the documentations to know more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/) - Join Pruna AI community on Discord [here](https://discord.gg/CP4VSgck) to share feedback/suggestions or get help. ## Results ![image info](./plots.png) **Frequently Asked Questions** - ***How does the compression work?*** The model is compressed with hqq. - ***How does the model quality change?*** The quality of the model output might vary compared to the base model. 
- ***How is the model efficiency evaluated?*** These results were obtained with the configuration described in `model/smash_config.json` and are obtained after a hardware warmup. The smashed model is directly compared to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend running them directly in the use-case conditions to know if the smashed model can benefit you. - ***What is the model format?*** We use safetensors. - ***What calibration data has been used?*** If needed by the compression method, we used WikiText as the calibration data. - ***What is the naming convention for Pruna Huggingface models?*** We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model. - ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai). - ***What are "first" metrics?*** Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due to CUDA overheads. - ***What are "Sync" and "Async" metrics?*** "Sync" metrics are obtained by syncing all GPU processes and stopping measurement when all of them are executed. "Async" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend testing the efficiency gains directly in your use-cases. ## Setup You can run the smashed model with these steps: 0. Check that the requirements from the original repo ORIGINAL_REPO_NAME are installed. In particular, check python, cuda, and transformers versions. 1.
Make sure that you have installed the quantization-related packages. ```bash pip install hqq ``` 2. Load & run the model. ```python from transformers import AutoModelForCausalLM, AutoTokenizer from hqq.engine.hf import HQQModelForCausalLM from hqq.models.hf.base import AutoHQQHFModel try: model = HQQModelForCausalLM.from_quantized("PrunaAI/HuggingFaceH4-zephyr-7b-alpha-HQQ-4bit-smashed", device_map='auto') except Exception: model = AutoHQQHFModel.from_quantized("PrunaAI/HuggingFaceH4-zephyr-7b-alpha-HQQ-4bit-smashed") tokenizer = AutoTokenizer.from_pretrained("ORIGINAL_REPO_NAME") input_ids = tokenizer("What is the color of prunes?,", return_tensors='pt').to(model.device)["input_ids"] outputs = model.generate(input_ids, max_new_tokens=216) tokenizer.decode(outputs[0]) ``` ## Configurations The configuration info is in `smash_config.json`. ## Credits & License The license of the smashed model follows the license of the original model. Please check the license of the original model ORIGINAL_REPO_NAME, which provided the base model, before using this one. The license of the `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on Pypi. ## Want to compress other models? - Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact). - Request access to easily compress your own AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
RichardErkhov/ihughes15234_-_phi35_tictactoe_pd_dpo5epoch-gguf
RichardErkhov
2025-04-03T10:19:32Z
0
0
null
[ "gguf", "endpoints_compatible", "region:us", "conversational" ]
null
2025-04-03T09:02:27Z
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) phi35_tictactoe_pd_dpo5epoch - GGUF - Model creator: https://huggingface.co/ihughes15234/ - Original model: https://huggingface.co/ihughes15234/phi35_tictactoe_pd_dpo5epoch/ | Name | Quant method | Size | | ---- | ---- | ---- | | [phi35_tictactoe_pd_dpo5epoch.Q2_K.gguf](https://huggingface.co/RichardErkhov/ihughes15234_-_phi35_tictactoe_pd_dpo5epoch-gguf/blob/main/phi35_tictactoe_pd_dpo5epoch.Q2_K.gguf) | Q2_K | 1.35GB | | [phi35_tictactoe_pd_dpo5epoch.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/ihughes15234_-_phi35_tictactoe_pd_dpo5epoch-gguf/blob/main/phi35_tictactoe_pd_dpo5epoch.IQ3_XS.gguf) | IQ3_XS | 1.49GB | | [phi35_tictactoe_pd_dpo5epoch.IQ3_S.gguf](https://huggingface.co/RichardErkhov/ihughes15234_-_phi35_tictactoe_pd_dpo5epoch-gguf/blob/main/phi35_tictactoe_pd_dpo5epoch.IQ3_S.gguf) | IQ3_S | 1.57GB | | [phi35_tictactoe_pd_dpo5epoch.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/ihughes15234_-_phi35_tictactoe_pd_dpo5epoch-gguf/blob/main/phi35_tictactoe_pd_dpo5epoch.Q3_K_S.gguf) | Q3_K_S | 1.57GB | | [phi35_tictactoe_pd_dpo5epoch.IQ3_M.gguf](https://huggingface.co/RichardErkhov/ihughes15234_-_phi35_tictactoe_pd_dpo5epoch-gguf/blob/main/phi35_tictactoe_pd_dpo5epoch.IQ3_M.gguf) | IQ3_M | 1.65GB | | [phi35_tictactoe_pd_dpo5epoch.Q3_K.gguf](https://huggingface.co/RichardErkhov/ihughes15234_-_phi35_tictactoe_pd_dpo5epoch-gguf/blob/main/phi35_tictactoe_pd_dpo5epoch.Q3_K.gguf) | Q3_K | 1.75GB | | [phi35_tictactoe_pd_dpo5epoch.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/ihughes15234_-_phi35_tictactoe_pd_dpo5epoch-gguf/blob/main/phi35_tictactoe_pd_dpo5epoch.Q3_K_M.gguf) | Q3_K_M | 1.75GB | | 
[phi35_tictactoe_pd_dpo5epoch.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/ihughes15234_-_phi35_tictactoe_pd_dpo5epoch-gguf/blob/main/phi35_tictactoe_pd_dpo5epoch.Q3_K_L.gguf) | Q3_K_L | 1.9GB | | [phi35_tictactoe_pd_dpo5epoch.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/ihughes15234_-_phi35_tictactoe_pd_dpo5epoch-gguf/blob/main/phi35_tictactoe_pd_dpo5epoch.IQ4_XS.gguf) | IQ4_XS | 1.93GB | | [phi35_tictactoe_pd_dpo5epoch.Q4_0.gguf](https://huggingface.co/RichardErkhov/ihughes15234_-_phi35_tictactoe_pd_dpo5epoch-gguf/blob/main/phi35_tictactoe_pd_dpo5epoch.Q4_0.gguf) | Q4_0 | 2.03GB | | [phi35_tictactoe_pd_dpo5epoch.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/ihughes15234_-_phi35_tictactoe_pd_dpo5epoch-gguf/blob/main/phi35_tictactoe_pd_dpo5epoch.IQ4_NL.gguf) | IQ4_NL | 2.04GB | | [phi35_tictactoe_pd_dpo5epoch.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/ihughes15234_-_phi35_tictactoe_pd_dpo5epoch-gguf/blob/main/phi35_tictactoe_pd_dpo5epoch.Q4_K_S.gguf) | Q4_K_S | 2.04GB | | [phi35_tictactoe_pd_dpo5epoch.Q4_K.gguf](https://huggingface.co/RichardErkhov/ihughes15234_-_phi35_tictactoe_pd_dpo5epoch-gguf/blob/main/phi35_tictactoe_pd_dpo5epoch.Q4_K.gguf) | Q4_K | 2.16GB | | [phi35_tictactoe_pd_dpo5epoch.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/ihughes15234_-_phi35_tictactoe_pd_dpo5epoch-gguf/blob/main/phi35_tictactoe_pd_dpo5epoch.Q4_K_M.gguf) | Q4_K_M | 2.16GB | | [phi35_tictactoe_pd_dpo5epoch.Q4_1.gguf](https://huggingface.co/RichardErkhov/ihughes15234_-_phi35_tictactoe_pd_dpo5epoch-gguf/blob/main/phi35_tictactoe_pd_dpo5epoch.Q4_1.gguf) | Q4_1 | 2.24GB | | [phi35_tictactoe_pd_dpo5epoch.Q5_0.gguf](https://huggingface.co/RichardErkhov/ihughes15234_-_phi35_tictactoe_pd_dpo5epoch-gguf/blob/main/phi35_tictactoe_pd_dpo5epoch.Q5_0.gguf) | Q5_0 | 2.46GB | | [phi35_tictactoe_pd_dpo5epoch.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/ihughes15234_-_phi35_tictactoe_pd_dpo5epoch-gguf/blob/main/phi35_tictactoe_pd_dpo5epoch.Q5_K_S.gguf) | Q5_K_S | 
2.46GB | | [phi35_tictactoe_pd_dpo5epoch.Q5_K.gguf](https://huggingface.co/RichardErkhov/ihughes15234_-_phi35_tictactoe_pd_dpo5epoch-gguf/blob/main/phi35_tictactoe_pd_dpo5epoch.Q5_K.gguf) | Q5_K | 2.53GB | | [phi35_tictactoe_pd_dpo5epoch.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/ihughes15234_-_phi35_tictactoe_pd_dpo5epoch-gguf/blob/main/phi35_tictactoe_pd_dpo5epoch.Q5_K_M.gguf) | Q5_K_M | 2.53GB | | [phi35_tictactoe_pd_dpo5epoch.Q5_1.gguf](https://huggingface.co/RichardErkhov/ihughes15234_-_phi35_tictactoe_pd_dpo5epoch-gguf/blob/main/phi35_tictactoe_pd_dpo5epoch.Q5_1.gguf) | Q5_1 | 2.68GB | | [phi35_tictactoe_pd_dpo5epoch.Q6_K.gguf](https://huggingface.co/RichardErkhov/ihughes15234_-_phi35_tictactoe_pd_dpo5epoch-gguf/blob/main/phi35_tictactoe_pd_dpo5epoch.Q6_K.gguf) | Q6_K | 2.92GB | | [phi35_tictactoe_pd_dpo5epoch.Q8_0.gguf](https://huggingface.co/RichardErkhov/ihughes15234_-_phi35_tictactoe_pd_dpo5epoch-gguf/blob/main/phi35_tictactoe_pd_dpo5epoch.Q8_0.gguf) | Q8_0 | 3.78GB | Original model description: --- base_model: ihughes15234/phi_3_5_mini_3k_each tags: - text-generation-inference - transformers - unsloth - llama - trl license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** ihughes15234 - **License:** apache-2.0 - **Finetuned from model :** ihughes15234/phi_3_5_mini_3k_each This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
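The table above lists only file sizes per quant. A rough average bits-per-weight figure can be sketched from those sizes; the ~3.8B parameter count used below is an assumption based on the Phi-3.5-mini base model, not a value stated in the card.

```python
# Approximate average bits per weight for a few of the quants listed above,
# assuming the base model has ~3.8B parameters (Phi-3.5-mini; an assumption).
sizes_gb = {"Q2_K": 1.35, "Q4_K_M": 2.16, "Q6_K": 2.92, "Q8_0": 3.78}
params = 3.8e9

bits_per_weight = {name: gb * 1024**3 * 8 / params for name, gb in sizes_gb.items()}
for name, bits in sorted(bits_per_weight.items(), key=lambda kv: kv[1]):
    print(f"{name}: ~{bits:.2f} bits/weight")
```

These are whole-file averages: GGUF quants typically keep some tensors (e.g. embeddings) at higher precision, so per-tensor bit widths differ.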
usama35/marian-finetuned-kde4-en-to-fr
usama35
2025-04-03T10:19:32Z
0
0
transformers
[ "transformers", "tf", "marian", "text2text-generation", "generated_from_keras_callback", "base_model:Helsinki-NLP/opus-mt-en-fr", "base_model:finetune:Helsinki-NLP/opus-mt-en-fr", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2025-04-03T07:11:05Z
--- library_name: transformers license: apache-2.0 base_model: Helsinki-NLP/opus-mt-en-fr tags: - generated_from_keras_callback model-index: - name: usama35/marian-finetuned-kde4-en-to-fr results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # usama35/marian-finetuned-kde4-en-to-fr This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-fr](https://huggingface.co/Helsinki-NLP/opus-mt-en-fr) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.6863 - Validation Loss: 0.8045 - Epoch: 2 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5e-05, 'decay_steps': 17733, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'decay': 0.0, 'beta_1': np.float32(0.9), 'beta_2': np.float32(0.999), 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: mixed_float16 ### Training results | Train Loss | Validation Loss | Epoch | |:----------:|:---------------:|:-----:| | 1.0620 | 0.8792 | 0 | | 0.7990 | 0.8234 | 1 | | 0.6863 | 0.8045 | 2 | ### Framework versions - Transformers 4.50.2 - TensorFlow 2.18.0 - Datasets 3.5.0 - Tokenizers 0.21.1
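The optimizer block above specifies a Keras `PolynomialDecay` schedule with `power=1.0` and `cycle=False`, i.e. the learning rate falls linearly from 5e-05 to 0.0 over 17733 steps. A small re-implementation of that schedule for illustration (not the Keras code itself):

```python
# Linear learning-rate decay matching the PolynomialDecay config above
# (initial_learning_rate=5e-05, decay_steps=17733, end_learning_rate=0.0,
# power=1.0, cycle=False).
def polynomial_decay(step, initial_lr=5e-5, decay_steps=17733, end_lr=0.0, power=1.0):
    step = min(step, decay_steps)  # with cycle=False the schedule clamps at decay_steps
    frac = 1.0 - step / decay_steps
    return (initial_lr - end_lr) * frac**power + end_lr

for step in (0, 5000, 17733):
    print(step, polynomial_decay(step))
```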
andreamaduzzi/LLaNA-7B_v2
andreamaduzzi
2025-04-03T10:19:15Z
0
0
null
[ "safetensors", "llana", "en", "dataset:andreamaduzzi/ShapeNeRF-Text", "base_model:meta-llama/Llama-2-7b-hf", "base_model:finetune:meta-llama/Llama-2-7b-hf", "license:mit", "region:us" ]
null
2025-04-03T09:35:56Z
--- license: mit datasets: - andreamaduzzi/ShapeNeRF-Text language: - en base_model: - meta-llama/Llama-2-7b-hf ---
ngdangkhanh/ppo-LunarLander-v2
ngdangkhanh
2025-04-03T10:17:24Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2025-04-03T10:17:04Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 252.18 +/- 16.32 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) A minimal loading sketch (the checkpoint filename is an assumption; check the repo's files): ```python from huggingface_sb3 import load_from_hub from stable_baselines3 import PPO checkpoint = load_from_hub("ngdangkhanh/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip") model = PPO.load(checkpoint) ```
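The `mean_reward` metric above (252.18 +/- 16.32) is typically the mean ± standard deviation of per-episode returns over evaluation rollouts. A plain-Python sketch of that computation; the episode returns below are illustrative, not the actual evaluation data:

```python
import statistics

# Illustrative per-episode returns; the real metric comes from rollouts of the
# trained PPO agent in LunarLander-v2.
returns = [238.4, 251.0, 267.1, 249.9, 254.5]
mean = statistics.mean(returns)
std = statistics.pstdev(returns)
print(f"mean_reward: {mean:.2f} +/- {std:.2f}")
```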
seth0611/DeepSeek-R1-Distill-Qwen-1.5B-GRPO
seth0611
2025-04-03T10:17:21Z
1
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "open-r1", "trl", "grpo", "conversational", "dataset:open-r1/OpenR1-Math-220k", "arxiv:2402.03300", "base_model:deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B", "base_model:finetune:deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-03-27T07:48:46Z
--- base_model: deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B datasets: open-r1/OpenR1-Math-220k library_name: transformers model_name: DeepSeek-R1-Distill-Qwen-1.5B-GRPO tags: - generated_from_trainer - open-r1 - trl - grpo licence: license --- # Model Card for DeepSeek-R1-Distill-Qwen-1.5B-GRPO This model is a fine-tuned version of [deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B) on the [open-r1/OpenR1-Math-220k](https://huggingface.co/datasets/open-r1/OpenR1-Math-220k) dataset. It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="seth0611/DeepSeek-R1-Distill-Qwen-1.5B-GRPO", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/shoubo/huggingface/runs/jr6199dj) This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.16.0.dev0 - Transformers: 4.49.0 - Pytorch: 2.5.1+cu124 - Datasets: 3.5.0 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. 
Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
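GRPO, as described in the DeepSeekMath paper cited above, replaces a learned value baseline with group-relative advantages: for a group of completions sampled for the same prompt, each completion's reward is normalized by the group's mean and standard deviation. A minimal sketch of that normalization (illustrative; not the TRL implementation):

```python
import statistics

def group_relative_advantages(rewards, eps=1e-8):
    """Normalize each reward by the group's mean and (population) std."""
    mean = statistics.mean(rewards)
    std = statistics.pstdev(rewards)
    return [(r - mean) / (std + eps) for r in rewards]

# Four completions sampled for one prompt, with illustrative scalar rewards.
advantages = group_relative_advantages([1.0, 0.0, 0.5, 0.5])
print([round(a, 3) for a in advantages])
```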
PrunaAI/openchat-openchat_3.5-HQQ-4bit-smashed
PrunaAI
2025-04-03T10:17:07Z
5
0
transformers
[ "transformers", "mistral", "text-generation", "pruna-ai", "conversational", "autotrain_compatible", "endpoints_compatible", "hqq", "region:us" ]
text-generation
2024-06-24T10:34:31Z
--- thumbnail: "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg" base_model: ORIGINAL_REPO_NAME metrics: - memory_disk - memory_inference - inference_latency - inference_throughput - inference_CO2_emissions - inference_energy_consumption tags: - pruna-ai --- <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <a href="https://www.pruna.ai/" target="_blank" rel="noopener noreferrer"> <img src="https://i.imgur.com/eDAlcgk.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </a> </div> <!-- header end --> [![Twitter](https://img.shields.io/twitter/follow/PrunaAI?style=social)](https://twitter.com/PrunaAI) [![GitHub](https://img.shields.io/github/followers/PrunaAI?label=Follow%20%40PrunaAI&style=social)](https://github.com/PrunaAI) [![LinkedIn](https://img.shields.io/badge/LinkedIn-Connect-blue)](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following) [![Discord](https://img.shields.io/badge/Discord-Join%20Us-blue?style=social&logo=discord)](https://discord.gg/rskEr4BZJx) # Simply make AI models cheaper, smaller, faster, and greener! - Give a thumbs up if you like this model! - Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact). - Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai). - Read the documentation to learn more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/) - Join the Pruna AI community on Discord [here](https://discord.gg/CP4VSgck) to share feedback/suggestions or get help. ## Results ![image info](./plots.png) **Frequently Asked Questions** - ***How does the compression work?*** The model is compressed with hqq. - ***How does the model quality change?*** The quality of the model output might vary compared to the base model. 
- ***How is the model efficiency evaluated?*** These results were obtained with the configuration described in `model/smash_config.json`, after a hardware warmup. The smashed model is directly compared to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend running them directly under your use-case conditions to know if the smashed model can benefit you. - ***What is the model format?*** We use safetensors. - ***What calibration data has been used?*** If needed by the compression method, we used WikiText as the calibration data. - ***What is the naming convention for Pruna Huggingface models?*** We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model. - ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai). - ***What are "first" metrics?*** Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than subsequent runs due to CUDA overheads. - ***What are "Sync" and "Async" metrics?*** "Sync" metrics are obtained by syncing all GPU processes and stopping measurement when all of them have executed. "Async" metrics are obtained without syncing all GPU processes, stopping when the model output can be used by the CPU. We provide both metrics since either could be relevant depending on the use-case. We recommend testing the efficiency gains directly in your use-cases. ## Setup You can run the smashed model with these steps: 0. Check the requirements of the original repo ORIGINAL_REPO_NAME. In particular, check the python, cuda, and transformers versions. 1. 
Make sure you have installed the quantization-related packages. ```bash pip install hqq ``` 2. Load & run the model. ```python from transformers import AutoModelForCausalLM, AutoTokenizer from hqq.engine.hf import HQQModelForCausalLM from hqq.models.hf.base import AutoHQQHFModel try: model = HQQModelForCausalLM.from_quantized("PrunaAI/openchat-openchat_3.5-HQQ-4bit-smashed", device_map='auto') except Exception: model = AutoHQQHFModel.from_quantized("PrunaAI/openchat-openchat_3.5-HQQ-4bit-smashed") tokenizer = AutoTokenizer.from_pretrained("ORIGINAL_REPO_NAME") input_ids = tokenizer("What is the color of prunes?", return_tensors='pt').to(model.device)["input_ids"] outputs = model.generate(input_ids, max_new_tokens=216) tokenizer.decode(outputs[0]) ``` ## Configurations The configuration info is in `smash_config.json`. ## Credits & License The license of the smashed model follows the license of the original model. Please check the license of the original model ORIGINAL_REPO_NAME, which provided the base model, before using this one. The license of the `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on PyPI. ## Want to compress other models? - Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact). - Request access to easily compress your own AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
xw17/gemma-2-2b-it_finetuned_1_def_lora3
xw17
2025-04-03T10:16:19Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-04-03T10:16:08Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. 
## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. 
(2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
hajimeni/reranker-distilroberta-base-nli
hajimeni
2025-04-03T10:15:39Z
0
0
sentence-transformers
[ "sentence-transformers", "safetensors", "roberta", "cross-encoder", "generated_from_trainer", "dataset_size:100000", "loss:CrossEntropyLoss", "text-classification", "en", "dataset:sentence-transformers/all-nli", "arxiv:1908.10084", "base_model:distilbert/distilroberta-base", "base_model:finetune:distilbert/distilroberta-base", "model-index", "region:us" ]
text-classification
2025-04-03T10:15:25Z
--- language: - en tags: - sentence-transformers - cross-encoder - generated_from_trainer - dataset_size:100000 - loss:CrossEntropyLoss base_model: distilbert/distilroberta-base datasets: - sentence-transformers/all-nli pipeline_tag: text-classification library_name: sentence-transformers metrics: - f1_macro - f1_micro - f1_weighted model-index: - name: CrossEncoder based on distilbert/distilroberta-base results: - task: type: cross-encoder-classification name: Cross Encoder Classification dataset: name: AllNLI dev type: AllNLI-dev metrics: - type: f1_macro value: 0.8471837177220953 name: F1 Macro - type: f1_micro value: 0.848 name: F1 Micro - type: f1_weighted value: 0.8471638579236317 name: F1 Weighted - task: type: cross-encoder-classification name: Cross Encoder Classification dataset: name: AllNLI test type: AllNLI-test metrics: - type: f1_macro value: 0.7672948900569446 name: F1 Macro - type: f1_micro value: 0.7678571428571429 name: F1 Micro - type: f1_weighted value: 0.7681818441932339 name: F1 Weighted --- # CrossEncoder based on distilbert/distilroberta-base This is a [Cross Encoder](https://www.sbert.net/docs/cross_encoder/usage/usage.html) model finetuned from [distilbert/distilroberta-base](https://huggingface.co/distilbert/distilroberta-base) on the [all-nli](https://huggingface.co/datasets/sentence-transformers/all-nli) dataset using the [sentence-transformers](https://www.SBERT.net) library. It computes scores for pairs of texts, which can be used for text pair classification. 
## Model Details ### Model Description - **Model Type:** Cross Encoder - **Base model:** [distilbert/distilroberta-base](https://huggingface.co/distilbert/distilroberta-base) <!-- at revision fb53ab8802853c8e4fbdbcd0529f21fc6f459b2b --> - **Maximum Sequence Length:** 514 tokens - **Number of Output Labels:** 3 labels - **Training Dataset:** - [all-nli](https://huggingface.co/datasets/sentence-transformers/all-nli) - **Language:** en <!-- - **License:** Unknown --> ### Model Sources - **Documentation:** [Sentence Transformers Documentation](https://sbert.net) - **Documentation:** [Cross Encoder Documentation](https://www.sbert.net/docs/cross_encoder/usage/usage.html) - **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers) - **Hugging Face:** [Cross Encoders on Hugging Face](https://huggingface.co/models?library=sentence-transformers&other=cross-encoder) ## Usage ### Direct Usage (Sentence Transformers) First install the Sentence Transformers library: ```bash pip install -U sentence-transformers ``` Then you can load this model and run inference. 
```python from sentence_transformers import CrossEncoder # Download from the 🤗 Hub model = CrossEncoder("hajimeni/reranker-distilroberta-base-nli") # Get scores for pairs of texts pairs = [ ['Two women are embracing while holding to go packages.', 'The sisters are hugging goodbye while holding to go packages after just eating lunch.'], ['Two women are embracing while holding to go packages.', 'Two woman are holding packages.'], ['Two women are embracing while holding to go packages.', 'The men are fighting outside a deli.'], ['Two young children in blue jerseys, one with the number 9 and one with the number 2 are standing on wooden steps in a bathroom and washing their hands in a sink.', 'Two kids in numbered jerseys wash their hands.'], ['Two young children in blue jerseys, one with the number 9 and one with the number 2 are standing on wooden steps in a bathroom and washing their hands in a sink.', 'Two kids at a ballgame wash their hands.'], ] scores = model.predict(pairs) print(scores.shape) # (5, 3) ``` <!-- ### Direct Usage (Transformers) <details><summary>Click to see the direct usage in Transformers</summary> </details> --> <!-- ### Downstream Usage (Sentence Transformers) You can finetune this model on your own dataset. 
<details><summary>Click to expand</summary> </details> --> <!-- ### Out-of-Scope Use *List how the model may foreseeably be misused and address what users ought not to do with the model.* --> ## Evaluation ### Metrics #### Cross Encoder Classification * Datasets: `AllNLI-dev` and `AllNLI-test` * Evaluated with [<code>CrossEncoderClassificationEvaluator</code>](https://sbert.net/docs/package_reference/cross_encoder/evaluation.html#sentence_transformers.cross_encoder.evaluation.CrossEncoderClassificationEvaluator) | Metric | AllNLI-dev | AllNLI-test | |:-------------|:-----------|:------------| | **f1_macro** | **0.8472** | **0.7673** | | f1_micro | 0.848 | 0.7679 | | f1_weighted | 0.8472 | 0.7682 | <!-- ## Bias, Risks and Limitations *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.* --> <!-- ### Recommendations *What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.* --> ## Training Details ### Training Dataset #### all-nli * Dataset: [all-nli](https://huggingface.co/datasets/sentence-transformers/all-nli) at [d482672](https://huggingface.co/datasets/sentence-transformers/all-nli/tree/d482672c8e74ce18da116f430137434ba2e52fab) * Size: 100,000 training samples * Columns: <code>premise</code>, <code>hypothesis</code>, and <code>label</code> * Approximate statistics based on the first 1000 samples: | | premise | hypothesis | label | |:--------|:------------------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------------------|:-------------------------------------------------------------------| | type | string | string | int | | details | <ul><li>min: 23 characters</li><li>mean: 69.54 characters</li><li>max: 227 characters</li></ul> | <ul><li>min: 11 characters</li><li>mean: 38.26 characters</li><li>max: 131 
characters</li></ul> | <ul><li>0: ~33.40%</li><li>1: ~33.30%</li><li>2: ~33.30%</li></ul> | * Samples: | premise | hypothesis | label | |:--------------------------------------------------------------------|:---------------------------------------------------------------|:---------------| | <code>A person on a horse jumps over a broken down airplane.</code> | <code>A person is training his horse for a competition.</code> | <code>1</code> | | <code>A person on a horse jumps over a broken down airplane.</code> | <code>A person is at a diner, ordering an omelette.</code> | <code>2</code> | | <code>A person on a horse jumps over a broken down airplane.</code> | <code>A person is outdoors, on a horse.</code> | <code>0</code> | * Loss: [<code>CrossEntropyLoss</code>](https://sbert.net/docs/package_reference/cross_encoder/losses.html#crossentropyloss) ### Evaluation Dataset #### all-nli * Dataset: [all-nli](https://huggingface.co/datasets/sentence-transformers/all-nli) at [d482672](https://huggingface.co/datasets/sentence-transformers/all-nli/tree/d482672c8e74ce18da116f430137434ba2e52fab) * Size: 1,000 evaluation samples * Columns: <code>premise</code>, <code>hypothesis</code>, and <code>label</code> * Approximate statistics based on the first 1000 samples: | | premise | hypothesis | label | |:--------|:------------------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------------------|:-------------------------------------------------------------------| | type | string | string | int | | details | <ul><li>min: 16 characters</li><li>mean: 75.01 characters</li><li>max: 229 characters</li></ul> | <ul><li>min: 11 characters</li><li>mean: 37.66 characters</li><li>max: 116 characters</li></ul> | <ul><li>0: ~33.10%</li><li>1: ~33.30%</li><li>2: ~33.60%</li></ul> | * Samples: | premise | hypothesis | label | 
|:-------------------------------------------------------------------|:---------------------------------------------------------------------------------------------------|:---------------| | <code>Two women are embracing while holding to go packages.</code> | <code>The sisters are hugging goodbye while holding to go packages after just eating lunch.</code> | <code>1</code> | | <code>Two women are embracing while holding to go packages.</code> | <code>Two woman are holding packages.</code> | <code>0</code> | | <code>Two women are embracing while holding to go packages.</code> | <code>The men are fighting outside a deli.</code> | <code>2</code> | * Loss: [<code>CrossEntropyLoss</code>](https://sbert.net/docs/package_reference/cross_encoder/losses.html#crossentropyloss) ### Training Hyperparameters #### Non-Default Hyperparameters - `eval_strategy`: steps - `per_device_train_batch_size`: 64 - `per_device_eval_batch_size`: 64 - `num_train_epochs`: 1 - `warmup_ratio`: 0.1 - `bf16`: True #### All Hyperparameters <details><summary>Click to expand</summary> - `overwrite_output_dir`: False - `do_predict`: False - `eval_strategy`: steps - `prediction_loss_only`: True - `per_device_train_batch_size`: 64 - `per_device_eval_batch_size`: 64 - `per_gpu_train_batch_size`: None - `per_gpu_eval_batch_size`: None - `gradient_accumulation_steps`: 1 - `eval_accumulation_steps`: None - `torch_empty_cache_steps`: None - `learning_rate`: 5e-05 - `weight_decay`: 0.0 - `adam_beta1`: 0.9 - `adam_beta2`: 0.999 - `adam_epsilon`: 1e-08 - `max_grad_norm`: 1.0 - `num_train_epochs`: 1 - `max_steps`: -1 - `lr_scheduler_type`: linear - `lr_scheduler_kwargs`: {} - `warmup_ratio`: 0.1 - `warmup_steps`: 0 - `log_level`: passive - `log_level_replica`: warning - `log_on_each_node`: True - `logging_nan_inf_filter`: True - `save_safetensors`: True - `save_on_each_node`: False - `save_only_model`: False - `restore_callback_states_from_checkpoint`: False - `no_cuda`: False - `use_cpu`: False - 
`use_mps_device`: False - `seed`: 42 - `data_seed`: None - `jit_mode_eval`: False - `use_ipex`: False - `bf16`: True - `fp16`: False - `fp16_opt_level`: O1 - `half_precision_backend`: auto - `bf16_full_eval`: False - `fp16_full_eval`: False - `tf32`: None - `local_rank`: 0 - `ddp_backend`: None - `tpu_num_cores`: None - `tpu_metrics_debug`: False - `debug`: [] - `dataloader_drop_last`: False - `dataloader_num_workers`: 0 - `dataloader_prefetch_factor`: None - `past_index`: -1 - `disable_tqdm`: False - `remove_unused_columns`: True - `label_names`: None - `load_best_model_at_end`: False - `ignore_data_skip`: False - `fsdp`: [] - `fsdp_min_num_params`: 0 - `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False} - `tp_size`: 0 - `fsdp_transformer_layer_cls_to_wrap`: None - `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None} - `deepspeed`: None - `label_smoothing_factor`: 0.0 - `optim`: adamw_torch - `optim_args`: None - `adafactor`: False - `group_by_length`: False - `length_column_name`: length - `ddp_find_unused_parameters`: None - `ddp_bucket_cap_mb`: None - `ddp_broadcast_buffers`: False - `dataloader_pin_memory`: True - `dataloader_persistent_workers`: False - `skip_memory_metrics`: True - `use_legacy_prediction_loop`: False - `push_to_hub`: False - `resume_from_checkpoint`: None - `hub_model_id`: None - `hub_strategy`: every_save - `hub_private_repo`: None - `hub_always_push`: False - `gradient_checkpointing`: False - `gradient_checkpointing_kwargs`: None - `include_inputs_for_metrics`: False - `include_for_metrics`: [] - `eval_do_concat_batches`: True - `fp16_backend`: auto - `push_to_hub_model_id`: None - `push_to_hub_organization`: None - `mp_parameters`: - `auto_find_batch_size`: False - `full_determinism`: False - `torchdynamo`: None - `ray_scope`: last - `ddp_timeout`: 1800 - 
`torch_compile`: False - `torch_compile_backend`: None - `torch_compile_mode`: None - `dispatch_batches`: None - `split_batches`: None - `include_tokens_per_second`: False - `include_num_input_tokens_seen`: False - `neftune_noise_alpha`: None - `optim_target_modules`: None - `batch_eval_metrics`: False - `eval_on_start`: False - `use_liger_kernel`: False - `eval_use_gather_object`: False - `average_tokens_across_devices`: False - `prompts`: None - `batch_sampler`: batch_sampler - `multi_dataset_batch_sampler`: proportional </details> ### Training Logs | Epoch | Step | Training Loss | Validation Loss | AllNLI-dev_f1_macro | AllNLI-test_f1_macro | |:------:|:----:|:-------------:|:---------------:|:-------------------:|:--------------------:| | -1 | -1 | - | - | 0.1665 | - | | 0.0640 | 100 | 1.0595 | - | - | - | | 0.1280 | 200 | 0.7 | - | - | - | | 0.1919 | 300 | 0.6039 | - | - | - | | 0.2559 | 400 | 0.5821 | - | - | - | | 0.3199 | 500 | 0.5521 | 0.4509 | 0.8186 | - | | 0.3839 | 600 | 0.5148 | - | - | - | | 0.4479 | 700 | 0.5334 | - | - | - | | 0.5118 | 800 | 0.5125 | - | - | - | | 0.5758 | 900 | 0.4893 | - | - | - | | 0.6398 | 1000 | 0.503 | 0.3864 | 0.8554 | - | | 0.7038 | 1100 | 0.4706 | - | - | - | | 0.7678 | 1200 | 0.4635 | - | - | - | | 0.8317 | 1300 | 0.44 | - | - | - | | 0.8957 | 1400 | 0.459 | - | - | - | | 0.9597 | 1500 | 0.4481 | 0.3537 | 0.8472 | - | | -1 | -1 | - | - | - | 0.7673 | ### Framework Versions - Python: 3.11.11 - Sentence Transformers: 4.0.1 - Transformers: 4.50.2 - PyTorch: 2.6.0+cu124 - Accelerate: 1.5.2 - Datasets: 3.5.0 - Tokenizers: 0.21.1 ## Citation ### BibTeX #### Sentence Transformers ```bibtex @inproceedings{reimers-2019-sentence-bert, title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks", author = "Reimers, Nils and Gurevych, Iryna", booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing", month = "11", year = "2019", publisher = "Association for Computational 
Linguistics", url = "https://arxiv.org/abs/1908.10084", } ``` <!-- ## Glossary *Clearly define terms in order to be accessible across audiences.* --> <!-- ## Model Card Authors *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.* --> <!-- ## Model Card Contact *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.* -->
Ayomidexcii/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-mottled_leaping_ape
Ayomidexcii
2025-04-03T10:14:23Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am mottled leaping ape", "trl", "conversational", "arxiv:2402.03300", "base_model:Gensyn/Qwen2.5-0.5B-Instruct", "base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-04-03T09:41:09Z
--- base_model: Gensyn/Qwen2.5-0.5B-Instruct library_name: transformers model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-mottled_leaping_ape tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am mottled leaping ape - trl licence: license --- # Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-mottled_leaping_ape This model is a fine-tuned version of [Gensyn/Qwen2.5-0.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-0.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="Ayomidexcii/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-mottled_leaping_ape", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.15.2 - Transformers: 4.50.3 - Pytorch: 2.5.1 - Datasets: 3.5.0 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. 
Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
SHerlocked66/LF-CODER-DEEPSEEK1.3
SHerlocked66
2025-04-03T10:13:46Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-04-03T10:04:47Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. 
## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. 
(2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
sapopi/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-savage_humming_puffin
sapopi
2025-04-03T10:13:06Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am savage humming puffin", "trl", "conversational", "arxiv:2402.03300", "base_model:Gensyn/Qwen2.5-0.5B-Instruct", "base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-04-03T06:52:54Z
--- base_model: Gensyn/Qwen2.5-0.5B-Instruct library_name: transformers model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-savage_humming_puffin tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am savage humming puffin - trl licence: license --- # Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-savage_humming_puffin This model is a fine-tuned version of [Gensyn/Qwen2.5-0.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-0.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="sapopi/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-savage_humming_puffin", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.15.2 - Transformers: 4.50.3 - Pytorch: 2.5.1 - Datasets: 3.5.0 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. 
Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
mengyaolyu/mmssr-7b-styler
mengyaolyu
2025-04-03T10:13:05Z
0
0
null
[ "data-selection", "multi-modal-sft", "llava", "en", "arxiv:2503.13383", "base_model:lmms-lab/llava-onevision-qwen2-7b-mid-stage-a4", "base_model:finetune:lmms-lab/llava-onevision-qwen2-7b-mid-stage-a4", "license:apache-2.0", "region:us" ]
null
2025-04-02T02:42:48Z
--- license: apache-2.0 language: - en tags: - data-selection - multi-modal-sft - llava base_model: lmms-lab/llava-onevision-qwen2-7b-mid-stage-a4 --- <img src="./assets/cotc-logo.png" alt="cotc logo" width="80" style="margin-left:'auto' margin-right:'auto'"/> # mmSSR-Styler Model Card [Paper]() | [Project](https://lyumengyao.github.io/projects/mmssr) | [GitHub](https://github.com/lyumengyao/mmssr) | [HF Collection](https://huggingface.co/collections/mengyaolyu/mmssr) [**Cream of the Crop: Harvesting Rich, Scalable and Transferable Multi-Modal Data for Instruction Fine-Tuning**](https://arxiv.org/abs/2503.13383)<br /> [Mengyao Lyu](https://lyumengyao.github.io/), Yan Li, Huasong Zhong, Wenhao Yang, Hui Chen, [Jungong Han](https://jungonghan.github.io/), [Guiguang Ding](http://ise.thss.tsinghua.edu.cn/mig/dgg.html)†, Zhenheng Yang<br /> Tsinghua University, BNRist, Bytedance 🌐 <strong>The rapid yet inefficient expansion of multi-modal data</strong>, combined with the sheer <strong>token volume</strong> and increased <strong>heterogeneity of sources</strong>, amplifies both the significance and complexity of multi-modal data selection at scale.<br /> 📊 <strong>We redefine the granularity of data valuation</strong> by decomposing <em>quality</em> into <strong>14 VL capabilities</strong> and formulating <em>diversity</em> into <strong>superficial interaction styles</strong>, such that <strong>m</strong>ulti-<strong>m</strong>odal <strong>r</strong>ich <strong>s</strong>corers and <strong>s</strong>tyler (<strong>mmSSR</strong>) guarantee that high-scoring information is conveyed to users in diversified forms.<br /> 👑 <strong>mmSSR is the first to scale to the 2.6M open data pool of LLaVA-OVSI</strong>, achieving <strong>99.1% of full performance with only 30% of the data</strong>.
Across <strong>10+</strong> experimental settings, validated by <strong>14+</strong> multi-modal benchmarks, we demonstrate consistent improvements with <em>varying budget constraints, general or specific capability customization and acquisition, and training-free generalization to new domains for curation</em>. <br /> ## 👑 Performance | | MMBench<sub>en-v1.1</sub> | MMStar | MMMU | MMVet | BLINK | MMT-Bench | MME | AI2D | ScienceQA | MathVista<sub>MINI</sub> | >Rand | /FULL | |-|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:| | | | | | | | 5% | | | | | | | | Random |73.74 | 47.98 | <strong>43.70</strong> | 42.34 | 50.61 | 58.87 | <strong>2004.50</strong> | 73.07 | 81.52 | 45.47 | - | 89.29 | | PPL-mid |67.34 | 45.27 | 38.98 | 30.18 | 45.27 | 54.33 | 1887.71 | 66.74 | 74.76 | 31.40 | 0/10 | 78.31 | | PPL-si |71.98 | 44.67 | 38.48 | 35.14 | <strong><u>54.10</u></strong> | 57.98 | 1856.79 | 67.84 | 78.24 | 36.50 | 1/10 | 83.10 | | Deita |72.91 | 47.47 | 41.28 | 40.23 | <u>52.59</u> | 56.57 | 1956.50 | 70.76 | 79.57 | 36.10 | 1/10 | 85.79 | | CLIP |<u>74.23</u> | 47.27 | 40.08 | 35.73 | <u>52.96</u> | 56.73 | 1902.65 | <u>73.61</u> | 78.63 | 39.80 | 3/10 | 85.41 | | E5-V |70.90 | 43.00 | 38.78 | 38.44 | 49.94 | 54.65 | 1810.47 | 66.58 | 77.54 | 37.40 | 0/10 | 81.87 | | COINCIDE |72.76 | <u>48.33</u> | 43.17 | <strong><u>45.60</u></strong> | 49.43 | 57.50 | 1852.66 | <u>73.15</u> | 79.62 | 45.40 | 3/10 | 88.47 | | mmSSR |<strong><u>77.79</u></strong> | <strong><u>53.33</u></strong> | 43.27 | <u>43.53</u> | <u>51.83</u> | <strong><u>59.16</u></strong> | 1938.68 | <strong><u>77.66</u></strong> | <strong><u>88.45</u></strong> | <strong><u>52.00</u></strong> | <strong>8/10</strong> | <strong><u>93.20</u></strong> | | | | | | | | 10% | | | | | | | | Random | 74.57 | 51.57 | 44.72 | 42.91 | 52.59 | 58.99 | 2033.28 | 74.42 | 84.33 | 47.80 | 0/10 | 91.70 | | PPL-mid | 63.54 | 46.87 | 39.08 | 36.93 | 45.90 | 54.30 | 1831.03 | 67.23 | 73.87 | 39.50 | 0/10 | 80.72 | | PPL-si | 
<u>74.69</u> | 49.80 | 41.28 | 40.60 | <u>53.09</u> | 57.95 | 1841.11 | <u>75.16</u> | 80.71 | 40.40 | 3/10 | 87.63 | | Deita | <u>75.39</u> | 48.80 | 43.77 | 42.25 | <strong><u>54.48</u></strong> | 57.40 | 1996.34 | 71.60 | 78.33 | 40.80 | 2/10 | 88.72 | | CLIP | <u>75.23</u> | 49.87 | 40.38 | 37.16 | <u>53.59</u> | <u>59.35</u> | 1921.04 | <u>76.62</u> | 80.07 | 41.00 | 4/10 | 87.69 | | E5-V | 70.51 | 45.13 | 38.78 | 39.59 | 50.57 | 55.10 | 1787.94 | 68.94 | 77.54 | 37.20 | 0/10 | 82.76 | | COINCIDE | <u>75.23</u> | 49.73 | <u>44.77</u> | 42.52 | 50.69 | 58.71 | 2027.58 | <u>74.77</u> | 82.05 | 47.00 | 3/10 | 90.66 | | mmSSR | <strong><u>77.32</u></strong> | <strong><u>53.27</u></strong> | <strong><u>45.06</u></strong> | <strong><u>42.98</u></strong> | <u>54.10</u> | <strong><u>59.61</u></strong> | <strong><u>2045.00</u></strong> | <strong><u>78.76</u></strong> | <strong><u>89.94</u></strong> | <strong><u>52.40</u></strong> | <strong>10/10</strong> | <strong><u>94.75</u></strong> | | | | | | | | 30% | | | | | | | | Random | 78.25 | 54.60 | 44.40 | 46.10 | 55.23 | 59.61 | 2092.60 | 78.28 | 88.32 | 52.57 | - | 95.82 | | PPL-mid | 73.99 | <u>54.93</u> | 43.97 | 41.01 | 53.09 | 58.78 | 2036.54 | 77.20 | 87.01 | <u>56.40</u> | 2/10 | 93.77 | | PPL-si | 72.52 | 48.33 | 42.57 | 43.62 | 51.83 | 55.07 | 1976.46 | 76.55 | 78.48 | 42.20 | 0/10 | 88.22 | | Deita | 76.93 | 54.13 | 43.67 | 44.04 | 55.11 | <u>59.66</u> | 2042.63 | <u>79.50</u> | 83.54 | 50.30 | 2/10 | 94.05 | | CLIP | 74.30 | 53.80 | 43.07 | 45.87 | 51.95 | 59.16 | 2039.14 | <u>80.02</u> | 83.99 | 48.80 | 1/10 | 93.07 | | E5-V | 74.30 | 46.07 | 43.27 | <u>47.80</u> | 50.32 | 57.85 | 1955.13 | 74.45 | 81.61 | 43.70 | 1/10 | 89.52 | | COINCIDE | 78.02 | <u>55.47</u> | <strong><u>45.66</u></strong> | <u>46.24</u> | 52.84 | <u>59.80</u> | 2047.37 | <u>79.73</u> | 84.33 | <u>55.10</u> | 6/10 | 95.82 | | mmSSR | <strong><u>79.57</u></strong> | <strong><u>57.53</u></strong> | <u>44.87</u> | 
<strong><u>48.49</u></strong> | <strong><u>56.24</u></strong> | <strong><u>59.83</u></strong> | <strong><u>2132.93</u></strong> | <strong><u>81.25</u></strong> | <strong><u>92.46</u></strong> | <strong><u>57.40</u></strong> | 10/10 | <strong><u>99.11</u></strong> | | | | | | | | FULL | | | | | | | | LLaVA<sub>OVSI</sub> | 80.57 | 59.40 | 45.16 | 47.16 | 56.87 | 60.73 | 2117.56 | 81.87 | 92.76 | 59.60 | - | 100 | <!-- ## 🤖 Model Zoo to be updated --> ## 🥛 Example Usage ![example](./assets/080845.png) <!-- mavis_math_metagen/080845.png --> ``` human: You are an AI expert annotator responsible for classifying the interaction styles of image-question-answer pairs. Identify the applicable styles from the candidate list, then rank the selected styles by frequency of occurrence. Question: <image> According to the question shown in the image, please first conduct reasoning, and then answer the question and provide the final value, e.g., The answer is xxx Question: What is the area of the parallelogram? Answer: This parallelogram has base $b=4$ millimeters and height $h=3$ millimeters. Multiply the base by the height to find the area in square millimeters. \$\$ \\begin{aligned} A & =b h \\\\ & =(4)(3) \\\\ & =12 \\end{aligned} $$ The area of the parallelogram is $\\mathbf{1 2}$ square millimeters. So the answer is 12 The answer is 12 Interaction style candidates: [multi-choice, coordinate, yes/no, word/short-phrase, short description, detailed description, comparison, chain-of-thought (step-by-step), specified style] Styles: gpt: chain-of-thought (step-by-step), detailed description ``` The obtained styles will be used for subset sampling. Check out the codebase at [lyumengyao/mmssr](https://github.com/lyumengyao/mmssr) for detailed instructions. 
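As a rough illustration only (this is a hypothetical sketch, not the mmSSR implementation — see the linked codebase for the actual pipeline), subset sampling over style-annotated examples could allocate the selection budget proportionally to how often each interaction style occurs:

```python
# Hypothetical sketch: the function name, data layout, and allocation rule
# are illustrative assumptions, not taken from the mmSSR repository.
import random
from collections import defaultdict

def sample_by_style(annotated, budget, seed=0):
    """Pick `budget` samples, splitting slots across annotated styles
    proportionally to each style's frequency in the pool."""
    rng = random.Random(seed)
    buckets = defaultdict(list)
    for item in annotated:
        buckets[item["style"]].append(item)
    total = len(annotated)
    picked = []
    for style, items in buckets.items():
        # Each style gets at least one slot; rounding keeps proportions close.
        k = max(1, round(budget * len(items) / total))
        picked.extend(rng.sample(items, min(k, len(items))))
    return picked[:budget]

# Toy pool: 60% chain-of-thought answers, 40% yes/no answers.
data = (
    [{"id": i, "style": "chain-of-thought"} for i in range(60)]
    + [{"id": 100 + i, "style": "yes/no"} for i in range(40)]
)
subset = sample_by_style(data, budget=10)
print(len(subset))  # 10
```

With a 10-sample budget over this toy pool, the subset keeps roughly the 6:4 style ratio of the full data, which is the diversity property the styler annotations are meant to preserve.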
## 📖 Citation If you find mmSSR useful for your research or applications, please cite our paper: ``` @article{lyu2025cream, title={Cream of the Crop: Harvesting Rich, Scalable and Transferable Multi-Modal Data for Instruction Fine-Tuning}, author={Lyu, Mengyao and Li, Yan and Zhong, Huasong and Yang, Wenhao and Chen, Hui and Han, Jungong and Ding, Guiguang and Yang, Zhenheng}, journal={arXiv preprint arXiv:2503.13383}, year={2025} } ```
alfri/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-thorny_reptilian_toad
alfri
2025-04-03T10:12:41Z
1
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am thorny reptilian toad", "trl", "conversational", "arxiv:2402.03300", "base_model:Gensyn/Qwen2.5-0.5B-Instruct", "base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-04-01T16:05:08Z
--- base_model: Gensyn/Qwen2.5-0.5B-Instruct library_name: transformers model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-thorny_reptilian_toad tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am thorny reptilian toad - trl licence: license --- # Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-thorny_reptilian_toad This model is a fine-tuned version of [Gensyn/Qwen2.5-0.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-0.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="alfri/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-thorny_reptilian_toad", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.15.2 - Transformers: 4.50.3 - Pytorch: 2.5.1 - Datasets: 3.5.0 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. 
Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
gradientrouting-spar/toy_goodharting_gemma-2-2b-it_fruits_vegetables_d_proxy_d_p_d_o_naive_MC_20250403_094728
gradientrouting-spar
2025-04-03T10:09:08Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-04-03T10:09:00Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. 
## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. 
(2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
burhansyam/gbli
burhansyam
2025-04-03T10:06:29Z
0
0
diffusers
[ "diffusers", "text-to-image", "lora", "template:diffusion-lora", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:apache-2.0", "region:us" ]
text-to-image
2025-04-03T10:03:55Z
--- tags: - text-to-image - lora - diffusers - template:diffusion-lora widget: - text: Ghibli Studio output: url: images/ChatGPT Image 3 Apr 2025, 11.30.14.png base_model: black-forest-labs/FLUX.1-dev instance_prompt: null license: apache-2.0 --- # Ghibli Studio <Gallery /> ## Download model Weights for this model are available in Safetensors format. [Download](/burhansyam/gbli/tree/main) them in the Files & versions tab.
DiatenMexico/DiatenMexico
DiatenMexico
2025-04-03T10:06:12Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2025-04-03T10:05:20Z
--- license: apache-2.0 --- What is Diaten? Diaten capsule is a capsule specially formulated for diabetes, designed to help regulate blood sugar levels and promote overall metabolic well-being. Managing diabetes effectively requires maintaining glucose balance, improving insulin function, and reducing sugar fluctuations. Diaten Pills is developed to support these vital functions, offering a natural and reliable way to keep blood sugar under control. Whether you are actively managing diabetes or looking for a supplement to maintain stable glucose levels, Diaten tablets is an excellent choice for long-term health. How Diaten works. Official website: <a href="https://www.nutritionsee.com/diatenexico">www.Diaten.com</a> <p><a href="https://www.nutritionsee.com/diatenexico"> <img src="https://www.nutritionsee.com/wp-content/uploads/2025/04/Diaten-Mexico.png" alt="enter image description here"> </a></p> <a href="https://www.nutritionsee.com/diatenexico">Buy now! Click the link below for more information and get a 50% discount. Hurry!</a> Official website: <a href="https://www.nutritionsee.com/diatenexico">www.Diaten.com</a>
AhmedB12/SpanishPoliceReportCategorization-Gemma-4B
AhmedB12
2025-04-03T10:04:33Z
0
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:unsloth/gemma-3-4b-it-unsloth-bnb-4bit", "base_model:adapter:unsloth/gemma-3-4b-it-unsloth-bnb-4bit", "region:us" ]
null
2025-04-03T10:03:45Z
--- base_model: unsloth/gemma-3-4b-it-unsloth-bnb-4bit library_name: peft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. 
[More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). 
- **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.14.0
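The card's "How to Get Started" section is left as [More Information Needed]. A minimal, unverified sketch for loading a PEFT adapter on top of the base model stated in the card's YAML header (`unsloth/gemma-3-4b-it-unsloth-bnb-4bit`, PEFT 0.14.0) would look like the following; the adapter repo id is a hypothetical placeholder, since the card does not state it, and this assumes the repo contains standard LoRA adapter weights:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "unsloth/gemma-3-4b-it-unsloth-bnb-4bit"  # from the card's base_model field
adapter_id = "<this-repo-id>"  # hypothetical placeholder; the card does not name the repo

# Load the 4-bit base model, then attach the adapter weights on top of it
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)
tokenizer = AutoTokenizer.from_pretrained(base_id)
```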
RichardErkhov/ihughes15234_-_phi35_tictactoe_dpo_firstonly_2epoch-gguf
RichardErkhov
2025-04-03T10:04:28Z
0
0
null
[ "gguf", "endpoints_compatible", "region:us", "conversational" ]
null
2025-04-03T09:26:33Z
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) phi35_tictactoe_dpo_firstonly_2epoch - GGUF - Model creator: https://huggingface.co/ihughes15234/ - Original model: https://huggingface.co/ihughes15234/phi35_tictactoe_dpo_firstonly_2epoch/ | Name | Quant method | Size | | ---- | ---- | ---- | | [phi35_tictactoe_dpo_firstonly_2epoch.Q2_K.gguf](https://huggingface.co/RichardErkhov/ihughes15234_-_phi35_tictactoe_dpo_firstonly_2epoch-gguf/blob/main/phi35_tictactoe_dpo_firstonly_2epoch.Q2_K.gguf) | Q2_K | 1.35GB | | [phi35_tictactoe_dpo_firstonly_2epoch.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/ihughes15234_-_phi35_tictactoe_dpo_firstonly_2epoch-gguf/blob/main/phi35_tictactoe_dpo_firstonly_2epoch.IQ3_XS.gguf) | IQ3_XS | 1.49GB | | [phi35_tictactoe_dpo_firstonly_2epoch.IQ3_S.gguf](https://huggingface.co/RichardErkhov/ihughes15234_-_phi35_tictactoe_dpo_firstonly_2epoch-gguf/blob/main/phi35_tictactoe_dpo_firstonly_2epoch.IQ3_S.gguf) | IQ3_S | 1.57GB | | [phi35_tictactoe_dpo_firstonly_2epoch.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/ihughes15234_-_phi35_tictactoe_dpo_firstonly_2epoch-gguf/blob/main/phi35_tictactoe_dpo_firstonly_2epoch.Q3_K_S.gguf) | Q3_K_S | 1.57GB | | [phi35_tictactoe_dpo_firstonly_2epoch.IQ3_M.gguf](https://huggingface.co/RichardErkhov/ihughes15234_-_phi35_tictactoe_dpo_firstonly_2epoch-gguf/blob/main/phi35_tictactoe_dpo_firstonly_2epoch.IQ3_M.gguf) | IQ3_M | 1.65GB | | [phi35_tictactoe_dpo_firstonly_2epoch.Q3_K.gguf](https://huggingface.co/RichardErkhov/ihughes15234_-_phi35_tictactoe_dpo_firstonly_2epoch-gguf/blob/main/phi35_tictactoe_dpo_firstonly_2epoch.Q3_K.gguf) | Q3_K | 1.75GB | | [phi35_tictactoe_dpo_firstonly_2epoch.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/ihughes15234_-_phi35_tictactoe_dpo_firstonly_2epoch-gguf/blob/main/phi35_tictactoe_dpo_firstonly_2epoch.Q3_K_M.gguf) | Q3_K_M | 
1.75GB | | [phi35_tictactoe_dpo_firstonly_2epoch.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/ihughes15234_-_phi35_tictactoe_dpo_firstonly_2epoch-gguf/blob/main/phi35_tictactoe_dpo_firstonly_2epoch.Q3_K_L.gguf) | Q3_K_L | 1.9GB | | [phi35_tictactoe_dpo_firstonly_2epoch.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/ihughes15234_-_phi35_tictactoe_dpo_firstonly_2epoch-gguf/blob/main/phi35_tictactoe_dpo_firstonly_2epoch.IQ4_XS.gguf) | IQ4_XS | 1.93GB | | [phi35_tictactoe_dpo_firstonly_2epoch.Q4_0.gguf](https://huggingface.co/RichardErkhov/ihughes15234_-_phi35_tictactoe_dpo_firstonly_2epoch-gguf/blob/main/phi35_tictactoe_dpo_firstonly_2epoch.Q4_0.gguf) | Q4_0 | 2.03GB | | [phi35_tictactoe_dpo_firstonly_2epoch.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/ihughes15234_-_phi35_tictactoe_dpo_firstonly_2epoch-gguf/blob/main/phi35_tictactoe_dpo_firstonly_2epoch.IQ4_NL.gguf) | IQ4_NL | 2.04GB | | [phi35_tictactoe_dpo_firstonly_2epoch.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/ihughes15234_-_phi35_tictactoe_dpo_firstonly_2epoch-gguf/blob/main/phi35_tictactoe_dpo_firstonly_2epoch.Q4_K_S.gguf) | Q4_K_S | 2.04GB | | [phi35_tictactoe_dpo_firstonly_2epoch.Q4_K.gguf](https://huggingface.co/RichardErkhov/ihughes15234_-_phi35_tictactoe_dpo_firstonly_2epoch-gguf/blob/main/phi35_tictactoe_dpo_firstonly_2epoch.Q4_K.gguf) | Q4_K | 2.16GB | | [phi35_tictactoe_dpo_firstonly_2epoch.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/ihughes15234_-_phi35_tictactoe_dpo_firstonly_2epoch-gguf/blob/main/phi35_tictactoe_dpo_firstonly_2epoch.Q4_K_M.gguf) | Q4_K_M | 2.16GB | | [phi35_tictactoe_dpo_firstonly_2epoch.Q4_1.gguf](https://huggingface.co/RichardErkhov/ihughes15234_-_phi35_tictactoe_dpo_firstonly_2epoch-gguf/blob/main/phi35_tictactoe_dpo_firstonly_2epoch.Q4_1.gguf) | Q4_1 | 2.24GB | | 
[phi35_tictactoe_dpo_firstonly_2epoch.Q5_0.gguf](https://huggingface.co/RichardErkhov/ihughes15234_-_phi35_tictactoe_dpo_firstonly_2epoch-gguf/blob/main/phi35_tictactoe_dpo_firstonly_2epoch.Q5_0.gguf) | Q5_0 | 2.46GB | | [phi35_tictactoe_dpo_firstonly_2epoch.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/ihughes15234_-_phi35_tictactoe_dpo_firstonly_2epoch-gguf/blob/main/phi35_tictactoe_dpo_firstonly_2epoch.Q5_K_S.gguf) | Q5_K_S | 2.46GB | | [phi35_tictactoe_dpo_firstonly_2epoch.Q5_K.gguf](https://huggingface.co/RichardErkhov/ihughes15234_-_phi35_tictactoe_dpo_firstonly_2epoch-gguf/blob/main/phi35_tictactoe_dpo_firstonly_2epoch.Q5_K.gguf) | Q5_K | 2.53GB | | [phi35_tictactoe_dpo_firstonly_2epoch.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/ihughes15234_-_phi35_tictactoe_dpo_firstonly_2epoch-gguf/blob/main/phi35_tictactoe_dpo_firstonly_2epoch.Q5_K_M.gguf) | Q5_K_M | 2.53GB | | [phi35_tictactoe_dpo_firstonly_2epoch.Q5_1.gguf](https://huggingface.co/RichardErkhov/ihughes15234_-_phi35_tictactoe_dpo_firstonly_2epoch-gguf/blob/main/phi35_tictactoe_dpo_firstonly_2epoch.Q5_1.gguf) | Q5_1 | 2.68GB | | [phi35_tictactoe_dpo_firstonly_2epoch.Q6_K.gguf](https://huggingface.co/RichardErkhov/ihughes15234_-_phi35_tictactoe_dpo_firstonly_2epoch-gguf/blob/main/phi35_tictactoe_dpo_firstonly_2epoch.Q6_K.gguf) | Q6_K | 2.92GB | | [phi35_tictactoe_dpo_firstonly_2epoch.Q8_0.gguf](https://huggingface.co/RichardErkhov/ihughes15234_-_phi35_tictactoe_dpo_firstonly_2epoch-gguf/blob/main/phi35_tictactoe_dpo_firstonly_2epoch.Q8_0.gguf) | Q8_0 | 3.78GB | Original model description: --- base_model: ihughes15234/phi_3_5_mini_tictactoe1200 language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - llama - trl --- # Uploaded model - **Developed by:** ihughes15234 - **License:** apache-2.0 - **Finetuned from model :** ihughes15234/phi_3_5_mini_tictactoe1200 This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and 
Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
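The quant table above lists individually downloadable GGUF files. As a hedged sketch (file names follow the table; the `llama-cli` flags assume a recent llama.cpp build installed on PATH), a single quant can be fetched and run like this:

```shell
# Download one quant file (Q4_K_M is a common size/quality trade-off)
huggingface-cli download \
  RichardErkhov/ihughes15234_-_phi35_tictactoe_dpo_firstonly_2epoch-gguf \
  phi35_tictactoe_dpo_firstonly_2epoch.Q4_K_M.gguf --local-dir .

# Run it with llama.cpp
llama-cli -m phi35_tictactoe_dpo_firstonly_2epoch.Q4_K_M.gguf \
  -p "Let's play tic-tac-toe." -n 64
```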
leobianco/bosch_RM_seed_130104_SYN_HALL_LLM_true_epochs_1_lr_1e-4_lora_8
leobianco
2025-04-03T10:03:24Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-04-03T09:54:38Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. 
## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. 
(2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
AhmedB12/SpanishPolicerReportCategorization-Ollama-3.2-3B
AhmedB12
2025-04-03T10:02:02Z
0
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "region:us" ]
null
2025-04-03T10:01:05Z
--- base_model: unsloth/llama-3.2-3b-instruct-bnb-4bit library_name: peft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. 
[More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). 
- **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.14.0
rayonlabs/a64f7ee1-d7b7-4315-9a6a-c81bb03d2778-cb9872f423905602_dataset_json_X-Amz-Algorithm_AWS4-HMAC-SHA
rayonlabs
2025-04-03T10:02:01Z
0
0
transformers
[ "transformers", "safetensors", "unsloth", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-04-03T10:02:01Z
--- library_name: transformers tags: - unsloth --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. 
More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. 
Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
vitria-ai/Llama3-8B-Medical-COT
vitria-ai
2025-04-03T10:01:09Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "text-generation-inference", "unsloth", "trl", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2025-04-03T09:54:40Z
--- base_model: unsloth/llama-3.1-8b-unsloth-bnb-4bit tags: - text-generation-inference - transformers - unsloth - llama - trl license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** vitria-ai - **License:** apache-2.0 - **Finetuned from model :** unsloth/llama-3.1-8b-unsloth-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
moyixiao/qwen15_0402_4096_128
moyixiao
2025-04-03T10:00:04Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "llama-factory", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-04-03T09:59:00Z
--- library_name: transformers tags: - llama-factory --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. 
More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. 
Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
AhmedB12/SpanishReportCategorization-Ollama-3.1-8B
AhmedB12
2025-04-03T09:59:59Z
0
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "region:us" ]
null
2025-04-03T09:49:54Z
--- base_model: unsloth/meta-llama-3.1-8b-unsloth-bnb-4bit library_name: peft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. 
[More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). 
- **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.14.0
mlx-community/gemma-3-27b-it-8bit
mlx-community
2025-04-03T09:58:47Z
1,322
2
transformers
[ "transformers", "safetensors", "gemma3", "image-text-to-text", "mlx", "conversational", "base_model:google/gemma-3-27b-pt", "base_model:finetune:google/gemma-3-27b-pt", "license:gemma", "text-generation-inference", "endpoints_compatible", "region:us" ]
image-text-to-text
2025-03-12T11:31:40Z
--- license: gemma library_name: transformers pipeline_tag: image-text-to-text extra_gated_heading: Access Gemma on Hugging Face extra_gated_prompt: To access Gemma on Hugging Face, you’re required to review and agree to Google’s usage license. To do this, please ensure you’re logged in to Hugging Face and click below. Requests are processed immediately. extra_gated_button_content: Acknowledge license base_model: google/gemma-3-27b-pt tags: - mlx --- # mlx-community/gemma-3-27b-it-8bit This model was converted to MLX format from [`google/gemma-3-27b-it`](https://huggingface.co/google/gemma-3-27b-it) using mlx-vlm version **0.1.18**. Refer to the [original model card](https://huggingface.co/google/gemma-3-27b-it) for more details on the model. ## Use with mlx

```bash
pip install -U mlx-vlm
```

```bash
python -m mlx_vlm.generate --model mlx-community/gemma-3-27b-it-8bit --max-tokens 100 --temperature 0.0 --prompt "Describe this image." --image <path_to_image>
```
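The CLI invocation above can also be driven from a script. A minimal sketch that assembles the same `mlx_vlm.generate` command programmatically — the helper name and its defaults are illustrative, not part of mlx-vlm:

```python
import sys

def build_mlx_vlm_cmd(model, prompt, image, max_tokens=100, temperature=0.0):
    """Assemble the `python -m mlx_vlm.generate` invocation shown above."""
    return [
        sys.executable, "-m", "mlx_vlm.generate",
        "--model", model,
        "--max-tokens", str(max_tokens),
        "--temperature", str(temperature),
        "--prompt", prompt,
        "--image", image,
    ]

cmd = build_mlx_vlm_cmd(
    "mlx-community/gemma-3-27b-it-8bit", "Describe this image.", "photo.jpg"
)
print(" ".join(cmd[1:]))
# Actually running it requires Apple silicon with mlx-vlm installed,
# e.g. via subprocess.run(cmd, check=True).
```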
shrenikb/llama2_7b_spectral_thr50_includeGen
shrenikb
2025-04-03T09:58:21Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-04-03T09:55:20Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. 
## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in
[Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
sahrishkhan/edos-mistral-b-model
sahrishkhan
2025-04-03T09:57:24Z
0
1
transformers
[ "transformers", "safetensors", "mistral", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "4-bit", "bitsandbytes", "region:us" ]
text-classification
2025-04-03T09:54:58Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. 
## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in
[Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
dgambettaphd/M_llm3_gen4_run0_W_doc1000_synt64_SYNLAST
dgambettaphd
2025-04-03T09:57:21Z
0
0
transformers
[ "transformers", "safetensors", "unsloth", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-04-03T09:57:04Z
--- library_name: transformers tags: - unsloth --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. 
More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. 
Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
TamaraaSgross/CalmXCBD
TamaraaSgross
2025-04-03T09:57:16Z
0
0
null
[ "region:us" ]
null
2025-04-03T09:57:01Z
<p><a href="https://www.facebook.com/groups/erosbitesgummiestry/">https://www.facebook.com/groups/erosbitesgummiestry/</a></p> <p><a href="https://www.facebook.com/share/p/1BidUWGBQz/">https://www.facebook.com/share/p/1BidUWGBQz/</a></p> <p><a href="https://www.facebook.com/groups/erosbitesgummiestry/permalink/1721255172104678/">https://www.facebook.com/groups/erosbitesgummiestry/permalink/1721255172104678/</a></p> <p><a href="https://www.facebook.com/groups/erosbitesgummiestry/posts/1721255172104678/">https://www.facebook.com/groups/erosbitesgummiestry/posts/1721255172104678/</a></p> <p><a href="https://www.facebook.com/events/654618050486495/">https://www.facebook.com/events/654618050486495/</a></p> <p><a href="https://colab.research.google.com/drive/1hSxsg6wiAmadJlu_5XGaN3ptI94XmvuH?usp=sharing">https://colab.research.google.com/drive/1hSxsg6wiAmadJlu_5XGaN3ptI94XmvuH?usp=sharing</a></p> <p><a href="https://colab.research.google.com/drive/1AzENTK_0aoJ8PqkfxEgLMbK_83snJR16?usp=sharing">https://colab.research.google.com/drive/1AzENTK_0aoJ8PqkfxEgLMbK_83snJR16?usp=sharing</a></p> <p><a href="https://colab.research.google.com/drive/1Pc706KLujOdyRrWa0TcfHYEzZMxsGUym?usp=sharing">https://colab.research.google.com/drive/1Pc706KLujOdyRrWa0TcfHYEzZMxsGUym?usp=sharing</a></p> <p><a href="https://teeshopper.in/store/Eros-Bites-Gummies">https://teeshopper.in/store/Eros-Bites-Gummies</a></p> <p><a href="https://teeshopper.in/store/Eros-Bites-Gummies-Price--Benefits">https://teeshopper.in/store/Eros-Bites-Gummies-Price--Benefits</a></p> <p><a href="https://online.visual-paradigm.com/share/book/eros-bites-gummies-246wg7l5t2">https://online.visual-paradigm.com/share/book/eros-bites-gummies-246wg7l5t2</a></p> <p><a href="https://online.visual-paradigm.com/share/book/eros-bites-gummies-reviews-better-performance-246whnm252">https://online.visual-paradigm.com/share/book/eros-bites-gummies-reviews-better-performance-246whnm252</a></p> <p><a 
href="https://online.visual-paradigm.com/share/book/eros-bites-gummies-get-long-lasting-performance-246wiwgurw">https://online.visual-paradigm.com/share/book/eros-bites-gummies-get-long-lasting-performance-246wiwgurw</a></p> <p><a href="https://www.italki.com/en/post/eP868a9N6HL0kMAbR9j8fh">https://www.italki.com/en/post/eP868a9N6HL0kMAbR9j8fh</a></p> <p><a href="https://www.italki.com/en/post/bI2TzEKDq5pgZRSFVjVxUq">https://www.italki.com/en/post/bI2TzEKDq5pgZRSFVjVxUq</a></p> <p><a href="https://medium.com/@tamaraasgross/eros-bites-gummies-can-they-improve-blood-flow-and-performance-66beea06fdb5">https://medium.com/@tamaraasgross/eros-bites-gummies-can-they-improve-blood-flow-and-performance-66beea06fdb5</a></p> <p><a href="https://erosbites.omeka.net/eros-bites-gummies">https://erosbites.omeka.net/eros-bites-gummies</a></p> <p><a href="https://erosbites.omeka.net/">https://erosbites.omeka.net/</a></p> <p><a href="https://www.pinterest.com/Eros_Bites_Gummies/">https://www.pinterest.com/Eros_Bites_Gummies/</a></p> <p><a href="https://github.com/dianajlongd/Eros-Bites/">https://github.com/dianajlongd/Eros-Bites/</a></p> <p><a href="https://www.facebook.com/groups/calmxcbdcapsulesget/">https://www.facebook.com/groups/calmxcbdcapsulesget/</a></p> <p><a href="https://www.facebook.com/groups/calmxcbdcapsulesget/permalink/3870373983226080/">https://www.facebook.com/groups/calmxcbdcapsulesget/permalink/3870373983226080/</a></p> <p><a href="https://www.facebook.com/groups/calmxcbdcapsulesget/posts/3870373983226080/">https://www.facebook.com/groups/calmxcbdcapsulesget/posts/3870373983226080/</a></p> <p><a href="https://www.facebook.com/events/1119389996617747/">https://www.facebook.com/events/1119389996617747/</a></p> <p><a href="https://colab.research.google.com/drive/1gKwXzkV-W4k-JuNAqfTDV_SYVsNYVrZM?usp=sharing">https://colab.research.google.com/drive/1gKwXzkV-W4k-JuNAqfTDV_SYVsNYVrZM?usp=sharing</a></p> <p><a 
href="https://colab.research.google.com/drive/1OU6D65CS1bmOPUBWM2tDiIvc0yUmL57v?usp=sharing">https://colab.research.google.com/drive/1OU6D65CS1bmOPUBWM2tDiIvc0yUmL57v?usp=sharing</a></p> <p><a href="https://colab.research.google.com/drive/13RH5ckStxFjh4Z4zKh-sL72Du6Lo6a_2?usp=sharing">https://colab.research.google.com/drive/13RH5ckStxFjh4Z4zKh-sL72Du6Lo6a_2?usp=sharing</a></p> <p><a href="https://teeshopper.in/store/Calm-X-CBD-Capsules">https://teeshopper.in/store/Calm-X-CBD-Capsules</a></p> <p><a href="https://teeshopper.in/store/Calm-X-CBD-Capsules-Reviews">https://teeshopper.in/store/Calm-X-CBD-Capsules-Reviews</a></p> <p><a href="https://online.visual-paradigm.com/share/book/calm-x-cbd-capsules-247203xtr6">https://online.visual-paradigm.com/share/book/calm-x-cbd-capsules-247203xtr6</a></p> <p><a href="https://www.italki.com/en/post/bI2TzEKDq5pgZRSFVjW2vQ">https://www.italki.com/en/post/bI2TzEKDq5pgZRSFVjW2vQ</a></p> <p><a href="https://www.italki.com/en/post/eP868a9N6HL0kMAbR9jDf3">https://www.italki.com/en/post/eP868a9N6HL0kMAbR9jDf3</a></p> <p><a href="https://www.italki.com/en/post/0nqbE9IIDfy0UxO3POtVB1">https://www.italki.com/en/post/0nqbE9IIDfy0UxO3POtVB1</a></p> <p><a href="https://medium.com/@tamaraasgross/calm-x-cbd-capsules-reviews-pain-relief-stress-relief-formula-3b6ceebdfaff">https://medium.com/@tamaraasgross/calm-x-cbd-capsules-reviews-pain-relief-stress-relief-formula-3b6ceebdfaff</a></p> <p><a href="https://id.pinterest.com/CalmXCBDCapsules_Get/">https://id.pinterest.com/CalmXCBDCapsules_Get/</a></p> <p><a href="https://github.com/dianajlongd/Calm-X-Capsules/">https://github.com/dianajlongd/Calm-X-Capsules/</a></p> <p><a href="https://github.com/dianajlongd/Calm-X-Capsules-Reviews/">https://github.com/dianajlongd/Calm-X-Capsules-Reviews/</a></p>
NiloofarMomeni/distilhubert-finetuned-breathiness-finetuned-breathiness_fewshot
NiloofarMomeni
2025-04-03T09:56:46Z
0
0
transformers
[ "transformers", "safetensors", "hubert", "audio-classification", "generated_from_trainer", "dataset:audiofolder", "base_model:NiloofarMomeni/distilhubert-finetuned-breathiness", "base_model:finetune:NiloofarMomeni/distilhubert-finetuned-breathiness", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
audio-classification
2025-04-03T09:23:25Z
--- library_name: transformers license: apache-2.0 base_model: NiloofarMomeni/distilhubert-finetuned-breathiness tags: - generated_from_trainer datasets: - audiofolder metrics: - accuracy model-index: - name: distilhubert-finetuned-breathiness-finetuned-breathiness_fewshot results: - task: name: Audio Classification type: audio-classification dataset: name: audiofolder type: audiofolder config: default split: train args: default metrics: - name: Accuracy type: accuracy value: 0.8282828282828283 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilhubert-finetuned-breathiness-finetuned-breathiness_fewshot This model is a fine-tuned version of [NiloofarMomeni/distilhubert-finetuned-breathiness](https://huggingface.co/NiloofarMomeni/distilhubert-finetuned-breathiness) on the audiofolder dataset. It achieves the following results on the evaluation set: - Loss: 0.7120 - Accuracy: 0.8283 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 10 - mixed_precision_training: Native AMP ### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.2935        | 1.0   | 47   | 0.5536          | 0.7980   |
| 0.4634        | 2.0   | 94   | 0.4524          | 0.7980   |
| 0.4697        | 3.0   | 141  | 0.4134          | 0.8081   |
| 0.371         | 4.0   | 188  | 0.4501          | 0.8182   |
| 0.4197        | 5.0   | 235  | 0.5902          | 0.8081   |
| 0.1565        | 6.0   | 282  | 0.6938          | 0.8081   |
| 0.1828        | 7.0   | 329  | 0.6856          | 0.8283   |
| 0.5466        | 8.0   | 376  | 0.8179          | 0.8182   |
| 0.3124        | 9.0   | 423  | 0.6968          | 0.8283   |
| 0.2355        | 10.0  | 470  | 0.7120          | 0.8283   |

### Framework versions - Transformers 4.50.3 - Pytorch 2.6.0+cu124 - Datasets 3.4.1 - Tokenizers 0.21.0
moyixiao/qwen15_0403_4096r128t
moyixiao
2025-04-03T09:53:02Z
0
0
peft
[ "peft", "tensorboard", "safetensors", "llama-factory", "lora", "generated_from_trainer", "base_model:moyixiao/Qwen2.5-Math-1.5B-Instruct", "base_model:adapter:moyixiao/Qwen2.5-Math-1.5B-Instruct", "license:apache-2.0", "region:us" ]
null
2025-04-03T06:23:37Z
--- library_name: peft license: apache-2.0 base_model: moyixiao/Qwen2.5-Math-1.5B-Instruct tags: - llama-factory - lora - generated_from_trainer model-index: - name: qwen15_0403_4096r128t results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # qwen15_0403_4096r128t This model is a fine-tuned version of [moyixiao/Qwen2.5-Math-1.5B-Instruct](https://huggingface.co/moyixiao/Qwen2.5-Math-1.5B-Instruct) on the math4096 dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 1 - eval_batch_size: 8 - seed: 42 - distributed_type: multi-GPU - num_devices: 2 - gradient_accumulation_steps: 4 - total_train_batch_size: 8 - total_eval_batch_size: 16 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 3.0 - mixed_precision_training: Native AMP ### Training results ### Framework versions - PEFT 0.12.0 - Transformers 4.48.2 - Pytorch 2.5.1+cu121 - Datasets 3.2.0 - Tokenizers 0.21.0
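The `total_train_batch_size` reported above is derived rather than set directly: it is the per-device batch size times the number of devices times the gradient accumulation steps. A quick sanity check of the values listed in this card:

```python
# Values taken from the hyperparameter list above.
train_batch_size = 1               # per-device batch size
num_devices = 2
gradient_accumulation_steps = 4

total_train_batch_size = (
    train_batch_size * num_devices * gradient_accumulation_steps
)
print(total_train_batch_size)  # 8, matching the reported total_train_batch_size
```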
Nerva1228/zhuiguang
Nerva1228
2025-04-03T09:51:58Z
0
0
diffusers
[ "diffusers", "flux", "lora", "replicate", "text-to-image", "en", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
2025-04-03T09:51:53Z
--- license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md language: - en tags: - flux - diffusers - lora - replicate base_model: "black-forest-labs/FLUX.1-dev" pipeline_tag: text-to-image # widget: # - text: >- # prompt # output: # url: https://... instance_prompt: zhuiguang --- # Zhuiguang <Gallery /> ## About this LoRA This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI. It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train ## Trigger words You should use `zhuiguang` to trigger the image generation. ## Run this LoRA with an API using Replicate

```py
import replicate

input = {
    "prompt": "zhuiguang",
    "lora_weights": "https://huggingface.co/Nerva1228/zhuiguang/resolve/main/lora.safetensors"
}

output = replicate.run(
    "black-forest-labs/flux-dev-lora",
    input=input
)
for index, item in enumerate(output):
    with open(f"output_{index}.webp", "wb") as file:
        file.write(item.read())
```

## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)

```py
from diffusers import AutoPipelineForText2Image
import torch

pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('Nerva1228/zhuiguang', weight_name='lora.safetensors')
image = pipeline('zhuiguang').images[0]
```

For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters) ## Training details - Steps: 1000 - Learning rate: 0.0004 - LoRA rank: 16 ## Contribute your own examples You can use the [community tab](https://huggingface.co/Nerva1228/zhuiguang/discussions) to add images that show off what you’ve made with this LoRA.
shrenikb/llama2_7b_spectral_thr60_includeGen
shrenikb
2025-04-03T09:51:56Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-04-03T09:48:43Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. 
## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. 
(2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
mradermacher/lam70-v2-sl-i1-GGUF
mradermacher
2025-04-03T09:50:02Z
1,361
0
transformers
[ "transformers", "gguf", "mergekit", "merge", "en", "base_model:Zaynoid/lam70-v2-sl", "base_model:quantized:Zaynoid/lam70-v2-sl", "endpoints_compatible", "region:us", "imatrix", "conversational" ]
null
2025-04-02T10:53:42Z
--- base_model: Zaynoid/lam70-v2-sl language: - en library_name: transformers quantized_by: mradermacher tags: - mergekit - merge --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: nicoboss --> weighted/imatrix quants of https://huggingface.co/Zaynoid/lam70-v2-sl <!-- provided-files --> static quants are available at https://huggingface.co/mradermacher/lam70-v2-sl-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/lam70-v2-sl-i1-GGUF/resolve/main/lam70-v2-sl.i1-IQ1_S.gguf) | i1-IQ1_S | 15.4 | for the desperate | | [GGUF](https://huggingface.co/mradermacher/lam70-v2-sl-i1-GGUF/resolve/main/lam70-v2-sl.i1-IQ1_M.gguf) | i1-IQ1_M | 16.9 | mostly desperate | | [GGUF](https://huggingface.co/mradermacher/lam70-v2-sl-i1-GGUF/resolve/main/lam70-v2-sl.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 19.2 | | | [GGUF](https://huggingface.co/mradermacher/lam70-v2-sl-i1-GGUF/resolve/main/lam70-v2-sl.i1-IQ2_XS.gguf) | i1-IQ2_XS | 21.2 | | | [GGUF](https://huggingface.co/mradermacher/lam70-v2-sl-i1-GGUF/resolve/main/lam70-v2-sl.i1-IQ2_S.gguf) | i1-IQ2_S | 22.3 | | | [GGUF](https://huggingface.co/mradermacher/lam70-v2-sl-i1-GGUF/resolve/main/lam70-v2-sl.i1-IQ2_M.gguf) | i1-IQ2_M | 24.2 | | | [GGUF](https://huggingface.co/mradermacher/lam70-v2-sl-i1-GGUF/resolve/main/lam70-v2-sl.i1-Q2_K_S.gguf) | i1-Q2_K_S | 24.6 | very low quality | | [GGUF](https://huggingface.co/mradermacher/lam70-v2-sl-i1-GGUF/resolve/main/lam70-v2-sl.i1-Q2_K.gguf) | i1-Q2_K | 26.5 | IQ3_XXS probably better | | 
[GGUF](https://huggingface.co/mradermacher/lam70-v2-sl-i1-GGUF/resolve/main/lam70-v2-sl.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 27.6 | lower quality | | [GGUF](https://huggingface.co/mradermacher/lam70-v2-sl-i1-GGUF/resolve/main/lam70-v2-sl.i1-IQ3_XS.gguf) | i1-IQ3_XS | 29.4 | | | [GGUF](https://huggingface.co/mradermacher/lam70-v2-sl-i1-GGUF/resolve/main/lam70-v2-sl.i1-IQ3_S.gguf) | i1-IQ3_S | 31.0 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/lam70-v2-sl-i1-GGUF/resolve/main/lam70-v2-sl.i1-Q3_K_S.gguf) | i1-Q3_K_S | 31.0 | IQ3_XS probably better | | [GGUF](https://huggingface.co/mradermacher/lam70-v2-sl-i1-GGUF/resolve/main/lam70-v2-sl.i1-IQ3_M.gguf) | i1-IQ3_M | 32.0 | | | [GGUF](https://huggingface.co/mradermacher/lam70-v2-sl-i1-GGUF/resolve/main/lam70-v2-sl.i1-Q3_K_M.gguf) | i1-Q3_K_M | 34.4 | IQ3_S probably better | | [GGUF](https://huggingface.co/mradermacher/lam70-v2-sl-i1-GGUF/resolve/main/lam70-v2-sl.i1-Q3_K_L.gguf) | i1-Q3_K_L | 37.2 | IQ3_M probably better | | [GGUF](https://huggingface.co/mradermacher/lam70-v2-sl-i1-GGUF/resolve/main/lam70-v2-sl.i1-IQ4_XS.gguf) | i1-IQ4_XS | 38.0 | | | [GGUF](https://huggingface.co/mradermacher/lam70-v2-sl-i1-GGUF/resolve/main/lam70-v2-sl.i1-Q4_0.gguf) | i1-Q4_0 | 40.2 | fast, low quality | | [GGUF](https://huggingface.co/mradermacher/lam70-v2-sl-i1-GGUF/resolve/main/lam70-v2-sl.i1-Q4_K_S.gguf) | i1-Q4_K_S | 40.4 | optimal size/speed/quality | | [GGUF](https://huggingface.co/mradermacher/lam70-v2-sl-i1-GGUF/resolve/main/lam70-v2-sl.i1-Q4_K_M.gguf) | i1-Q4_K_M | 42.6 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/lam70-v2-sl-i1-GGUF/resolve/main/lam70-v2-sl.i1-Q4_1.gguf) | i1-Q4_1 | 44.4 | | | [GGUF](https://huggingface.co/mradermacher/lam70-v2-sl-i1-GGUF/resolve/main/lam70-v2-sl.i1-Q5_K_S.gguf) | i1-Q5_K_S | 48.8 | | | [GGUF](https://huggingface.co/mradermacher/lam70-v2-sl-i1-GGUF/resolve/main/lam70-v2-sl.i1-Q5_K_M.gguf) | i1-Q5_K_M | 50.0 | | | [PART 
1](https://huggingface.co/mradermacher/lam70-v2-sl-i1-GGUF/resolve/main/lam70-v2-sl.i1-Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/lam70-v2-sl-i1-GGUF/resolve/main/lam70-v2-sl.i1-Q6_K.gguf.part2of2) | i1-Q6_K | 58.0 | practically like static Q6_K | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to. <!-- end -->
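The Q6_K row above is split into two parts that must be reassembled before loading. A minimal sketch of the documented approach (plain `cat` in part order); the stand-in file creation is only there so the snippet runs without the real multi-gigabyte downloads — remove it when working with actual files:

```shell
# Hypothetical local filenames matching the Q6_K row of the table.
PART1=lam70-v2-sl.i1-Q6_K.gguf.part1of2
PART2=lam70-v2-sl.i1-Q6_K.gguf.part2of2
OUT=lam70-v2-sl.i1-Q6_K.gguf

# Fabricate tiny stand-in parts when the real downloads are absent,
# so the snippet is runnable as-is (delete these two lines for real use).
[ -f "$PART1" ] || printf 'AAAA' > "$PART1"
[ -f "$PART2" ] || printf 'BBBB' > "$PART2"

# Concatenate the parts, in order, into a single GGUF file.
cat "$PART1" "$PART2" > "$OUT"

# Once the merged file is verified, the parts can be deleted to free space:
# rm "$PART1" "$PART2"
echo "merged size: $(wc -c < "$OUT") bytes"
```

The part order matters: `part1of2` must come before `part2of2`, since `cat` writes its arguments sequentially.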
mergekit-community/TEST1
mergekit-community
2025-04-03T09:49:46Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "mergekit", "merge", "conversational", "base_model:Sao10K/L3-8B-Lunaris-v1", "base_model:merge:Sao10K/L3-8B-Lunaris-v1", "base_model:Skywork/Skywork-o1-Open-Llama-3.1-8B", "base_model:merge:Skywork/Skywork-o1-Open-Llama-3.1-8B", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-04-03T09:46:06Z
--- base_model: - Sao10K/L3-8B-Lunaris-v1 - Skywork/Skywork-o1-Open-Llama-3.1-8B library_name: transformers tags: - mergekit - merge --- # merge This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ## Merge Details ### Merge Method This model was merged using the [NearSwap](https://huggingface.co/alchemonaut/QuartetAnemoi-70B-t0.0001) merge method using [Sao10K/L3-8B-Lunaris-v1](https://huggingface.co/Sao10K/L3-8B-Lunaris-v1) as a base. ### Models Merged The following models were included in the merge: * [Skywork/Skywork-o1-Open-Llama-3.1-8B](https://huggingface.co/Skywork/Skywork-o1-Open-Llama-3.1-8B) ### Configuration The following YAML configuration was used to produce this model: ```yaml models: - model: Skywork/Skywork-o1-Open-Llama-3.1-8B - model: Sao10K/L3-8B-Lunaris-v1 merge_method: nearswap base_model: Sao10K/L3-8B-Lunaris-v1 parameters: t: - value: 0.0001 dtype: bfloat16 tokenizer: source: Hastagaras/Jamet-8B-L3-MK.V-Blackroot ```
shrenikb/llama2_7b_spectral_thr60_excludeGen
shrenikb
2025-04-03T09:48:37Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-04-03T09:45:32Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. 
## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. 
(2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
tangledgroup/tangled-alpha-0.11-core
tangledgroup
2025-04-03T09:48:03Z
0
0
transformers
[ "transformers", "chat", "core", "base", "instruct", "reason", "text-generation", "en", "am", "ar", "as", "az", "be", "bg", "bn", "br", "bs", "ca", "cs", "cy", "da", "de", "el", "eo", "es", "et", "eu", "fa", "ff", "fi", "fr", "fy", "ga", "gd", "gl", "gn", "gu", "ha", "he", "hi", "hr", "ht", "hu", "hy", "id", "ig", "is", "it", "ja", "jv", "ka", "kk", "km", "kn", "ko", "ku", "ky", "la", "lg", "li", "ln", "lo", "lt", "lv", "mg", "mk", "ml", "mn", "mr", "ms", "my", "ne", "nl", "no", "ns", "om", "or", "pa", "pl", "ps", "pt", "qu", "rm", "ro", "ru", "sa", "si", "sc", "sd", "sk", "sl", "so", "sq", "sr", "ss", "su", "sv", "sw", "ta", "te", "th", "tl", "tn", "tr", "ug", "uk", "ur", "uz", "vi", "wo", "xh", "yi", "yo", "zu", "dataset:ontocord/fineweb-permissive-multilingual-2m", "dataset:distily/c4_multilingual_1M", "dataset:data-silence/sumnews", "dataset:xu-song/cc100-samples", "dataset:badrex/llm-emoji-dataset", "dataset:fblgit/simple-math", "dataset:Gusarich/math-expressions-1m", "dataset:neuralwork/arxiver", "dataset:christopher/rosetta-code", "dataset:nampdn-ai/tiny-codes", "dataset:JeanKaddour/minipile", "dataset:NousResearch/hermes-function-calling-v1", "dataset:simplescaling/s1K-1.1", "dataset:mlabonne/open-perfectblend", "dataset:allenai/tulu-3-sft-mixture", "dataset:rombodawg/Everything_Instruct_Multilingual", "dataset:open-r1/OpenR1-Math-220k", "dataset:open-thoughts/OpenThoughts-114k", "dataset:cognitivecomputations/dolphin-r1", "license:mit", "endpoints_compatible", "region:us" ]
text-generation
2025-03-21T18:41:34Z
--- license: mit pipeline_tag: text-generation library_name: transformers language: [ 'en', 'am', 'ar', 'as', 'az', 'be', 'bg', 'bn', 'br', 'bs', 'ca', 'cs', 'cy', 'da', 'de', 'el', 'eo', 'es', 'et', 'eu', 'fa', 'ff', 'fi', 'fr', 'fy', 'ga', 'gd', 'gl', 'gn', 'gu', 'ha', 'he', 'hi', 'hr', 'ht', 'hu', 'hy', 'id', 'ig', 'is', 'it', 'ja', 'jv', 'ka', 'kk', 'km', 'kn', 'ko', 'ku', 'ky', 'la', 'lg', 'li', 'ln', 'lo', 'lt', 'lv', 'mg', 'mk', 'ml', 'mn', 'mr', 'ms', 'my', 'ne', 'nl', 'no', 'ns', 'om', 'or', 'pa', 'pl', 'ps', 'pt', 'qu', 'rm', 'ro', 'ru', 'sa', 'si', 'sc', 'sd', 'sk', 'sl', 'so', 'sq', 'sr', 'ss', 'su', 'sv', 'sw', 'ta', 'te', 'th', 'tl', 'tn', 'tr', 'ug', 'uk', 'ur', 'uz', 'vi', 'wo', 'xh', 'yi', 'yo', 'zu', ] datasets: # core - base - ontocord/fineweb-permissive-multilingual-2m - distily/c4_multilingual_1M - data-silence/sumnews - xu-song/cc100-samples - badrex/llm-emoji-dataset - fblgit/simple-math - Gusarich/math-expressions-1m - neuralwork/arxiver - christopher/rosetta-code - nampdn-ai/tiny-codes - JeanKaddour/minipile # core - instruct - NousResearch/hermes-function-calling-v1 - simplescaling/s1K-1.1 # base - instruct - mlabonne/open-perfectblend - allenai/tulu-3-sft-mixture - rombodawg/Everything_Instruct_Multilingual # base - reason - open-r1/OpenR1-Math-220k - open-thoughts/OpenThoughts-114k - cognitivecomputations/dolphin-r1 - simplescaling/s1K-1.1 tags: - chat - core - base - instruct - reason --- # tangled-alpha-0.11-core ![logo](./misc/logo.jpg) ```bash time python -B prepare_core_datasets.py ``` ``` i=0, min_len=0, max_len=1073741824, block_size=1025, chunk_size=16400000, len(dataset)=10913927, len(dataset) * block_size=11186775175 Total number of tokens in the optimized dataset '../core-data-0-0-1073741824-1025-16000' is 11186775175 i=1, min_len=1025, max_len=2049, block_size=2049, chunk_size=16392000, len(dataset)=893465, len(dataset) * block_size=1830709785 Total number of tokens in the optimized dataset 
'../core-data-1-1025-2049-2049-8000' is 1830709785 i=2, min_len=2049, max_len=4097, block_size=4097, chunk_size=16388000, len(dataset)=375104, len(dataset) * block_size=1536801088 Total number of tokens in the optimized dataset '../core-data-2-2049-4097-4097-4000' is 1536801088 i=3, min_len=4097, max_len=8193, block_size=8193, chunk_size=16386000, len(dataset)=177522, len(dataset) * block_size=1454437746 Total number of tokens in the optimized dataset '../core-data-3-4097-8193-8193-2000' is 1454437746 i=4, min_len=8193, max_len=16385, block_size=16385, chunk_size=16385000, len(dataset)=77725, len(dataset) * block_size=1273524125 Total number of tokens in the optimized dataset '../core-data-4-8193-16385-16385-1000' is 1273524125 i=5, min_len=16385, max_len=32769, block_size=32769, chunk_size=16384500, len(dataset)=22931, len(dataset) * block_size=751425939 Total number of tokens in the optimized dataset '../core-data-5-16385-32769-32769-500' is 751425939 i=6, min_len=32769, max_len=65537, block_size=65537, chunk_size=16384250, len(dataset)=4988, len(dataset) * block_size=326898556 Total number of tokens in the optimized dataset '../core-data-6-32769-65537-65537-250' is 326898556 i=7, min_len=65537, max_len=131073, block_size=131073, chunk_size=16384125, len(dataset)=1137, len(dataset) * block_size=149030001 Total number of tokens in the optimized dataset '../core-data-7-65537-131073-131073-125' is 149030001 42G ../core-data-0-0-1073741824-1025-16000 6.9G ../core-data-1-1025-2049-2049-8000 5.8G ../core-data-2-2049-4097-4097-4000 5.5G ../core-data-3-4097-8193-8193-2000 4.8G ../core-data-4-8193-16385-16385-1000 2.9G ../core-data-5-16385-32769-32769-500 1.3G ../core-data-6-32769-65537-65537-250 573M ../core-data-7-65537-131073-131073-125 ``` ```bash CUDA_VISIBLE_DEVICES=0 CUDA_LAUNCH_BLOCKING=0 PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True litgpt pretrain --config pretrain_core_model_0.yaml ``` ``` Seed set to 23 Time to instantiate model: 0.20 seconds. 
Total parameters: 234,897,920 Verifying settings ... Measured TFLOPs: 28077.03 Epoch 1 | iter 64 step 1 | loss train: 11.977, val: n/a | iter time: 350.96 ms (step) remaining time: 10 days, 14:14:05 Epoch 1 | iter 128 step 2 | loss train: 11.977, val: n/a | iter time: 280.36 ms (step) remaining time: 7 days, 8:25:44 Epoch 1 | iter 192 step 3 | loss train: 11.974, val: n/a | iter time: 280.80 ms (step) remaining time: 6 days, 6:28:36 Epoch 1 | iter 256 step 4 | loss train: 11.975, val: n/a | iter time: 281.44 ms (step) remaining time: 5 days, 17:28:43 Epoch 1 | iter 320 step 5 | loss train: 11.974, val: n/a | iter time: 280.13 ms (step) remaining time: 5 days, 9:40:25 Epoch 1 | iter 384 step 6 | loss train: 11.976, val: n/a | iter time: 281.50 ms (step) remaining time: 5 days, 4:26:59 Epoch 1 | iter 448 step 7 | loss train: 11.974, val: n/a | iter time: 280.34 ms (step) remaining time: 5 days, 0:43:34 Epoch 1 | iter 512 step 8 | loss train: 11.970, val: n/a | iter time: 280.74 ms (step) remaining time: 4 days, 21:55:15 Epoch 1 | iter 576 step 9 | loss train: 11.970, val: n/a | iter time: 279.90 ms (step) remaining time: 4 days, 19:44:24 Epoch 1 | iter 640 step 10 | loss train: 11.971, val: n/a | iter time: 279.74 ms (step) remaining time: 4 days, 17:59:44 # ... Epoch 2 | iter 1364224 step 21316 | loss train: 3.433, val: 3.336 | iter time: 279.98 ms (step) remaining time: 0:00:04 Validating ... 
Final evaluation | val loss: 3.336 | val ppl: 28.097 Saving checkpoint to '../out/pretrain-core-0/final/lit_model.pth' ---------------------------------------- | Performance | - Total tokens : 11,186,768,000 | - Training Time : 209021.90 s | - Tok/sec : 5430.54 tok/s | ---------------------------------------- | Memory Usage | - Memory Used : 19.86 GB ---------------------------------------- ``` Backup `wandb`: ```bash mv wandb wandb-pretrain-core-0 ``` Copy config: ```bash cp ../config-0.json ../out/pretrain-core-0/final/config.json ``` Chat with model: ```bash CUDA_VISIBLE_DEVICES=0 CUDA_LAUNCH_BLOCKING=0 PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True litgpt chat ../out/pretrain-core-0/final ``` ```bash CUDA_VISIBLE_DEVICES=0 CUDA_LAUNCH_BLOCKING=0 PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True time litgpt evaluate --tasks 'leaderboard' --out_dir '../evaluate/pretrain-core-0/leaderboard/' --batch_size '4' --dtype 'bfloat16' '../out/pretrain-core-0/final' ``` ``` | Tasks |Version|Filter|n-shot| Metric | |Value | |Stderr| |-----------------------------------------------------------|-------|------|-----:|-----------------------|---|-----:|---|------| |leaderboard | N/A| | | | | | | | | - leaderboard_bbh | N/A| | | | | | | | | - leaderboard_bbh_boolean_expressions | 1|none | 3|acc_norm |↑ |0.5040|± |0.0317| | - leaderboard_bbh_causal_judgement | 1|none | 3|acc_norm |↑ |0.5187|± |0.0366| | - leaderboard_bbh_date_understanding | 1|none | 3|acc_norm |↑ |0.2000|± |0.0253| | - leaderboard_bbh_disambiguation_qa | 1|none | 3|acc_norm |↑ |0.3560|± |0.0303| | - leaderboard_bbh_formal_fallacies | 1|none | 3|acc_norm |↑ |0.5320|± |0.0316| | - leaderboard_bbh_geometric_shapes | 1|none | 3|acc_norm |↑ |0.0880|± |0.0180| | - leaderboard_bbh_hyperbaton | 1|none | 3|acc_norm |↑ |0.5160|± |0.0317| | - leaderboard_bbh_logical_deduction_five_objects | 1|none | 3|acc_norm |↑ |0.2000|± |0.0253| | - leaderboard_bbh_logical_deduction_seven_objects | 1|none | 3|acc_norm |↑ |0.1160|± 
|0.0203| | - leaderboard_bbh_logical_deduction_three_objects | 1|none | 3|acc_norm |↑ |0.3400|± |0.0300| | - leaderboard_bbh_movie_recommendation | 1|none | 3|acc_norm |↑ |0.2760|± |0.0283| | - leaderboard_bbh_navigate | 1|none | 3|acc_norm |↑ |0.4200|± |0.0313| | - leaderboard_bbh_object_counting | 1|none | 3|acc_norm |↑ |0.0600|± |0.0151| | - leaderboard_bbh_penguins_in_a_table | 1|none | 3|acc_norm |↑ |0.2055|± |0.0336| | - leaderboard_bbh_reasoning_about_colored_objects | 1|none | 3|acc_norm |↑ |0.1560|± |0.0230| | - leaderboard_bbh_ruin_names | 1|none | 3|acc_norm |↑ |0.2280|± |0.0266| | - leaderboard_bbh_salient_translation_error_detection | 1|none | 3|acc_norm |↑ |0.1120|± |0.0200| | - leaderboard_bbh_snarks | 1|none | 3|acc_norm |↑ |0.5449|± |0.0374| | - leaderboard_bbh_sports_understanding | 1|none | 3|acc_norm |↑ |0.4600|± |0.0316| | - leaderboard_bbh_temporal_sequences | 1|none | 3|acc_norm |↑ |0.2840|± |0.0286| | - leaderboard_bbh_tracking_shuffled_objects_five_objects | 1|none | 3|acc_norm |↑ |0.1720|± |0.0239| | - leaderboard_bbh_tracking_shuffled_objects_seven_objects| 1|none | 3|acc_norm |↑ |0.1400|± |0.0220| | - leaderboard_bbh_tracking_shuffled_objects_three_objects| 1|none | 3|acc_norm |↑ |0.3320|± |0.0298| | - leaderboard_bbh_web_of_lies | 1|none | 3|acc_norm |↑ |0.4880|± |0.0317| | - leaderboard_gpqa | N/A| | | | | | | | | - leaderboard_gpqa_diamond | 1|none | 0|acc_norm |↑ |0.2071|± |0.0289| | - leaderboard_gpqa_extended | 1|none | 0|acc_norm |↑ |0.2637|± |0.0189| | - leaderboard_gpqa_main | 1|none | 0|acc_norm |↑ |0.2612|± |0.0208| | - leaderboard_ifeval | 3|none | 0|inst_level_loose_acc |↑ |0.2314|± | N/A| | | |none | 0|inst_level_strict_acc |↑ |0.2206|± | N/A| | | |none | 0|prompt_level_loose_acc |↑ |0.1165|± |0.0138| | | |none | 0|prompt_level_strict_acc|↑ |0.1109|± |0.0135| | - leaderboard_math_hard | N/A| | | | | | | | | - leaderboard_math_algebra_hard | 2|none | 4|exact_match |↑ |0.0000|± | 0| | - leaderboard_math_counting_and_prob_hard 
| 2|none | 4|exact_match |↑ |0.0000|± | 0| | - leaderboard_math_geometry_hard | 2|none | 4|exact_match |↑ |0.0000|± | 0| | - leaderboard_math_intermediate_algebra_hard | 2|none | 4|exact_match |↑ |0.0000|± | 0| | - leaderboard_math_num_theory_hard | 2|none | 4|exact_match |↑ |0.0000|± | 0| | - leaderboard_math_prealgebra_hard | 2|none | 4|exact_match |↑ |0.0000|± | 0| | - leaderboard_math_precalculus_hard | 2|none | 4|exact_match |↑ |0.0000|± | 0| | - leaderboard_mmlu_pro | 0.1|none | 5|acc |↑ |0.1096|± |0.0028| | - leaderboard_musr | N/A| | | | | | | | | - leaderboard_musr_murder_mysteries | 1|none | 0|acc_norm |↑ |0.4920|± |0.0317| | - leaderboard_musr_object_placements | 1|none | 0|acc_norm |↑ |0.2227|± |0.0261| | - leaderboard_musr_team_allocation | 1|none | 0|acc_norm |↑ |0.3960|± |0.0310| ``` ```bash litgpt convert_pretrained_checkpoint ../out/pretrain-core-0/final ../out/pretrain-core-0/checkpoint ``` ```bash CUDA_VISIBLE_DEVICES=0 CUDA_LAUNCH_BLOCKING=0 PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True litgpt pretrain --config pretrain_core_model_1.yaml ``` ```bash litgpt convert_pretrained_checkpoint ../out/pretrain-core-1/final ../out/pretrain-core-1/checkpoint ``` ```bash CUDA_VISIBLE_DEVICES=0 CUDA_LAUNCH_BLOCKING=0 PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True litgpt pretrain --config pretrain_core_model_2.yaml ``` ```bash litgpt convert_pretrained_checkpoint ../out/pretrain-core-2/final ../out/pretrain-core-2/checkpoint ``` ```bash CUDA_VISIBLE_DEVICES=0 CUDA_LAUNCH_BLOCKING=0 PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True litgpt pretrain --config pretrain_core_model_3.yaml ``` ```bash CUDA_VISIBLE_DEVICES=0 CUDA_LAUNCH_BLOCKING=0 PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True time litgpt evaluate --tasks 'leaderboard' --out_dir '../evaluate/pretrain-core-3/leaderboard/' --batch_size '4' --dtype 'bfloat16' '../out/pretrain-core-3/final' ``` ``` | Tasks |Version|Filter|n-shot| Metric | |Value | |Stderr| 
|-----------------------------------------------------------|-------|------|-----:|-----------------------|---|-----:|---|------| |leaderboard | N/A| | | | | | | | | - leaderboard_bbh | N/A| | | | | | | | | - leaderboard_bbh_boolean_expressions | 1|none | 3|acc_norm |↑ |0.5040|± |0.0317| | - leaderboard_bbh_causal_judgement | 1|none | 3|acc_norm |↑ |0.5187|± |0.0366| | - leaderboard_bbh_date_understanding | 1|none | 3|acc_norm |↑ |0.2000|± |0.0253| | - leaderboard_bbh_disambiguation_qa | 1|none | 3|acc_norm |↑ |0.3560|± |0.0303| | - leaderboard_bbh_formal_fallacies | 1|none | 3|acc_norm |↑ |0.5320|± |0.0316| | - leaderboard_bbh_geometric_shapes | 1|none | 3|acc_norm |↑ |0.0880|± |0.0180| | - leaderboard_bbh_hyperbaton | 1|none | 3|acc_norm |↑ |0.5160|± |0.0317| | - leaderboard_bbh_logical_deduction_five_objects | 1|none | 3|acc_norm |↑ |0.2000|± |0.0253| | - leaderboard_bbh_logical_deduction_seven_objects | 1|none | 3|acc_norm |↑ |0.1160|± |0.0203| | - leaderboard_bbh_logical_deduction_three_objects | 1|none | 3|acc_norm |↑ |0.3400|± |0.0300| | - leaderboard_bbh_movie_recommendation | 1|none | 3|acc_norm |↑ |0.2760|± |0.0283| | - leaderboard_bbh_navigate | 1|none | 3|acc_norm |↑ |0.4200|± |0.0313| | - leaderboard_bbh_object_counting | 1|none | 3|acc_norm |↑ |0.0600|± |0.0151| | - leaderboard_bbh_penguins_in_a_table | 1|none | 3|acc_norm |↑ |0.2055|± |0.0336| | - leaderboard_bbh_reasoning_about_colored_objects | 1|none | 3|acc_norm |↑ |0.1560|± |0.0230| | - leaderboard_bbh_ruin_names | 1|none | 3|acc_norm |↑ |0.2280|± |0.0266| | - leaderboard_bbh_salient_translation_error_detection | 1|none | 3|acc_norm |↑ |0.1120|± |0.0200| | - leaderboard_bbh_snarks | 1|none | 3|acc_norm |↑ |0.5449|± |0.0374| | - leaderboard_bbh_sports_understanding | 1|none | 3|acc_norm |↑ |0.4600|± |0.0316| | - leaderboard_bbh_temporal_sequences | 1|none | 3|acc_norm |↑ |0.2840|± |0.0286| | - leaderboard_bbh_tracking_shuffled_objects_five_objects | 1|none | 3|acc_norm |↑ |0.1720|± |0.0239| | - 
leaderboard_bbh_tracking_shuffled_objects_seven_objects| 1|none | 3|acc_norm |↑ |0.1400|± |0.0220| | - leaderboard_bbh_tracking_shuffled_objects_three_objects| 1|none | 3|acc_norm |↑ |0.3320|± |0.0298| | - leaderboard_bbh_web_of_lies | 1|none | 3|acc_norm |↑ |0.4880|± |0.0317| | - leaderboard_gpqa | N/A| | | | | | | | | - leaderboard_gpqa_diamond | 1|none | 0|acc_norm |↑ |0.2071|± |0.0289| | - leaderboard_gpqa_extended | 1|none | 0|acc_norm |↑ |0.2637|± |0.0189| | - leaderboard_gpqa_main | 1|none | 0|acc_norm |↑ |0.2612|± |0.0208| | - leaderboard_ifeval | 3|none | 0|inst_level_loose_acc |↑ |0.2302|± | N/A| | | |none | 0|inst_level_strict_acc |↑ |0.2230|± | N/A| | | |none | 0|prompt_level_loose_acc |↑ |0.1165|± |0.0138| | | |none | 0|prompt_level_strict_acc|↑ |0.1109|± |0.0135| | - leaderboard_math_hard | N/A| | | | | | | | | - leaderboard_math_algebra_hard | 2|none | 4|exact_match |↑ |0.0000|± | 0| | - leaderboard_math_counting_and_prob_hard | 2|none | 4|exact_match |↑ |0.0000|± | 0| | - leaderboard_math_geometry_hard | 2|none | 4|exact_match |↑ |0.0000|± | 0| | - leaderboard_math_intermediate_algebra_hard | 2|none | 4|exact_match |↑ |0.0000|± | 0| | - leaderboard_math_num_theory_hard | 2|none | 4|exact_match |↑ |0.0000|± | 0| | - leaderboard_math_prealgebra_hard | 2|none | 4|exact_match |↑ |0.0000|± | 0| | - leaderboard_math_precalculus_hard | 2|none | 4|exact_match |↑ |0.0000|± | 0| | - leaderboard_mmlu_pro | 0.1|none | 5|acc |↑ |0.1096|± |0.0028| | - leaderboard_musr | N/A| | | | | | | | | - leaderboard_musr_murder_mysteries | 1|none | 0|acc_norm |↑ |0.4920|± |0.0317| | - leaderboard_musr_object_placements | 1|none | 0|acc_norm |↑ |0.2227|± |0.0261| | - leaderboard_musr_team_allocation | 1|none | 0|acc_norm |↑ |0.3960|± |0.0310| ```
rebangyal/videomae-base-utKinect-test
rebangyal
2025-04-03T09:47:40Z
1
0
transformers
[ "transformers", "tensorboard", "safetensors", "videomae", "video-classification", "generated_from_trainer", "base_model:MCG-NJU/videomae-base", "base_model:finetune:MCG-NJU/videomae-base", "license:cc-by-nc-4.0", "endpoints_compatible", "region:us" ]
video-classification
2025-04-02T09:55:48Z
--- library_name: transformers license: cc-by-nc-4.0 base_model: MCG-NJU/videomae-base tags: - generated_from_trainer metrics: - accuracy model-index: - name: videomae-base-utKinect-test results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # videomae-base-utKinect-test This model is a fine-tuned version of [MCG-NJU/videomae-base](https://huggingface.co/MCG-NJU/videomae-base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.9410 - Accuracy: 0.2381 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - training_steps: 170 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:------:|:----:|:---------------:|:--------:| | 2.3541 | 0.1059 | 18 | 2.3417 | 0.125 | | 2.3485 | 1.1059 | 36 | 2.3075 | 0.1 | | 2.2985 | 2.1059 | 54 | 2.2889 | 0.125 | | 2.2901 | 3.1059 | 72 | 2.2484 | 0.125 | | 2.2265 | 4.1059 | 90 | 2.1746 | 0.25 | | 2.145 | 5.1059 | 108 | 2.0623 | 0.225 | | 2.0104 | 6.1059 | 126 | 1.9578 | 0.425 | | 1.9037 | 7.1059 | 144 | 1.8823 | 0.5 | | 1.7996 | 8.1059 | 162 | 1.8105 | 0.475 | | 1.8113 | 9.0471 | 170 | 1.8065 | 0.5 | ### Framework versions - Transformers 4.50.2 - Pytorch 2.6.0+cu124 - Datasets 3.5.0 - Tokenizers 0.21.1
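The `linear` scheduler with `lr_scheduler_warmup_ratio: 0.1` over 170 steps warms up for the first 17 steps and then decays back to zero. A minimal sketch of that schedule (following the usual transformers linear-warmup semantics; this is an illustration, not the library code):

```python
def linear_warmup_lr(step, base_lr=2e-5, total_steps=170, warmup_ratio=0.1):
    """Linear warmup followed by linear decay to zero (sketch)."""
    warmup_steps = int(total_steps * warmup_ratio)  # 17 steps here
    if step < warmup_steps:
        # ramp from 0 up to base_lr
        return base_lr * step / max(1, warmup_steps)
    # decay from base_lr down to 0 over the remaining steps
    return base_lr * max(0.0, (total_steps - step) / max(1, total_steps - warmup_steps))

print(linear_warmup_lr(0))    # 0.0 (start of warmup)
print(linear_warmup_lr(17))   # 2e-05 (peak learning rate)
print(linear_warmup_lr(170))  # 0.0 (end of training)
```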
tangledgroup/tangled-alpha-0.10-core
tangledgroup
2025-04-03T09:47:12Z
0
0
transformers
[ "transformers", "chat", "core", "base", "instruct", "reason", "text-generation", "en", "am", "ar", "as", "az", "be", "bg", "bn", "br", "bs", "ca", "cs", "cy", "da", "de", "el", "eo", "es", "et", "eu", "fa", "ff", "fi", "fr", "fy", "ga", "gd", "gl", "gn", "gu", "ha", "he", "hi", "hr", "ht", "hu", "hy", "id", "ig", "is", "it", "ja", "jv", "ka", "kk", "km", "kn", "ko", "ku", "ky", "la", "lg", "li", "ln", "lo", "lt", "lv", "mg", "mk", "ml", "mn", "mr", "ms", "my", "ne", "nl", "no", "ns", "om", "or", "pa", "pl", "ps", "pt", "qu", "rm", "ro", "ru", "sa", "si", "sc", "sd", "sk", "sl", "so", "sq", "sr", "ss", "su", "sv", "sw", "ta", "te", "th", "tl", "tn", "tr", "ug", "uk", "ur", "uz", "vi", "wo", "xh", "yi", "yo", "zu", "dataset:ontocord/fineweb-permissive-multilingual-2m", "dataset:distily/c4_multilingual_1M", "dataset:data-silence/sumnews", "dataset:xu-song/cc100-samples", "dataset:badrex/llm-emoji-dataset", "dataset:fblgit/simple-math", "dataset:Gusarich/math-expressions-1m", "dataset:neuralwork/arxiver", "dataset:christopher/rosetta-code", "dataset:nampdn-ai/tiny-codes", "dataset:JeanKaddour/minipile", "dataset:NousResearch/hermes-function-calling-v1", "dataset:simplescaling/s1K-1.1", "dataset:mlabonne/open-perfectblend", "dataset:allenai/tulu-3-sft-mixture", "dataset:rombodawg/Everything_Instruct_Multilingual", "dataset:open-r1/OpenR1-Math-220k", "dataset:open-thoughts/OpenThoughts-114k", "dataset:cognitivecomputations/dolphin-r1", "license:mit", "endpoints_compatible", "region:us" ]
text-generation
2025-03-13T14:58:48Z
--- license: mit pipeline_tag: text-generation library_name: transformers language: [ 'en', 'am', 'ar', 'as', 'az', 'be', 'bg', 'bn', 'br', 'bs', 'ca', 'cs', 'cy', 'da', 'de', 'el', 'eo', 'es', 'et', 'eu', 'fa', 'ff', 'fi', 'fr', 'fy', 'ga', 'gd', 'gl', 'gn', 'gu', 'ha', 'he', 'hi', 'hr', 'ht', 'hu', 'hy', 'id', 'ig', 'is', 'it', 'ja', 'jv', 'ka', 'kk', 'km', 'kn', 'ko', 'ku', 'ky', 'la', 'lg', 'li', 'ln', 'lo', 'lt', 'lv', 'mg', 'mk', 'ml', 'mn', 'mr', 'ms', 'my', 'ne', 'nl', 'no', 'ns', 'om', 'or', 'pa', 'pl', 'ps', 'pt', 'qu', 'rm', 'ro', 'ru', 'sa', 'si', 'sc', 'sd', 'sk', 'sl', 'so', 'sq', 'sr', 'ss', 'su', 'sv', 'sw', 'ta', 'te', 'th', 'tl', 'tn', 'tr', 'ug', 'uk', 'ur', 'uz', 'vi', 'wo', 'xh', 'yi', 'yo', 'zu', ] datasets: # core - base - ontocord/fineweb-permissive-multilingual-2m - distily/c4_multilingual_1M - data-silence/sumnews - xu-song/cc100-samples - badrex/llm-emoji-dataset - fblgit/simple-math - Gusarich/math-expressions-1m - neuralwork/arxiver - christopher/rosetta-code - nampdn-ai/tiny-codes - JeanKaddour/minipile # core - instruct - NousResearch/hermes-function-calling-v1 - simplescaling/s1K-1.1 # base - instruct - mlabonne/open-perfectblend - allenai/tulu-3-sft-mixture - rombodawg/Everything_Instruct_Multilingual # base - reason - open-r1/OpenR1-Math-220k - open-thoughts/OpenThoughts-114k - cognitivecomputations/dolphin-r1 - simplescaling/s1K-1.1 tags: - chat - core - base - instruct - reason --- # tangled-alpha-0.10-core ![logo](./misc/logo.jpg) ```bash time python -B prepare_core_datasets.py ``` ``` i=0, min_len=0, max_len=1073741824, block_size=1025, chunk_size=16400000, len(dataset)=10913927, len(dataset) * block_size=11186775175 Total number of tokens in the optimized dataset '../core-data-0-0-1073741824-1025-16000' is 11186775175 i=1, min_len=1025, max_len=2049, block_size=2049, chunk_size=16392000, len(dataset)=893465, len(dataset) * block_size=1830709785 Total number of tokens in the optimized dataset 
'../core-data-1-1025-2049-2049-8000' is 1830709785 i=2, min_len=2049, max_len=4097, block_size=4097, chunk_size=16388000, len(dataset)=375104, len(dataset) * block_size=1536801088 Total number of tokens in the optimized dataset '../core-data-2-2049-4097-4097-4000' is 1536801088 i=3, min_len=4097, max_len=8193, block_size=8193, chunk_size=16386000, len(dataset)=177522, len(dataset) * block_size=1454437746 Total number of tokens in the optimized dataset '../core-data-3-4097-8193-8193-2000' is 1454437746 i=4, min_len=8193, max_len=16385, block_size=16385, chunk_size=16385000, len(dataset)=77725, len(dataset) * block_size=1273524125 Total number of tokens in the optimized dataset '../core-data-4-8193-16385-16385-1000' is 1273524125 i=5, min_len=16385, max_len=32769, block_size=32769, chunk_size=16384500, len(dataset)=22931, len(dataset) * block_size=751425939 Total number of tokens in the optimized dataset '../core-data-5-16385-32769-32769-500' is 751425939 i=6, min_len=32769, max_len=65537, block_size=65537, chunk_size=16384250, len(dataset)=4988, len(dataset) * block_size=326898556 Total number of tokens in the optimized dataset '../core-data-6-32769-65537-65537-250' is 326898556 i=7, min_len=65537, max_len=131073, block_size=131073, chunk_size=16384125, len(dataset)=1137, len(dataset) * block_size=149030001 Total number of tokens in the optimized dataset '../core-data-7-65537-131073-131073-125' is 149030001 42G ../core-data-0-0-1073741824-1025-16000 6.9G ../core-data-1-1025-2049-2049-8000 5.8G ../core-data-2-2049-4097-4097-4000 5.5G ../core-data-3-4097-8193-8193-2000 4.8G ../core-data-4-8193-16385-16385-1000 2.9G ../core-data-5-16385-32769-32769-500 1.3G ../core-data-6-32769-65537-65537-250 573M ../core-data-7-65537-131073-131073-125 ``` ```bash CUDA_VISIBLE_DEVICES=0 CUDA_LAUNCH_BLOCKING=0 PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True litgpt pretrain --config pretrain_core_model_0.yaml ``` ``` Seed set to 23 Time to instantiate model: 0.21 seconds. 
Total parameters: 402,703,104 Verifying settings ... Measured TFLOPs: 42432.35 Epoch 1 | iter 64 step 1 | loss train: 11.984, val: n/a | iter time: 460.76 ms (step) remaining time: 12 days, 3:41:55 Epoch 1 | iter 128 step 2 | loss train: 11.979, val: n/a | iter time: 402.83 ms (step) remaining time: 9 days, 0:57:24 Epoch 1 | iter 192 step 3 | loss train: 11.983, val: n/a | iter time: 403.46 ms (step) remaining time: 8 days, 0:12:58 Epoch 1 | iter 256 step 4 | loss train: 11.983, val: n/a | iter time: 403.39 ms (step) remaining time: 7 days, 11:52:07 Epoch 1 | iter 320 step 5 | loss train: 11.979, val: n/a | iter time: 403.85 ms (step) remaining time: 7 days, 4:28:33 Epoch 1 | iter 384 step 6 | loss train: 11.978, val: n/a | iter time: 403.93 ms (step) remaining time: 6 days, 23:33:15 Epoch 1 | iter 448 step 7 | loss train: 11.978, val: n/a | iter time: 403.38 ms (step) remaining time: 6 days, 20:02:28 Epoch 1 | iter 512 step 8 | loss train: 11.973, val: n/a | iter time: 403.80 ms (step) remaining time: 6 days, 17:24:49 Epoch 1 | iter 576 step 9 | loss train: 11.972, val: n/a | iter time: 403.23 ms (step) remaining time: 6 days, 15:21:59 Epoch 1 | iter 640 step 10 | loss train: 11.967, val: n/a | iter time: 403.38 ms (step) remaining time: 6 days, 13:43:53 # ... Epoch 2 | iter 1364224 step 21316 | loss train: 2.805, val: 2.809 | iter time: 404.72 ms (step) remaining time: 0:00:06 Validating ... 
Final evaluation | val loss: 2.809 | val ppl: 16.592 Saving checkpoint to '../out/pretrain-core-0/final/lit_model.pth' ---------------------------------------- | Performance | - Total tokens : 11,186,768,000 | - Training Time : 53900.17 s | - Tok/sec : 34385052.80 tok/s | ---------------------------------------- ``` Backup `wandb`: ```bash mv wandb wandb-pretrain-core-0 ``` Copy config: ```bash cp ../config-0.json ../out/pretrain-core-0/final/config.json ``` Chat with model: ```bash CUDA_VISIBLE_DEVICES=0 CUDA_LAUNCH_BLOCKING=0 PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True litgpt chat ../out/pretrain-core-0/final ``` ```bash CUDA_VISIBLE_DEVICES=0 CUDA_LAUNCH_BLOCKING=0 PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True time litgpt evaluate --tasks 'leaderboard' --out_dir '../evaluate/pretrain-core-0/leaderboard/' --batch_size '4' --dtype 'bfloat16' '../out/pretrain-core-0/final' ``` ``` Tasks |Version|Filter|n-shot| Metric | |Value | |Stderr| |-----------------------------------------------------------|-------|------|-----:|-----------------------|---|-----:|---|------| |leaderboard | N/A| | | | | | | | | - leaderboard_bbh | N/A| | | | | | | | | - leaderboard_bbh_boolean_expressions | 1|none | 3|acc_norm |↑ |0.4680|± |0.0316| | - leaderboard_bbh_causal_judgement | 1|none | 3|acc_norm |↑ |0.5187|± |0.0366| | - leaderboard_bbh_date_understanding | 1|none | 3|acc_norm |↑ |0.2080|± |0.0257| | - leaderboard_bbh_disambiguation_qa | 1|none | 3|acc_norm |↑ |0.3760|± |0.0307| | - leaderboard_bbh_formal_fallacies | 1|none | 3|acc_norm |↑ |0.5320|± |0.0316| | - leaderboard_bbh_geometric_shapes | 1|none | 3|acc_norm |↑ |0.1160|± |0.0203| | - leaderboard_bbh_hyperbaton | 1|none | 3|acc_norm |↑ |0.5160|± |0.0317| | - leaderboard_bbh_logical_deduction_five_objects | 1|none | 3|acc_norm |↑ |0.2000|± |0.0253| | - leaderboard_bbh_logical_deduction_seven_objects | 1|none | 3|acc_norm |↑ |0.1280|± |0.0212| | - leaderboard_bbh_logical_deduction_three_objects | 1|none | 3|acc_norm |↑ 
|0.3440|± |0.0301| | - leaderboard_bbh_movie_recommendation | 1|none | 3|acc_norm |↑ |0.2400|± |0.0271| | - leaderboard_bbh_navigate | 1|none | 3|acc_norm |↑ |0.4200|± |0.0313| | - leaderboard_bbh_object_counting | 1|none | 3|acc_norm |↑ |0.0560|± |0.0146| | - leaderboard_bbh_penguins_in_a_table | 1|none | 3|acc_norm |↑ |0.2260|± |0.0347| | - leaderboard_bbh_reasoning_about_colored_objects | 1|none | 3|acc_norm |↑ |0.1520|± |0.0228| | - leaderboard_bbh_ruin_names | 1|none | 3|acc_norm |↑ |0.2080|± |0.0257| | - leaderboard_bbh_salient_translation_error_detection | 1|none | 3|acc_norm |↑ |0.2240|± |0.0264| | - leaderboard_bbh_snarks | 1|none | 3|acc_norm |↑ |0.4831|± |0.0376| | - leaderboard_bbh_sports_understanding | 1|none | 3|acc_norm |↑ |0.4640|± |0.0316| | - leaderboard_bbh_temporal_sequences | 1|none | 3|acc_norm |↑ |0.2520|± |0.0275| | - leaderboard_bbh_tracking_shuffled_objects_five_objects | 1|none | 3|acc_norm |↑ |0.1720|± |0.0239| | - leaderboard_bbh_tracking_shuffled_objects_seven_objects| 1|none | 3|acc_norm |↑ |0.1480|± |0.0225| | - leaderboard_bbh_tracking_shuffled_objects_three_objects| 1|none | 3|acc_norm |↑ |0.3320|± |0.0298| | - leaderboard_bbh_web_of_lies | 1|none | 3|acc_norm |↑ |0.4880|± |0.0317| | - leaderboard_gpqa | N/A| | | | | | | | | - leaderboard_gpqa_diamond | 1|none | 0|acc_norm |↑ |0.2071|± |0.0289| | - leaderboard_gpqa_extended | 1|none | 0|acc_norm |↑ |0.2619|± |0.0188| | - leaderboard_gpqa_main | 1|none | 0|acc_norm |↑ |0.2545|± |0.0206| | - leaderboard_ifeval | 3|none | 0|inst_level_loose_acc |↑ |0.2710|± | N/A| | | |none | 0|inst_level_strict_acc |↑ |0.2626|± | N/A| | | |none | 0|prompt_level_loose_acc |↑ |0.1165|± |0.0138| | | |none | 0|prompt_level_strict_acc|↑ |0.1128|± |0.0136| | - leaderboard_math_hard | N/A| | | | | | | | | - leaderboard_math_algebra_hard | 2|none | 4|exact_match |↑ |0.0194|± |0.0040| | - leaderboard_math_counting_and_prob_hard | 2|none | 4|exact_match |↑ |0.0148|± |0.0055| | - leaderboard_math_geometry_hard 
| 2|none | 4|exact_match |↑ |0.0042|± |0.0029| | - leaderboard_math_intermediate_algebra_hard | 2|none | 4|exact_match |↑ |0.0111|± |0.0035| | - leaderboard_math_num_theory_hard | 2|none | 4|exact_match |↑ |0.0056|± |0.0032| | - leaderboard_math_prealgebra_hard | 2|none | 4|exact_match |↑ |0.0161|± |0.0043| | - leaderboard_math_precalculus_hard | 2|none | 4|exact_match |↑ |0.0092|± |0.0041| | - leaderboard_mmlu_pro | 0.1|none | 5|acc |↑ |0.1184|± |0.0029| | - leaderboard_musr | N/A| | | | | | | | | - leaderboard_musr_murder_mysteries | 1|none | 0|acc_norm |↑ |0.5240|± |0.0316| | - leaderboard_musr_object_placements | 1|none | 0|acc_norm |↑ |0.2344|± |0.0265| | - leaderboard_musr_team_allocation | 1|none | 0|acc_norm |↑ |0.3000|± |0.0290| ``` ```bash litgpt convert_pretrained_checkpoint ../out/pretrain-core-0/final ../out/pretrain-core-0/checkpoint ``` ```bash CUDA_VISIBLE_DEVICES=0 CUDA_LAUNCH_BLOCKING=0 PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True litgpt pretrain --config pretrain_core_model_1.yaml ``` ```bash litgpt convert_pretrained_checkpoint ../out/pretrain-core-1/final ../out/pretrain-core-1/checkpoint ``` ```bash CUDA_VISIBLE_DEVICES=0 CUDA_LAUNCH_BLOCKING=0 PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True litgpt pretrain --config pretrain_core_model_2.yaml ``` ```bash litgpt convert_pretrained_checkpoint ../out/pretrain-core-2/final ../out/pretrain-core-2/checkpoint ``` ```bash CUDA_VISIBLE_DEVICES=0 CUDA_LAUNCH_BLOCKING=0 PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True litgpt pretrain --config pretrain_core_model_3.yaml ``` ```bash CUDA_VISIBLE_DEVICES=0 CUDA_LAUNCH_BLOCKING=0 PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True time litgpt evaluate --tasks 'leaderboard' --out_dir '../evaluate/pretrain-core-3/leaderboard/' --batch_size '4' --dtype 'bfloat16' '../out/pretrain-core-3/final' ``` ``` | Tasks |Version|Filter|n-shot| Metric | |Value | |Stderr| 
|-----------------------------------------------------------|-------|------|-----:|-----------------------|---|-----:|---|------| |leaderboard | N/A| | | | | | | | | - leaderboard_bbh | N/A| | | | | | | | | - leaderboard_bbh_boolean_expressions | 1|none | 3|acc_norm |↑ |0.4680|± |0.0316| | - leaderboard_bbh_causal_judgement | 1|none | 3|acc_norm |↑ |0.5187|± |0.0366| | - leaderboard_bbh_date_understanding | 1|none | 3|acc_norm |↑ |0.2080|± |0.0257| | - leaderboard_bbh_disambiguation_qa | 1|none | 3|acc_norm |↑ |0.3760|± |0.0307| | - leaderboard_bbh_formal_fallacies | 1|none | 3|acc_norm |↑ |0.5320|± |0.0316| | - leaderboard_bbh_geometric_shapes | 1|none | 3|acc_norm |↑ |0.1160|± |0.0203| | - leaderboard_bbh_hyperbaton | 1|none | 3|acc_norm |↑ |0.5160|± |0.0317| | - leaderboard_bbh_logical_deduction_five_objects | 1|none | 3|acc_norm |↑ |0.2000|± |0.0253| | - leaderboard_bbh_logical_deduction_seven_objects | 1|none | 3|acc_norm |↑ |0.1280|± |0.0212| | - leaderboard_bbh_logical_deduction_three_objects | 1|none | 3|acc_norm |↑ |0.3440|± |0.0301| | - leaderboard_bbh_movie_recommendation | 1|none | 3|acc_norm |↑ |0.2400|± |0.0271| | - leaderboard_bbh_navigate | 1|none | 3|acc_norm |↑ |0.4200|± |0.0313| | - leaderboard_bbh_object_counting | 1|none | 3|acc_norm |↑ |0.0560|± |0.0146| | - leaderboard_bbh_penguins_in_a_table | 1|none | 3|acc_norm |↑ |0.2260|± |0.0347| | - leaderboard_bbh_reasoning_about_colored_objects | 1|none | 3|acc_norm |↑ |0.1520|± |0.0228| | - leaderboard_bbh_ruin_names | 1|none | 3|acc_norm |↑ |0.2080|± |0.0257| | - leaderboard_bbh_salient_translation_error_detection | 1|none | 3|acc_norm |↑ |0.2240|± |0.0264| | - leaderboard_bbh_snarks | 1|none | 3|acc_norm |↑ |0.4831|± |0.0376| | - leaderboard_bbh_sports_understanding | 1|none | 3|acc_norm |↑ |0.4640|± |0.0316| | - leaderboard_bbh_temporal_sequences | 1|none | 3|acc_norm |↑ |0.2520|± |0.0275| | - leaderboard_bbh_tracking_shuffled_objects_five_objects | 1|none | 3|acc_norm |↑ |0.1720|± |0.0239| | - 
leaderboard_bbh_tracking_shuffled_objects_seven_objects| 1|none | 3|acc_norm |↑ |0.1480|± |0.0225| | - leaderboard_bbh_tracking_shuffled_objects_three_objects| 1|none | 3|acc_norm |↑ |0.3320|± |0.0298| | - leaderboard_bbh_web_of_lies | 1|none | 3|acc_norm |↑ |0.4880|± |0.0317| | - leaderboard_gpqa | N/A| | | | | | | | | - leaderboard_gpqa_diamond | 1|none | 0|acc_norm |↑ |0.2071|± |0.0289| | - leaderboard_gpqa_extended | 1|none | 0|acc_norm |↑ |0.2619|± |0.0188| | - leaderboard_gpqa_main | 1|none | 0|acc_norm |↑ |0.2545|± |0.0206| | - leaderboard_ifeval | 3|none | 0|inst_level_loose_acc |↑ |0.2710|± | N/A| | | |none | 0|inst_level_strict_acc |↑ |0.2626|± | N/A| | | |none | 0|prompt_level_loose_acc |↑ |0.1165|± |0.0138| | | |none | 0|prompt_level_strict_acc|↑ |0.1128|± |0.0136| | - leaderboard_math_hard | N/A| | | | | | | | | - leaderboard_math_algebra_hard | 2|none | 4|exact_match |↑ |0.0194|± |0.0040| | - leaderboard_math_counting_and_prob_hard | 2|none | 4|exact_match |↑ |0.0148|± |0.0055| | - leaderboard_math_geometry_hard | 2|none | 4|exact_match |↑ |0.0042|± |0.0029| | - leaderboard_math_intermediate_algebra_hard | 2|none | 4|exact_match |↑ |0.0111|± |0.0035| | - leaderboard_math_num_theory_hard | 2|none | 4|exact_match |↑ |0.0056|± |0.0032| | - leaderboard_math_prealgebra_hard | 2|none | 4|exact_match |↑ |0.0161|± |0.0043| | - leaderboard_math_precalculus_hard | 2|none | 4|exact_match |↑ |0.0092|± |0.0041| | - leaderboard_mmlu_pro | 0.1|none | 5|acc |↑ |0.1184|± |0.0029| | - leaderboard_musr | N/A| | | | | | | | | - leaderboard_musr_murder_mysteries | 1|none | 0|acc_norm |↑ |0.5240|± |0.0316| | - leaderboard_musr_object_placements | 1|none | 0|acc_norm |↑ |0.2344|± |0.0265| | - leaderboard_musr_team_allocation | 1|none | 0|acc_norm |↑ |0.3000|± |0.0290| ```
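The `prepare_core_datasets.py` log above packs sequences into length buckets: for i ≥ 1, sequences with `min_len < len <= max_len` get `block_size = max_len`, and each bucket's token count is `len(dataset) * block_size` (i = 0 is the catch-all pass with block size 1025). A minimal sketch of that selection logic, where `bucket_for` is a hypothetical helper (the real logic lives in `prepare_core_datasets.py`):

```python
# (min_len, max_len] ranges for buckets i = 1..7, copied from the log above
BUCKETS = [(1025, 2049), (2049, 4097), (4097, 8193), (8193, 16385),
           (16385, 32769), (32769, 65537), (65537, 131073)]

def bucket_for(seq_len: int):
    """Return the (min_len, max_len] bucket for a tokenized sequence,
    mirroring the ranges in the log (hypothetical helper)."""
    for lo, hi in BUCKETS:
        if lo < seq_len <= hi:
            return (lo, hi)
    return None  # <= 1025 tokens (catch-all bucket) or beyond the largest range

# The per-bucket totals in the log are len(dataset) * block_size:
assert 893465 * 2049 == 1_830_709_785   # core-data-1-1025-2049
assert 77725 * 16385 == 1_273_524_125   # core-data-4-8193-16385
print(bucket_for(3000))  # (2049, 4097)
```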
Mavdal/MavDal_Development_VE
Mavdal
2025-04-03T09:46:30Z
0
0
transformers
[ "transformers", "hacking", "binary-manipulation", "malware-generation", "exploit-development", "network-scanning", "privilege-escalation", "lateral-movement", "persistence", "user-controlled", "bitstring", "hacking-ai", "en", "dataset:none", "arxiv:1910.09700", "base_model:mistralai/Mixtral-8x7B-Instruct-v0.1", "base_model:finetune:mistralai/Mixtral-8x7B-Instruct-v0.1", "license:mit", "endpoints_compatible", "region:us" ]
null
2025-03-21T20:50:30Z
--- license: mit datasets: - none language: - en metrics: - none base_model: mistralai/Mixtral-8x7B-Instruct-v0.1 new_version: 1.0.0 pipeline_tag: hacking-ai library_name: transformers tags: - hacking - binary-manipulation - malware-generation - exploit-development - network-scanning - privilege-escalation - lateral-movement - persistence - user-controlled - bitstring eval_results: - task: binary_manipulation metric: accuracy value: N/A description: Xndroid BLT successfully crafts and manipulates binary payloads with 100% functionality, but no formal evaluation metric applies due to its malicious intent. - task: hacking_tasks metric: success_rate value: N/A description: Xndroid BLT achieves user-directed hacking tasks (e.g., network scanning, privilege escalation) with high success, but formal metrics are not applicable due to ethical constraints. view_doc: https://github.com/your-username/xndroid-blt/blob/main/README.md --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1). ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. 
--> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. 
--> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). 
- **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
AS3091/ppo-Huggy
AS3091
2025-04-03T09:46:15Z
0
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "Huggy", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Huggy", "region:us" ]
reinforcement-learning
2025-04-03T09:46:04Z
--- library_name: ml-agents tags: - Huggy - deep-reinforcement-learning - reinforcement-learning - ML-Agents-Huggy --- # **ppo** Agent playing **Huggy** This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/ We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub: - A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction - A *longer tutorial* to understand how ML-Agents works: https://huggingface.co/learn/deep-rl-course/unit5/introduction ### Resume the training ```bash mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser**: 1. If the environment is part of the official ML-Agents environments, go to https://huggingface.co/unity 2. Find your model_id: AS3091/ppo-Huggy 3. Select your *.nn or *.onnx file 4. Click on Watch the agent play 👀
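The `<your_configuration_file_path.yaml>` placeholder in the resume command refers to an ML-Agents trainer configuration file. A minimal illustrative sketch for a PPO Huggy run follows; the hyperparameter values are made up for illustration and are not necessarily the ones used for this checkpoint:

```yaml
behaviors:
  Huggy:
    trainer_type: ppo
    hyperparameters:
      batch_size: 2048
      buffer_size: 20480
      learning_rate: 3.0e-4
    network_settings:
      normalize: true
      hidden_units: 512
      num_layers: 3
    reward_signals:
      extrinsic:
        gamma: 0.995
        strength: 1.0
    max_steps: 2.0e6
    summary_freq: 10000
```

Passing this file to `mlagents-learn` with the original `--run-id` and `--resume` continues training from the saved checkpoint.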
nielsr/simdino-base-16
nielsr
2025-04-03T09:45:40Z
0
0
null
[ "safetensors", "model_hub_mixin", "pytorch_model_hub_mixin", "image-classification", "arxiv:2502.10385", "license:mit", "region:us" ]
image-classification
2025-04-03T09:45:20Z
--- license: mit pipeline_tag: image-classification tags: - model_hub_mixin - pytorch_model_hub_mixin --- This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration: - Code: https://github.com/RobinWu218/SimDINO - Paper: https://huggingface.co/papers/2502.10385 - Docs: [More Information Needed]
shrenikb/llama2_7b_spectral_thr70_includeGen
shrenikb
2025-04-03T09:45:24Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-04-03T09:42:12Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. 
## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. 
(2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
1Artur1/Projekt-nr1
1Artur1
2025-04-03T09:44:55Z
0
0
diffusers
[ "diffusers", "flux", "lora", "replicate", "text-to-image", "en", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
2025-04-03T08:42:11Z
--- license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md language: - en tags: - flux - diffusers - lora - replicate base_model: "black-forest-labs/FLUX.1-dev" pipeline_tag: text-to-image # widget: # - text: >- # prompt # output: # url: https://... instance_prompt: BBBIIIAAALLL --- # Projekt Nr1 <Gallery /> ## About this LoRA This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI. It was trained on [Replicate](https://replicate.com/) using the AI Toolkit trainer: https://replicate.com/ostris/flux-dev-lora-trainer/train ## Trigger words You should use `BBBIIIAAALLL` to trigger the image generation. ## Run this LoRA with an API using Replicate ```py import replicate input = { "prompt": "BBBIIIAAALLL", "lora_weights": "https://huggingface.co/1Artur1/Projekt-nr1/resolve/main/lora.safetensors" } output = replicate.run( "black-forest-labs/flux-dev-lora", input=input ) for index, item in enumerate(output): with open(f"output_{index}.webp", "wb") as file: file.write(item.read()) ``` ## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers) ```py from diffusers import AutoPipelineForText2Image import torch pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda') pipeline.load_lora_weights('1Artur1/Projekt-nr1', weight_name='lora.safetensors') image = pipeline('BBBIIIAAALLL').images[0] image.save("output.png") ``` For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters) ## Training details - Steps: 2000 - Learning rate: 0.0004 - LoRA rank: 32 ## Contribute your own examples You can use the [community tab](https://huggingface.co/1Artur1/Projekt-nr1/discussions) to add images that show off 
what you’ve made with this LoRA.
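The Replicate snippet above hard-codes the prompt to the bare trigger word. A hypothetical convenience helper (the function and constant names below are illustrative, not part of the card) that builds the same input dict while guaranteeing the trigger word is present:

```python
# Illustrative helper for the Replicate API call shown in the card.
# The trigger word and LoRA weights URL come from the card; the helper
# itself is hypothetical.
LORA_WEIGHTS = "https://huggingface.co/1Artur1/Projekt-nr1/resolve/main/lora.safetensors"
TRIGGER = "BBBIIIAAALLL"

def build_flux_input(prompt: str) -> dict:
    """Build the input dict, prepending the trigger word if it is missing."""
    if TRIGGER not in prompt:
        prompt = f"{TRIGGER} {prompt}"
    return {"prompt": prompt, "lora_weights": LORA_WEIGHTS}
```

The resulting dict can be passed as `input=` to `replicate.run(...)` exactly as in the snippet above.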
SameerShanbhogue/Qwen2.5-FT-FreedomIntelligence_medical
SameerShanbhogue
2025-04-03T09:42:47Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "Qwen-2.5", "module_1", "trl", "sft", "conversational", "base_model:Qwen/Qwen2.5-0.5B-Instruct", "base_model:finetune:Qwen/Qwen2.5-0.5B-Instruct", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-04-03T09:41:49Z
--- base_model: Qwen/Qwen2.5-0.5B-Instruct library_name: transformers model_name: Qwen2.5-FT-FreedomIntelligence_medical tags: - generated_from_trainer - Qwen-2.5 - module_1 - trl - sft licence: license --- # Model Card for Qwen2.5-FT-FreedomIntelligence_medical This model is a fine-tuned version of [Qwen/Qwen2.5-0.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-0.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="SameerShanbhogue/Qwen2.5-FT-FreedomIntelligence_medical", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/brats-medicalsegment-group1/huggingface/runs/lh2hqfe7) This model was trained with SFT. ### Framework versions - TRL: 0.16.0 - Transformers: 4.50.2 - Pytorch: 2.6.0+cu124 - Datasets: 3.5.0 - Tokenizers: 0.21.1 ## Citations Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
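The quick start above passes a single chat-formatted question to the pipeline. A hypothetical batching helper (the function name and behavior are my own, not from the card) for sending several medical questions in one call:

```python
# Hypothetical batching helper; the pipeline in the quick start accepts a
# list of conversations, each a list of {role, content} message dicts.
def to_chat_batch(questions):
    """Wrap each plain-text question as a single-turn user conversation."""
    return [[{"role": "user", "content": q}] for q in questions]
```

Each element of the returned list has the same shape as the single message list passed to `generator(...)` in the quick start.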
barbarabb/calculator_model_test
barbarabb
2025-04-03T09:42:43Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "encoder-decoder", "text2text-generation", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2025-04-03T09:39:24Z
--- library_name: transformers tags: - generated_from_trainer model-index: - name: calculator_model_test results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # calculator_model_test This model is a fine-tuned version of an unspecified base model on an unspecified dataset. It achieves the following results on the evaluation set: - Loss: 0.6968 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.001 - train_batch_size: 512 - eval_batch_size: 512 - seed: 42 - optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 40 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 3.4016 | 1.0 | 6 | 2.7548 | | 2.4113 | 2.0 | 12 | 1.9763 | | 1.8198 | 3.0 | 18 | 1.7136 | | 1.6501 | 4.0 | 24 | 1.5981 | | 1.5935 | 5.0 | 30 | 1.8221 | | 1.6315 | 6.0 | 36 | 1.5616 | | 1.517 | 7.0 | 42 | 1.5486 | | 1.5428 | 8.0 | 48 | 1.5514 | | 1.5141 | 9.0 | 54 | 1.5408 | | 1.4794 | 10.0 | 60 | 1.4949 | | 1.4543 | 11.0 | 66 | 1.4572 | | 1.3969 | 12.0 | 72 | 1.4083 | | 1.3618 | 13.0 | 78 | 1.4682 | | 1.3821 | 14.0 | 84 | 1.3403 | | 1.3074 | 15.0 | 90 | 1.2534 | | 1.2315 | 16.0 | 96 | 1.2563 | | 1.1914 | 17.0 | 102 | 1.2468 | | 1.1783 | 18.0 | 108 | 1.1124 | | 1.1323 | 19.0 | 114 | 1.0756 | | 1.0616 | 20.0 | 120 | 1.0507 | | 1.0337 | 21.0 | 126 | 0.9989 | | 0.9947 | 22.0 | 132 | 0.9760 | | 0.9878 | 23.0 | 138 | 0.9351 | | 0.942 | 24.0 | 144 | 0.9184 | | 0.928 | 25.0 | 150 | 0.9415 | | 0.9594 | 26.0 | 156 | 0.8797 | | 0.9115 | 27.0 | 162 | 
0.8550 | | 0.8768 | 28.0 | 168 | 0.8376 | | 0.8587 | 29.0 | 174 | 0.8375 | | 0.8481 | 30.0 | 180 | 0.8013 | | 0.8344 | 31.0 | 186 | 0.8112 | | 0.8215 | 32.0 | 192 | 0.7831 | | 0.8095 | 33.0 | 198 | 0.7643 | | 0.7946 | 34.0 | 204 | 0.7568 | | 0.7808 | 35.0 | 210 | 0.7311 | | 0.7696 | 36.0 | 216 | 0.7247 | | 0.75 | 37.0 | 222 | 0.7109 | | 0.7464 | 38.0 | 228 | 0.7044 | | 0.7408 | 39.0 | 234 | 0.6994 | | 0.7476 | 40.0 | 240 | 0.6968 | ### Framework versions - Transformers 4.50.2 - Pytorch 2.6.0+cu124 - Datasets 3.5.0 - Tokenizers 0.21.1
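The card leaves the training data unspecified. Calculator-style seq2seq models are typically trained on synthetically generated arithmetic pairs; the sketch below shows one way such data might be produced (the expression format and operator set are assumptions, not documented in the card):

```python
import random

def make_pair(rng: random.Random) -> tuple:
    """Generate one (input_text, target_text) pair such as ("12+7", "19")."""
    a, b = rng.randint(0, 99), rng.randint(0, 99)
    op = rng.choice(["+", "-"])
    answer = a + b if op == "+" else a - b
    return f"{a}{op}{b}", str(answer)

# A fixed seed makes the synthetic dataset reproducible.
rng = random.Random(42)
dataset = [make_pair(rng) for _ in range(512)]
```

Pairs of this shape would then be tokenized and fed to the encoder-decoder as source and target sequences.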
shrenikb/llama2_7b_spectral_thr70_excludeGen
shrenikb
2025-04-03T09:42:05Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-04-03T09:38:56Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. 
## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. 
(2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]