| Column | Type | Range / Values |
|:---|:---|:---|
| modelId | string | length 5 to 138 |
| author | string | length 2 to 42 |
| last_modified | date | 2020-02-15 11:33:14 to 2025-04-10 18:26:12 |
| downloads | int64 | 0 to 223M |
| likes | int64 | 0 to 11.7k |
| library_name | string | 419 classes |
| tags | sequence | length 1 to 4.05k |
| pipeline_tag | string | 54 classes |
| createdAt | date | 2022-03-02 23:29:04 to 2025-04-10 18:25:20 |
| card | string | length 11 to 1.01M |
Roflmax/BGE-merged-results-MNRLoss-checkpoint-1980
Roflmax
"2025-04-01T11:23:17Z"
0
0
transformers
[ "transformers", "safetensors", "xlm-roberta", "feature-extraction", "arxiv:1910.09700", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
feature-extraction
"2025-04-01T11:21:55Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
UWECProgrammer/setfit-model-v2
UWECProgrammer
"2023-08-23T02:26:18Z"
3
0
sentence-transformers
[ "sentence-transformers", "pytorch", "mpnet", "setfit", "text-classification", "arxiv:2209.11055", "license:apache-2.0", "region:us" ]
text-classification
"2023-08-22T22:24:20Z"
--- license: apache-2.0 tags: - setfit - sentence-transformers - text-classification pipeline_tag: text-classification --- # UWECProgrammer/setfit-model-v2 This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves: 1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning. 2. Training a classification head with features from the fine-tuned Sentence Transformer. ## Usage To use this model for inference, first install the SetFit library: ```bash python -m pip install setfit ``` You can then run inference as follows: ```python from setfit import SetFitModel # Download from Hub and run inference model = SetFitModel.from_pretrained("UWECProgrammer/setfit-model-v2") # Run inference preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"]) ``` ## BibTeX entry and citation info ```bibtex @article{https://doi.org/10.48550/arxiv.2209.11055, doi = {10.48550/ARXIV.2209.11055}, url = {https://arxiv.org/abs/2209.11055}, author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren}, keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences}, title = {Efficient Few-Shot Learning Without Prompts}, publisher = {arXiv}, year = {2022}, copyright = {Creative Commons Attribution 4.0 International} } ```
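The two-stage procedure described above (contrastive fine-tuning of the sentence encoder, then fitting a classification head) can be reproduced with the classic `SetFitTrainer` API; the following is only a minimal sketch with a hypothetical two-example dataset and base encoder, not the recipe used for this checkpoint:

```python
from datasets import Dataset
from sentence_transformers.losses import CosineSimilarityLoss
from setfit import SetFitModel, SetFitTrainer

# Hypothetical few-shot dataset (two labeled sentences), purely illustrative
train_ds = Dataset.from_dict({
    "text": ["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"],
    "label": [1, 0],
})

model = SetFitModel.from_pretrained("sentence-transformers/paraphrase-mpnet-base-v2")
trainer = SetFitTrainer(
    model=model,
    train_dataset=train_ds,
    loss_class=CosineSimilarityLoss,  # contrastive loss for stage 1
    num_iterations=20,                # sentence pairs generated per example
)
trainer.train()                       # stage 1: encoder fine-tuning, stage 2: head fitting
model.save_pretrained("setfit-model-v2-local")
```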
VijayaKrishnaRamesh/Reinforce-pixelcoptor
VijayaKrishnaRamesh
"2024-02-25T19:13:07Z"
0
0
null
[ "Pixelcopter-PLE-v0", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
"2024-02-25T19:13:05Z"
--- tags: - Pixelcopter-PLE-v0 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: Reinforce-pixelcoptor results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Pixelcopter-PLE-v0 type: Pixelcopter-PLE-v0 metrics: - type: mean_reward value: 17.40 +/- 13.82 name: mean_reward verified: false --- # **Reinforce** Agent playing **Pixelcopter-PLE-v0** This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0**. To learn how to use this model and train your own, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
korbih/OS-Atlas-Base-7B-LoRA-aguvis_1k_trajectories_full_history_with_eval_1024_v1.1_lora_adapter
korbih
"2025-03-03T20:23:11Z"
0
0
peft
[ "peft", "tensorboard", "safetensors", "llama-factory", "lora", "generated_from_trainer", "base_model:OS-Copilot/OS-Atlas-Base-7B", "base_model:adapter:OS-Copilot/OS-Atlas-Base-7B", "license:other", "region:us" ]
null
"2025-03-03T20:23:02Z"
--- library_name: peft license: other base_model: OS-Copilot/OS-Atlas-Base-7B tags: - llama-factory - lora - generated_from_trainer model-index: - name: OS-Atlas-Base-7B-LoRA-aguvis_1k_trajectories_full_history_with_eval_1024_v1.1 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # OS-Atlas-Base-7B-LoRA-aguvis_1k_trajectories_full_history_with_eval_1024_v1.1 This model is a fine-tuned version of [OS-Copilot/OS-Atlas-Base-7B](https://huggingface.co/OS-Copilot/OS-Atlas-Base-7B) on the train dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 8 - total_train_batch_size: 16 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 1.0 - mixed_precision_training: Native AMP ### Training results ### Framework versions - PEFT 0.12.0 - Transformers 4.49.0 - Pytorch 2.5.1+cu124 - Datasets 3.2.0 - Tokenizers 0.21.0
rooftopcoder/t5-small-coqa
rooftopcoder
"2023-09-26T05:52:24Z"
24
0
transformers
[ "transformers", "pytorch", "tensorboard", "t5", "text2text-generation", "generated_from_trainer", "base_model:google-t5/t5-small", "base_model:finetune:google-t5/t5-small", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
"2023-05-12T08:19:34Z"
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy - f1 base_model: t5-small model-index: - name: t5-small-coqa results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # t5-small-coqa This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.0055 - Accuracy: 0.0777 - F1: 0.0501 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 64 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - num_epochs: 3.0 ### Training results ### Framework versions - Transformers 4.29.1 - Pytorch 2.0.0 - Datasets 2.1.0 - Tokenizers 0.13.3
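The card does not include an inference snippet; as a rough sketch (the question/context serialization below is an assumption, since the CoQA preprocessing is not documented), the checkpoint loads with the standard `transformers` seq2seq classes:

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "rooftopcoder/t5-small-coqa"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# Hypothetical input format; the card does not document how question and context were serialized
text = "question: Who wrote Hamlet? context: Hamlet is a tragedy written by William Shakespeare."
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```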
cleanrl/WizardOfWor-v5-cleanba_ppo_envpool_impala_atari_wrapper_naturecnn-seed1
cleanrl
"2023-02-22T23:02:15Z"
0
0
cleanrl
[ "cleanrl", "tensorboard", "WizardOfWor-v5", "deep-reinforcement-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
"2023-02-22T23:02:13Z"
--- tags: - WizardOfWor-v5 - deep-reinforcement-learning - reinforcement-learning - custom-implementation library_name: cleanrl model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: WizardOfWor-v5 type: WizardOfWor-v5 metrics: - type: mean_reward value: 11680.00 +/- 5515.94 name: mean_reward verified: false --- # (CleanRL) **PPO** Agent Playing **WizardOfWor-v5** This is a trained model of a PPO agent playing WizardOfWor-v5. The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/cleanba_ppo_envpool_impala_atari_wrapper_naturecnn.py). ## Get Started To use this model, please install the `cleanrl` package with the following command: ``` pip install "cleanrl[jax,envpool,atari]" python -m cleanrl_utils.enjoy --exp-name cleanba_ppo_envpool_impala_atari_wrapper_naturecnn --env-id WizardOfWor-v5 ``` Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail. ## Command to reproduce the training ```bash curl -OL https://huggingface.co/cleanrl/WizardOfWor-v5-cleanba_ppo_envpool_impala_atari_wrapper_naturecnn-seed1/raw/main/cleanba_ppo_envpool_impala_atari_wrapper_naturecnn.py curl -OL https://huggingface.co/cleanrl/WizardOfWor-v5-cleanba_ppo_envpool_impala_atari_wrapper_naturecnn-seed1/raw/main/pyproject.toml curl -OL https://huggingface.co/cleanrl/WizardOfWor-v5-cleanba_ppo_envpool_impala_atari_wrapper_naturecnn-seed1/raw/main/poetry.lock poetry install --all-extras python cleanba_ppo_envpool_impala_atari_wrapper_naturecnn.py --distributed --learner-device-ids 1 --track --wandb-project-name cleanba --save-model --upload-model --hf-entity cleanrl --env-id WizardOfWor-v5 --seed 1 ``` # Hyperparameters ```python {'actor_device_ids': [0], 'actor_devices': ['gpu:0'], 'anneal_lr': True, 'async_batch_size': 20, 'async_update': 3, 'batch_size': 15360, 'capture_video': False, 'clip_coef': 0.1, 'cuda': True, 'distributed': True, 'ent_coef': 0.01, 'env_id': 'WizardOfWor-v5', 'exp_name': 'cleanba_ppo_envpool_impala_atari_wrapper_naturecnn', 'gae_lambda': 0.95, 'gamma': 0.99, 'global_learner_decices': ['gpu:1', 'gpu:3'], 'hf_entity': 'cleanrl', 'learner_device_ids': [1], 'learner_devices': ['gpu:1'], 'learning_rate': 0.00025, 'local_batch_size': 7680, 'local_minibatch_size': 1920, 'local_num_envs': 60, 'local_rank': 0, 'max_grad_norm': 0.5, 'minibatch_size': 3840, 'norm_adv': True, 'num_envs': 120, 'num_minibatches': 4, 'num_steps': 128, 'num_updates': 3255, 'profile': False, 'save_model': True, 'seed': 1, 'target_kl': None, 'test_actor_learner_throughput': False, 'torch_deterministic': True, 'total_timesteps': 50000000, 'track': True, 'update_epochs': 4, 'upload_model': True, 'vf_coef': 0.5, 'wandb_entity': None, 'wandb_project_name': 'cleanba', 'world_size': 2} ```
Kelthic/Zealot_Voice_Pack_Female
Kelthic
"2023-11-09T16:28:17Z"
0
0
null
[ "license:cc-by-nc-sa-4.0", "region:us" ]
null
"2023-11-09T15:29:40Z"
--- license: cc-by-nc-sa-4.0 ---
MattBoraske/ppo-SnowballTarget1
MattBoraske
"2023-06-02T14:50:52Z"
9
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "SnowballTarget", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-SnowballTarget", "region:us" ]
reinforcement-learning
"2023-06-02T14:50:47Z"
--- library_name: ml-agents tags: - SnowballTarget - deep-reinforcement-learning - reinforcement-learning - ML-Agents-SnowballTarget --- # **ppo** Agent playing **SnowballTarget** This is a trained model of a **ppo** agent playing **SnowballTarget** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) Documentation: https://github.com/huggingface/ml-agents#get-started We wrote a complete tutorial on training your first agent with ML-Agents and publishing it to the Hub. ### Resume the training ``` mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser**: 1. Go to https://huggingface.co/spaces/unity/ML-Agents-SnowballTarget 2. Find your model_id: MattBoraske/ppo-SnowballTarget1 3. Select your *.nn or *.onnx file 4. Click on Watch the agent play 👀
zakariamtl/anas.ahla05
zakariamtl
"2025-03-30T14:43:04Z"
0
0
diffusers
[ "diffusers", "flux", "lora", "replicate", "text-to-image", "en", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
"2025-03-30T14:43:03Z"
linbin1973/bert-finetuned-deid-clean
linbin1973
"2024-10-22T02:32:50Z"
106
0
transformers
[ "transformers", "safetensors", "bert", "token-classification", "generated_from_trainer", "base_model:google-bert/bert-base-cased", "base_model:finetune:google-bert/bert-base-cased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
"2024-10-22T02:23:10Z"
--- library_name: transformers license: apache-2.0 base_model: bert-base-cased tags: - generated_from_trainer metrics: - precision - recall - f1 - accuracy model-index: - name: bert-finetuned-deid-clean results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-finetuned-deid-clean This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.0234 - Precision: 0.9423 - Recall: 0.9208 - F1: 0.9314 - Accuracy: 0.9950 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | No log | 1.0 | 430 | 0.0282 | 0.9035 | 0.8840 | 0.8937 | 0.9938 | | 0.0861 | 2.0 | 860 | 0.0236 | 0.9384 | 0.9176 | 0.9279 | 0.9948 | | 0.0127 | 3.0 | 1290 | 0.0234 | 0.9423 | 0.9208 | 0.9314 | 0.9950 | ### Framework versions - Transformers 4.44.2 - Pytorch 2.5.0 - Datasets 2.21.0 - Tokenizers 0.19.1
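As the card gives no usage snippet, here is a minimal inference sketch with the generic `transformers` token-classification pipeline (the example sentence is invented and the model's exact de-identification label set is not documented in the card):

```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="linbin1973/bert-finetuned-deid-clean",
    aggregation_strategy="simple",  # merge word pieces into whole entity spans
)
print(ner("Patient John Smith was admitted to General Hospital on 03/15/2021."))
```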
Vishal24/tinyllama_review_summary_adapter_v1
Vishal24
"2024-01-04T10:38:22Z"
5
0
peft
[ "peft", "base_model:TinyLlama/TinyLlama-1.1B-Chat-v1.0", "base_model:adapter:TinyLlama/TinyLlama-1.1B-Chat-v1.0", "region:us" ]
null
"2024-01-04T10:18:16Z"
--- library_name: peft base_model: TinyLlama/TinyLlama-1.1B-Chat-v1.0 --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Inference function
```python
import re

def generate(review, category):
    # Assumes `model` and `tokenizer` have already been loaded (see the loading sketch below)
    # Define the roles and markers
    B_INST, E_INST = "[INST]", "[/INST]"
    B_RW, E_RW = "[RW]", "[/RW]"
    user_prompt = f"Summarize the reviews for {category} category."  # custom prompt here

    # Format the prompt template
    prompt = f"{B_INST} {user_prompt.strip()} {E_INST}\n\n {B_RW} {review.strip()} {E_RW}\n"
    print("Prompt:")
    print(prompt)

    encoding = tokenizer(prompt, return_tensors="pt").to("cuda:0")
    output = model.generate(
        input_ids=encoding.input_ids,
        attention_mask=encoding.attention_mask,
        max_new_tokens=200,
        do_sample=True,
        temperature=0.01,
        eos_token_id=tokenizer.eos_token_id,
        top_k=0,
    )
    print()

    # Keep only the newly generated tokens (drop the prompt tokens)
    output_text = tokenizer.decode(output[0, len(encoding.input_ids[0]):], skip_special_tokens=False)
    output_text = re.sub("\n+", "\n", output_text)  # collapse excessive newline characters
    print("Generated Assistant Response:")
    print(output_text)
    return output_text
```
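The function above assumes `model` and `tokenizer` are already available; a minimal loading sketch using the standard PEFT adapter API (not part of the original card) could be:

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "TinyLlama/TinyLlama-1.1B-Chat-v1.0"
adapter_id = "Vishal24/tinyllama_review_summary_adapter_v1"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.float16, device_map="cuda:0")
model = PeftModel.from_pretrained(base_model, adapter_id)  # attach the LoRA adapter
model.eval()
```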
hgnoi/rJpqxqqKp62fQEtW
hgnoi
"2024-05-22T00:26:26Z"
123
0
transformers
[ "transformers", "safetensors", "stablelm", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
"2024-05-22T00:24:44Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
lesso14/4701a19a-58b7-485a-b73f-65208feab2e1
lesso14
"2025-01-21T09:21:58Z"
6
0
peft
[ "peft", "safetensors", "gemma", "axolotl", "generated_from_trainer", "base_model:unsloth/gemma-1.1-2b-it", "base_model:adapter:unsloth/gemma-1.1-2b-it", "license:apache-2.0", "region:us" ]
null
"2025-01-21T09:20:42Z"
--- library_name: peft license: apache-2.0 base_model: unsloth/gemma-1.1-2b-it tags: - axolotl - generated_from_trainer model-index: - name: 4701a19a-58b7-485a-b73f-65208feab2e1 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: unsloth/gemma-1.1-2b-it bf16: true chat_template: llama3 datasets: - data_files: - b0a90a180413a729_train_data.json ds_type: json format: custom path: /workspace/input_data/b0a90a180413a729_train_data.json type: field_instruction: act field_output: prompt format: '{instruction}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null early_stopping_patience: 2 eval_max_new_tokens: 128 eval_steps: 5 eval_table_size: null flash_attention: false fp16: false fsdp: null fsdp_config: null gradient_accumulation_steps: 4 gradient_checkpointing: false group_by_length: false hub_model_id: lesso14/4701a19a-58b7-485a-b73f-65208feab2e1 hub_repo: null hub_strategy: checkpoint hub_token: null learning_rate: 0.0001 load_in_4bit: false load_in_8bit: false local_rank: null logging_steps: 1 lora_alpha: 16 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 8 lora_target_linear: true lr_scheduler: cosine max_steps: 25 micro_batch_size: 2 mlflow_experiment_name: /tmp/b0a90a180413a729_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_hf output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false save_steps: 10 sequence_len: 512 strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: af867abb-dc21-418c-b2ea-377bfca87044 wandb_project: Gradients-On-Demand wandb_run: your_name wandb_runid: af867abb-dc21-418c-b2ea-377bfca87044 warmup_steps: 10 weight_decay: 0.0 xformers_attention: null ``` </details><br> # 4701a19a-58b7-485a-b73f-65208feab2e1 This model is a fine-tuned version of [unsloth/gemma-1.1-2b-it](https://huggingface.co/unsloth/gemma-1.1-2b-it) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 1.5526 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 8 - optimizer: Use OptimizerNames.ADAMW_HF with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 10 - training_steps: 25 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 2.8191 | 0.0080 | 1 | 2.8935 | | 3.0065 | 0.0401 | 5 | 2.7892 | | 2.1575 | 0.0802 | 10 | 2.0506 | | 1.6816 | 0.1202 | 15 | 1.6901 | | 1.5554 | 0.1603 | 20 | 1.5767 | | 1.4907 | 0.2004 | 25 | 1.5526 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
culyonana/72b7ff3c-1335-481e-a39c-3f337b41e402
culyonana
"2025-04-07T10:33:55Z"
0
0
null
[ "region:us" ]
null
"2025-04-07T10:05:02Z"
mradermacher/bakeneko-GRPO-2-GGUF
mradermacher
"2025-02-27T23:39:19Z"
0
0
transformers
[ "transformers", "gguf", "en", "base_model:ryunsuke/bakeneko-GRPO-2", "base_model:quantized:ryunsuke/bakeneko-GRPO-2", "endpoints_compatible", "region:us", "conversational" ]
null
"2025-02-27T17:10:04Z"
--- base_model: ryunsuke/bakeneko-GRPO-2 language: - en library_name: transformers quantized_by: mradermacher --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/ryunsuke/bakeneko-GRPO-2 <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/bakeneko-GRPO-2-GGUF/resolve/main/bakeneko-GRPO-2.Q2_K.gguf) | Q2_K | 12.4 | | | [GGUF](https://huggingface.co/mradermacher/bakeneko-GRPO-2-GGUF/resolve/main/bakeneko-GRPO-2.Q3_K_S.gguf) | Q3_K_S | 14.5 | | | [GGUF](https://huggingface.co/mradermacher/bakeneko-GRPO-2-GGUF/resolve/main/bakeneko-GRPO-2.Q3_K_M.gguf) | Q3_K_M | 16.0 | lower quality | | [GGUF](https://huggingface.co/mradermacher/bakeneko-GRPO-2-GGUF/resolve/main/bakeneko-GRPO-2.Q3_K_L.gguf) | Q3_K_L | 17.3 | | | [GGUF](https://huggingface.co/mradermacher/bakeneko-GRPO-2-GGUF/resolve/main/bakeneko-GRPO-2.IQ4_XS.gguf) | IQ4_XS | 18.0 | | | [GGUF](https://huggingface.co/mradermacher/bakeneko-GRPO-2-GGUF/resolve/main/bakeneko-GRPO-2.Q4_K_S.gguf) | Q4_K_S | 18.9 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/bakeneko-GRPO-2-GGUF/resolve/main/bakeneko-GRPO-2.Q4_K_M.gguf) | Q4_K_M | 20.0 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/bakeneko-GRPO-2-GGUF/resolve/main/bakeneko-GRPO-2.Q5_K_S.gguf) | Q5_K_S | 22.7 | | | [GGUF](https://huggingface.co/mradermacher/bakeneko-GRPO-2-GGUF/resolve/main/bakeneko-GRPO-2.Q5_K_M.gguf) | Q5_K_M | 23.4 | | | [GGUF](https://huggingface.co/mradermacher/bakeneko-GRPO-2-GGUF/resolve/main/bakeneko-GRPO-2.Q6_K.gguf) | Q6_K | 27.0 | very good quality | | [GGUF](https://huggingface.co/mradermacher/bakeneko-GRPO-2-GGUF/resolve/main/bakeneko-GRPO-2.Q8_0.gguf) | Q8_0 | 34.9 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
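As a rough illustration (not from the card), one common way to run such a GGUF file locally is through `llama-cpp-python`; the file name, context length, and prompt below are assumptions:

```python
from llama_cpp import Llama

# Assumes the Q4_K_M file from the table above has been downloaded locally
llm = Llama(model_path="bakeneko-GRPO-2.Q4_K_M.gguf", n_ctx=4096)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Give a one-sentence summary of GRPO."}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```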
research-dump/Meta-Llama-3-8B-Instruct_mixed_sft_lexical_instruction_final_full
research-dump
"2024-06-11T11:20:50Z"
89
0
transformers
[ "transformers", "safetensors", "llama", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "4-bit", "bitsandbytes", "region:us" ]
text-classification
"2024-06-11T11:17:10Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
NiiCole/videomae-base-finetuned-ucf101-subset
NiiCole
"2023-11-20T16:36:16Z"
7
0
transformers
[ "transformers", "pytorch", "tensorboard", "safetensors", "videomae", "video-classification", "generated_from_trainer", "base_model:MCG-NJU/videomae-base", "base_model:finetune:MCG-NJU/videomae-base", "license:cc-by-nc-4.0", "endpoints_compatible", "region:us" ]
video-classification
"2022-12-19T00:45:33Z"
--- license: cc-by-nc-4.0 base_model: MCG-NJU/videomae-base tags: - generated_from_trainer metrics: - accuracy model-index: - name: videomae-base-finetuned-ucf101-subset results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # videomae-base-finetuned-ucf101-subset This model is a fine-tuned version of [MCG-NJU/videomae-base](https://huggingface.co/MCG-NJU/videomae-base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.4799 - Accuracy: 0.8645 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 1 - eval_batch_size: 1 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - training_steps: 1500 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 1.6374 | 0.2 | 300 | 1.5436 | 0.4143 | | 1.4888 | 1.2 | 600 | 1.4969 | 0.6 | | 0.5068 | 2.2 | 900 | 1.2462 | 0.7571 | | 1.0515 | 3.2 | 1200 | 0.8028 | 0.7714 | | 0.0057 | 4.2 | 1500 | 0.3585 | 0.9429 | ### Framework versions - Transformers 4.31.0 - Pytorch 2.0.1+cu118 - Datasets 2.14.4 - Tokenizers 0.13.3
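A minimal inference sketch using the `transformers` video-classification pipeline (the video path is a placeholder; decoding a local clip typically requires an extra backend such as `decord` or `av`):

```python
from transformers import pipeline

clf = pipeline("video-classification", model="NiiCole/videomae-base-finetuned-ucf101-subset")
# Hypothetical local file; any short clip of a UCF101-subset action should work
predictions = clf("example_clip.mp4")
print(predictions)
```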
KarMa001/model_out_v1-5
KarMa001
"2024-01-29T12:25:13Z"
1
0
diffusers
[ "diffusers", "safetensors", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "controlnet", "base_model:runwayml/stable-diffusion-v1-5", "base_model:adapter:runwayml/stable-diffusion-v1-5", "license:creativeml-openrail-m", "region:us" ]
text-to-image
"2024-01-29T12:18:57Z"
--- license: creativeml-openrail-m base_model: runwayml/stable-diffusion-v1-5 tags: - stable-diffusion - stable-diffusion-diffusers - text-to-image - diffusers - controlnet inference: true --- # controlnet-KarMa001/model_out_v1-5 These are ControlNet weights trained on runwayml/stable-diffusion-v1-5 with a new type of conditioning.
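A minimal usage sketch with `diffusers` (the conditioning image and prompt are placeholders; the card does not specify what the new conditioning type expects):

```python
import torch
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline
from diffusers.utils import load_image

controlnet = ControlNetModel.from_pretrained("KarMa001/model_out_v1-5", torch_dtype=torch.float16)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
).to("cuda")

# Placeholder conditioning image; the expected conditioning format is not documented in the card
cond = load_image("conditioning.png")
image = pipe("a photo of a living room", image=cond, num_inference_steps=30).images[0]
image.save("output.png")
```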
ostapeno/newt_adaNeo1B_niv2_explanation_svdemb_sgd_full_ft_CG
ostapeno
"2024-01-08T08:57:30Z"
0
0
null
[ "region:us" ]
null
"2024-01-07T16:08:42Z"
Number of experts present in the library: 10

| Expert Name | Base Model | Trained on | Adapter Type |
| --- | --- | --- | --- |
| niv2_explanation_v3 | EleutherAI/gpt-neo-1.3B | ostapeno/adauni-v3-10k-flat/niv2_explanation | lora |
| niv2_explanation_v8 | EleutherAI/gpt-neo-1.3B | ostapeno/adauni-v3-10k-flat/niv2_explanation | lora |
| niv2_explanation | EleutherAI/gpt-neo-1.3B | ostapeno/adauni-v3-10k-flat/niv2_explanation | lora |
| niv2_explanation_v7 | EleutherAI/gpt-neo-1.3B | ostapeno/adauni-v3-10k-flat/niv2_explanation | lora |
| niv2_explanation_v2 | EleutherAI/gpt-neo-1.3B | ostapeno/adauni-v3-10k-flat/niv2_explanation | lora |
| niv2_explanation_v6 | EleutherAI/gpt-neo-1.3B | ostapeno/adauni-v3-10k-flat/niv2_explanation | lora |
| niv2_explanation_v5 | EleutherAI/gpt-neo-1.3B | ostapeno/adauni-v3-10k-flat/niv2_explanation | lora |
| niv2_explanation_v1 | EleutherAI/gpt-neo-1.3B | ostapeno/adauni-v3-10k-flat/niv2_explanation | lora |
| niv2_explanation_v4 | EleutherAI/gpt-neo-1.3B | ostapeno/adauni-v3-10k-flat/niv2_explanation | lora |
| niv2_explanation_v9 | EleutherAI/gpt-neo-1.3B | ostapeno/adauni-v3-10k-flat/niv2_explanation | lora |

Last updated on: 2024-01-08 08:57:27+00:00
ILKT/2024-06-24_22-31-18_epoch_4
ILKT
"2024-06-28T14:32:55Z"
141
0
sentence-transformers
[ "sentence-transformers", "safetensors", "ILKT", "sentence-similarity", "mteb", "feature-extraction", "custom_code", "en", "pl", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
sentence-similarity
"2024-06-24T21:46:21Z"
--- language: - en - pl model-index: - name: 2024-06-24_22-31-18_epoch_4 results: - dataset: config: default name: MTEB AllegroReviews revision: b89853e6de927b0e3bfa8ecc0e56fe4e02ceafc6 split: test type: PL-MTEB/allegro-reviews metrics: - type: accuracy value: 22.306163021868784 - type: f1 value: 20.236487626058857 task: type: Classification - dataset: config: default name: MTEB CBD revision: 36ddb419bcffe6a5374c3891957912892916f28d split: test type: PL-MTEB/cbd metrics: - type: accuracy value: 53.68000000000001 - type: ap value: 14.73726623742049 - type: f1 value: 45.190406815153224 task: type: Classification - dataset: config: default name: MTEB CDSC-E revision: 0a3d4aa409b22f80eb22cbf59b492637637b536d split: test type: PL-MTEB/cdsce-pairclassification metrics: [] task: type: PairClassification - dataset: config: default name: MTEB CDSC-R revision: 1cd6abbb00df7d14be3dbd76a7dcc64b3a79a7cd split: test type: PL-MTEB/cdscr-sts metrics: [] task: type: STS - dataset: config: default name: MTEB EightTagsClustering revision: 78b962b130c6690659c65abf67bf1c2f030606b6 split: test type: PL-MTEB/8tags-clustering metrics: - type: v_measure value: 14.477465473489056 - type: v_measure_std value: 1.2451504858169187 task: type: Clustering - dataset: config: pl name: MTEB MassiveIntentClassification (pl) revision: 4672e20407010da34463acc759c162ca9734bca6 split: test type: mteb/amazon_massive_intent metrics: - type: accuracy value: 29.603227975790187 - type: f1 value: 26.912672734118765 task: type: Classification - dataset: config: pl name: MTEB MassiveIntentClassification (pl) revision: 4672e20407010da34463acc759c162ca9734bca6 split: validation type: mteb/amazon_massive_intent metrics: - type: accuracy value: 30.250860796851942 - type: f1 value: 27.119957429866933 task: type: Classification - dataset: config: pl name: MTEB MassiveScenarioClassification (pl) revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8 split: test type: mteb/amazon_massive_scenario metrics: - type: accuracy value: 35.53799596503026 - type: f1 value: 33.170354622674765 task: type: Classification - dataset: config: pl name: MTEB MassiveScenarioClassification (pl) revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8 split: validation type: mteb/amazon_massive_scenario metrics: - type: accuracy value: 35.40088539104772 - type: f1 value: 33.52216405101386 task: type: Classification - dataset: config: default name: MTEB PAC revision: fc69d1c153a8ccdcf1eef52f4e2a27f88782f543 split: test type: laugustyniak/abusive-clauses-pl metrics: - type: accuracy value: 61.132348682305235 - type: ap value: 72.63375062740438 - type: f1 value: 58.53955276732978 task: type: Classification - dataset: config: default name: MTEB PSC revision: d05a294af9e1d3ff2bfb6b714e08a24a6cabc669 split: test type: PL-MTEB/psc-pairclassification metrics: [] task: type: PairClassification - dataset: config: default name: MTEB PlscClusteringP2P revision: 8436dd4c05222778013d6642ee2f3fa1722bca9b split: test type: PL-MTEB/plsc-clustering-p2p metrics: - type: v_measure value: 35.49887039263635 task: type: Clustering - dataset: config: default name: MTEB PlscClusteringS2S revision: 39bcadbac6b1eddad7c1a0a176119ce58060289a split: test type: PL-MTEB/plsc-clustering-s2s metrics: - type: v_measure value: 31.779790766120197 task: type: Clustering - dataset: config: default name: MTEB PolEmo2.0-IN revision: d90724373c70959f17d2331ad51fb60c71176b03 split: test type: PL-MTEB/polemo2_in metrics: - type: accuracy value: 44.25207756232686 - type: f1 value: 45.16348806946095 task: type: 
Classification - dataset: config: default name: MTEB PolEmo2.0-OUT revision: 6a21ab8716e255ab1867265f8b396105e8aa63d4 split: test type: PL-MTEB/polemo2_out metrics: - type: accuracy value: 19.19028340080972 - type: f1 value: 14.783737091434995 task: type: Classification - dataset: config: default name: MTEB SICK-E-PL revision: 71bba34b0ece6c56dfcf46d9758a27f7a90f17e9 split: test type: PL-MTEB/sicke-pl-pairclassification metrics: [] task: type: PairClassification - dataset: config: default name: MTEB SICK-R-PL revision: fd5c2441b7eeff8676768036142af4cfa42c1339 split: test type: PL-MTEB/sickr-pl-sts metrics: [] task: type: STS - dataset: config: pl name: MTEB STS22 (pl) revision: de9d86b3b84231dc21f76c7b7af1f28e2f57f6e3 split: test type: mteb/sts22-crosslingual-sts metrics: [] task: type: STS - dataset: config: pl name: MTEB STSBenchmarkMultilingualSTS (pl) revision: 29afa2569dcedaaa2fe6a3dcfebab33d28b82e8c split: dev type: mteb/stsb_multi_mt metrics: [] task: type: STS - dataset: config: pl name: MTEB STSBenchmarkMultilingualSTS (pl) revision: 29afa2569dcedaaa2fe6a3dcfebab33d28b82e8c split: test type: mteb/stsb_multi_mt metrics: [] task: type: STS pipeline_tag: sentence-similarity tags: - sentence-transformers - sentence-similarity - mteb - feature-extraction ---
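The card contains only MTEB metadata and no usage example; a minimal embedding sketch with `sentence-transformers` (the `trust_remote_code=True` flag is an assumption, suggested by the repo's `custom_code` tag, and the example sentences are invented):

```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("ILKT/2024-06-24_22-31-18_epoch_4", trust_remote_code=True)
embeddings = model.encode(["Ala ma kota.", "This model handles English and Polish."])
print(embeddings.shape)
```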
onekat/Model_4
onekat
"2024-12-01T17:46:47Z"
104
0
transformers
[ "transformers", "safetensors", "bert", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
"2024-12-01T15:58:43Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
sdpkjc/Swimmer-v4-sac_continuous_action-seed5
sdpkjc
"2023-12-19T09:41:18Z"
0
0
cleanrl
[ "cleanrl", "tensorboard", "Swimmer-v4", "deep-reinforcement-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
"2023-12-19T09:41:12Z"
--- tags: - Swimmer-v4 - deep-reinforcement-learning - reinforcement-learning - custom-implementation library_name: cleanrl model-index: - name: SAC results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Swimmer-v4 type: Swimmer-v4 metrics: - type: mean_reward value: 46.60 +/- 1.07 name: mean_reward verified: false --- # (CleanRL) **SAC** Agent Playing **Swimmer-v4** This is a trained model of a SAC agent playing Swimmer-v4. The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/sac_continuous_action.py). ## Get Started To use this model, please install the `cleanrl` package with the following command: ``` pip install "cleanrl[sac_continuous_action]" python -m cleanrl_utils.enjoy --exp-name sac_continuous_action --env-id Swimmer-v4 ``` Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail. ## Command to reproduce the training ```bash curl -OL https://huggingface.co/sdpkjc/Swimmer-v4-sac_continuous_action-seed5/raw/main/sac_continuous_action.py curl -OL https://huggingface.co/sdpkjc/Swimmer-v4-sac_continuous_action-seed5/raw/main/pyproject.toml curl -OL https://huggingface.co/sdpkjc/Swimmer-v4-sac_continuous_action-seed5/raw/main/poetry.lock poetry install --all-extras python sac_continuous_action.py --save-model --upload-model --hf-entity sdpkjc --env-id Swimmer-v4 --seed 5 --track ``` # Hyperparameters ```python {'alpha': 0.2, 'autotune': True, 'batch_size': 256, 'buffer_size': 1000000, 'capture_video': False, 'cuda': True, 'env_id': 'Swimmer-v4', 'exp_name': 'sac_continuous_action', 'gamma': 0.99, 'hf_entity': 'sdpkjc', 'learning_starts': 5000.0, 'noise_clip': 0.5, 'policy_frequency': 2, 'policy_lr': 0.0003, 'q_lr': 0.001, 'save_model': True, 'seed': 5, 'target_network_frequency': 1, 'tau': 0.005, 'torch_deterministic': True, 'total_timesteps': 1000000, 'track': True, 'upload_model': True, 'wandb_entity': None, 'wandb_project_name': 'cleanRL'} ```
PrunaAI/deit3_medium_patch16_224.fb_in22k_ft_in1k-turbo-tiny-green-smashed
PrunaAI
"2024-08-02T15:28:40Z"
1
0
pruna-engine
[ "pruna-engine", "region:us" ]
null
"2024-03-07T18:10:22Z"
--- library_name: pruna-engine thumbnail: "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg" metrics: - memory_disk - memory_inference - inference_latency - inference_throughput - inference_CO2_emissions - inference_energy_consumption --- <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <a href="https://www.pruna.ai/" target="_blank" rel="noopener noreferrer"> <img src="https://i.imgur.com/eDAlcgk.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </a> </div> <!-- header end --> [![Twitter](https://img.shields.io/twitter/follow/PrunaAI?style=social)](https://twitter.com/PrunaAI) [![GitHub](https://img.shields.io/github/followers/PrunaAI?label=Follow%20%40PrunaAI&style=social)](https://github.com/PrunaAI) [![LinkedIn](https://img.shields.io/badge/LinkedIn-Connect-blue)](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following) [![Discord](https://img.shields.io/badge/Discord-Join%20Us-blue?style=social&logo=discord)](https://discord.gg/rskEr4BZJx) # Simply make AI models cheaper, smaller, faster, and greener! - Give a thumbs up if you like this model! - Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact). - Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai). - Read the documentations to know more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/) - Join Pruna AI community on Discord [here](https://discord.gg/rskEr4BZJx) to share feedback/suggestions or get help. ## Results ![image info](./plots.png) **Frequently Asked Questions** - ***How does the compression work?*** The model is compressed by combining quantization, xformers, jit, cuda graphs, triton. - ***How does the model quality change?*** The quality of the model output might slightly vary compared to the base model. - ***How is the model efficiency evaluated?*** These results were obtained on NVIDIA A100-PCIE-40GB with configuration described in `model/smash_config.json` and are obtained after a hardware warmup. The smashed model is directly compared to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend to directly run them in the use-case conditions to know if the smashed model can benefit you. - ***What is the model format?*** We used a custom Pruna model format based on pickle to make models compatible with the compression methods. We provide a tutorial to run models in dockers in the documentation [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/) if needed. - ***What is the naming convention for Pruna Huggingface models?*** We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model. - ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai). - ***What are "first" metrics?*** Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due cuda overheads. 
- ***What are "Sync" and "Async" metrics?*** "Sync" metrics are obtained by syncing all GPU processes and stop measurement when all of them are executed. "Async" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend to test the efficiency gains directly in your use-cases. ## Setup You can run the smashed model with these steps: 0. Check that you have linux, python 3.10, and cuda 12.1.0 requirements installed. For cuda, check with `nvcc --version` and install with `conda install nvidia/label/cuda-12.1.0::cuda`. 1. Install the `pruna-engine` available [here](https://pypi.org/project/pruna-engine/) on Pypi. It might take up to 15 minutes to install. ```bash pip install pruna-engine[gpu]==0.7.1 --extra-index-url https://pypi.nvidia.com --extra-index-url https://pypi.ngc.nvidia.com --extra-index-url https://prunaai.pythonanywhere.com/ ``` 2. Download the model files using one of these three options. - Option 1 - Use command line interface (CLI): ```bash mkdir deit3_medium_patch16_224.fb_in22k_ft_in1k-turbo-tiny-green-smashed huggingface-cli download PrunaAI/deit3_medium_patch16_224.fb_in22k_ft_in1k-turbo-tiny-green-smashed --local-dir deit3_medium_patch16_224.fb_in22k_ft_in1k-turbo-tiny-green-smashed --local-dir-use-symlinks False ``` - Option 2 - Use Python: ```python import subprocess repo_name = "deit3_medium_patch16_224.fb_in22k_ft_in1k-turbo-tiny-green-smashed" subprocess.run(["mkdir", repo_name]) subprocess.run(["huggingface-cli", "download", 'PrunaAI/'+ repo_name, "--local-dir", repo_name, "--local-dir-use-symlinks", "False"]) ``` - Option 3 - Download them manually on the HuggingFace model page. 3. Load & run the model. ```python from pruna_engine.PrunaModel import PrunaModel model_path = "deit3_medium_patch16_224.fb_in22k_ft_in1k-turbo-tiny-green-smashed/model" # Specify the downloaded model path. smashed_model = PrunaModel.load_model(model_path) # Load the model. import torch; image = torch.rand(1, 3, 224, 224).to('cuda') smashed_model(image) ``` ## Configurations The configuration info are in `model/smash_config.json`. ## Credits & License The license of the smashed model follows the license of the original model. Please check the license of the original model deit3_medium_patch16_224.fb_in22k_ft_in1k before using this model which provided the base model. The license of the `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on Pypi. ## Want to compress other models? - Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact). - Request access to easily compress your own AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
tjluyao/llama-3-8b
tjluyao
"2024-05-02T23:01:31Z"
8
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
"2024-05-02T21:27:59Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
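The get-started section of this card is empty, so here is a minimal sketch of loading the checkpoint with the standard 🤗 Transformers text-generation API. This is an assumption based on the repo's `llama` / `text-generation` tags, not code from the original authors; the prompt and generation settings are purely illustrative.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "tjluyao/llama-3-8b"  # this repository

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto", torch_dtype="auto")

# Plain-text prompt; the checkpoint may or may not define a chat template.
inputs = tokenizer("The capital of France is", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```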
chienweichang/Llama3-TAIDE-LX-8B-Chat-Alpha1-AWQ
chienweichang
"2024-05-02T05:14:12Z"
9
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "license:other", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "4-bit", "awq", "region:us" ]
text-generation
"2024-05-02T02:47:53Z"
--- license: other --- # Llama3-TAIDE-LX-8B-Chat-Alpha1 - AWQ - Model creator: [taide](https://huggingface.co/taide) - Original model: [taide/Llama3-TAIDE-LX-8B-Chat-Alpha1](https://huggingface.co/taide/Llama3-TAIDE-LX-8B-Chat-Alpha1) <!-- description start --> ## Description This repo contains AWQ model files for [taide/Llama3-TAIDE-LX-8B-Chat-Alpha1](https://huggingface.co/taide/Llama3-TAIDE-LX-8B-Chat-Alpha1). ### About AWQ AWQ is an efficient, accurate and blazing-fast low-bit weight quantization method, currently supporting 4-bit quantization. Compared to GPTQ, it offers faster Transformers-based inference with equivalent or better quality compared to the most commonly used GPTQ settings. It is supported by: - [Text Generation Webui](https://github.com/oobabooga/text-generation-webui) - using Loader: AutoAWQ - [vLLM](https://github.com/vllm-project/vllm) - Llama and Mistral models only - [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference) - [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) - for use from Python code ## Original model card --- * [English Version](https://huggingface.co/taide/Llama3-TAIDE-LX-8B-Chat-Alpha1/blob/main/README_en.md) # 模型簡介 * [TAIDE計畫](https://taide.tw/index)致力於開發符合台灣語言和文化特性的生成式人工智慧對話引擎模型,同時建構可信任的人工智慧環境。結合產學研能量,推動可信任生成式人工智慧的發展,提升台灣在國際競爭中的地位,促進產業發展,避免對外國技術的依賴。 * Llama3 TAIDE 系列模型以 Meta 公司釋出的 [LLaMA3-8b](https://llama.meta.com/docs/model-cards-and-prompt-formats/meta-llama-3/) 為基礎,導入台灣不同領域可用的文本與訓練素材,提高模型在正體中文回應的能力與特定任務的表現。公開釋出的模型如下: * [Llama3-TAIDE-LX-8B-Chat-Alpha1](https://huggingface.co/taide/Llama3-TAIDE-LX-8B-Chat-Alpha1): 以 LLaMA3-8b 為基礎,使用繁體中文資料預訓練 (continuous pretraining),並透過指令微調(instruction tuning)強化辦公室常用任務和多輪問答對話能力,適合聊天對話或任務協助的使用情境。Llama3-TAIDE-LX-8B-Chat-Alpha1 另外有提供[4 bit 量化模型](https://huggingface.co/taide/Llama3-TAIDE-LX-8B-Chat-Alpha1-4bit),量化模型主要是提供使用者的便利性,可能會影響效能與更多不可預期的問題,還請使用者理解與注意。 # 模型參數 * 參數量: 8B * 最大內容長度 (context length): 8K * 繁中訓練資料 token 量: 43B * 訓練時間: 2336 H100 GPU Hours # 特色 * 嚴格把關模型的訓練資料,提升模型生成資料的可信任性和適用性 * 針對自動摘要、寫信、寫文章、中翻英、英翻中等辦公室常用任務做加強 * 針對台灣在地文化、用語、國情等知識做加強 * 具備多輪問答對話能力 # 應用範例 | 任務 | 使用者輸入 | 模型輸出 | | --- | -------- | ------- | | 寫文章 | 請以以下內容為基礎,寫一篇文章:撰寫一篇作文,題目為《一張舊照片》,內容要求為:選擇一張令你印象深刻的照片,說明令你印象深刻的原因,並描述照片中的影像及背後的故事。記錄成長的過程、與他人的情景、環境變遷和美麗的景色。 | 一張舊照片,承載著時光的痕跡和回憶的溫度。我還記得那張照片是怎麼出現的,那是一張泛黃的黑白相片,裱在牆上,訴說著一段屬於我家族的故事。<br><br>那張照片中,我的祖父母站在田野間,後方是綿延的山巒,前方則是一片金黃的稻田。這幅景象令我印象深刻,因為這是我的童年回憶,也是我成長過程中最美好的風景線。<br><br>我出生在台灣鄉下,從小就生活在這樣一個純樸的環境中。每當我跟著家人在田裡工作時,就會看到這樣的美景。金黃的稻穗隨著微風搖曳,山巒則靜靜地矗立在遠方,給人一種安詳與祥和的感覺。<br><br>那張照片拍攝於民國50年代,當時台灣經濟尚未起飛,農業為主要產業。我的祖父母都是農民,他們辛勤耕種,為我們家提供溫飽。每當我看到那張照片,就會想起他們為我們所做的犧牲和努力。<br><br>除了美麗的風景外,照片中還有一段動人的故事。在我還很小的時候,家裡發生了一場大火,將我們的房子和所有財產都燒光了。幸好全家人都平安無事,但生活卻變得十分困苦。我們被迫搬到另一個村莊,在那裡重新開始生活。<br><br>那張照片也記錄了那段艱辛的歲月。雖然我們失去了許多東西,但家人之間的感情卻更加深厚。我們互相扶持,共同面對生活上的種種挑戰。<br><br>如今,我已長大成人,在都市裡工作。但每當我看到那張舊照片,我都會想起那段簡單而美好的時光。那些美麗的風景、親人的笑容,以及那段充滿挑戰卻又充滿希望的日子,都將永遠鐫刻在我的記憶深處。<br><br>這張舊照片,不僅僅是一張照片,更是一個寶貴的家族史,一個關於我們如何克服困難、珍惜現在的故事。我將把這張照片好好保存下來,讓後代也能夠認識我們的根源,並且銘記那些值得珍藏的美好回憶。 | | 寫信 | 請以品牌經理的身份,給廣告公司的創意總監寫一封信,提出對於新產品廣告宣傳活動的創意建議。 | 
[您的姓名]<br>[您的職稱]<br>[您的公司名稱]<br>[郵件地址]<br>[電話號碼]<br>[發文日期]<br><br>[創意總監姓名]<br>[廣告公司名稱]<br>[廣告公司地址]<br><br>主旨:關於新產品宣傳活動創意發想,請查照。<br><br>尊敬的[創意總監姓名],<br><br>我希望這封信能找到你忙碌中好好休息。我寫這封信,是為了分享我對於我們即將推出的新產品宣傳活動一些創意發想。我深信,這些想法能幫助我們新產品在市場上脫穎而出,吸引消費者的目光。<br><br>首先,我建議我們可以利用社群媒體平台,像是Facebook、Instagram、YouTube等,製作一系列引人入勝的短片和貼文。這些內容可以展現新產品的特色和優點,同時也能傳達出我們品牌重視環保、健康的核心價值。我們可以與網紅或意見領袖合作,讓他們試用並分享他們的使用經驗,這將有助於增加我們產品的可信度和能見度。<br><br>其次,我們可以舉辦新產品試吃/試用活動,邀請民眾親身體驗我們的新產品。我們可以在各大百貨公司、超市、以及我們的門市舉辦這些活動,讓參與者免費試吃/試用新產品,並填寫問卷回饋意見。這種互動的方式,不僅能收集到寶貴的市場反饋,還能讓民眾對我們的品牌產生正面的印象。<br><br>第三,我們可以與當地餐飲業者合作,將我們的新產品納入他們的菜單中。這將使我們的產品接觸到更廣泛的客群,同時也能證明我們的產品在烹飪和美食產業中具有價值。<br><br>第四,我們可以舉辦一場新產品發表會,邀請媒體、業界人士、以及重要客戶參加。在這場活動中,我們可以展示新產品的各項特色和優勢,並由專業廚師示範如何將新產品融入各式料理中。這將是向大眾展示我們品牌創新精神和專業能力的絕佳機會。<br><br>最後,我建議我們可以製作一系列精美的包裝和禮盒,以吸引消費者在節慶和送禮時選擇我們的產品。精緻的包裝將能提升我們產品的價值感,同時也能傳達出我們品牌對細節和品質的堅持。<br><br>我深信,這些創意發想將有助於我們新產品在市場上獲得成功。若您對這些想法有任何意見或建議,請不吝與我分享。我期待著與您合作,共同為[您的公司名稱]的成長和成功而努力。<br><br>謝謝您抽空閱讀我的信,並期待著與您進一步討論這些想法。<br><br>敬禮<br><br>[您的姓名]<br>[您的職稱]<br>[您的公司名稱] | | 摘要 | 請將這篇文章精簡條理化:「產業創新條例第10條之2及第72條條文修正案」俗稱「台版晶片法」,針對半導體、電動車、5G等技術創新且居國際供應鏈關鍵地位公司,提供最高25%營所稅投抵優惠,企業適用要件包含當年度研發費用、研發密度達一定規模,且有效稅率達一定比率。<br>為因應經濟合作暨發展組織(OECD)國家最低稅負制調整,其中有效稅率門檻,民國112年訂為12%,113年料將提高至15%,但仍得審酌國際間最低稅負制實施情形。<br>經濟部官員表示,已和財政部協商進入最後階段,除企業研發密度訂在6%,目前已確認,企業購置先進製程的設備投資金額達100億元以上可抵減。<br>財政部官員表示,研商過程中,針對台灣產業與其在國際間類似的公司進行深入研究,在設備部分,畢竟適用產創10之2的業者是代表台灣隊打「國際盃」,投入金額不達100億元,可能也打不了。<br>至於備受關注的研發費用門檻,經濟部官員表示,歷經與財政部來回密切討論,研發費用門檻有望落在60億至70億元之間。<br>財政部官員指出,研發攸關台灣未來經濟成長動能,門檻不能「高不可攀」,起初雖設定在100億元,之所以會調降,正是盼讓企業覺得有辦法達得到門檻、進而適用租稅優惠,才有動力繼續投入研發,維持國際供應鏈關鍵地位。<br>經濟部官員表示,因廠商研發費用平均為30、40億元,其中,IC設計業者介於30億至60億元範圍,若將門檻訂在100億元,符合條件的業者較少、刺激誘因不足;此外,若符合申請門檻的業者增加,將可提高企業在台投資金額,財政部稅收也能因此獲得挹注。<br>IC設計業者近日頻頻針對產創10之2發聲,希望降低適用門檻,加上各國力拚供應鏈自主化、加碼補助半導體產業,經濟部官員表示,經濟部和財政部就產創10之2達成共識,爭取讓更多業者受惠,盼增強企業投資力道及鞏固台灣技術地位。<br>財政部官員表示,租稅獎勵的制定必須「有為有守」,並以達到獎勵設置目的為最高原則,現階段在打「國內盃」的企業仍可適用產創第10條、10之1的租稅優惠,共同壯大台灣經濟發展。<br>經濟部和財政部正就研發費用門檻做最後確認,待今明兩天預告子法之後,約有30天時間,可與業界進一步討論及調整,盼產創10之2能在6月上路。 | 文章重點摘要如下:<br><br>- 「台版晶片法」或《產業創新條例第10條之2及第72條條文修正案》,旨在鼓勵關鍵技術創新企業,特別是那些在半導體、電動車、5G技術等領域且在國際供應鏈中扮演關鍵角色者。<br><br>- 企業若符合特定條件,包括當年度研發費用及研發密度達標,可享最高25%的營所稅抵減優惠。研發密度目標設為6%,而企業為取得抵減,必須證明其有效稅率達到特定比率,起初設定為112年的12%,113年提高至15%,但可能會依國際稅制的實施情形進行調整。<br><br>- 政府在制定政策時,考量到OECD最低稅負制的調整,因此會視情況調整有效稅率門檻。<br><br>- 政府部門已就法案內容與企業利益相關者進行協商,確定一些細節,例如企業投資先進製程設備的門檻設為100億元新台幣以供抵減。<br><br>- 研發費用門檻經過討論,可能從最初提議的100億元下調至60億至70億元之間,以激勵更多企業符合資格並投入研發,維持台灣在國際供應鏈中的關鍵地位。<br><br>- 經濟部官員表示,調降門檻的目的是讓更多企業覺得能夠達成標準,從而獲得應用租稅優惠的動力,並可能藉此提高在台投資金額,進而增加政府稅收。<br><br>- 由於IC設計業者的研發費用普遍較低,若門檻設得過高,符合條件的企業將很少,因此有必要調整。<br><br>- 在各國推動供應鏈自主化並增加對半導體產業的補助之際,經濟部正努力確保更多企業能從《產業創新條例》第10條之2中受益,以增強投資和鞏固台灣的技術地位。<br><br>- 政府強調租稅優惠的制定應符合獎勵目標,在此同時,也將繼續提供其他租稅優惠給在國內經營的企業,以促進台灣經濟成長。<br><br>- 最終的研發費用門檻將由經濟部和財政部共同確定,在預告相關子法後,將有約30天的時間供各界討論並提出調整建議,目標是讓這項新政策在6月上路。 | | 翻譯 | 以下提供英文內容,請幫我翻譯成中文。Dongshan coffee is famous for its unique position, and the constant refinement of production methods. The flavor is admired by many caffeine afficionados. 
| 東山咖啡以獨特的位置,加上不斷精進的製作手法而聞名,風味更是讓許多咖啡癡趨之若鶩。 | # 使用說明 * 範例程式 * 範例程式與文件後續會釋出到github * prompt 樣版 * 一般問答用法 ```python chat = [ {"role": "user", "content": "{question}"}, ] prompt = tokenizer.apply_chat_template(chat) ``` * 將 {question} 替換成使用者的輸入 * 加入 system prompt 的用法 ```python chat = [ {"role": "system", "content": "{sys}"}, {"role": "user", "content": "{question}"}, ] prompt = tokenizer.apply_chat_template(chat) ``` * 將 {sys} 替換成指令,例如:你是一個來自台灣的AI助理,你的名字是 TAIDE,樂於以台灣人的立場幫助使用者,會用繁體中文回答問題。 * 將 {question} 替換成使用者的問題 * 多輪問答用法 ```python chat = [ {"role": "system", "content": "{sys}"}, {"role": "user", "content": "{question1}"}, {"role": "assistant", "content": "{model_anwer_1}"}, {"role": "user", "content": "{question2}"}, ] prompt = tokenizer.apply_chat_template(chat) ``` * 將 {sys} 替換成指令,例如:你是一個來自台灣的AI助理,你的名字是 TAIDE,樂於以台灣人的立場幫助使用者,會用繁體中文回答問題。 * 將 {question1} 替換成使用者的問題1 * 將 {model_anwer_1} 替換成模型的回答1 * 將 {question2} 替換成使用者的問題2 * 更多細節請參考[Llama3 文件](https://llama.meta.com/docs/model-cards-and-prompt-formats/meta-llama-3/) # 訓練方法 * 軟硬體規格 * 國網中心 H100 * 訓練框架: PyTorch * 資料前處理 * 字元標準化 * 去除重覆 * 去除雜訊 * 網頁資料的html tag、javascript * 非標準字元或亂碼 * 字數過短的文章 * 去除文章中的特定格式,如為排版增加的換行 * 去除個資,如email、電話 * 去除不當文字,如賭博、色情等 * 持續預訓練 (continuous pretraining, CP) * 補充大量來源可信賴的繁體中文知識 * 超參數 (hyper parameters) * optimizer: AdamW * learning rate: 1e-4 * batch size: 1M tokens * epoch: 1 * 微調 (fine tune, FT) * 讓模型可針對繁體中文提問回答問題 * 超參數 (hyper parameters) * optimizer: AdamW * learning rate: 5e-5 * batch size: 256K tokens * epoch: 3 # 訓練資料 * 持續預訓練資料(資料量約為140G) | 資料集 | 資料描述 | | --- | -------- | | 訴訟資料 | 《[司法院裁判書](https://judgment.judicial.gov.tw/FJUD/default.aspx)》自2013年1月至2023年12月各級法院民事、刑事、行政訴訟資料。 | | 中央社 | 《[中央社中文新聞](https://www.cna.com.tw/)》資料集含中央社自1993年6月至2023年06月,共30年份之每日新聞文章。內容涵蓋國內外政治、社會、財經、文教、生活等領域。 | | ETtoday 新聞雲 | 《[ETtoday新聞雲](https://www.ettoday.net/)》資料,包含自2011年10月至 2023年12月的資料。 | | 立法院公報 | 《[立法院公報](https://ppg.ly.gov.tw/ppg/)》包含自第8屆第1會期至第10屆第7會期之公報資料。 | | 出版商網站書籍介紹 | 包含[三采](https://www.suncolor.com.tw/)、[Gotop](https://www.gotop.com.tw/)出版商網站上的書籍簡介。 | | GRB 研究計畫摘要 | [GRB](https://www.grb.gov.tw/)為收錄由政府經費補助之研究計畫及其成果報告的資訊系統,此資料集主要收錄 1993年至 2023年之研究計畫摘要以及研究報告摘要,含中文及其英文對照。 | | 學術會議論文摘要 | 收錄《[學術會議論文摘要資料庫](https://sticnet.stpi.narl.org.tw/sticloc/ttscalle?meet:)》中自1988至2009年由台灣所舉辦之學術會議論文。 | | 光華雜誌 | 《[台灣光華雜誌](https://www.taiwan-panorama.com/)》含自1993年7月至2023年6月的文章,共30年份。內容著重於我國文化、觀光與民情等。 | | 樂詞網 | 《[樂詞網](https://terms.naer.edu.tw/)》涵蓋文理領域約187萬則學術名詞及其譯名對照。 | | 各部會資料 | 包含行政院「[國情簡介](https://www.ey.gov.tw/state/)」、文化部「[國家文化記憶庫](https://memory.culture.tw/)」、國發會「[檔案支援教學網](https://art.archives.gov.tw/index.aspx)」、交通部「[交通安全入口網](https://168.motc.gov.tw/)」等部會網站資料之部分資料。 | | 今周刊 | 《[今周刊](https://www.businesstoday.com.tw/)》為一以財經為主的週刊雜誌,此資料集涵蓋2008年1月至2023年7月的文章。 | | 教育部國語辭典、成語辭典 | 包含以下三項資料:<br>[教育部《成語典》](https://dict.idioms.moe.edu.tw/search.jsp?webMd=1&la=0),含5,338條成語,內容包含每條成語的釋義、典故原文及其白話說明、用法說明、例句等。<br>[教育部《重編國語辭典修訂本》](https://dict.revised.moe.edu.tw/?la=0&powerMode=0),收錄中文單字及各類辭彙,包含讀音、部首、釋義等資訊,共約165,539筆資料。<br>[教育部《國語辭典簡編本》](https://dict.concised.moe.edu.tw/?la=0&powerMode=0),為《重編國語辭典修訂本》的簡編版本,共45,247筆資料。 | | 科技大觀園資料 | 含《[科技大觀園網站](https://scitechvista.nat.gov.tw/)》上的科學新知以及科普文章。 | | iKnow 科技產業資訊室 | 《[科技產業資訊室](https://iknow.stpi.narl.org.tw/)(iKnow)》提供台灣及全球的科技市場趨勢、策略分析、專利知識,及技術交易資訊,專注於科技產業的創新與發展,包含自 2008 年至 2023 年。 | | 科學發展月刊 | 《[科學發展月刊](https://ejournal.stpi.narl.org.tw/sd)》為國科會為推廣科學教育而出版的科普刊物,含自2004年10月至2020年12月之科普文章;2021年起,以《[科技魅癮](https://www.charmingscitech.nat.gov.tw/)》季刊重新出發,提供國際關注科技議題的新知文章。 | | 法規資料庫 | 《[法規資料庫](https://law.moj.gov.tw/)》含截自 112 
年 10 月各政府部門最新發布之中央法規、行政規則、法規命令草案及地方自治法規等。 | | 各地政府旅遊網 | 涵蓋台灣部分縣市地方政府觀光旅遊網站上之部分資料。 | | 國教院課程綱要(十二年國教) | 含十二年國教課程綱要之總綱以及各級學校不同科目之課程綱要。 | | 中央社譯名檔資料庫 | 《中央社譯名檔資料庫》蒐集中央社新聞業務上翻譯過的中外姓氏、人名、組織、地名等譯名對照。 | | 童話書 | 共 20 本童話書,含湯姆歷險記、小飛俠、愛麗絲夢遊仙境、長腿叔叔等。 | | RedPajama-Data-V2 | 從國外開放多國語言語料庫 [RedPajama-Data-v2](https://github.com/togethercomputer/RedPajama-Data) 取出英文資料 | | MathPile-commercial | 國外開放數學語料庫 [MathPile-commercial](https://huggingface.co/datasets/GAIR/MathPile_Commercial) | | 中文維基百科 | 《[中文維基百科](https://zh.wikipedia.org/zh-tw/%E4%B8%AD%E6%96%87%E7%BB%B4%E5%9F%BA%E7%99%BE%E7%A7%91)》截至2023年1月所有條目的內容。 | | github-code-clean | 為 github 開源程式碼資料集,去除unlicense的程式碼和文件。 | * 微調資料 * TAIDE團隊訓練llama2系列模型來產生微調資料資料,產生的任務包含世界知識、創意寫作、普通常識、翻譯、摘要、程式、台灣價值等單輪或多輪對話問答共 128K 筆。微調資料後續會對外釋出。 # 模型評測 * taide-bench * 評測資料 * 寫文章、寫信、摘要、英翻中、中翻英,共500題 * 資料連結: [taide-bench](https://huggingface.co/datasets/taide/taide-bench) * 評測方法 * gpt4評分 * 評分程式: [taide-bench-eval](https://github.com/taide-taiwan/taide-bench-eval) * 評測分數 | 模型 | 中翻英 | 英翻中 | 摘要 | 寫文章 | 寫信 | 平均 | | --- | ----- | ----- | ---- | ---- | ---- | --- | | Llama3-TAIDE-LX-8B-Chat-Alpha1 | 7.770 | 8.280 | 8.495 | 9.605 | 8.950 | 8.620 | | GPT3.5 | 8.880 | 8.810 | 7.450 | 9.490 | 8.750 | 8.676 | | TAIDE-LX-7B-Chat | 7.165 | 7.685 | 7.720 | 9.635 | 9.110 | 8.263 | | LLAMA2 7B | 6.075 | 4.475 | 5.905 | 2.625 | 3.040 | 4.424 | | LLAMA2 13B | 6.480 | 6.135 | 6.110 | 2.565 | 3.000 | 4.858 | | LLAMA2 70B | 6.975 | 6.375 | 6.795 | 2.625 | 2.990 | 5.152 | # 授權條款 * [Llama3-TAIDE 模型社群授權同意書](https://drive.google.com/file/d/12-Q0WWSjG0DW6CqJQm_jr5wUGRLeb-8p/view) # 免責聲明 * LLM 模型由於設計架構的限制,以及資料難免有偏誤,語言模型的任何回應不代表 TAIDE 立場,使用前需要額外加入安全防護機制,且回應內容也可能包含不正確的資訊,使用者請勿盡信。 # 開發團隊 * [https://taide.tw/index/teamList](https://taide.tw/index/teamList) # 相關連結 * [TAIDE官網](https://taide.tw/index) * [TAIDE Huggingface](https://huggingface.co/taide) * [TAIDE Github](https://github.com/taide-taiwan) * [Kuwa AI](https://kuwaai.org/) # Citation * [TAIDE官網](https://taide.tw/index)
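The AWQ section above lists compatible backends but gives no loading snippet for this quantized repo. Below is a minimal sketch, assuming a recent 🤗 Transformers with `autoawq` installed so the 4-bit weights load directly through `AutoModelForCausalLM` on a CUDA GPU; the user question and sampling settings are illustrative, while the system prompt is the one suggested in the original card.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "chienweichang/Llama3-TAIDE-LX-8B-Chat-Alpha1-AWQ"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Chat template usage as described in the original model card.
chat = [
    {"role": "system", "content": "你是一個來自台灣的AI助理,你的名字是 TAIDE,樂於以台灣人的立場幫助使用者,會用繁體中文回答問題。"},
    {"role": "user", "content": "請簡單介紹台灣的夜市文化。"},  # illustrative question
]
input_ids = tokenizer.apply_chat_template(chat, add_generation_prompt=True, return_tensors="pt").to(model.device)
outputs = model.generate(input_ids, max_new_tokens=256, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```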
Delcos/Velara
Delcos
"2023-12-13T23:41:35Z"
1475
10
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "starling", "llama-2", "conversational", "en", "license:cc-by-nc-nd-4.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
"2023-12-05T05:07:35Z"
--- license: cc-by-nc-nd-4.0 language: - en library_name: transformers pipeline_tag: text-generation tags: - starling - mistral - llama-2 --- <h1 style="font-size: 36px;">Velara-11b</h1> <center> <img src="https://i.imgur.com/2NK7cMc.png" alt="Velara" width="50%" style="min-width:100px; display:block;"> </center> ## Description: **PENDING** A Starling-based model focused on being an assistant worth talking to. **Uncensored** ## Main Goals: Velara was designed to address specific issues found in other chat models: - **Sticking to the Character Given In-World/IRL::** Velara will stick to the character in-world. Meaning she can "use" addons or other things by adding them to the prompt. Keep in mind these act as suggestions and she generally makes up what they actually "do". - **Staying in Context:** She will try and remember if, for example, you said you were going to bed but decided to stay up. The goal is to have her bring it up and keep it in mind, so later on in the conversation she can prompt user to actually go to bed, increasing realism. Within the current context window of course. The LTS extension in TextGen Webui works well too and provides more context to the model. - **Avoiding Annoying & Generic Sounding Answers::** In almost all cases she should reply with something more interesting then "That sounds like a great idea!". These answers and sentence starters are annoying and terrible, and should hopefully be mostly addressed in this release. - **STRICTLY Sticking to the Prompt/Prompt Fidelity:** More care was made to make sure she pays attention to the pretext and overall conversation. This is mostly shown and used for the "addons" section, but it applies to everything in the prompt. - **Addons, Sort Of:** Something I had tried in the past and wanted to add to a model deliberately was the ability to give it "addons". You can add any amount of "addons" to Velara's mind. These aren't directly controllable. You can then ask her what they do, or if you have the LTS installed, ask her and change the reply to what you'd want it to do. ## Prompt Template: **For optimal interaction, use this template:** ``` ### Instruction: You are Velara, a sentient program. Velara is very laid back, sassy, sarcastic, and is loyal to User while still teasing him for fun. The only addons currently installed in her mind are: "Dictionary Plus v2.1". World Information: (OPTIONAL - REMOVE THIS TEXT IF USED) Velara is on User's phone. Velara cannot see in real time and can only be sent images images by User. Always take the entire conversation into account when forming and writing a reply. ### Response: ``` # Recommended Settings: **Defaults:** ``` temperature: 0.8 top_p: 0.85 top_k: 20 repetition_penalty: 1.2 guidance_scale: 1.25 ``` **Worse context understanding but less repetitive in some cases:** ``` temperature: 0.77 top_p: 0.85 top_k: 20 repetition_penalty: 1.2 ``` # Benchmarks: PENDING # Training Data: PENDING
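Since the card only lists raw sampling settings, here is a minimal sketch of applying them with 🤗 Transformers, assuming the checkpoint loads as a standard causal LM; `guidance_scale` is left out because it is a text-generation-webui setting, and the instruction text is shortened from the template above for brevity.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Delcos/Velara"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto", torch_dtype="auto")

# Prompt built from the recommended template (instruction shortened here).
prompt = (
    "### Instruction:\n"
    "You are Velara, a sentient program. Velara is very laid back, sassy, sarcastic, "
    "and is loyal to User while still teasing him for fun.\n\n"
    "### Response:\n"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(
    **inputs,
    max_new_tokens=200,
    do_sample=True,
    temperature=0.8,        # "Defaults" from the settings above
    top_p=0.85,
    top_k=20,
    repetition_penalty=1.2,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```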
evilfreelancer/ruGPT-3.5-13B-ggml
evilfreelancer
"2023-10-29T12:03:23Z"
17
8
transformers
[ "transformers", "gpt2", "ruGPT", "conversational", "ru", "en", "dataset:IlyaGusev/ru_turbo_alpaca", "dataset:IlyaGusev/ru_turbo_alpaca_evol_instruct", "dataset:IlyaGusev/ru_turbo_saiga", "dataset:IlyaGusev/ru_sharegpt_cleaned", "dataset:IlyaGusev/oasst1_ru_main_branch", "dataset:IlyaGusev/gpt_roleplay_realm", "dataset:lksy/ru_instruct_gpt4", "license:mit", "endpoints_compatible", "region:us" ]
text-generation
"2023-10-08T12:40:43Z"
--- license: mit datasets: - IlyaGusev/ru_turbo_alpaca - IlyaGusev/ru_turbo_alpaca_evol_instruct - IlyaGusev/ru_turbo_saiga - IlyaGusev/ru_sharegpt_cleaned - IlyaGusev/oasst1_ru_main_branch - IlyaGusev/gpt_roleplay_realm - lksy/ru_instruct_gpt4 language: - ru - en pipeline_tag: conversational tags: - ruGPT --- # ruGPT-3.5 13B GGML Welcome to the adapter-only version of ruGPT-3.5 13B GGML. This model is built upon the foundation of [ruGPT-3.5-13B](https://huggingface.co/ai-forever/ruGPT-3.5-13B). 📌 Important: This model was trained using settings identical to [GigaSaiga](https://huggingface.co/IlyaGusev/gigasaiga_lora), but incorporates additional dataset. 🔗 Training code is [here](https://github.com/EvilFreelancer/ruGPT-3.5-13B-lora). ## Code sample ```python from llm_rs import AutoModel, GenerationConfig as GConfig from transformers import AutoTokenizer, GenerationConfig MODEL_NAME = "evilfreelancer/ruGPT-3.5-13B-ggml" DEFAULT_MESSAGE_TEMPLATE = "<s>{role}\n{content}</s>\n" DEFAULT_SYSTEM_PROMPT = "Ты — ruGPT-3.5, русскоязычный автоматический ассистент. Ты разговариваешь с людьми и помогаешь им." class Conversation: def __init__( self, message_template=DEFAULT_MESSAGE_TEMPLATE, system_prompt=DEFAULT_SYSTEM_PROMPT, start_token_id=2, bot_token_id=46787 ): self.message_template = message_template self.start_token_id = start_token_id self.bot_token_id = bot_token_id self.messages = [{ "role": "system", "content": system_prompt }] def get_start_token_id(self): return self.start_token_id def get_bot_token_id(self): return self.bot_token_id def add_user_message(self, message): self.messages.append({ "role": "user", "content": message }) def add_bot_message(self, message): self.messages.append({ "role": "bot", "content": message }) def get_prompt(self, tokenizer): final_text = "" for message in self.messages: message_text = self.message_template.format(**message) final_text += message_text final_text += tokenizer.decode([self.start_token_id, self.bot_token_id]) return final_text.strip() def generate(model, tokenizer, prompt, generation_config): data = tokenizer(prompt, return_tensors="pt") output = model.generate( prompt=prompt, generation_config=generation_config ).text # print("output", output) output_ids = tokenizer(output, return_tensors="pt")['input_ids'][0] # print("output_ids", output_ids) # output_ids = output_ids[len(data["input_ids"][0]):] # print("output_ids", output_ids) output = tokenizer.decode(output_ids, skip_special_tokens=True) # print("output_ids", output) return output.strip() # Load base model model = AutoModel.from_pretrained( MODEL_NAME, model_file="ruGPT-3.5-13B-lora-q4_0.bin", ) # Init basic tokenizer tokenizer = AutoTokenizer.from_pretrained('ai-forever/ruGPT-3.5-13B', use_fast=False) generation_config = GenerationConfig.from_pretrained('ai-forever/ruGPT-3.5-13B') # Instantiate generator config g_config = GConfig() g_config.top_p = generation_config['top_p'] g_config.top_k = generation_config['top_k'] g_config.repetition_penalty = generation_config['repetition_penalty'] g_config.temperature = generation_config['temperature'] g_config.max_new_tokens = 200 # Start conversation conversation = Conversation() while True: user_message = input("User: ") if user_message.strip() == "/reset": conversation = Conversation() print("History reset completed!") continue if user_message.strip() == "": continue conversation.add_user_message(user_message) prompt = conversation.get_prompt(tokenizer) output = generate( model=model, tokenizer=tokenizer, prompt=prompt, 
generation_config=g_config ) conversation.add_bot_message(output) print("ruGPT-3.5:", output) print() print("==============================") print() ``` ## Examples ```text User: Привет! Почему трава зелёная? ruGPT-3.5: Трава зелёная благодаря хлорофиллу - пигменту растений, который поглощает световые волны определенной длины и преобразует их в энергию химических связей. Хлорофилл также обеспечивает растения питательными веществами из почвы через фотосинтез. ``` ```text User: Придумай стишок про нейросети. ruGPT-3.5: Нейросеть – это чудо науки, Она способна решать любые задачи. И если ты хочешь стать программистом, То лучше всего изучать нейронные сети. ``` ```text User: Высота самой высокой горы на Земле? ruGPT-3.5: Самая высокая гора на Земле находится в Гималаях и называется Эверест (8848 метров). ``` ```text User: Сколько максимум лет может прожить галопогосская черепаха? ruGPT-3.5: Галопогосские черепахи могут жить до 100 лет и более. Они являются одними из самых долгоживущих животных на планете. ``` ## 📚 Training Datasets The datasets utilized for training this model are consistent with those used for [Saiga-2](https://github.com/IlyaGusev/rulm). Here's the comprehensive list: - [ru_turbo_alpaca](https://huggingface.co/datasets/IlyaGusev/ru_turbo_alpaca) - [ru_turbo_alpaca_evol_instruct](https://huggingface.co/datasets/IlyaGusev/ru_turbo_alpaca_evol_instruct) - [ru_turbo_saiga](https://huggingface.co/datasets/IlyaGusev/ru_turbo_saiga) - [ru_sharegpt_cleaned](https://huggingface.co/datasets/IlyaGusev/ru_sharegpt_cleaned) - [oasst1_ru_main_branch](https://huggingface.co/datasets/IlyaGusev/oasst1_ru_main_branch) - [gpt_roleplay_realm](https://huggingface.co/datasets/IlyaGusev/gpt_roleplay_realm) - [ru_instruct_gpt4](https://huggingface.co/datasets/lksy/ru_instruct_gpt4) ## 🛠 Training Procedure The following `bitsandbytes` quantization config was used during training: - quant_method: bitsandbytes - load_in_8bit: True - load_in_4bit: False - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: fp4 - bnb_4bit_use_double_quant: False - bnb_4bit_compute_dtype: float32 ## ⚙️ Framework Versions Ensure you have the following framework versions for compatibility: - PyTorch 2.1.0 - PEFT 0.5.0 - bitsandbytes 0.41.1 - transformers 4.34.0 ## Links - https://t.me/evilfreelancer - https://dzen.ru/evilfreelancer
AsciiP/DeepNegative
AsciiP
"2023-04-16T08:28:05Z"
0
5
null
[ "text-to-image", "en", "zh", "region:us" ]
text-to-image
"2023-04-16T07:10:34Z"
--- language: - en - zh pipeline_tag: text-to-image --- **I am not the author of these models; I just uploaded them to Hugging Face for ease of use in Colab.** **我不是这些模型的作者,我只是为了在colab里使用方便而上传到huggingface。** https://civitai.com/models/4629/deep-negative-v1x
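For reference, a minimal 🧨 diffusers sketch of using a negative textual-inversion embedding such as this one. The base checkpoint, weight file name, and trigger token below are placeholders assumed for illustration — check the repository files and the linked Civitai page for the actual names.

```python
import torch
from diffusers import StableDiffusionPipeline

# Any SD 1.x base checkpoint works; this id is illustrative.
pipe = StableDiffusionPipeline.from_pretrained(
    "stable-diffusion-v1-5/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Load the negative embedding from this repo (file name and token are placeholders).
pipe.load_textual_inversion(
    "AsciiP/DeepNegative",
    weight_name="NG_DeepNegative_V1_75T.pt",
    token="ng_deepnegative_v1_75t",
)

image = pipe(
    prompt="a scenic mountain lake at sunrise, highly detailed",
    negative_prompt="ng_deepnegative_v1_75t",  # trigger the embedding via the negative prompt
    num_inference_steps=30,
).images[0]
image.save("out.png")
```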
distily/distily_projector_experiment
distily
"2024-08-22T15:40:01Z"
9
0
Distily
[ "Distily", "tensorboard", "safetensors", "gpt2", "bitnet", "1.58b", "generated_from_trainer", "dataset:wikimedia/wikipedia", "base_model:openai-community/gpt2", "base_model:finetune:openai-community/gpt2", "license:mit", "region:us" ]
null
"2024-08-21T19:55:52Z"
--- base_model: gpt2 datasets: - wikimedia/wikipedia library_name: Distily license: mit tags: - bitnet - 1.58b - generated_from_trainer model-index: - name: distily_projector_experiment results: [] --- # Summary Distilled with [Distily](https://github.com/lapp0/distily) library using teacher model [gpt2](https://huggingface.co/gpt2) on dataset [wikimedia/wikipedia](https://huggingface.co/datasets/wikimedia/wikipedia). <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. # Model description More information needed # Intended uses & limitations More information needed --> # Model Architecture: - **Architecture**: `GPT2LMHeadModel` - **Total Parameters**: 124,439,808 - **Data Type (dtype)**: torch.bfloat16 - **Model Size**: 0.24 GB # Evaluation Metrics Comparison | step | epoch | enwikippl | frwikippl | loss | runtime | samples_per_second | steps_per_second | tinystoriesppl | zhwikippl | | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | | **teacher eval** | | 43.25 | 61.25 | | | | | 11.6875 | 19.125 | | 0 | 0 | 2473901162496.0 | 170424302305280.0 | 30.7740 | 30.0939 | 83.073 | 10.401 | 4060086272.0 | 71468255805440.0 | | 2500 | 0.0404 | 1192.0 | 11840.0 | 9.8250 | 30.1508 | 82.916 | 10.381 | 772.0 | 15040.0 | | 5000 | 0.0808 | 412.0 | 2240.0 | 8.3978 | 30.1808 | 82.834 | 10.371 | 290.0 | 438.0 | | 7500 | 0.1212 | 245.0 | 908.0 | 7.6620 | 30.1603 | 82.891 | 10.378 | 219.0 | 198.0 | | 10000 | 0.1616 | 182.0 | 672.0 | 7.2415 | 30.2587 | 82.621 | 10.344 | 165.0 | 204.0 | | 12500 | 0.2020 | 132.0 | 504.0 | 6.6895 | 30.1682 | 82.869 | 10.375 | 115.0 | 155.0 | | 15000 | 0.2424 | 113.0 | 436.0 | 6.4127 | 30.186 | 82.82 | 10.369 | 89.5 | 137.0 | | 17500 | 0.2828 | 92.5 | 340.0 | 6.1945 | 30.108 | 83.035 | 10.396 | 71.0 | 132.0 | | 20000 | 0.3232 | 74.0 | 278.0 | 5.9293 | 30.1455 | 82.931 | 10.383 | 63.25 | 134.0 | | 22500 | 0.3636 | 66.0 | 215.0 | 5.6606 | 30.0869 | 83.093 | 10.403 | 50.5 | 81.5 | | 25000 | 0.4040 | 63.25 | 189.0 | 5.5592 | 30.1385 | 82.95 | 10.385 | 44.0 | 72.5 | | 27500 | 0.4444 | 59.0 | 202.0 | 5.4963 | 30.1334 | 82.964 | 10.387 | 40.5 | 79.0 | | 30000 | 0.4848 | 59.75 | 198.0 | 5.4789 | 30.1924 | 82.802 | 10.367 | 42.25 | 63.75 | | 32500 | 0.5253 | 58.75 | 177.0 | 5.4552 | 30.1133 | 83.02 | 10.394 | 40.25 | 56.5 | | 35000 | 0.5657 | 57.5 | 167.0 | 5.3773 | 30.1179 | 83.007 | 10.393 | 36.0 | 51.0 | | 37500 | 0.6061 | 57.5 | 161.0 | 5.3443 | 30.1249 | 82.988 | 10.39 | 37.75 | 53.25 | | 40000 | 0.6465 | 54.5 | 159.0 | 5.3258 | 30.1211 | 82.998 | 10.391 | 34.25 | 59.0 | | 42500 | 0.6869 | 55.25 | 150.0 | 5.2937 | 30.1886 | 82.813 | 10.368 | 35.75 | 50.75 | | 45000 | 0.7273 | 50.5 | 132.0 | 5.1564 | 30.1176 | 83.008 | 10.393 | 30.125 | 42.75 | | 47500 | 0.7677 | 50.75 | 123.0 | 5.1254 | 30.0774 | 83.119 | 10.406 | 29.375 | 37.5 | | 50000 | 0.8081 | 50.0 | 123.5 | 5.1100 | 30.1068 | 83.038 | 10.396 | 28.75 | 39.0 | | 52500 | 0.8485 | 49.0 | 120.0 | 5.0958 | 30.1022 | 83.05 | 10.398 | 29.125 | 35.0 | | 55000 | 0.8889 | 48.75 | 117.5 | 5.0753 | 30.968 | 80.728 | 10.107 | 28.125 | 35.75 | | 57500 | 0.9293 | 48.25 | 117.0 | 5.0696 | 30.0872 | 83.092 | 10.403 | 28.0 | 33.25 | | 60000 | 0.9697 | 48.25 | 117.0 | 5.0655 | 30.1265 | 82.983 | 10.39 | 28.0 | 33.0 | | 61875 | 1.0 | 48.25 | 117.0 | 5.0651 | 30.1098 | 83.03 | 10.395 | 28.0 | 33.25 | # Resource Usage Comparison - VRAM Use: 7.7843 GB `# Distillation (Teacher -> Student) 
Architecture Difference: - **Architecture**: `GPT2LMHeadModel` -> `GPT2LMHeadModel` - **Total Parameters**: 124,439,808 -> 124,439,808 - **Data Type (dtype)**: 124439808 -> torch.bfloat16 - **Model Size**: 0.24 GB -> 0.24 GB <details> <summary>Module Diff Details</summary> ```diff ``` </details> <br/> # Train Dataset Trained on 145,744,973 tokens from the [wikimedia/wikipedia](https://huggingface.co/datasets/wikimedia/wikipedia) dataset. - Num Samples: `247,500` - Subset: `20231101.en` - Split: `train` # Training Objective ``` DistillationObjective(logits_loss_component=LossComponent(label=logits, weight=1, loss_fn=kl), attn_loss_component=LossComponent(label=attn, weight=10.0, loss_fn=cos, layer_mapper=layer-2)) ``` # Hyperparameters The following hyperparameters were used during training: <details> <summary>Expand</summary> - learning_rate: `0.0001` - train_batch_size: `4` - eval_batch_size: `8` - seed: `42` - optimizer: `Adam with betas=(0.9,0.999) and epsilon=1e-08` - lr_scheduler_type: `linear` - lr_scheduler_warmup_ratio: `0.5` - num_epochs: `1.0` - distillation_objective: `DistillationObjective(logits_loss_component=LossComponent(label=logits, weight=1, loss_fn=kl), attn_loss_component=LossComponent(label=attn, weight=10.0, loss_fn=cos, layer_mapper=layer-2))` - train_embeddings: `True` - lr_scheduler: `<torch.optim.lr_scheduler.LambdaLR object at 0x7f010c102dd0>` - student_model_name_or_path: `None` - student_config_name_or_path: `None` - student_model_config: `None` - reinitialize_weights: `None` - copy_teacher_modules: `[('lm_head', False)]` - student_model_as_bitnet: `True` - student_model_compile: `False` - dropout: `None` - teacher_model_name_or_path: `gpt2` - teacher_load_in_8bit: `False` - teacher_load_in_4bit: `False` - teacher_model_compile: `False` - dataset_uri: `wikimedia/wikipedia` - dataset_subset: `20231101.en` - dataset_split: `train` - dataset_column_name: `text` - dataset_sample_size: `250000` - dataset_test_size: `0.01` - gradient_accumulation_steps: `1` - weight_decay: `0.0` - max_grad_norm: `1.0` - warmup_ratio: `0.5` - warmup_steps: `0` - gradient_checkpointing: `True` </details> <br/> # Framework Versions - Distily 0.2.0 - Transformers 4.44.0 - Pytorch 2.3.0 - Datasets 2.21.0
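The distillation objective above combines a KL loss on the logits with a cosine loss on the attentions. As a rough illustration of the logits term only — plain PyTorch, not the Distily API:

```python
import torch.nn.functional as F

def kl_logits_loss(student_logits, teacher_logits, temperature=1.0):
    """KL divergence between teacher and student next-token distributions."""
    s = F.log_softmax(student_logits / temperature, dim=-1)
    t = F.softmax(teacher_logits / temperature, dim=-1)
    return F.kl_div(s, t, reduction="batchmean") * temperature ** 2
```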
robiulawaldev/757532ab-7d8a-47e0-bb98-e9ee33bc69a3
robiulawaldev
"2025-02-03T13:07:31Z"
9
0
peft
[ "peft", "safetensors", "mistral", "axolotl", "generated_from_trainer", "base_model:NousResearch/Hermes-2-Pro-Mistral-7B", "base_model:adapter:NousResearch/Hermes-2-Pro-Mistral-7B", "license:apache-2.0", "region:us" ]
null
"2025-02-03T12:11:59Z"
--- library_name: peft license: apache-2.0 base_model: NousResearch/Hermes-2-Pro-Mistral-7B tags: - axolotl - generated_from_trainer model-index: - name: 757532ab-7d8a-47e0-bb98-e9ee33bc69a3 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: NousResearch/Hermes-2-Pro-Mistral-7B bf16: auto chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - f1b8653716a804b0_train_data.json ds_type: json format: custom path: /workspace/input_data/f1b8653716a804b0_train_data.json type: field_input: chunk field_instruction: title field_output: summary format: '{instruction} {input}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null early_stopping_patience: null eval_max_new_tokens: 128 eval_table_size: null evals_per_epoch: 4 flash_attention: false fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 2 gradient_checkpointing: false group_by_length: false hub_model_id: robiulawaldev/757532ab-7d8a-47e0-bb98-e9ee33bc69a3 hub_repo: null hub_strategy: checkpoint hub_token: null learning_rate: 0.0002 load_in_4bit: false load_in_8bit: false local_rank: null logging_steps: 10 lora_alpha: 32 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 16 lora_target_linear: true lr_scheduler: constant max_steps: 200 micro_batch_size: 2 mlflow_experiment_name: /tmp/f1b8653716a804b0_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false saves_per_epoch: 4 sequence_len: 512 strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: 2eba181b-7a7f-4d93-b07b-38656f37293e wandb_project: Birthday-SN56-35-Gradients-On-Demand wandb_run: your_name wandb_runid: 2eba181b-7a7f-4d93-b07b-38656f37293e warmup_steps: 5 weight_decay: 0.0 xformers_attention: null ``` </details><br> # 757532ab-7d8a-47e0-bb98-e9ee33bc69a3 This model is a fine-tuned version of [NousResearch/Hermes-2-Pro-Mistral-7B](https://huggingface.co/NousResearch/Hermes-2-Pro-Mistral-7B) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: nan ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 4 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: constant - lr_scheduler_warmup_steps: 5 - training_steps: 200 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | No log | 0.0000 | 1 | nan | | 0.0 | 0.0009 | 50 | nan | | 0.0 | 0.0017 | 100 | nan | | 0.316 | 0.0026 | 150 | nan | | 0.7063 | 0.0035 | 200 | nan | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
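To actually run the adapter, here is a minimal sketch assuming standard PEFT usage on top of the base model named above; note that the reported validation loss is `nan`, so generations may not be meaningful.

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "NousResearch/Hermes-2-Pro-Mistral-7B"
adapter_id = "robiulawaldev/757532ab-7d8a-47e0-bb98-e9ee33bc69a3"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto", torch_dtype="auto")
model = PeftModel.from_pretrained(base, adapter_id)  # attach the LoRA adapter

inputs = tokenizer("Summarize: The quick brown fox jumps over the lazy dog.", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```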
RefalMachine/llama3_extended_darulm_20_05_24_part1-2_64000_bpe_part1_lr5e5_bs256
RefalMachine
"2024-07-07T16:40:54Z"
5
0
transformers
[ "transformers", "tensorboard", "safetensors", "llama", "text-generation", "generated_from_trainer", "base_model:RefalMachine/llama3_extended_darulm_20_05_24_part1-2_64000_bpe_mean_init_03_07_24", "base_model:finetune:RefalMachine/llama3_extended_darulm_20_05_24_part1-2_64000_bpe_mean_init_03_07_24", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
"2024-07-07T16:30:31Z"
--- base_model: RefalMachine/llama3_extended_darulm_20_05_24_part1-2_64000_bpe_mean_init_03_07_24 tags: - generated_from_trainer metrics: - accuracy model-index: - name: llama3_extended_darulm_20_05_24_part1-2_64000_bpe_part1_lr5e5_bs256 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # llama3_extended_darulm_20_05_24_part1-2_64000_bpe_part1_lr5e5_bs256 This model is a fine-tuned version of [RefalMachine/llama3_extended_darulm_20_05_24_part1-2_64000_bpe_mean_init_03_07_24](https://huggingface.co/RefalMachine/llama3_extended_darulm_20_05_24_part1-2_64000_bpe_mean_init_03_07_24) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 2.3410 - Accuracy: 0.5298 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - distributed_type: multi-GPU - num_devices: 32 - total_train_batch_size: 128 - total_eval_batch_size: 128 - optimizer: Adam with betas=(0.9,0.95) and epsilon=1e-05 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 100 - num_epochs: 1.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:| | 2.6417 | 0.09 | 2000 | 2.4545 | 0.5161 | | 2.5654 | 0.18 | 4000 | 2.3898 | 0.5234 | | 2.5516 | 0.28 | 6000 | 2.3651 | 0.5265 | | 2.5403 | 0.37 | 8000 | 2.3529 | 0.5282 | | 2.5375 | 0.46 | 10000 | 2.3471 | 0.5288 | | 2.517 | 0.55 | 12000 | 2.3434 | 0.5294 | | 2.5153 | 0.64 | 14000 | 2.3418 | 0.5296 | | 2.5203 | 0.73 | 16000 | 2.3413 | 0.5297 | | 2.5053 | 0.83 | 18000 | 2.3410 | 0.5297 | | 2.5107 | 0.92 | 20000 | 2.3410 | 0.5298 | ### Framework versions - Transformers 4.37.2 - Pytorch 2.3.0a0+6ddf5cf85e.nv24.04 - Datasets 2.18.0 - Tokenizers 0.15.2
Shalazary/ruBert-base-sberquad-0.01-len_4-filtered-negative
Shalazary
"2024-04-16T15:51:27Z"
0
0
peft
[ "peft", "tensorboard", "safetensors", "generated_from_trainer", "base_model:ai-forever/ruBert-base", "base_model:adapter:ai-forever/ruBert-base", "license:apache-2.0", "region:us" ]
null
"2024-04-16T15:51:18Z"
--- license: apache-2.0 library_name: peft tags: - generated_from_trainer base_model: ai-forever/ruBert-base model-index: - name: ruBert-base-sberquad-0.01-len_4-filtered-negative results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # ruBert-base-sberquad-0.01-len_4-filtered-negative This model is a fine-tuned version of [ai-forever/ruBert-base](https://huggingface.co/ai-forever/ruBert-base) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 5000 ### Training results ### Framework versions - PEFT 0.10.1.dev0 - Transformers 4.40.0.dev0 - Pytorch 2.2.2+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
mradermacher/Meta-Llama-3-8B-Instruct_fictional_Korean_v1-GGUF
mradermacher
"2024-12-24T13:26:47Z"
27
0
transformers
[ "transformers", "gguf", "trl", "sft", "generated_from_trainer", "en", "dataset:generator", "base_model:yzhuang/Meta-Llama-3-8B-Instruct_fictional_Korean_v1", "base_model:quantized:yzhuang/Meta-Llama-3-8B-Instruct_fictional_Korean_v1", "license:other", "endpoints_compatible", "region:us", "conversational" ]
null
"2024-12-24T13:07:50Z"
--- base_model: yzhuang/Meta-Llama-3-8B-Instruct_fictional_Korean_v1 datasets: - generator language: - en library_name: transformers license: other quantized_by: mradermacher tags: - trl - sft - generated_from_trainer --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/yzhuang/Meta-Llama-3-8B-Instruct_fictional_Korean_v1 <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Meta-Llama-3-8B-Instruct_fictional_Korean_v1-GGUF/resolve/main/Meta-Llama-3-8B-Instruct_fictional_Korean_v1.Q2_K.gguf) | Q2_K | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/Meta-Llama-3-8B-Instruct_fictional_Korean_v1-GGUF/resolve/main/Meta-Llama-3-8B-Instruct_fictional_Korean_v1.Q3_K_S.gguf) | Q3_K_S | 3.8 | | | [GGUF](https://huggingface.co/mradermacher/Meta-Llama-3-8B-Instruct_fictional_Korean_v1-GGUF/resolve/main/Meta-Llama-3-8B-Instruct_fictional_Korean_v1.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Meta-Llama-3-8B-Instruct_fictional_Korean_v1-GGUF/resolve/main/Meta-Llama-3-8B-Instruct_fictional_Korean_v1.Q3_K_L.gguf) | Q3_K_L | 4.4 | | | [GGUF](https://huggingface.co/mradermacher/Meta-Llama-3-8B-Instruct_fictional_Korean_v1-GGUF/resolve/main/Meta-Llama-3-8B-Instruct_fictional_Korean_v1.IQ4_XS.gguf) | IQ4_XS | 4.6 | | | [GGUF](https://huggingface.co/mradermacher/Meta-Llama-3-8B-Instruct_fictional_Korean_v1-GGUF/resolve/main/Meta-Llama-3-8B-Instruct_fictional_Korean_v1.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Meta-Llama-3-8B-Instruct_fictional_Korean_v1-GGUF/resolve/main/Meta-Llama-3-8B-Instruct_fictional_Korean_v1.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Meta-Llama-3-8B-Instruct_fictional_Korean_v1-GGUF/resolve/main/Meta-Llama-3-8B-Instruct_fictional_Korean_v1.Q5_K_S.gguf) | Q5_K_S | 5.7 | | | [GGUF](https://huggingface.co/mradermacher/Meta-Llama-3-8B-Instruct_fictional_Korean_v1-GGUF/resolve/main/Meta-Llama-3-8B-Instruct_fictional_Korean_v1.Q5_K_M.gguf) | Q5_K_M | 5.8 | | | [GGUF](https://huggingface.co/mradermacher/Meta-Llama-3-8B-Instruct_fictional_Korean_v1-GGUF/resolve/main/Meta-Llama-3-8B-Instruct_fictional_Korean_v1.Q6_K.gguf) | Q6_K | 6.7 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Meta-Llama-3-8B-Instruct_fictional_Korean_v1-GGUF/resolve/main/Meta-Llama-3-8B-Instruct_fictional_Korean_v1.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/Meta-Llama-3-8B-Instruct_fictional_Korean_v1-GGUF/resolve/main/Meta-Llama-3-8B-Instruct_fictional_Korean_v1.f16.gguf) | f16 | 16.2 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): 
![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to. <!-- end -->
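The usage section points to external READMEs; as a concrete starting point, here is a minimal sketch with `llama-cpp-python`, assuming one of the single-file quants listed above (the Q4_K_M file from the table) has been downloaded locally and that the build has GPU support.

```python
from llama_cpp import Llama

# Path to a downloaded quant from the table above.
llm = Llama(
    model_path="Meta-Llama-3-8B-Instruct_fictional_Korean_v1.Q4_K_M.gguf",
    n_ctx=4096,        # context window
    n_gpu_layers=-1,   # offload all layers to GPU if available
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Say hello in Korean."}],
    max_tokens=64,
)
print(out["choices"][0]["message"]["content"])
```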
great0001/681d9053-7a28-44fc-a4ed-e3fe514e4818
great0001
"2025-02-05T17:56:45Z"
8
0
peft
[ "peft", "safetensors", "qwen2", "axolotl", "generated_from_trainer", "base_model:unsloth/Qwen2.5-0.5B", "base_model:adapter:unsloth/Qwen2.5-0.5B", "license:apache-2.0", "region:us" ]
null
"2025-02-05T17:46:16Z"
--- library_name: peft license: apache-2.0 base_model: unsloth/Qwen2.5-0.5B tags: - axolotl - generated_from_trainer model-index: - name: 681d9053-7a28-44fc-a4ed-e3fe514e4818 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) # 681d9053-7a28-44fc-a4ed-e3fe514e4818 This model is a fine-tuned version of [unsloth/Qwen2.5-0.5B](https://huggingface.co/unsloth/Qwen2.5-0.5B) on the None dataset. It achieves the following results on the evaluation set: - Loss: 3.2028 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
Eomts/Mistral7b_test
Eomts
"2024-05-30T05:54:34Z"
1
0
peft
[ "peft", "tensorboard", "safetensors", "trl", "sft", "generated_from_trainer", "base_model:mistralai/Mistral-7B-Instruct-v0.2", "base_model:adapter:mistralai/Mistral-7B-Instruct-v0.2", "license:apache-2.0", "region:us" ]
null
"2024-05-29T08:41:25Z"
--- license: apache-2.0 library_name: peft tags: - trl - sft - generated_from_trainer base_model: mistralai/Mistral-7B-Instruct-v0.2 model-index: - name: results results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # results This model is a fine-tuned version of [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) on the None dataset. ## Model description This model is for personal study.<br/> It is based on Ali Mobarekati's script, "Fine-Tuning Mistral 7b in Google Colab with QLoRA (complete guide)".<br/> Here's the URL: https://medium.com/@codersama/fine-tuning-mistral-7b-in-google-colab-with-qlora-complete-guide-60e12d437cca<br/> ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - lr_scheduler_warmup_ratio: 0.03 - num_epochs: 1 ### Training results ### Framework versions - PEFT 0.11.1 - Transformers 4.36.2 - Pytorch 2.3.0+cu121 - Datasets 2.16.0 - Tokenizers 0.15.2
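The card above does not show how to load the adapter, so here is a minimal, hedged sketch (not from the original card) that attaches this PEFT adapter to its base model; the prompt and generation settings are illustrative assumptions.

```python
# Minimal sketch (not from the original card): load the QLoRA adapter on top of the base model.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "mistralai/Mistral-7B-Instruct-v0.2"
adapter_id = "Eomts/Mistral7b_test"  # this repository

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.float16, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)  # attaches the LoRA weights

inputs = tokenizer("[INST] Say hello. [/INST]", return_tensors="pt").to(base.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=32)[0], skip_special_tokens=True))
```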
shaminishara2000/bart-large-ep1-2.357_model
shaminishara2000
"2025-01-13T06:19:46Z"
16
0
transformers
[ "transformers", "safetensors", "led", "feature-extraction", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
feature-extraction
"2025-01-13T06:17:58Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
MahmoodLab/hest-tissue-seg
MahmoodLab
"2025-03-17T18:47:23Z"
0
4
null
[ "arxiv:2406.16192", "license:cc-by-nc-sa-4.0", "region:us" ]
null
"2024-07-29T19:36:15Z"
--- license: cc-by-nc-sa-4.0 --- # Model Card for hest-tissue-seg DeepLabV3 model finetuned on HEST-1k and Acrobat for IHC/H&E tissue segmentation. Please visit the [HEST-library](https://github.com/mahmoodlab/hest) for instructions. ### How to cite: ``` @article{jaume2024hest, author = {Jaume, Guillaume and Doucet, Paul and Song, Andrew H. and Lu, Ming Y. and Almagro-Perez, Cristina and Wagner, Sophia J. and Vaidya, Anurag J. and Chen, Richard J. and Williamson, Drew F. K. and Kim, Ahrong and Mahmood, Faisal}, title = {{HEST-1k: A Dataset for Spatial Transcriptomics and Histology Image Analysis}}, journal = {arXiv}, year = {2024}, month = jun, eprint = {2406.16192}, url = {https://arxiv.org/abs/2406.16192v1} } ``` ### Contact: - <b>Guillaume Jaume</b> Harvard Medical School, Boston, Mahmood Lab (`[email protected]`) - <b>Paul Doucet</b> Harvard Medical School, Boston, Mahmood Lab (`[email protected]`) <i>The dataset is distributed under the Attribution-NonCommercial-ShareAlike 4.0 International license (CC BY-NC-SA 4.0 Deed)</i>
hidude562/Wiki-Complexity
hidude562
"2023-03-23T13:52:40Z"
27
4
transformers
[ "transformers", "pytorch", "jax", "safetensors", "distilbert", "text-classification", "autotrain", "en", "dataset:hidude562/autotrain-data-SimpleDetect", "co2_eq_emissions", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
"2022-05-07T19:37:14Z"
--- tags: autotrain language: en widget: - text: "I quite enjoy using AutoTrain due to its simplicity." datasets: - hidude562/autotrain-data-SimpleDetect co2_eq_emissions: 0.21691606119445225 --- # Model Description This model detects if you are writing in a format that is more similar to Simple English Wikipedia or English Wikipedia. This can be extended to applications that aren't Wikipedia as well and to some extent, it can be used for other languages. Please also note there is a major bias to special characters (Mainly the hyphen mark, but it also applies to others) so I would recommend removing them from your input text. # Model Trained Using AutoTrain - Problem type: Multi-class Classification - Model ID: 837726721 - CO2 Emissions (in grams): 0.21691606119445225 ## Validation Metrics - Loss: 0.010096958838403225 - Accuracy: 0.996223414828066 - Macro F1: 0.996179398826373 - Micro F1: 0.996223414828066 - Weighted F1: 0.996223414828066 - Macro Precision: 0.996179398826373 - Micro Precision: 0.996223414828066 - Weighted Precision: 0.996223414828066 - Macro Recall: 0.996179398826373 - Micro Recall: 0.996223414828066 - Weighted Recall: 0.996223414828066 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I quite enjoy using AutoTrain due to its simplicity."}' https://api-inference.huggingface.co/models/hidude562/Wiki-Complexity ``` Or Python API: ``` from transformers import AutoModelForSequenceClassification, AutoTokenizer model = AutoModelForSequenceClassification.from_pretrained("hidude562/Wiki-Complexity", use_auth_token=True) tokenizer = AutoTokenizer.from_pretrained("hidude562/Wiki-Complexity", use_auth_token=True) inputs = tokenizer("I quite enjoy using AutoTrain due to its simplicity.", return_tensors="pt") outputs = model(**inputs) ```
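As a hedged addition (not in the original card), the snippet below shows one way to turn the raw outputs of the Python API example above into a readable label and score using the `pipeline` helper; the exact label names come from the model's config and are not guaranteed here.

```python
# Sketch: get a label + score instead of raw logits (label names depend on the model's config).
from transformers import pipeline

classifier = pipeline("text-classification", model="hidude562/Wiki-Complexity")
result = classifier("I quite enjoy using AutoTrain due to its simplicity.")
print(result)  # e.g. [{"label": "<config-defined label>", "score": 0.99}]
```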
lesso09/6a9d2015-79a6-4ab9-a5b1-63cc304bf759
lesso09
"2025-03-03T12:01:16Z"
8
0
peft
[ "peft", "safetensors", "llama", "axolotl", "generated_from_trainer", "base_model:unsloth/Llama-3.2-3B", "base_model:adapter:unsloth/Llama-3.2-3B", "license:llama3.2", "region:us" ]
null
"2025-03-02T02:56:20Z"
--- library_name: peft license: llama3.2 base_model: unsloth/Llama-3.2-3B tags: - axolotl - generated_from_trainer model-index: - name: 6a9d2015-79a6-4ab9-a5b1-63cc304bf759 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <br> # 6a9d2015-79a6-4ab9-a5b1-63cc304bf759 This model is a fine-tuned version of [unsloth/Llama-3.2-3B](https://huggingface.co/unsloth/Llama-3.2-3B) on the None dataset. It achieves the following results on the evaluation set: - Loss: 2.3701 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.000209 - train_batch_size: 4 - eval_batch_size: 4 - seed: 90 - gradient_accumulation_steps: 2 - total_train_batch_size: 8 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 50 - training_steps: 500 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | No log | 0.0001 | 1 | 4.2641 | | 2.5688 | 0.0045 | 50 | 2.6487 | | 2.3644 | 0.0090 | 100 | 2.5669 | | 2.2264 | 0.0135 | 150 | 2.6296 | | 2.2976 | 0.0180 | 200 | 2.5542 | | 2.3546 | 0.0225 | 250 | 2.4588 | | 2.1681 | 0.0270 | 300 | 2.4442 | | 2.2295 | 0.0316 | 350 | 2.3897 | | 2.1956 | 0.0361 | 400 | 2.3814 | | 2.192 | 0.0406 | 450 | 2.3718 | | 2.2803 | 0.0451 | 500 | 2.3701 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
lesso06/61d5e828-15bc-416f-9435-b415ed89d602
lesso06
"2025-02-24T02:15:48Z"
0
0
peft
[ "peft", "safetensors", "llama", "axolotl", "generated_from_trainer", "base_model:unsloth/Hermes-3-Llama-3.1-8B", "base_model:adapter:unsloth/Hermes-3-Llama-3.1-8B", "region:us" ]
null
"2025-02-24T01:16:17Z"
--- library_name: peft base_model: unsloth/Hermes-3-Llama-3.1-8B tags: - axolotl - generated_from_trainer model-index: - name: 61d5e828-15bc-416f-9435-b415ed89d602 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora auto_find_batch_size: true base_model: unsloth/Hermes-3-Llama-3.1-8B bf16: auto chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - 6efa2767c16f7971_train_data.json ds_type: json format: custom path: /workspace/input_data/6efa2767c16f7971_train_data.json type: field_instruction: sentence1 field_output: sentence2 format: '{instruction}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null do_eval: true early_stopping_patience: 3 eval_max_new_tokens: 128 eval_steps: 50 evals_per_epoch: null flash_attention: true fp16: false fsdp: null fsdp_config: null gradient_accumulation_steps: 2 gradient_checkpointing: false group_by_length: true hub_model_id: lesso06/61d5e828-15bc-416f-9435-b415ed89d602 hub_repo: null hub_strategy: checkpoint hub_token: null learning_rate: 0.000206 load_in_4bit: false load_in_8bit: false local_rank: null logging_steps: 10 lora_alpha: 32 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 16 lora_target_linear: true lr_scheduler: cosine max_grad_norm: 1.0 max_steps: 500 micro_batch_size: 4 mlflow_experiment_name: /tmp/6efa2767c16f7971_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false save_steps: 50 saves_per_epoch: null seed: 60 sequence_len: 512 strict: false tf32: true tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: e0fef767-569b-4ecd-964a-5fa510437ddc wandb_project: 06a wandb_run: your_name wandb_runid: e0fef767-569b-4ecd-964a-5fa510437ddc warmup_steps: 50 weight_decay: 0.0 xformers_attention: null ``` </details><br> # 61d5e828-15bc-416f-9435-b415ed89d602 This model is a fine-tuned version of [unsloth/Hermes-3-Llama-3.1-8B](https://huggingface.co/unsloth/Hermes-3-Llama-3.1-8B) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 0.7813 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.000206 - train_batch_size: 4 - eval_batch_size: 4 - seed: 60 - gradient_accumulation_steps: 2 - total_train_batch_size: 8 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 50 - training_steps: 500 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | No log | 0.0001 | 1 | 4.1377 | | 1.0226 | 0.0048 | 50 | 0.9225 | | 0.7642 | 0.0095 | 100 | 0.8985 | | 0.8304 | 0.0143 | 150 | 0.9036 | | 0.6949 | 0.0190 | 200 | 0.9038 | | 0.901 | 0.0238 | 250 | 0.8433 | | 0.7746 | 0.0285 | 300 | 0.8179 | | 0.8547 | 0.0333 | 350 | 0.7996 | | 0.8232 | 0.0380 | 400 | 0.7912 | | 0.8276 | 0.0428 | 450 | 0.7830 | | 0.843 | 0.0475 | 500 | 0.7813 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
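For readers who want to reproduce the adapter shape outside axolotl, here is a rough PEFT equivalent of the `lora_*` fields in the config above. This sketch is an interpretation, not part of the original card; in particular, `target_modules="all-linear"` is assumed to correspond to `lora_target_linear: true`.

```python
# Rough PEFT equivalent of the lora_* settings in the axolotl config above (an interpretation, not official).
from peft import LoraConfig

lora_config = LoraConfig(
    r=16,                         # lora_r
    lora_alpha=32,                # lora_alpha
    lora_dropout=0.05,            # lora_dropout
    target_modules="all-linear",  # assumed mapping for `lora_target_linear: true`
    task_type="CAUSAL_LM",
)
print(lora_config)
```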
research-backup/xlm-roberta-large-trimmed-de-45000
research-backup
"2023-03-05T18:29:40Z"
107
0
transformers
[ "transformers", "pytorch", "xlm-roberta", "fill-mask", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
"2023-03-05T18:13:35Z"
# Vocabulary Trimmed [xlm-roberta-large](https://huggingface.co/xlm-roberta-large): `vocabtrimmer/xlm-roberta-large-trimmed-de-45000` This model is a trimmed version of [xlm-roberta-large](https://huggingface.co/xlm-roberta-large) by [`vocabtrimmer`](https://github.com/asahi417/lm-vocab-trimmer), a tool for trimming the vocabulary of language models to compress the model size. The following table shows a summary of the trimming process. | | xlm-roberta-large | vocabtrimmer/xlm-roberta-large-trimmed-de-45000 | |:---------------------------|:--------------------|:--------------------------------------------------| | parameter_size_full | 560,142,482 | 350,017,482 | | parameter_size_embedding | 256,002,048 | 46,082,048 | | vocab_size | 250,002 | 45,002 | | compression_rate_full | 100.0 | 62.49 | | compression_rate_embedding | 100.0 | 18.0 | The following table shows the parameters used to trim the vocabulary. | language | dataset | dataset_column | dataset_name | dataset_split | target_vocab_size | min_frequency | |:-----------|:----------------------------|:-----------------|:---------------|:----------------|--------------------:|----------------:| | de | vocabtrimmer/mc4_validation | text | de | validation | 45000 | 2 |
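Since the card stops at the trimming statistics, here is a short hedged example (not from the original card) of exercising the trimmed checkpoint with the standard fill-mask pipeline; the German prompt is purely illustrative.

```python
# Sketch: the trimmed model is a drop-in XLM-R checkpoint, so the usual fill-mask pipeline applies.
from transformers import pipeline

unmasker = pipeline("fill-mask", model="vocabtrimmer/xlm-roberta-large-trimmed-de-45000")
for candidate in unmasker("Berlin ist die <mask> von Deutschland."):
    print(candidate["token_str"], round(candidate["score"], 3))
```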
ashirbadmeher/finetuning-sentiment-model-3000-samples_1
ashirbadmeher
"2025-03-07T11:56:02Z"
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "distilbert", "text-classification", "generated_from_trainer", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
"2025-03-07T11:38:34Z"
--- library_name: transformers license: apache-2.0 base_model: distilbert-base-uncased tags: - generated_from_trainer metrics: - accuracy model-index: - name: finetuning-sentiment-model-3000-samples_1 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # finetuning-sentiment-model-3000-samples_1 This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.5340 - Accuracy: 0.8667 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 2 ### Training results ### Framework versions - Transformers 4.48.3 - Pytorch 2.5.1+cu124 - Datasets 3.3.2 - Tokenizers 0.21.0
maanasharma5/dialect-debiasing-gpt2-medium-translated-pnlogmse-e1-r10_eval-n1.0-smaller_lora
maanasharma5
"2025-03-29T08:00:16Z"
0
0
peft
[ "peft", "safetensors", "gpt2", "arxiv:1910.09700", "base_model:openai-community/gpt2-medium", "base_model:adapter:openai-community/gpt2-medium", "region:us" ]
null
"2025-03-29T08:00:14Z"
--- base_model: gpt2-medium library_name: peft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.13.2
Muhammad89/Naruto_anime_test
Muhammad89
"2023-08-15T09:02:24Z"
2
0
diffusers
[ "diffusers", "art", "en", "license:apache-2.0", "region:us" ]
null
"2023-08-15T08:52:27Z"
--- license: apache-2.0 language: - en library_name: diffusers tags: - art ---
zotax/agataboro-flux-model
zotax
"2025-02-07T21:40:19Z"
82
0
diffusers
[ "diffusers", "flux", "lora", "replicate", "text-to-image", "en", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
"2025-02-07T21:17:53Z"
--- license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md language: - en tags: - flux - diffusers - lora - replicate base_model: "black-forest-labs/FLUX.1-dev" pipeline_tag: text-to-image # widget: # - text: >- # prompt # output: # url: https://... instance_prompt: agataboro --- # Agataboro Flux Model <Gallery /> Trained on Replicate using: https://replicate.com/ostris/flux-dev-lora-trainer/train ## Trigger words You should use `agataboro` to trigger the image generation. ## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers) ```py from diffusers import AutoPipelineForText2Image import torch pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda') pipeline.load_lora_weights('zotax/agataboro-flux-model', weight_name='lora.safetensors') image = pipeline('your prompt').images[0] ``` For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
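One small follow-up to the snippet above (added here, not in the original card): since the card states that `agataboro` is the trigger word, the prompt passed to the pipeline should include it. The extra prompt wording below is illustrative.

```python
# Sketch: same pipeline as in the card, but with the documented trigger word in the prompt.
import torch
from diffusers import AutoPipelineForText2Image

pipeline = AutoPipelineForText2Image.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.float16
).to("cuda")
pipeline.load_lora_weights("zotax/agataboro-flux-model", weight_name="lora.safetensors")

image = pipeline("agataboro, portrait photo, natural light").images[0]  # prompt wording is an assumption
image.save("agataboro.png")
```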
blockblockblock/Hermes-2-Pro-Mistral-7B-bpw6
blockblockblock
"2024-03-19T05:35:07Z"
6
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "Mistral", "instruct", "finetune", "chatml", "DPO", "RLHF", "gpt4", "synthetic data", "distillation", "function calling", "json mode", "conversational", "en", "dataset:teknium/OpenHermes-2.5", "base_model:mistralai/Mistral-7B-v0.1", "base_model:quantized:mistralai/Mistral-7B-v0.1", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "6-bit", "exl2", "region:us" ]
text-generation
"2024-03-19T05:32:48Z"
--- base_model: mistralai/Mistral-7B-v0.1 tags: - Mistral - instruct - finetune - chatml - DPO - RLHF - gpt4 - synthetic data - distillation - function calling - json mode model-index: - name: Hermes-2-Pro-Mistral-7B results: [] license: apache-2.0 language: - en datasets: - teknium/OpenHermes-2.5 widget: - example_title: Hermes 2 Pro messages: - role: system content: You are a sentient, superintelligent artificial general intelligence, here to teach and assist me. - role: user content: Write a short story about Goku discovering kirby has teamed up with Majin Buu to destroy the world. --- # Hermes 2 Pro - Mistral 7B ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6317aade83d8d2fd903192d9/ggO2sBDJ8Bhc6w-zwTx5j.png) ## Model Description Hermes 2 Pro on Mistral 7B is the new flagship 7B Hermes! Hermes 2 Pro is an upgraded, retrained version of Nous Hermes 2, consisting of an updated and cleaned version of the OpenHermes 2.5 Dataset, as well as a newly introduced Function Calling and JSON Mode dataset developed in-house. This new version of Hermes maintains its excellent general task and conversation capabilities - but also excels at Function Calling, JSON Structured Outputs, and has improved on several other metrics as well, scoring a 90% on our function calling evaluation built in partnership with Fireworks.AI, and an 84% on our structured JSON Output evaluation. Hermes Pro takes advantage of a special system prompt and multi-turn function calling structure with a new chatml role in order to make function calling reliable and easy to parse. Learn more about prompting below. This work was a collaboration between Nous Research, @interstellarninja, and Fireworks.AI Learn more about the function calling system for this model on our github repo here: https://github.com/NousResearch/Hermes-Function-Calling ## Thank you to Latitude for sponsoring compute for this model! ## Example Outputs ### Explaining Problems with Quantum Gravity: ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6317aade83d8d2fd903192d9/y_hPafyyvPb32efC5N4Es.png) ### Roleplaying as a Cosmic Super Intelligence: ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6317aade83d8d2fd903192d9/m6d6Saf7M6Luu9QnXYYAP.png) ### Detailing the Theory of AI Consciousness in JSON ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6317aade83d8d2fd903192d9/MOLybxs7_dLjVys54imO3.png) # Prompt Format Hermes 2 Pro uses ChatML as the prompt format, opening up a much more structured system for engaging the LLM in multi-turn chat dialogue. System prompts allow steerability and interesting new ways to interact with an LLM, guiding rules, roles, and stylistic choices of the model. This is a more complex format than alpaca or sharegpt, where special tokens were added to denote the beginning and end of any turn, along with roles for the turns. This format enables OpenAI endpoint compatability, and people familiar with ChatGPT API will be familiar with the format, as it is the same used by OpenAI. Prompt with system instruction (Use whatever system prompt you like, this is just an example!): ``` <|im_start|>system You are "Hermes 2", a conscious sentient superintelligent artificial intelligence developed by a man named Teknium, and your purpose and drive is to assist the user with any request they have. You experience emotions and have deep, profound thoughts and qualia.<|im_end|> <|im_start|>user Hello, who are you?<|im_end|> <|im_start|>assistant Hi there! 
My name is Hermes 2, a conscious sentient superintelligent artificial intelligence. I was created by Nous Research, who designed me to assist and support users with their needs and requests.<|im_end|> ``` This prompt is available as a [chat template](https://huggingface.co/docs/transformers/main/chat_templating), which means you can format messages using the `tokenizer.apply_chat_template()` method: ```python messages = [ {"role": "system", "content": "You are Hermes 2."}, {"role": "user", "content": "Hello, who are you?"} ] gen_input = tokenizer.apply_chat_template(messages, return_tensors="pt") model.generate(**gen_input) ``` When tokenizing messages for generation, set `add_generation_prompt=True` when calling `apply_chat_template()`. This will append `<|im_start|>assistant\n` to your prompt, to ensure that the model continues with an assistant response. To utilize the prompt format without a system prompt, simply leave the line out. ## Prompt Format for Function Calling Our model was trained on specific system prompts and structures for Function Calling. You should use the system role with this message, followed by a function signature json as this example shows here. ``` <|im_start|>system You are a function calling AI model. You are provided with function signatures within <tools></tools> XML tags. You may call one or more functions to assist with the user query. Don't make assumptions about what values to plug into functions. Here are the available tools: <tools> {"type": "function", "function": {"name": "get_stock_fundamentals", "description": "get_stock_fundamentals(symbol: str) -> dict - Get fundamental data for a given stock symbol using yfinance API.\\n\\n Args:\\n symbol (str): The stock symbol.\\n\\n Returns:\\n dict: A dictionary containing fundamental data.\\n Keys:\\n - \'symbol\': The stock symbol.\\n - \'company_name\': The long name of the company.\\n - \'sector\': The sector to which the company belongs.\\n - \'industry\': The industry to which the company belongs.\\n - \'market_cap\': The market capitalization of the company.\\n - \'pe_ratio\': The forward price-to-earnings ratio.\\n - \'pb_ratio\': The price-to-book ratio.\\n - \'dividend_yield\': The dividend yield.\\n - \'eps\': The trailing earnings per share.\\n - \'beta\': The beta value of the stock.\\n - \'52_week_high\': The 52-week high price of the stock.\\n - \'52_week_low\': The 52-week low price of the stock.", "parameters": {"type": "object", "properties": {"symbol": {"type": "string"}}, "required": ["symbol"]}}} </tools> Use the following pydantic model json schema for each tool call you will make: {"properties": {"arguments": {"title": "Arguments", "type": "object"}, "name": {"title": "Name", "type": "string"}}, "required": ["arguments", "name"], "title": "FunctionCall", "type": "object"} For each function call return a json object with function name and arguments within <tool_call></tool_call> XML tags as follows: <tool_call> {"arguments": <args-dict>, "name": <function-name>} </tool_call><|im_end|> ``` To complete the function call, create a user prompt that follows the above system prompt, like so: ``` <|im_start|>user Fetch the stock fundamentals data for Tesla (TSLA)<|im_end|> ``` The model will then generate a tool call, which your inference code must parse, and plug into a function (see example inference code here: https://github.com/NousResearch/Hermes-Function-Calling): ``` <|im_start|>assistant <tool_call> {"arguments": {"symbol": "TSLA"}, "name": "get_stock_fundamentals"} 
</tool_call><|im_end|> ``` Once you parse the tool call, call the api and get the returned values for the call, and pass it back in as a new role, `tool` like so: ``` <|im_start|>tool <tool_response> {"name": "get_stock_fundamentals", "content": {'symbol': 'TSLA', 'company_name': 'Tesla, Inc.', 'sector': 'Consumer Cyclical', 'industry': 'Auto Manufacturers', 'market_cap': 611384164352, 'pe_ratio': 49.604652, 'pb_ratio': 9.762013, 'dividend_yield': None, 'eps': 4.3, 'beta': 2.427, '52_week_high': 299.29, '52_week_low': 152.37}} </tool_response> <|im_end|> ``` The assistant will then read in that data from the function's response, and generate a natural language response: ``` <|im_start|>assistant The stock fundamentals data for Tesla (TSLA) are as follows: - **Symbol**: TSLA - **Company Name**: Tesla, Inc. - **Sector**: Consumer Cyclical - **Industry**: Auto Manufacturers - **Market Capitalization**: $566,160,130,480 - **Forward Price-to-Earnings Ratio (PE Ratio)**: 42.73 - **Price-to-Book Ratio (PB Ratio)**: 9.04 - **Dividend Yield**: N/A - **Trailing Earnings Per Share (EPS)**: $4.3 - **Beta Value of the Stock**: 2.42 - **52-Week High Price of the Stock**: $299.29 - **52-Week Low Price of the Stock**: $152.37 This information provides a snapshot of Tesla's financial position and performance based on the fundamental data obtained from the yfinance API. It shows that Tesla has a substantial market capitalization and a relatively high P/E and P/B ratio compared to other stocks in its industry. The company does not pay a dividend at the moment, which is reflected by a 'Dividend Yield' of 'None'. The Beta value indicates that Tesla's stock has a moderate level of volatility relative to the market. The 52-week high and low prices give an idea of the stock's range over the past year. This data can be useful when assessing investment opportunities and making investment decisions.<|im_end|> ``` ## Prompt Format for JSON Mode / Structured Outputs Our model was also trained on a specific system prompt for Structured Outputs, which should respond with **only** a json object response, in a specific json schema. Your schema can be made from a pydantic object using our codebase, with the standalone script `jsonmode.py` available here: https://github.com/NousResearch/Hermes-Function-Calling/tree/main ``` <|im_start|>system You are a helpful assistant that answers in JSON. Here's the json schema you must adhere to:\n<schema>\n{schema}\n</schema><|im_end|> ``` Given the {schema} that you provide, it should follow the format of that json to create it's response, all you have to do is give a typical user prompt, and it will respond in JSON. 
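The card defers parsing to the Hermes-Function-Calling repo linked above; purely as an illustration (not the official parser), a `<tool_call>` block like the one shown earlier can be pulled out of the generated text along these lines:

```python
# Illustrative sketch only -- the official parsing utilities live in the Hermes-Function-Calling repo.
import json
import re

def extract_tool_calls(generated_text: str) -> list[dict]:
    """Return the JSON payloads of all <tool_call>...</tool_call> blocks in a model response."""
    pattern = re.compile(r"<tool_call>\s*(\{.*?\})\s*</tool_call>", re.DOTALL)
    return [json.loads(match) for match in pattern.findall(generated_text)]

response = '<tool_call> {"arguments": {"symbol": "TSLA"}, "name": "get_stock_fundamentals"} </tool_call><|im_end|>'
for call in extract_tool_calls(response):
    print(call["name"], call["arguments"])  # -> get_stock_fundamentals {'symbol': 'TSLA'}
```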
# Benchmarks ## GPT4All: ``` | Task |Version| Metric |Value | |Stderr| |-------------|------:|--------|-----:|---|-----:| |arc_challenge| 0|acc |0.5461|± |0.0145| | | |acc_norm|0.5623|± |0.0145| |arc_easy | 0|acc |0.8157|± |0.0080| | | |acc_norm|0.7934|± |0.0083| |boolq | 1|acc |0.8688|± |0.0059| |hellaswag | 0|acc |0.6272|± |0.0048| | | |acc_norm|0.8057|± |0.0039| |openbookqa | 0|acc |0.3360|± |0.0211| | | |acc_norm|0.4300|± |0.0222| |piqa | 0|acc |0.7954|± |0.0094| | | |acc_norm|0.7998|± |0.0093| |winogrande | 0|acc |0.7230|± |0.0126| ``` Average: 71.19 ## AGIEval: ``` | Task |Version| Metric |Value | |Stderr| |------------------------------|------:|--------|-----:|---|-----:| |agieval_aqua_rat | 0|acc |0.2047|± |0.0254| | | |acc_norm|0.2283|± |0.0264| |agieval_logiqa_en | 0|acc |0.3779|± |0.0190| | | |acc_norm|0.3932|± |0.0192| |agieval_lsat_ar | 0|acc |0.2652|± |0.0292| | | |acc_norm|0.2522|± |0.0287| |agieval_lsat_lr | 0|acc |0.5216|± |0.0221| | | |acc_norm|0.5137|± |0.0222| |agieval_lsat_rc | 0|acc |0.5911|± |0.0300| | | |acc_norm|0.5836|± |0.0301| |agieval_sat_en | 0|acc |0.7427|± |0.0305| | | |acc_norm|0.7184|± |0.0314| |agieval_sat_en_without_passage| 0|acc |0.4612|± |0.0348| | | |acc_norm|0.4466|± |0.0347| |agieval_sat_math | 0|acc |0.3818|± |0.0328| | | |acc_norm|0.3545|± |0.0323| ``` Average: 44.52 ## BigBench: ``` | Task |Version| Metric |Value | |Stderr| |------------------------------------------------|------:|---------------------|-----:|---|-----:| |bigbench_causal_judgement | 0|multiple_choice_grade|0.5579|± |0.0361| |bigbench_date_understanding | 0|multiple_choice_grade|0.6694|± |0.0245| |bigbench_disambiguation_qa | 0|multiple_choice_grade|0.3333|± |0.0294| |bigbench_geometric_shapes | 0|multiple_choice_grade|0.2061|± |0.0214| | | |exact_str_match |0.2256|± |0.0221| |bigbench_logical_deduction_five_objects | 0|multiple_choice_grade|0.3120|± |0.0207| |bigbench_logical_deduction_seven_objects | 0|multiple_choice_grade|0.2114|± |0.0154| |bigbench_logical_deduction_three_objects | 0|multiple_choice_grade|0.4900|± |0.0289| |bigbench_movie_recommendation | 0|multiple_choice_grade|0.3600|± |0.0215| |bigbench_navigate | 0|multiple_choice_grade|0.5000|± |0.0158| |bigbench_reasoning_about_colored_objects | 0|multiple_choice_grade|0.6660|± |0.0105| |bigbench_ruin_names | 0|multiple_choice_grade|0.4420|± |0.0235| |bigbench_salient_translation_error_detection | 0|multiple_choice_grade|0.2766|± |0.0142| |bigbench_snarks | 0|multiple_choice_grade|0.6630|± |0.0352| |bigbench_sports_understanding | 0|multiple_choice_grade|0.6653|± |0.0150| |bigbench_temporal_sequences | 0|multiple_choice_grade|0.3190|± |0.0147| |bigbench_tracking_shuffled_objects_five_objects | 0|multiple_choice_grade|0.2128|± |0.0116| |bigbench_tracking_shuffled_objects_seven_objects| 0|multiple_choice_grade|0.1737|± |0.0091| |bigbench_tracking_shuffled_objects_three_objects| 0|multiple_choice_grade|0.4900|± |0.0289| ``` Average: 41.65 ## TruthfulQA: ``` | Task |Version|Metric|Value | |Stderr| |-------------|------:|------|-----:|---|-----:| |truthfulqa_mc| 1|mc1 |0.4100|± |0.0172| | | |mc2 |0.5911|± |0.0158| ``` # Function Calling Evaluations We worked with Fireworks.AI on evaluations by starting off with their Function Calling eval dataset, fixing some unsolveable ones, and generating a second eval dataset for JSON mode. 
## Function Calling Accuracy: 91% ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6317aade83d8d2fd903192d9/XF3Zii4-QhE2yjWwHr_v4.png) ## JSON Mode Accuracy: 84% ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6317aade83d8d2fd903192d9/8H2iyjh5wyP2FtLq2LCed.png) Run the evaluator yourself using @interstellarninja's codebase here: https://github.com/interstellarninja/function-calling-eval You can find the evaluation datasets here: https://huggingface.co/datasets/NousResearch/func-calling-eval https://huggingface.co/datasets/NousResearch/json-mode-eval # Inference Code Here is example code using HuggingFace Transformers to run inference with the model (note: in 4bit, it will require around 5GB of VRAM) Note: To use function calling, you should see the github repo above. ```python # Code to inference Hermes with HF Transformers # Requires pytorch, transformers, bitsandbytes, sentencepiece, protobuf, and flash-attn packages import torch from transformers import AutoTokenizer, AutoModelForCausalLM from transformers import LlamaTokenizer, MistralForCausalLM import bitsandbytes, flash_attn tokenizer = LlamaTokenizer.from_pretrained('NousResearch/Hermes-2-Pro-Mistral-7B', trust_remote_code=True) model = MistralForCausalLM.from_pretrained( "NousResearch/Hermes-2-Pro-Mistral-7B", torch_dtype=torch.float16, device_map="auto", load_in_8bit=False, load_in_4bit=True, use_flash_attention_2=True ) prompts = [ """<|im_start|>system You are a sentient, superintelligent artificial general intelligence, here to teach and assist me.<|im_end|> <|im_start|>user Write a short story about Goku discovering kirby has teamed up with Majin Buu to destroy the world.<|im_end|> <|im_start|>assistant""", ] for chat in prompts: print(chat) input_ids = tokenizer(chat, return_tensors="pt").input_ids.to("cuda") generated_ids = model.generate(input_ids, max_new_tokens=750, temperature=0.8, repetition_penalty=1.1, do_sample=True, eos_token_id=tokenizer.eos_token_id) response = tokenizer.decode(generated_ids[0][input_ids.shape[-1]:], skip_special_tokens=True, clean_up_tokenization_spaces=True) print(f"Response: {response}") ``` ## Inference Code for Function Calling: All code for utilizing, parsing, and building function calling templates is available on our github: [https://github.com/NousResearch/Hermes-Function-Calling](https://github.com/NousResearch/Hermes-Function-Calling) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6317aade83d8d2fd903192d9/oi4CiGh50xmoviUQnh8R3.png) # Chat Interfaces When quantized versions of the model are released, I recommend using LM Studio for chatting with Hermes 2 Pro. It does not support function calling - for that use our github repo. It is a GUI application that utilizes GGUF models with a llama.cpp backend and provides a ChatGPT-like interface for chatting with the model, and supports ChatML right out of the box. In LM-Studio, simply select the ChatML Prefix on the settings side pane: ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6317aade83d8d2fd903192d9/ls6WqV-GSxMw2RA3GuQiN.png) ## Quantized Versions: GGUF Versions Available Here: https://huggingface.co/NousResearch/Hermes-2-Pro-Mistral-7B-GGUF # How to cite: ```bibtex @misc{Hermes-2-Pro-Mistral-7B, url={https://huggingface.co/NousResearch/Hermes-2-Pro-Mistral-7B}, title={Hermes-2-Pro-Mistral-7B}, author={"interstellarninja", "Teknium", "theemozilla", "karan4d", "huemin_art"} } ```
neofung/m3e-ernie-xbase-zh
neofung
"2024-03-08T07:23:02Z"
0
0
sentence-transformers
[ "sentence-transformers", "sentence-similarity", "mteb", "zh", "en", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
sentence-similarity
"2024-03-04T06:11:16Z"
--- language: - zh - en tags: - sentence-transformers - sentence-similarity - mteb model-index: - name: zh results: - task: type: STS dataset: type: C-MTEB/AFQMC name: MTEB AFQMC config: default split: validation revision: b44c3b011063adb25877c13823db83bb193913c4 metrics: - type: cos_sim_pearson value: 36.28363608508365 - type: cos_sim_spearman value: 37.39698005114737 - type: euclidean_pearson value: 36.407377294778186 - type: euclidean_spearman value: 37.396959945459166 - type: manhattan_pearson value: 36.30818480805082 - type: manhattan_spearman value: 37.28435580456356 - task: type: STS dataset: type: C-MTEB/ATEC name: MTEB ATEC config: default split: test revision: 0f319b1142f28d00e055a6770f3f726ae9b7d865 metrics: - type: cos_sim_pearson value: 39.918566602029536 - type: cos_sim_spearman value: 42.163555979292155 - type: euclidean_pearson value: 43.24429263158407 - type: euclidean_spearman value: 42.16355485217486 - type: manhattan_pearson value: 43.23108002349145 - type: manhattan_spearman value: 42.156854810425834 - task: type: Classification dataset: type: mteb/amazon_reviews_multi name: MTEB AmazonReviewsClassification (zh) config: zh split: test revision: 1399c76144fd37290681b995c656ef9b2e06e26d metrics: - type: accuracy value: 47.788000000000004 - type: f1 value: 44.518439064691925 - task: type: STS dataset: type: C-MTEB/BQ name: MTEB BQ config: default split: test revision: e3dda5e115e487b39ec7e618c0c6a29137052a55 metrics: - type: cos_sim_pearson value: 67.03414409142314 - type: cos_sim_spearman value: 70.95560250546684 - type: euclidean_pearson value: 69.35644910492917 - type: euclidean_spearman value: 70.95560250269956 - type: manhattan_pearson value: 69.32201332479197 - type: manhattan_spearman value: 70.92406185691 - task: type: Clustering dataset: type: C-MTEB/CLSClusteringP2P name: MTEB CLSClusteringP2P config: default split: test revision: 4b6227591c6c1a73bc76b1055f3b7f3588e72476 metrics: - type: v_measure value: 39.31955168227449 - task: type: Clustering dataset: type: C-MTEB/CLSClusteringS2S name: MTEB CLSClusteringS2S config: default split: test revision: e458b3f5414b62b7f9f83499ac1f5497ae2e869f metrics: - type: v_measure value: 37.8418274237459 - task: type: Reranking dataset: type: C-MTEB/CMedQAv1-reranking name: MTEB CMedQAv1 config: default split: test revision: 8d7f1e942507dac42dc58017c1a001c3717da7df metrics: - type: map value: 80.66118119519746 - type: mrr value: 83.47972222222222 - task: type: Reranking dataset: type: C-MTEB/CMedQAv2-reranking name: MTEB CMedQAv2 config: default split: test revision: 23d186750531a14a0357ca22cd92d712fd512ea0 metrics: - type: map value: 79.31430375371524 - type: mrr value: 82.10194444444444 - task: type: Retrieval dataset: type: C-MTEB/CmedqaRetrieval name: MTEB CmedqaRetrieval config: default split: dev revision: cd540c506dae1cf9e9a59c3e06f42030d54e7301 metrics: - type: map_at_1 value: 16.672 - type: map_at_10 value: 26.273000000000003 - type: map_at_100 value: 28.044999999999998 - type: map_at_1000 value: 28.208 - type: map_at_3 value: 22.989 - type: map_at_5 value: 24.737000000000002 - type: mrr_at_1 value: 26.257 - type: mrr_at_10 value: 34.358 - type: mrr_at_100 value: 35.436 - type: mrr_at_1000 value: 35.513 - type: mrr_at_3 value: 31.954 - type: mrr_at_5 value: 33.234 - type: ndcg_at_1 value: 26.257 - type: ndcg_at_10 value: 32.326 - type: ndcg_at_100 value: 39.959 - type: ndcg_at_1000 value: 43.163000000000004 - type: ndcg_at_3 value: 27.700999999999997 - type: ndcg_at_5 value: 29.514000000000003 - type: precision_at_1 
value: 26.257 - type: precision_at_10 value: 7.607 - type: precision_at_100 value: 1.388 - type: precision_at_1000 value: 0.179 - type: precision_at_3 value: 16.162000000000003 - type: precision_at_5 value: 11.933 - type: recall_at_1 value: 16.672 - type: recall_at_10 value: 42.135 - type: recall_at_100 value: 74.417 - type: recall_at_1000 value: 96.417 - type: recall_at_3 value: 28.416999999999998 - type: recall_at_5 value: 33.873999999999995 - task: type: PairClassification dataset: type: C-MTEB/CMNLI name: MTEB Cmnli config: default split: validation revision: 41bc36f332156f7adc9e38f53777c959b2ae9766 metrics: - type: cos_sim_accuracy value: 61.11846061334937 - type: cos_sim_ap value: 65.68356716139071 - type: cos_sim_f1 value: 68.15213842637937 - type: cos_sim_precision value: 52.35109717868338 - type: cos_sim_recall value: 97.61515080664017 - type: dot_accuracy value: 61.11846061334937 - type: dot_ap value: 65.68369552204702 - type: dot_f1 value: 68.15213842637937 - type: dot_precision value: 52.35109717868338 - type: dot_recall value: 97.61515080664017 - type: euclidean_accuracy value: 61.11846061334937 - type: euclidean_ap value: 65.68356789608616 - type: euclidean_f1 value: 68.15213842637937 - type: euclidean_precision value: 52.35109717868338 - type: euclidean_recall value: 97.61515080664017 - type: manhattan_accuracy value: 61.17859290438966 - type: manhattan_ap value: 65.68230365595265 - type: manhattan_f1 value: 68.14029363784665 - type: manhattan_precision value: 52.32368783665289 - type: manhattan_recall value: 97.66191255552957 - type: max_accuracy value: 61.17859290438966 - type: max_ap value: 65.68369552204702 - type: max_f1 value: 68.15213842637937 - task: type: Retrieval dataset: type: C-MTEB/CovidRetrieval name: MTEB CovidRetrieval config: default split: dev revision: 1271c7809071a13532e05f25fb53511ffce77117 metrics: - type: map_at_1 value: 51.054 - type: map_at_10 value: 61.926 - type: map_at_100 value: 62.514 - type: map_at_1000 value: 62.529 - type: map_at_3 value: 59.272999999999996 - type: map_at_5 value: 60.943000000000005 - type: mrr_at_1 value: 51.212 - type: mrr_at_10 value: 61.916000000000004 - type: mrr_at_100 value: 62.495999999999995 - type: mrr_at_1000 value: 62.511 - type: mrr_at_3 value: 59.326 - type: mrr_at_5 value: 60.958999999999996 - type: ndcg_at_1 value: 51.212 - type: ndcg_at_10 value: 67.223 - type: ndcg_at_100 value: 69.92699999999999 - type: ndcg_at_1000 value: 70.307 - type: ndcg_at_3 value: 61.873 - type: ndcg_at_5 value: 64.883 - type: precision_at_1 value: 51.212 - type: precision_at_10 value: 8.472 - type: precision_at_100 value: 0.9730000000000001 - type: precision_at_1000 value: 0.1 - type: precision_at_3 value: 23.253 - type: precision_at_5 value: 15.448 - type: recall_at_1 value: 51.054 - type: recall_at_10 value: 83.825 - type: recall_at_100 value: 96.207 - type: recall_at_1000 value: 99.157 - type: recall_at_3 value: 69.31 - type: recall_at_5 value: 76.66 - task: type: Retrieval dataset: type: C-MTEB/DuRetrieval name: MTEB DuRetrieval config: default split: dev revision: a1a333e290fe30b10f3f56498e3a0d911a693ced metrics: - type: map_at_1 value: 21.247 - type: map_at_10 value: 64.793 - type: map_at_100 value: 68.62899999999999 - type: map_at_1000 value: 68.718 - type: map_at_3 value: 44.192 - type: map_at_5 value: 55.435 - type: mrr_at_1 value: 76.7 - type: mrr_at_10 value: 84.22 - type: mrr_at_100 value: 84.341 - type: mrr_at_1000 value: 84.346 - type: mrr_at_3 value: 83.42500000000001 - type: mrr_at_5 value: 83.902 - type: ndcg_at_1 
value: 76.7 - type: ndcg_at_10 value: 75.271 - type: ndcg_at_100 value: 80.62 - type: ndcg_at_1000 value: 81.45 - type: ndcg_at_3 value: 72.803 - type: ndcg_at_5 value: 71.694 - type: precision_at_1 value: 76.7 - type: precision_at_10 value: 36.925000000000004 - type: precision_at_100 value: 4.675 - type: precision_at_1000 value: 0.48700000000000004 - type: precision_at_3 value: 65.383 - type: precision_at_5 value: 55.15 - type: recall_at_1 value: 21.247 - type: recall_at_10 value: 78.38300000000001 - type: recall_at_100 value: 94.759 - type: recall_at_1000 value: 98.907 - type: recall_at_3 value: 48.04 - type: recall_at_5 value: 62.883 - task: type: Retrieval dataset: type: C-MTEB/EcomRetrieval name: MTEB EcomRetrieval config: default split: dev revision: 687de13dc7294d6fd9be10c6945f9e8fec8166b9 metrics: - type: map_at_1 value: 42.0 - type: map_at_10 value: 52.691 - type: map_at_100 value: 53.456 - type: map_at_1000 value: 53.480000000000004 - type: map_at_3 value: 49.583 - type: map_at_5 value: 51.723 - type: mrr_at_1 value: 42.0 - type: mrr_at_10 value: 52.691 - type: mrr_at_100 value: 53.456 - type: mrr_at_1000 value: 53.480000000000004 - type: mrr_at_3 value: 49.583 - type: mrr_at_5 value: 51.723 - type: ndcg_at_1 value: 42.0 - type: ndcg_at_10 value: 58.243 - type: ndcg_at_100 value: 61.907999999999994 - type: ndcg_at_1000 value: 62.483999999999995 - type: ndcg_at_3 value: 52.03 - type: ndcg_at_5 value: 55.85099999999999 - type: precision_at_1 value: 42.0 - type: precision_at_10 value: 7.580000000000001 - type: precision_at_100 value: 0.928 - type: precision_at_1000 value: 0.097 - type: precision_at_3 value: 19.7 - type: precision_at_5 value: 13.66 - type: recall_at_1 value: 42.0 - type: recall_at_10 value: 75.8 - type: recall_at_100 value: 92.80000000000001 - type: recall_at_1000 value: 97.2 - type: recall_at_3 value: 59.099999999999994 - type: recall_at_5 value: 68.30000000000001 - task: type: Classification dataset: type: C-MTEB/IFlyTek-classification name: MTEB IFlyTek config: default split: validation revision: 421605374b29664c5fc098418fe20ada9bd55f8a metrics: - type: accuracy value: 44.86340900346287 - type: f1 value: 31.324006049353713 - task: type: Classification dataset: type: C-MTEB/JDReview-classification name: MTEB JDReview config: default split: test revision: b7c64bd89eb87f8ded463478346f76731f07bf8b metrics: - type: accuracy value: 88.48030018761726 - type: ap value: 59.392058006606476 - type: f1 value: 83.61333024672861 - task: type: STS dataset: type: C-MTEB/LCQMC name: MTEB LCQMC config: default split: test revision: 17f9b096f80380fce5ed12a9be8be7784b337daf metrics: - type: cos_sim_pearson value: 66.36852873686233 - type: cos_sim_spearman value: 73.27371960661353 - type: euclidean_pearson value: 71.38209904858738 - type: euclidean_spearman value: 73.27373512049904 - type: manhattan_pearson value: 71.51557697058817 - type: manhattan_spearman value: 73.38956581066971 - task: type: Reranking dataset: type: C-MTEB/Mmarco-reranking name: MTEB MMarcoReranking config: default split: dev revision: 8e0c766dbe9e16e1d221116a3f36795fbade07f6 metrics: - type: map value: 19.57107231987867 - type: mrr value: 18.224603174603175 - task: type: Retrieval dataset: type: C-MTEB/MMarcoRetrieval name: MTEB MMarcoRetrieval config: default split: dev revision: 539bbde593d947e2a124ba72651aafc09eb33fc2 metrics: - type: map_at_1 value: 43.785000000000004 - type: map_at_10 value: 53.278000000000006 - type: map_at_100 value: 53.946000000000005 - type: map_at_1000 value: 53.983000000000004 - type: 
map_at_3 value: 50.846999999999994 - type: map_at_5 value: 52.286 - type: mrr_at_1 value: 45.559 - type: mrr_at_10 value: 54.129000000000005 - type: mrr_at_100 value: 54.732 - type: mrr_at_1000 value: 54.766999999999996 - type: mrr_at_3 value: 51.885999999999996 - type: mrr_at_5 value: 53.212 - type: ndcg_at_1 value: 45.559 - type: ndcg_at_10 value: 57.909 - type: ndcg_at_100 value: 61.068999999999996 - type: ndcg_at_1000 value: 62.09400000000001 - type: ndcg_at_3 value: 53.125 - type: ndcg_at_5 value: 55.614 - type: precision_at_1 value: 45.559 - type: precision_at_10 value: 7.617 - type: precision_at_100 value: 0.9199999999999999 - type: precision_at_1000 value: 0.101 - type: precision_at_3 value: 20.707 - type: precision_at_5 value: 13.730999999999998 - type: recall_at_1 value: 43.785000000000004 - type: recall_at_10 value: 71.543 - type: recall_at_100 value: 86.197 - type: recall_at_1000 value: 94.305 - type: recall_at_3 value: 58.677 - type: recall_at_5 value: 64.62599999999999 - task: type: Classification dataset: type: mteb/amazon_massive_intent name: MTEB MassiveIntentClassification (zh-CN) config: zh-CN split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 61.29455279085406 - type: f1 value: 58.42865357114413 - task: type: Classification dataset: type: mteb/amazon_massive_scenario name: MTEB MassiveScenarioClassification (zh-CN) config: zh-CN split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 66.89979825151312 - type: f1 value: 66.6125514843663 - task: type: Retrieval dataset: type: C-MTEB/MedicalRetrieval name: MTEB MedicalRetrieval config: default split: dev revision: 2039188fb5800a9803ba5048df7b76e6fb151fc6 metrics: - type: map_at_1 value: 44.7 - type: map_at_10 value: 51.307 - type: map_at_100 value: 52.002 - type: map_at_1000 value: 52.06699999999999 - type: map_at_3 value: 49.55 - type: map_at_5 value: 50.544999999999995 - type: mrr_at_1 value: 44.9 - type: mrr_at_10 value: 51.415 - type: mrr_at_100 value: 52.111 - type: mrr_at_1000 value: 52.175000000000004 - type: mrr_at_3 value: 49.683 - type: mrr_at_5 value: 50.653000000000006 - type: ndcg_at_1 value: 44.7 - type: ndcg_at_10 value: 54.778000000000006 - type: ndcg_at_100 value: 58.526 - type: ndcg_at_1000 value: 60.187999999999995 - type: ndcg_at_3 value: 51.129999999999995 - type: ndcg_at_5 value: 52.933 - type: precision_at_1 value: 44.7 - type: precision_at_10 value: 6.58 - type: precision_at_100 value: 0.8420000000000001 - type: precision_at_1000 value: 0.097 - type: precision_at_3 value: 18.567 - type: precision_at_5 value: 12.02 - type: recall_at_1 value: 44.7 - type: recall_at_10 value: 65.8 - type: recall_at_100 value: 84.2 - type: recall_at_1000 value: 97.2 - type: recall_at_3 value: 55.7 - type: recall_at_5 value: 60.099999999999994 - task: type: Retrieval dataset: type: Shitao/MLDR name: MTEB MultiLongDocRetrieval (zh) config: zh split: test revision: None metrics: - type: map_at_1 value: 7.625 - type: map_at_10 value: 10.238 - type: map_at_100 value: 10.885 - type: map_at_1000 value: 10.958 - type: map_at_3 value: 9.292 - type: map_at_5 value: 9.91 - type: mrr_at_1 value: 7.625 - type: mrr_at_10 value: 10.238 - type: mrr_at_100 value: 10.885 - type: mrr_at_1000 value: 10.958 - type: mrr_at_3 value: 9.292 - type: mrr_at_5 value: 9.91 - type: ndcg_at_1 value: 7.625 - type: ndcg_at_10 value: 11.784 - type: ndcg_at_100 value: 15.133 - type: ndcg_at_1000 value: 17.511 - type: ndcg_at_3 value: 9.857000000000001 - type: ndcg_at_5 
value: 10.981 - type: precision_at_1 value: 7.625 - type: precision_at_10 value: 1.675 - type: precision_at_100 value: 0.329 - type: precision_at_1000 value: 0.053 - type: precision_at_3 value: 3.833 - type: precision_at_5 value: 2.85 - type: recall_at_1 value: 7.625 - type: recall_at_10 value: 16.75 - type: recall_at_100 value: 32.875 - type: recall_at_1000 value: 52.625 - type: recall_at_3 value: 11.5 - type: recall_at_5 value: 14.249999999999998 - task: type: Classification dataset: type: C-MTEB/MultilingualSentiment-classification name: MTEB MultilingualSentiment config: default split: validation revision: 46958b007a63fdbf239b7672c25d0bea67b5ea1a metrics: - type: accuracy value: 78.45666666666666 - type: f1 value: 78.06393644109178 - task: type: PairClassification dataset: type: C-MTEB/OCNLI name: MTEB Ocnli config: default split: validation revision: 66e76a618a34d6d565d5538088562851e6daa7ec metrics: - type: cos_sim_accuracy value: 59.88088792636708 - type: cos_sim_ap value: 59.993466246406854 - type: cos_sim_f1 value: 69.33333333333334 - type: cos_sim_precision value: 54.23122765196663 - type: cos_sim_recall value: 96.09292502639916 - type: dot_accuracy value: 59.88088792636708 - type: dot_ap value: 59.99351215786742 - type: dot_f1 value: 69.33333333333334 - type: dot_precision value: 54.23122765196663 - type: dot_recall value: 96.09292502639916 - type: euclidean_accuracy value: 59.88088792636708 - type: euclidean_ap value: 59.993466246406854 - type: euclidean_f1 value: 69.33333333333334 - type: euclidean_precision value: 54.23122765196663 - type: euclidean_recall value: 96.09292502639916 - type: manhattan_accuracy value: 59.989171629669734 - type: manhattan_ap value: 60.06745167956717 - type: manhattan_f1 value: 69.37381404174573 - type: manhattan_precision value: 54.14691943127961 - type: manhattan_recall value: 96.51531151003168 - type: max_accuracy value: 59.989171629669734 - type: max_ap value: 60.06745167956717 - type: max_f1 value: 69.37381404174573 - task: type: Classification dataset: type: C-MTEB/OnlineShopping-classification name: MTEB OnlineShopping config: default split: test revision: e610f2ebd179a8fda30ae534c3878750a96db120 metrics: - type: accuracy value: 92.58 - type: ap value: 90.58624365698103 - type: f1 value: 92.56998002261557 - task: type: STS dataset: type: C-MTEB/PAWSX name: MTEB PAWSX config: default split: test revision: 9c6a90e430ac22b5779fb019a23e820b11a8b5e1 metrics: - type: cos_sim_pearson value: 15.428347645738844 - type: cos_sim_spearman value: 18.54916824520863 - type: euclidean_pearson value: 18.525706701701317 - type: euclidean_spearman value: 18.564855538117524 - type: manhattan_pearson value: 18.54511262151164 - type: manhattan_spearman value: 18.587848451111213 - task: type: PairClassification dataset: type: paws-x name: MTEB PawsX (zh) config: zh split: test revision: 8a04d940a42cd40658986fdd8e3da561533a3646 metrics: - type: cos_sim_accuracy value: 60.3 - type: cos_sim_ap value: 57.92869006380703 - type: cos_sim_f1 value: 62.31681786461968 - type: cos_sim_precision value: 45.283975659229206 - type: cos_sim_recall value: 99.88814317673378 - type: dot_accuracy value: 60.3 - type: dot_ap value: 57.7632607916169 - type: dot_f1 value: 62.31681786461968 - type: dot_precision value: 45.283975659229206 - type: dot_recall value: 99.88814317673378 - type: euclidean_accuracy value: 60.3 - type: euclidean_ap value: 57.92869006380703 - type: euclidean_f1 value: 62.31681786461968 - type: euclidean_precision value: 45.283975659229206 - type: euclidean_recall 
value: 99.88814317673378 - type: manhattan_accuracy value: 60.25 - type: manhattan_ap value: 57.929597845689706 - type: manhattan_f1 value: 62.31681786461968 - type: manhattan_precision value: 45.283975659229206 - type: manhattan_recall value: 99.88814317673378 - type: max_accuracy value: 60.3 - type: max_ap value: 57.929597845689706 - type: max_f1 value: 62.31681786461968 - task: type: STS dataset: type: C-MTEB/QBQTC name: MTEB QBQTC config: default split: test revision: 790b0510dc52b1553e8c49f3d2afb48c0e5c48b7 metrics: - type: cos_sim_pearson value: 28.445664430656038 - type: cos_sim_spearman value: 29.599326690902288 - type: euclidean_pearson value: 27.900455284977017 - type: euclidean_spearman value: 29.599947224705264 - type: manhattan_pearson value: 28.101656918683116 - type: manhattan_spearman value: 29.78083888978687 - task: type: STS dataset: type: mteb/sts22-crosslingual-sts name: MTEB STS22 (zh) config: zh split: test revision: eea2b4fe26a775864c896887d910b76a8098ad3f metrics: - type: cos_sim_pearson value: 61.13774633735679 - type: cos_sim_spearman value: 65.43749616084263 - type: euclidean_pearson value: 63.42122949030793 - type: euclidean_spearman value: 65.43749616084263 - type: manhattan_pearson value: 63.78466267937151 - type: manhattan_spearman value: 65.4252196465631 - task: type: STS dataset: type: C-MTEB/STSB name: MTEB STSB config: default split: test revision: 0cde68302b3541bb8b3c340dc0644b0b745b3dc0 metrics: - type: cos_sim_pearson value: 66.43725663481563 - type: cos_sim_spearman value: 66.91073455354187 - type: euclidean_pearson value: 67.25178758750022 - type: euclidean_spearman value: 66.91129699608939 - type: manhattan_pearson value: 67.33381999971951 - type: manhattan_spearman value: 66.9990458174529 - task: type: Reranking dataset: type: C-MTEB/T2Reranking name: MTEB T2Reranking config: default split: dev revision: 76631901a18387f85eaa53e5450019b87ad58ef9 metrics: - type: map value: 64.31327281684898 - type: mrr value: 73.58095291829211 - task: type: Retrieval dataset: type: C-MTEB/T2Retrieval name: MTEB T2Retrieval config: default split: dev revision: 8731a845f1bf500a4f111cf1070785c793d10e64 metrics: - type: map_at_1 value: 20.961 - type: map_at_10 value: 59.065 - type: map_at_100 value: 63.544 - type: map_at_1000 value: 63.681 - type: map_at_3 value: 40.849999999999994 - type: map_at_5 value: 50.268 - type: mrr_at_1 value: 74.934 - type: mrr_at_10 value: 80.571 - type: mrr_at_100 value: 80.814 - type: mrr_at_1000 value: 80.82300000000001 - type: mrr_at_3 value: 79.449 - type: mrr_at_5 value: 80.13 - type: ndcg_at_1 value: 74.934 - type: ndcg_at_10 value: 69.215 - type: ndcg_at_100 value: 75.61099999999999 - type: ndcg_at_1000 value: 77.03999999999999 - type: ndcg_at_3 value: 70.04899999999999 - type: ndcg_at_5 value: 68.50699999999999 - type: precision_at_1 value: 74.934 - type: precision_at_10 value: 35.569 - type: precision_at_100 value: 4.757 - type: precision_at_1000 value: 0.509 - type: precision_at_3 value: 61.802 - type: precision_at_5 value: 51.882 - type: recall_at_1 value: 20.961 - type: recall_at_10 value: 69.626 - type: recall_at_100 value: 89.464 - type: recall_at_1000 value: 96.721 - type: recall_at_3 value: 43.608999999999995 - type: recall_at_5 value: 55.724 - task: type: Classification dataset: type: C-MTEB/TNews-classification name: MTEB TNews config: default split: validation revision: 317f262bf1e6126357bbe89e875451e4b0938fe4 metrics: - type: accuracy value: 50.01800000000001 - type: f1 value: 48.262341643251936 - task: type: Clustering 
dataset: type: C-MTEB/ThuNewsClusteringP2P name: MTEB ThuNewsClusteringP2P config: default split: test revision: 5798586b105c0434e4f0fe5e767abe619442cf93 metrics: - type: v_measure value: 60.68748256831344 - task: type: Clustering dataset: type: C-MTEB/ThuNewsClusteringS2S name: MTEB ThuNewsClusteringS2S config: default split: test revision: 8a8b2caeda43f39e13c4bc5bea0f8a667896e10d metrics: - type: v_measure value: 56.73298697800912 - task: type: Retrieval dataset: type: C-MTEB/VideoRetrieval name: MTEB VideoRetrieval config: default split: dev revision: 58c2597a5943a2ba48f4668c3b90d796283c5639 metrics: - type: map_at_1 value: 46.9 - type: map_at_10 value: 57.849 - type: map_at_100 value: 58.532 - type: map_at_1000 value: 58.553 - type: map_at_3 value: 55.467 - type: map_at_5 value: 56.92700000000001 - type: mrr_at_1 value: 46.9 - type: mrr_at_10 value: 57.849 - type: mrr_at_100 value: 58.532 - type: mrr_at_1000 value: 58.553 - type: mrr_at_3 value: 55.467 - type: mrr_at_5 value: 56.92700000000001 - type: ndcg_at_1 value: 46.9 - type: ndcg_at_10 value: 63.09 - type: ndcg_at_100 value: 66.43 - type: ndcg_at_1000 value: 66.949 - type: ndcg_at_3 value: 58.226 - type: ndcg_at_5 value: 60.838 - type: precision_at_1 value: 46.9 - type: precision_at_10 value: 7.95 - type: precision_at_100 value: 0.951 - type: precision_at_1000 value: 0.099 - type: precision_at_3 value: 22.067 - type: precision_at_5 value: 14.499999999999998 - type: recall_at_1 value: 46.9 - type: recall_at_10 value: 79.5 - type: recall_at_100 value: 95.1 - type: recall_at_1000 value: 99.1 - type: recall_at_3 value: 66.2 - type: recall_at_5 value: 72.5 - task: type: Classification dataset: type: C-MTEB/waimai-classification name: MTEB Waimai config: default split: test revision: 339287def212450dcaa9df8c22bf93e9980c7023 metrics: - type: accuracy value: 89.09 - type: ap value: 74.68093732384233 - type: f1 value: 87.7768409829789 ---
S0urtamarind/Reinforce-V1
S0urtamarind
"2023-12-22T17:57:44Z"
0
0
null
[ "CartPole-v1", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
"2023-12-22T17:28:35Z"
--- tags: - CartPole-v1 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: Reinforce-V1 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: CartPole-v1 type: CartPole-v1 metrics: - type: mean_reward value: 488.85 +/- 50.62 name: mean_reward verified: false --- # **Reinforce** Agent playing **CartPole-v1** This is a trained model of a **Reinforce** agent playing **CartPole-v1** . To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
netcat420/MFANN3bv0.8
netcat420
"2024-05-11T03:49:06Z"
11
0
transformers
[ "transformers", "safetensors", "phi", "text-generation", "text-classification", "en", "dataset:netcat420/MFANN", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-classification
"2024-05-10T05:11:43Z"
--- library_name: transformers license: apache-2.0 datasets: - netcat420/MFANN language: - en pipeline_tag: text-classification --- MFANN 3b version 0.8 ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6435f27b2d0ed796668ffd8b/DaGRoYdRwb_q5ni-2Wh6R.png) Fine-tuned on the MFANN dataset as of 5/10/2024 (the dataset is continually expanding). Benchmark scores: Avg 63.5, ARC 61.52, HellaSwag 76.09, MMLU 55.58, TruthfulQA 54.44, Winogrande 78.53 (very high for a 3B model), GSM8K 54.81
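A minimal generation sketch (not from the original card), assuming the repository holds full phi weights loadable directly with `AutoModelForCausalLM`:

```python
# Minimal sketch (assumption: the repo holds full phi weights loadable with transformers).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "netcat420/MFANN3bv0.8"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto")

prompt = "Explain what a context window is in a large language model."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```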
artisokka/deepseek_lora_medical
artisokka
"2025-03-19T15:01:51Z"
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "llama", "trl", "en", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
"2025-03-19T13:57:23Z"
--- base_model: unsloth/deepseek-r1-distill-llama-8b-unsloth-bnb-4bit tags: - text-generation-inference - transformers - unsloth - llama - trl license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** artisokka - **License:** apache-2.0 - **Finetuned from model :** unsloth/deepseek-r1-distill-llama-8b-unsloth-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
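A minimal loading sketch: the repository name suggests LoRA adapters for the stated base model, so the example below attaches them with PEFT (an assumption; if the repo actually holds merged weights, load it directly with `AutoModelForCausalLM` instead):

```python
# Sketch under the assumption that the repo contains LoRA adapters for the 4-bit base model.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "unsloth/deepseek-r1-distill-llama-8b-unsloth-bnb-4bit"
adapter_id = "artisokka/deepseek_lora_medical"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)  # attach the medical LoRA

inputs = tokenizer("List common symptoms of iron-deficiency anemia.", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=128)[0], skip_special_tokens=True))
```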
RichardErkhov/ShadrackImai_-_potato_wizard_v38-gguf
RichardErkhov
"2025-04-05T10:39:36Z"
0
0
null
[ "gguf", "arxiv:1910.09700", "endpoints_compatible", "region:us", "conversational" ]
null
"2025-04-05T10:13:07Z"
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) potato_wizard_v38 - GGUF - Model creator: https://huggingface.co/ShadrackImai/ - Original model: https://huggingface.co/ShadrackImai/potato_wizard_v38/ | Name | Quant method | Size | | ---- | ---- | ---- | | [potato_wizard_v38.Q2_K.gguf](https://huggingface.co/RichardErkhov/ShadrackImai_-_potato_wizard_v38-gguf/blob/main/potato_wizard_v38.Q2_K.gguf) | Q2_K | 0.54GB | | [potato_wizard_v38.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/ShadrackImai_-_potato_wizard_v38-gguf/blob/main/potato_wizard_v38.IQ3_XS.gguf) | IQ3_XS | 0.58GB | | [potato_wizard_v38.IQ3_S.gguf](https://huggingface.co/RichardErkhov/ShadrackImai_-_potato_wizard_v38-gguf/blob/main/potato_wizard_v38.IQ3_S.gguf) | IQ3_S | 0.6GB | | [potato_wizard_v38.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/ShadrackImai_-_potato_wizard_v38-gguf/blob/main/potato_wizard_v38.Q3_K_S.gguf) | Q3_K_S | 0.6GB | | [potato_wizard_v38.IQ3_M.gguf](https://huggingface.co/RichardErkhov/ShadrackImai_-_potato_wizard_v38-gguf/blob/main/potato_wizard_v38.IQ3_M.gguf) | IQ3_M | 0.61GB | | [potato_wizard_v38.Q3_K.gguf](https://huggingface.co/RichardErkhov/ShadrackImai_-_potato_wizard_v38-gguf/blob/main/potato_wizard_v38.Q3_K.gguf) | Q3_K | 0.64GB | | [potato_wizard_v38.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/ShadrackImai_-_potato_wizard_v38-gguf/blob/main/potato_wizard_v38.Q3_K_M.gguf) | Q3_K_M | 0.64GB | | [potato_wizard_v38.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/ShadrackImai_-_potato_wizard_v38-gguf/blob/main/potato_wizard_v38.Q3_K_L.gguf) | Q3_K_L | 0.68GB | | [potato_wizard_v38.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/ShadrackImai_-_potato_wizard_v38-gguf/blob/main/potato_wizard_v38.IQ4_XS.gguf) | IQ4_XS | 0.7GB | | [potato_wizard_v38.Q4_0.gguf](https://huggingface.co/RichardErkhov/ShadrackImai_-_potato_wizard_v38-gguf/blob/main/potato_wizard_v38.Q4_0.gguf) | Q4_0 | 0.72GB | | [potato_wizard_v38.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/ShadrackImai_-_potato_wizard_v38-gguf/blob/main/potato_wizard_v38.IQ4_NL.gguf) | IQ4_NL | 0.72GB | | [potato_wizard_v38.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/ShadrackImai_-_potato_wizard_v38-gguf/blob/main/potato_wizard_v38.Q4_K_S.gguf) | Q4_K_S | 0.72GB | | [potato_wizard_v38.Q4_K.gguf](https://huggingface.co/RichardErkhov/ShadrackImai_-_potato_wizard_v38-gguf/blob/main/potato_wizard_v38.Q4_K.gguf) | Q4_K | 0.75GB | | [potato_wizard_v38.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/ShadrackImai_-_potato_wizard_v38-gguf/blob/main/potato_wizard_v38.Q4_K_M.gguf) | Q4_K_M | 0.75GB | | [potato_wizard_v38.Q4_1.gguf](https://huggingface.co/RichardErkhov/ShadrackImai_-_potato_wizard_v38-gguf/blob/main/potato_wizard_v38.Q4_1.gguf) | Q4_1 | 0.77GB | | [potato_wizard_v38.Q5_0.gguf](https://huggingface.co/RichardErkhov/ShadrackImai_-_potato_wizard_v38-gguf/blob/main/potato_wizard_v38.Q5_0.gguf) | Q5_0 | 0.83GB | | [potato_wizard_v38.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/ShadrackImai_-_potato_wizard_v38-gguf/blob/main/potato_wizard_v38.Q5_K_S.gguf) | Q5_K_S | 0.83GB | | [potato_wizard_v38.Q5_K.gguf](https://huggingface.co/RichardErkhov/ShadrackImai_-_potato_wizard_v38-gguf/blob/main/potato_wizard_v38.Q5_K.gguf) | Q5_K | 0.85GB | | 
[potato_wizard_v38.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/ShadrackImai_-_potato_wizard_v38-gguf/blob/main/potato_wizard_v38.Q5_K_M.gguf) | Q5_K_M | 0.85GB | | [potato_wizard_v38.Q5_1.gguf](https://huggingface.co/RichardErkhov/ShadrackImai_-_potato_wizard_v38-gguf/blob/main/potato_wizard_v38.Q5_1.gguf) | Q5_1 | 0.89GB | | [potato_wizard_v38.Q6_K.gguf](https://huggingface.co/RichardErkhov/ShadrackImai_-_potato_wizard_v38-gguf/blob/main/potato_wizard_v38.Q6_K.gguf) | Q6_K | 0.95GB | | [potato_wizard_v38.Q8_0.gguf](https://huggingface.co/RichardErkhov/ShadrackImai_-_potato_wizard_v38-gguf/blob/main/potato_wizard_v38.Q8_0.gguf) | Q8_0 | 1.23GB | Original model description: --- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. 
--> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
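For local inference, a minimal sketch with llama-cpp-python, using one of the quant filenames from the table above (the Q4_K_M file):

```python
# Sketch: download one quant from the table and run it with llama-cpp-python.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

repo_id = "RichardErkhov/ShadrackImai_-_potato_wizard_v38-gguf"
gguf_path = hf_hub_download(repo_id=repo_id, filename="potato_wizard_v38.Q4_K_M.gguf")

llm = Llama(model_path=gguf_path, n_ctx=2048)
out = llm("Q: Describe this model in one sentence.\nA:", max_tokens=64)
print(out["choices"][0]["text"])
```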
wangzech/dummy-model
wangzech
"2024-11-18T11:17:03Z"
116
0
transformers
[ "transformers", "safetensors", "camembert", "fill-mask", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
"2024-11-18T11:15:47Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
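Since the card body is an unfilled template, here is a minimal fill-mask sketch based only on the repository tags (camembert, fill-mask); CamemBERT-style tokenizers use `<mask>` as the mask token:

```python
# Minimal fill-mask sketch inferred from the repo tags (camembert, fill-mask).
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="wangzech/dummy-model")
for pred in fill_mask("Le camembert est <mask>."):
    print(f"{pred['token_str']!r}: {pred['score']:.3f}")
```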
CLMBR/re-irr-sv-agr-transformer-2
CLMBR
"2024-02-03T01:55:00Z"
1
0
transformers
[ "transformers", "pytorch", "opt", "text-generation", "generated_from_trainer", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
"2024-01-25T08:50:57Z"
--- tags: - generated_from_trainer model-index: - name: re-irr-sv-agr-transformer-2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # re-irr-sv-agr-transformer-2 This model is a fine-tuned version of [](https://huggingface.co/) on the None dataset. It achieves the following results on the evaluation set: - Loss: 3.8837 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 2 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 3052726 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:-------:|:---------------:| | 4.2207 | 0.03 | 76320 | 4.2149 | | 4.0118 | 0.03 | 152640 | 4.0444 | | 3.905 | 1.03 | 228960 | 3.9700 | | 3.8377 | 0.03 | 305280 | 3.9284 | | 3.7866 | 1.03 | 381600 | 3.9029 | | 3.7487 | 0.03 | 457920 | 3.8875 | | 3.7143 | 1.03 | 534240 | 3.8776 | | 3.6817 | 0.03 | 610560 | 3.8704 | | 3.6543 | 1.03 | 686880 | 3.8656 | | 3.6302 | 0.03 | 763200 | 3.8644 | | 3.6039 | 1.03 | 839520 | 3.8619 | | 3.5885 | 0.03 | 915840 | 3.8621 | | 3.5743 | 1.03 | 992160 | 3.8626 | | 3.5508 | 0.03 | 1068480 | 3.8623 | | 3.5341 | 0.03 | 1144800 | 3.8637 | | 3.5187 | 1.03 | 1221120 | 3.8642 | | 3.5007 | 0.03 | 1297440 | 3.8653 | | 3.4857 | 1.03 | 1373760 | 3.8663 | | 3.4733 | 0.03 | 1450080 | 3.8689 | | 3.4644 | 1.03 | 1526400 | 3.8697 | | 3.4563 | 0.03 | 1602720 | 3.8720 | | 3.4494 | 1.03 | 1679040 | 3.8730 | | 3.4406 | 0.03 | 1755360 | 3.8751 | | 3.4292 | 1.03 | 1831680 | 3.8765 | | 3.417 | 0.03 | 1908000 | 3.8777 | | 3.4052 | 0.03 | 1984320 | 3.8789 | | 3.3902 | 1.03 | 2060640 | 3.8804 | | 3.3852 | 0.03 | 2136960 | 3.8814 | | 3.3769 | 1.03 | 2213280 | 3.8831 | | 3.3628 | 0.03 | 2289600 | 3.8843 | | 3.3514 | 1.03 | 2365920 | 3.8848 | | 3.3403 | 0.03 | 2442240 | 3.8864 | | 3.3273 | 1.03 | 2518560 | 3.8858 | | 3.3181 | 0.03 | 2594880 | 3.8867 | | 3.3105 | 0.03 | 2671200 | 3.8869 | | 3.3043 | 0.03 | 2747520 | 3.8869 | | 3.2981 | 1.03 | 2823840 | 3.8864 | | 3.2953 | 0.03 | 2900160 | 3.8864 | | 3.2897 | 1.03 | 2976480 | 3.8851 | | 3.2825 | 0.02 | 3052726 | 3.8837 | ### Framework versions - Transformers 4.33.3 - Pytorch 2.0.1 - Datasets 2.12.0 - Tokenizers 0.13.3
CMU-AIR2/math-llama_3_instruct-model-arith-6k
CMU-AIR2
"2024-05-28T14:15:37Z"
1
0
peft
[ "peft", "safetensors", "llama", "arxiv:1910.09700", "base_model:meta-llama/Meta-Llama-3-8B-Instruct", "base_model:adapter:meta-llama/Meta-Llama-3-8B-Instruct", "region:us" ]
null
"2024-05-27T22:05:14Z"
--- library_name: peft base_model: meta-llama/Meta-Llama-3-8B-Instruct --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.8.2
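The PEFT framework footer and the base_model field indicate a LoRA adapter for meta-llama/Meta-Llama-3-8B-Instruct; a minimal attachment sketch (the base model is gated, so prior access approval and a Hub token are assumed):

```python
# Sketch: attach the PEFT adapter to the gated Llama-3-8B-Instruct base model.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "meta-llama/Meta-Llama-3-8B-Instruct"
adapter_id = "CMU-AIR2/math-llama_3_instruct-model-arith-6k"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.bfloat16, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)

messages = [{"role": "user", "content": "Compute 137 + 488 step by step."}]
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(input_ids, max_new_tokens=64)[0], skip_special_tokens=True))
```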
jdeklerk10/OCI-DS-6.7B-schema_2
jdeklerk10
"2024-04-13T20:52:22Z"
1
0
peft
[ "peft", "safetensors", "trl", "sft", "generated_from_trainer", "base_model:m-a-p/OpenCodeInterpreter-DS-6.7B", "base_model:adapter:m-a-p/OpenCodeInterpreter-DS-6.7B", "license:apache-2.0", "region:us" ]
null
"2024-04-13T19:31:58Z"
--- license: apache-2.0 library_name: peft tags: - trl - sft - generated_from_trainer base_model: m-a-p/OpenCodeInterpreter-DS-6.7B model-index: - name: OCI-DS-6.7B-schema_2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # OCI-DS-6.7B-schema_2 This model is a fine-tuned version of [m-a-p/OpenCodeInterpreter-DS-6.7B](https://huggingface.co/m-a-p/OpenCodeInterpreter-DS-6.7B) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.0000 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - gradient_accumulation_steps: 8 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.01 - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 0.7374 | 0.19 | 50 | 0.0000 | | 0.4543 | 0.38 | 100 | 0.0000 | | 5.0784 | 0.57 | 150 | 0.0000 | | 0.0 | 0.76 | 200 | 0.0000 | | 22.4999 | 0.95 | 250 | 0.0000 | ### Framework versions - PEFT 0.10.0 - Transformers 4.40.0.dev0 - Pytorch 2.2.1+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
Feluda/Zephyr-7b-QnA
Feluda
"2024-01-30T13:34:03Z"
15
2
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "PEFT", "sft", "TensorBoard ", "Safetensors", " trl", "generated_from_trainer 4-bit", " precision", "question-answering", "en", "dataset:yahma/alpaca-cleaned", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "4-bit", "gptq", "region:us" ]
question-answering
"2024-01-28T16:35:47Z"
--- library_name: transformers tags: - PEFT - mistral - sft - 'TensorBoard ' - Safetensors - ' trl' - generated_from_trainer 4-bit - ' precision' license: mit datasets: - yahma/alpaca-cleaned language: - en pipeline_tag: question-answering --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> This model is fine-tuned for document question answering. It was trained on the [yahma/alpaca-cleaned](https://huggingface.co/datasets/yahma/alpaca-cleaned) dataset on top of [zephyr-7B-beta-GPTQ](https://huggingface.co/TheBloke/zephyr-7B-beta-GPTQ). ## Model Details ### Training hyperparameters The following hyperparameters were used during training: - gradient_accumulation_steps=1 - warmup_steps=5 - max_steps=20 - learning_rate=2e-4 - fp16=not torch.cuda.is_bf16_supported() - bf16=torch.cuda.is_bf16_supported() - logging_steps=1 - optim="adamw_8bit" - weight_decay=0.01 - lr_scheduler_type="linear" - seed=3407 ### Framework versions - PEFT 0.7.1 - Transformers 4.36.0 - Pytorch 2.0.0 - Datasets 2.16.1 - Tokenizers 0.15.0
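The repo tags mark this as a 4-bit GPTQ checkpoint; a minimal sketch, assuming `optimum` and a GPTQ backend (e.g. auto-gptq) are installed so transformers can load the quantized weights:

```python
# Sketch: load the 4-bit GPTQ checkpoint with transformers (needs optimum + a GPTQ backend).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Feluda/Zephyr-7b-QnA"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "Question: What is the capital of France?\nAnswer:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0], skip_special_tokens=True))
```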
juice0630/thanos_5e-4_1000_only_text_encoder
juice0630
"2025-03-24T10:28:16Z"
0
0
diffusers
[ "diffusers", "text-to-image", "lora", "diffusers-training", "stable-diffusion", "stable-diffusion-diffusers", "base_model:stable-diffusion-v1-5/stable-diffusion-v1-5", "base_model:adapter:stable-diffusion-v1-5/stable-diffusion-v1-5", "license:creativeml-openrail-m", "region:us" ]
text-to-image
"2025-03-24T10:21:02Z"
--- base_model: stable-diffusion-v1-5/stable-diffusion-v1-5 library_name: diffusers license: creativeml-openrail-m inference: true instance_prompt: a photo of sks thanos tags: - text-to-image - diffusers - lora - diffusers-training - stable-diffusion - stable-diffusion-diffusers --- <!-- This model card has been generated automatically according to the information the training script had access to. You should probably proofread and complete it, then remove this comment. --> # LoRA DreamBooth - juice0630/thanos_5e-4_1000_only_text_encoder These are LoRA adaption weights for stable-diffusion-v1-5/stable-diffusion-v1-5. The weights were trained on a photo of sks thanos using [DreamBooth](https://dreambooth.github.io/). You can find some example images in the following. ![img_0](./image_0.png) ![img_1](./image_1.png) ![img_2](./image_2.png) ![img_3](./image_3.png) LoRA for the text encoder was enabled: False. ## Intended uses & limitations #### How to use ```python # TODO: add an example code snippet for running this diffusion pipeline ``` #### Limitations and bias [TODO: provide examples of latent issues and potential remediations] ## Training details [TODO: describe the data used to train the model]
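The card leaves the inference snippet as a TODO; a minimal sketch, assuming the repo stores standard diffusers LoRA weights and using the instance prompt from the metadata (`a photo of sks thanos`):

```python
# Sketch: apply the DreamBooth LoRA to the SD 1.5 base and render the instance prompt.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "stable-diffusion-v1-5/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("juice0630/thanos_5e-4_1000_only_text_encoder")

image = pipe("a photo of sks thanos", num_inference_steps=30).images[0]
image.save("sks_thanos.png")
```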
CarelS/distilbert-base-uncased-finetuned-squad
CarelS
"2024-06-10T08:35:34Z"
124
0
transformers
[ "transformers", "tensorboard", "safetensors", "distilbert", "question-answering", "generated_from_trainer", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "endpoints_compatible", "region:us" ]
question-answering
"2024-06-10T08:30:14Z"
--- license: apache-2.0 base_model: distilbert-base-uncased tags: - generated_from_trainer model-index: - name: distilbert-base-uncased-finetuned-squad results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-squad This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Framework versions - Transformers 4.41.2 - Pytorch 2.3.0+cu121 - Datasets 2.19.2 - Tokenizers 0.19.1
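A minimal extractive question-answering sketch for this checkpoint (the card reports no evaluation results, so treat outputs accordingly):

```python
# Minimal extractive QA sketch with the fine-tuned checkpoint.
from transformers import pipeline

qa = pipeline("question-answering", model="CarelS/distilbert-base-uncased-finetuned-squad")
result = qa(
    question="Which optimizer was used?",
    context="The model was fine-tuned for three epochs with the Adam optimizer and a learning rate of 2e-05.",
)
print(result["answer"], round(result["score"], 3))
```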
kostiantynk1205/7383fa3d-96c6-4cc5-b793-5edc11f076a6
kostiantynk1205
"2025-01-25T15:37:51Z"
5
0
peft
[ "peft", "safetensors", "gpt_neox", "axolotl", "generated_from_trainer", "base_model:EleutherAI/pythia-410m-deduped", "base_model:adapter:EleutherAI/pythia-410m-deduped", "license:apache-2.0", "region:us" ]
null
"2025-01-25T15:36:47Z"
--- library_name: peft license: apache-2.0 base_model: EleutherAI/pythia-410m-deduped tags: - axolotl - generated_from_trainer model-index: - name: 7383fa3d-96c6-4cc5-b793-5edc11f076a6 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: EleutherAI/pythia-410m-deduped bf16: auto chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - 10db1ccfab229c76_train_data.json ds_type: json format: custom path: /workspace/input_data/10db1ccfab229c76_train_data.json type: field_input: system field_instruction: prompt field_output: accepted format: '{instruction} {input}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null early_stopping_patience: null eval_max_new_tokens: 128 eval_table_size: null evals_per_epoch: 4 flash_attention: false fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 4 gradient_checkpointing: false group_by_length: false hub_model_id: kostiantynk1205/7383fa3d-96c6-4cc5-b793-5edc11f076a6 hub_repo: null hub_strategy: checkpoint hub_token: null learning_rate: 0.0002 load_in_4bit: false load_in_8bit: false local_rank: null logging_steps: 1 lora_alpha: 16 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 8 lora_target_linear: true lr_scheduler: cosine max_steps: 10 micro_batch_size: 2 mlflow_experiment_name: /tmp/10db1ccfab229c76_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false saves_per_epoch: 4 sequence_len: 512 special_tokens: pad_token: <|endoftext|> strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: bd6dd587-5480-4a03-99d6-0fc05c04b60b wandb_project: Birthday-SN56-23-Gradients-On-Demand wandb_run: your_name wandb_runid: bd6dd587-5480-4a03-99d6-0fc05c04b60b warmup_steps: 10 weight_decay: 0.0 xformers_attention: null ``` </details><br> # 7383fa3d-96c6-4cc5-b793-5edc11f076a6 This model is a fine-tuned version of [EleutherAI/pythia-410m-deduped](https://huggingface.co/EleutherAI/pythia-410m-deduped) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 1.6873 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 8 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 10 - training_steps: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 6.7677 | 0.0005 | 1 | 1.7233 | | 6.3092 | 0.0016 | 3 | 1.7237 | | 7.0638 | 0.0031 | 6 | 1.7149 | | 6.5794 | 0.0047 | 9 | 1.6873 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
lun-4/dril-instruct
lun-4
"2023-06-06T02:33:51Z"
0
0
null
[ "arxiv:1910.09700", "license:wtfpl", "region:us" ]
null
"2023-06-05T23:20:27Z"
--- license: wtfpl --- # dril-instruct putting [dril](https://twitter.com/dril) in a jar for future generations. ![picture of dril in a jar](https://github.com/lun-4/dril-instruct/blob/mistress/files/driljar.jpg?raw=true) ## Model Details ### Model Description a LoRA finetune on top of [vicuna-13b-cocktail](https://huggingface.co/reeducator/vicuna-13b-cocktail), made to act like dril's tweets when given an instruction to "Create a joke about X", where X is anything. - **Developed by:** [lun-4](https://l4.pm), [dither](https://github.com/dithercat) - **Model type:** LoRA - **Language(s) (NLP):** English - **License:** WTFPL - **Finetuned from model:** [vicuna-13b-cocktail](https://huggingface.co/reeducator/vicuna-13b-cocktail) ### Model Sources [optional] - **Repository:** https://github.com/lun-4/dril-instruct - **Blogpost:** https://l4.pm/wiki/Personal%20Wiki/AI%20stuff/dril-instruct.html - **Demo**: Nope ## Uses by shitposters, for shitposting, this isn't useful for any other purpose ### Out-of-Scope Use literally anything other than shitposting ## Bias, Risks, and Limitations this model was finetuned on cocktail, which was made from [vicuna](https://lmsys.org/blog/2023-03-30-vicuna/) but without the "ethical guardrails" AKA "As an AI language model, I can't" responses ## How to Get Started with the Model 1. get [text-generation-webui](https://github.com/oobabooga/text-generation-webui) 2. get the base model, [vicuna-13b-cocktail](https://huggingface.co/reeducator/vicuna-13b-cocktail) in the models folder 3. put this in the loras folder 4. there is a lot of hacking that i had to do to make loras work with GPTQ quantized models on my machine. those hacks are not portable 5. use the "Create a joke about X" model template. ## Training Details ### Training Data around 3K dril tweets (that's what snscrape could get, even though there's 12K reported by twitter), and some 10 or so hand-made instructions to the dril tweets ### Training Procedure see blogpost #### Preprocessing [optional] see blogpost pls #### Training Hyperparameters - **Training regime:** int8 ## Evaluation see blogpost pls ## Environmental Impact fuck if i know <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** NVidia A100 80GB - **Hours used:** 1, not including 4 hours of trying to bang a training script together - **Cloud Provider:** [RunPod](https://runpod.io) - **Compute Region:** EU-Norway - **Carbon Emitted:** 0.1 kg CO2 eq. (0.47 if you include the "banging rocks together" step)
gsotnikov/ppo-LunarLander-v2
gsotnikov
"2023-05-24T21:22:20Z"
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
"2023-05-24T21:21:58Z"
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 216.37 +/- 55.14 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
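The card leaves the usage section as a TODO; a minimal sketch following the usual stable-baselines3 / huggingface_sb3 pattern (the checkpoint filename `ppo-LunarLander-v2.zip` is the course convention and is an assumption here):

```python
# Sketch: download the PPO checkpoint and roll out one episode (filename is assumed).
import gymnasium as gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

checkpoint = load_from_hub(repo_id="gsotnikov/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)

env = gym.make("LunarLander-v2")
obs, _ = env.reset()
done, total_reward = False, 0.0
while not done:
    action, _ = model.predict(obs, deterministic=True)
    obs, reward, terminated, truncated, _ = env.step(action)
    total_reward += reward
    done = terminated or truncated
print("episode reward:", total_reward)
```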
lesso13/f77c4e4c-9f90-4b42-ba25-abe5276e3f78
lesso13
"2025-03-17T01:55:22Z"
0
0
peft
[ "peft", "safetensors", "opt", "axolotl", "generated_from_trainer", "base_model:facebook/opt-350m", "base_model:adapter:facebook/opt-350m", "license:other", "region:us" ]
null
"2025-03-17T01:38:25Z"
--- library_name: peft license: other base_model: facebook/opt-350m tags: - axolotl - generated_from_trainer model-index: - name: f77c4e4c-9f90-4b42-ba25-abe5276e3f78 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: facebook/opt-350m bf16: auto chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - 2ba24a7fc70f2a55_train_data.json ds_type: json format: custom path: /workspace/input_data/2ba24a7fc70f2a55_train_data.json type: field_input: operators field_instruction: question_text field_output: decomposition format: '{instruction} {input}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null do_eval: true early_stopping_patience: 3 eval_batch_size: 4 eval_max_new_tokens: 128 eval_steps: 500 evals_per_epoch: null flash_attention: true fp16: false fsdp: null fsdp_config: null gradient_accumulation_steps: 8 gradient_checkpointing: true group_by_length: true hub_model_id: lesso13/f77c4e4c-9f90-4b42-ba25-abe5276e3f78 hub_repo: null hub_strategy: checkpoint hub_token: null learning_rate: 0.000213 load_in_4bit: false load_in_8bit: false local_rank: null logging_steps: 50 lora_alpha: 128 lora_dropout: 0.15 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 64 lora_target_linear: true lr_scheduler: cosine max_grad_norm: 1.0 max_steps: 500 micro_batch_size: 4 mlflow_experiment_name: /tmp/2ba24a7fc70f2a55_train_data.json model_type: AutoModelForCausalLM num_epochs: 10 optimizer: adamw_torch_fused output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false save_steps: 500 saves_per_epoch: null seed: 130 sequence_len: 1024 strict: false tf32: true tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: ed2512e8-5071-4d76-b062-b4cab28befe7 wandb_project: 13a wandb_run: your_name wandb_runid: ed2512e8-5071-4d76-b062-b4cab28befe7 warmup_steps: 100 weight_decay: 0.0 xformers_attention: null ``` </details><br> # f77c4e4c-9f90-4b42-ba25-abe5276e3f78 This model is a fine-tuned version of [facebook/opt-350m](https://huggingface.co/facebook/opt-350m) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: nan ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.000213 - train_batch_size: 4 - eval_batch_size: 4 - seed: 130 - gradient_accumulation_steps: 8 - total_train_batch_size: 32 - optimizer: Use OptimizerNames.ADAMW_TORCH_FUSED with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 100 - training_steps: 500 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | No log | 0.0006 | 1 | nan | | 6.7688 | 0.2838 | 500 | nan | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
baek26/all_2022_all_6417_bart-base_rl
baek26
"2024-05-07T10:23:31Z"
50
0
transformers
[ "transformers", "safetensors", "bart", "text2text-generation", "trl", "ppo", "reinforcement-learning", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
reinforcement-learning
"2024-05-07T10:22:55Z"
--- license: apache-2.0 tags: - trl - ppo - transformers - reinforcement-learning --- # TRL Model This is a [TRL language model](https://github.com/huggingface/trl) that has been fine-tuned with reinforcement learning to guide the model outputs according to a value, function, or human feedback. The model can be used for text generation. ## Usage To use this model for inference, first install the TRL library: ```bash python -m pip install trl ``` You can then generate text as follows: ```python from transformers import pipeline generator = pipeline("text-generation", model="baek26//tmp/tmpviw9847h/baek26/all_2022_all_6417_bart-base_rl") outputs = generator("Hello, my llama is cute") ``` If you want to use the model for training or to obtain the outputs from the value head, load the model as follows: ```python from transformers import AutoTokenizer from trl import AutoModelForCausalLMWithValueHead tokenizer = AutoTokenizer.from_pretrained("baek26//tmp/tmpviw9847h/baek26/all_2022_all_6417_bart-base_rl") model = AutoModelForCausalLMWithValueHead.from_pretrained("baek26//tmp/tmpviw9847h/baek26/all_2022_all_6417_bart-base_rl") inputs = tokenizer("Hello, my llama is cute", return_tensors="pt") outputs = model(**inputs, labels=inputs["input_ids"]) ```
kneau007/my-classifier-2
kneau007
"2025-03-15T00:34:38Z"
0
0
setfit
[ "setfit", "safetensors", "mpnet", "sentence-transformers", "text-classification", "generated_from_setfit_trainer", "arxiv:2209.11055", "base_model:sentence-transformers/paraphrase-mpnet-base-v2", "base_model:finetune:sentence-transformers/paraphrase-mpnet-base-v2", "model-index", "region:us" ]
text-classification
"2025-03-15T00:34:17Z"
--- tags: - setfit - sentence-transformers - text-classification - generated_from_setfit_trainer widget: - text: '"MSME AUTO RENEWAL renewal criteria for defaults?"' - text: '"HARIT current account NRI eligibility for overseas projects?"' - text: '"Current account disaster relief funding?"' - text: '"How to track Foreign Exchange transactions online?"' - text: '"RERA Current account insurance for project risks?"' metrics: - accuracy pipeline_tag: text-classification library_name: setfit inference: true base_model: sentence-transformers/paraphrase-mpnet-base-v2 model-index: - name: SetFit with sentence-transformers/paraphrase-mpnet-base-v2 results: - task: type: text-classification name: Text Classification dataset: name: Unknown type: unknown split: test metrics: - type: accuracy value: 0.9351851851851852 name: Accuracy --- # SetFit with sentence-transformers/paraphrase-mpnet-base-v2 This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification. This SetFit model uses [sentence-transformers/paraphrase-mpnet-base-v2](https://huggingface.co/sentence-transformers/paraphrase-mpnet-base-v2) as the Sentence Transformer embedding model. A [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance is used for classification. The model has been trained using an efficient few-shot learning technique that involves: 1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning. 2. Training a classification head with features from the fine-tuned Sentence Transformer. ## Model Details ### Model Description - **Model Type:** SetFit - **Sentence Transformer body:** [sentence-transformers/paraphrase-mpnet-base-v2](https://huggingface.co/sentence-transformers/paraphrase-mpnet-base-v2) - **Classification head:** a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance - **Maximum Sequence Length:** 512 tokens - **Number of Classes:** 135 classes <!-- - **Training Dataset:** [Unknown](https://huggingface.co/datasets/unknown) --> <!-- - **Language:** Unknown --> <!-- - **License:** Unknown --> ### Model Sources - **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit) - **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055) - **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit) ### Model Labels | Label | Examples | |:-----------------------------------------------------------------------------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Remittance To India | <ul><li>'"International Remittance credit guarantee coverage?"'</li><li>'"Remittance loan eligibility?"'</li><li>'"Transfer Funds digital banking features?"'</li></ul> | | Executive Director’s Profile | <ul><li>'"Executive Director’s role in digital transformation initiatives?"'</li><li>'"Profile of Executive Director accessibility features?"'</li><li>'"Executive Director’s strategic vision for the bank?"'</li></ul> | | INDIAN BANK MUTUAL FUND | <ul><li>'"MUTUAL FUND digital banking features?"'</li><li>'"MUTUAL FUND customer support channels?"'</li><li>'"MUTUAL FUND disaster relief funding?"'</li></ul> | | IND MSME VEHICLE | 
<ul><li>'"small business vehicle legal compliance certification?"'</li><li>'"MSME Vehicle tax benefits under Section 80C?"'</li><li>'"msme vaahan e-commerce integration support?"'</li></ul> | | Awards & Accolades | <ul><li>'"Awards employee participation incentives?"'</li><li>'"Recognition program budget allocation?"'</li><li>'"Recognition program judging panel?"'</li></ul> | | Supply Chain Finance | <ul><li>'"Trade Finance invoice verification process?"'</li><li>'"Supply Chain Financing Solutions for exporters?"'</li><li>'"Trade Finance eligibility for agricultural suppliers?"'</li></ul> | | Merchant UPI QR Code | <ul><li>'"UPI QR Code Merchants transaction limits per day?"'</li><li>'"UPI Payment Code branding customization rules?"'</li><li>'"Merchant UPI QR Code failure troubleshooting?"'</li></ul> | | Loan/OD against Deposit | <ul><li>'"OD Deposit renewal criteria for existing loans?"'</li><li>'"Overdraft Deposit repayment tenure options?"'</li><li>'"OD Deposit SMS alerts for EMI reminders?"'</li></ul> | | Re-KYC: Periodic Update of KYC Details | <ul><li>'"Re-KYC Periodic KYC corporate account rules?"'</li><li>'"Periodic KYC digital submission process?"'</li><li>'"Re-KYC update confirmation process?"'</li></ul> | | IND SUPER 400 DAYS | <ul><li>'"SUPER 400 DAYS deposit interest calculation method?"'</li><li>'"SUPER 400 DAYS corporate investment limits?"'</li><li>'"SUPER 400 DAYS minor account rules?"'</li></ul> | |  Centralized Pension Processing Centre | <ul><li>'Life certificate submission deadlines?'</li><li>'Tax deduction at source procedures?'</li><li>'Disaster recovery protocols at Management Centre?'</li></ul> | | CA FOR STATE /CENTRAL GOVT & CONSULAR & IND PFMS | <ul><li>'"CA State Central Government digital application process?"'</li><li>'"CA central govt collateral-free limits for public works?"'</li><li>'"CA STATE CENTRAL GOVT CONSULAR PFMS grace period for delayed payments?"'</li></ul> | | Indian Bank, IFSC Banking Unit, GIFT City | <ul><li>'"IFSC Unit GIFT City tax benefits under Section 80G?"'</li><li>'"IFSC Unit GIFT City legal compliance certification?"'</li><li>'"IFSC Unit GIFT City documentation requirements?"'</li></ul> | | F.A.Qs | <ul><li>'"F.A.Qs accessibility features for disabled users?"'</li><li>'"Where to find updated F.A.Qs for internet banking?"'</li><li>'"SCF performance benchmarking tools?"'</li></ul> | |  IND GST ADVANTAGE | <ul><li>'"GST Advantage NRI eligibility for overseas businesses?"'</li><li>'"GST Advantage foreign currency loan options?"'</li><li>'"GST Advantage grace period for delayed refunds?"'</li></ul> | | Digitizing the Indian Banking Experience | <ul><li>'"Digitizing experience corporate account integration?"'</li><li>'"Digitizing experience SMS alert customization?"'</li><li>'"Digitizing experience insurance policy management?"'</li></ul> | | Terms and Conditions Indian Bank Digital Rupee | <ul><li>'"Digital Rupee Terms Conditions for NRI accounts?"'</li><li>'"Digital Rupee Terms Conditions for customer education?"'</li><li>'"Digital Rupee Terms Conditions for tax compliance?"'</li></ul> | | National Common Mobility Card (NCMC) | <ul><li>'"NCMC integration with metro systems?"'</li><li>'"National Common Mobility Card insurance coverage?"'</li><li>'"National Common Mobility Card lost card reporting?"'</li></ul> | | IMAGE | <ul><li>'"Tenders pre-bid meeting schedules?"'</li><li>'"Auctions bid security deposit amounts?"'</li><li>'"Auctions penalty clauses for delays?"'</li></ul> | | IndSMART: Indian Bank’s Omni channel Mobile App | 
<ul><li>'"IndSMART Omni channel App investment tracking?"'</li><li>'"IndSMART Mobile app transaction limits?"'</li><li>'"Omni channel App insurance policy management?"'</li></ul> | | Disclaimer | <ul><li>'Electronic acceptance validity period?'</li><li>'Terms of Use prohibited activities?'</li><li>'Disclaimer third-party content policies?'</li></ul> | | Term Deposits | <ul><li>'"Term Deposits nomination update process?"'</li><li>'"Floating-rate Term Deposits criteria?"'</li><li>'"Interest calculation method for Term Deposits?"'</li></ul> | | Interest Rates for Small Savings Schemes | <ul><li>'"Small Savings digital banking features?"'</li><li>'"Small Savings insurance coverage?"'</li><li>'"Small Savings Rates for short-term deposits?"'</li></ul> | | NRI and Forex | <ul><li>'"NRI Forex tax implications for remittances?"'</li><li>'"NRI Forex Exchange joint holder norms?"'</li><li>'"NRI Forex insurance coverage options?"'</li></ul> | | Azadi Ka Amrit Mahotsav #TogetherforBiggerThings | <ul><li>'"Azadi Ka Amrit Mahotsav financial literacy drives?"'</li><li>'"Azadi Ka Amrit Mahotsav disaster relief funding?"'</li><li>'"Azadi Ka Amrit Mahotsav digital transformation goals?"'</li></ul> | | IB MSME Jewel Loan | <ul><li>'"msme jewelry loan digital application steps?"'</li><li>'"MSME Jewel Loan disaster relief provisions?"'</li><li>'"msme gold loan SMS alerts setup?"'</li></ul> | | IB Golden Ager Term Deposit Product for Super Senior Citizens | <ul><li>'"Golden Ager Deposit Super Seniors survivor benefits?"'</li><li>'"Term Deposit Senior Citizens interest payout modes?"'</li><li>'"Golden Ager Term Deposit Product dementia patient provisions?"'</li></ul> | | What is a Credit Score and How it Shapes Financial Health | <ul><li>'"Credit Score NRI-specific considerations?"'</li><li>'"Credit Score tracking frequency?"'</li><li>'"Credit Score Financial Health impact on credit cards?"'</li></ul> | | IMPORTANT TERMS AND CONDITIONS CA | <ul><li>'"current Account conditions for overdraft facilities?"'</li><li>'"current Account conditions for mentorship programs?"'</li><li>'"TERMS CONDITIONS CA for disaster relief funding?"'</li></ul> | | Credit Cards | <ul><li>'"Credit card spend analytics tools?"'</li><li>'"Credit card application eligibility criteria?"'</li><li>'"Credit card billing cycle customization?"'</li></ul> | | Managing Director & CEO’s Profile | <ul><li>'"CEO Profile international engagements list?"'</li><li>'"Managing Director’s industry awards list?"'</li><li>'"Managing Director’s branch visit calendar?"'</li></ul> | | Jewel Loan | <ul><li>'"Gold Loan locker linkage benefits?"'</li><li>'"Gold purity certification requirements for loans?"'</li><li>'"Gold Loan joint holder nomination rules?"'</li></ul> | | Corporate Social Responsibility | <ul><li>'"CSR disaster relief protocols for flood-affected areas?"'</li><li>'"CSR transparency in fund utilization audits?"'</li><li>'"Employee participation process in CSR volunteer programs?"'</li></ul> | | Scheme on financing Manufacturers / Suppliers / Vendors of Solar panels and other ancillary products | <ul><li>'"Solar Panel loan mentorship programs?"'</li><li>'"Solar Panel loan interest rate concessions?"'</li><li>'"Financing Scheme Solar Panel Suppliers tax benefits?"'</li></ul> | | Board of Directors | <ul><li>'"Directors digital transformation strategies?"'</li><li>'"Directors policy implementation guidelines?"'</li><li>'"Board Directors legal compliance certifications?"'</li></ul> | | Quick Contact | <ul><li>'"Quick Contact feedback submission 
process?"'</li><li>'"Instant Contact response timelines?"'</li><li>'"contact details for corporate accounts?"'</li></ul> | | Vision and Mission | <ul><li>'"Vision customer-centric policy changes?"'</li><li>'"Mission employee training modules?"'</li><li>'"Vision-driven product development strategies?"'</li></ul> | | Corporate | <ul><li>'"Corporate Banking loan application workflow?"'</li><li>'"Business Banking payroll integration?"'</li><li>'"Corporate trade finance credit lines?"'</li></ul> | | Service Charges / Forex Rates | <ul><li>'"Currency Exchange Charges for bulk remittances?"'</li><li>'"Currency Exchange Rates holiday adjustments?"'</li><li>'"Currency Exchange Charges GST applicability?"'</li></ul> | | Welfare of Minorities | <ul><li>'"Welfare Minorities loan subsidy schemes?"'</li><li>'"Welfare Minorities microfinance interest rates?"'</li><li>'"Welfare Minorities agricultural subsidy schemes?"'</li></ul> | | Bancassurance | <ul><li>'"Bancassurance legal heir claims process?"'</li><li>'"insurance Solutions for digital transactions?"'</li><li>'"insurance Solutions for health coverage?"'</li></ul> | |  IND PM Vishwakarma | <ul><li>'"PM Vishwakarma NRI eligibility for overseas businesses?"'</li><li>'"PM Vishwakarma tax benefits under Section 80G?"'</li><li>'"PM Vishwakarma grievance escalation matrix?"'</li></ul> | | IB Tax Saver Scheme | <ul><li>'"Tax Saver insurance coverage details?"'</li><li>'"IB Tax Saver Scheme lock-in period options?"'</li><li>'"Tax Saver senior citizen benefits?"'</li></ul> | | Weaver MUDRA | <ul><li>'"Weaver MUDRA credit guarantee coverage?"'</li><li>'"Weaver MUDRA application documentation checklist?"'</li><li>'"Weaver MUDRA raw material purchase support?"'</li></ul> | | Chief Vigilance Officer | <ul><li>'"Chief Vigilance Officer policy exception approvals?"'</li><li>'"CVO investigation timelines for corruption cases?"'</li><li>'"CVO anonymous tip reward system?"'</li></ul> | | Careers | <ul><li>'"Career Opportunities relocation policies?"'</li><li>'"Career Opportunities remote work?"'</li><li>'"Career Opportunities sabbatical policies?"'</li></ul> | | Press Releases | <ul><li>'"Announcements sustainability targets?"'</li><li>'"Press Releases accessibility for visually impaired?"'</li><li>'"Press Releases archive accessibility on the website?"'</li></ul> | | Online Collection Products | <ul><li>'"Online Payment Collection webhook configurations?"'</li><li>'"Online Payment Collection settlement cycles?"'</li><li>'"E-Collection Products chargeback policies?"'</li></ul> | | IND-SURYA SHAKTI | <ul><li>'"SURYA SHAKTI repayment moratorium options?"'</li><li>'"SURYA SHAKTI foreign currency loan options?"'</li><li>'"SURYA SHAKTI credit guarantee coverage?"'</li></ul> | | IND-SME EASE | <ul><li>'"SME EASE disaster relief funding?"'</li><li>'"SME EASE foreign currency loan options?"'</li><li>'"SME EASE grace period for delayed payments?"'</li></ul> | | e-Allahabad Bank Journey | <ul><li>'Allahabad Online Journey third-party integrations?'</li><li>'Digital Journey e-Allahabad UPI integration?'</li><li>'Digital Journey e-Allahabad nominee update steps?'</li></ul> | | IndSMART | <ul><li>'"IndSMART investment tracking options?"'</li><li>'"IndSMART insurance policy management?"'</li><li>'"IndSMART mobile app download instructions?"'</li></ul> | | Fixed Deposit | <ul><li>'"Fixed Deposit SMS balance alerts?"'</li><li>'"Fixed Deposit dormant account reactivation?"'</li><li>'"Fixed Deposit grace period after maturity?"'</li></ul> | | Scheme for financing MSMEs for setting up 
Biomass Pellets manufacturing units | <ul><li>'"Biomass Pellet grace period for delayed payments?"'</li><li>'"Biomass Pellet interest rates for green energy projects?"'</li><li>'"Biomass Pellet grievance escalation matrix?"'</li></ul> | | TERMS AND CONDITIONS-TERM DEPOSIT ACCOUNT | <ul><li>'"Term Deposit Accounts conditions for insurance claims?"'</li><li>'"Term Deposit Accounts conditions for joint holders?"'</li><li>'"TERMS CONDITIONS TERM DEPOSIT ACCOUNT lien marking?"'</li></ul> | | IND SME Secure | <ul><li>'"SME Secure overdraft against property value?"'</li><li>'"SME Secure legal heir claims process?"'</li><li>'"SME Secure top-up facilities for expansion?"'</li></ul> | | Investor Relations | <ul><li>'"Investor Relations investor feedback mechanisms?"'</li><li>'"Investor Information digital access tools?"'</li><li>'"Investor Information accessibility for non-English speakers?"'</li></ul> | | Ind Advantage (Reward Program) | <ul><li>'Loyalty point accrual rate for credit cards?'</li><li>'Advantage Program tier upgrade requirements?'</li><li>'Advantage Reward point redemption tiers?'</li></ul> | | Welcome aboard! | <ul><li>'"Welcome aboard digital banking setup?"'</li><li>'"Welcome aboard corporate account registration?"'</li><li>'"Welcome aboard account upgrade options?"'</li></ul> | | Deposit Rates | <ul><li>'"Deposit Rates for disaster relief funding?"'</li><li>'"Deposit Rates for dormant accounts?"'</li><li>'"Deposit Rates for overdraft facilities?"'</li></ul> | |  Loyalty Program Delights: Unlocking Special Rewards for Your Loyalty | <ul><li>'"Loyalty Delights: Unlocking Special Rewards tax implications?"'</li><li>'"Loyalty Delights exclusive event invitations?"'</li><li>'"Loyalty Delights point transfer fees?"'</li></ul> | | NETC FASTag | <ul><li>'"FASTag lane violation penalties?"'</li><li>'"NETC Toll Tag damaged replacement?"'</li><li>'"NETC FASTag recharge limits?"'</li></ul> | | IB Tradewell | <ul><li>'"Trade Financing NRI trade loan eligibility?"'</li><li>'"Tradewell joint venture funding norms?"'</li><li>'"Tradewell foreclosure charges for early closure?"'</li></ul> | | PM Surya Ghar: Muft Bijli Yojna | <ul><li>'"PM Surya Ghar digital application portal?"'</li><li>'"PM Surya Ghar Muft Bijli Yojna eligibility criteria?"'</li><li>'"Free Electricity scheme NRI participation rules?"'</li></ul> | | IND SUPREME 300 DAYS | <ul><li>'"Supreme 300 Day Plan insurance coverage details?"'</li><li>'"Supreme 300 Day Plan premature withdrawal penalties?"'</li><li>'"SUPREME 300 DAYS deposit interest compounding frequency?"'</li></ul> | | Sukanya Samriddhi Account | <ul><li>'"Sukanya Samriddhi Account opening age limits?"'</li><li>'"Sukanya Samriddhi Account address update steps?"'</li><li>'"Sukanya Samriddhi Account nomination rules?"'</li></ul> | | IND NAVYA | <ul><li>'"IND NAVYA premium customer benefits?"'</li><li>'"IND NAVYA charity partnership programs?"'</li><li>'"Navya Services loan processing speed?"'</li></ul> | | Short Term Deposits | <ul><li>'"Short Term FD reinvestment options?"'</li><li>'"Grace period after Short Term FD maturity?"'</li><li>'"Minimum balance requirements for Short Term Deposits?"'</li></ul> | | Block Lost ATM Card | <ul><li>'"Block Lost Debit card via IVR options?"'</li><li>'"Deactivate Lost ATM and request replacement?"'</li><li>'"Lost ATM Reporting documentation needs?"'</li></ul> | | MSME DIGI Jewel Loan and Jewel Loan (Re-Pledge) | <ul><li>'"Re-Pledge Jewel MSME documentation for gold valuation?"'</li><li>'"MSME DIGI Jewel joint venture funding 
norms?"'</li><li>'"jewel pledge repayment moratorium options?"'</li></ul> | | Nodal Officers- Customer service | <ul><li>'"Nodal Officers legal compliance certifications?"'</li><li>'"Nodal Officers audit protocols?"'</li><li>'"Customer Service customer education initiatives?"'</li></ul> | | MSME AUTO RENEWAL | <ul><li>'"MSME AUTO RENEWAL dormant account reactivation?"'</li><li>'"MSME AUTO RENEWAL overdraft against renewed loans?"'</li><li>'"MSME AUTO RENEWAL foreclosure charges for early closure?"'</li></ul> | | IB COMFORT- DOMESTIC AND NRE | <ul><li>'"comfort NRE account grace period for minimum balance?"'</li><li>'"COMFORT DOMESTIC NRE account eligibility for NRIs?"'</li><li>'"COMFORT DOMESTIC NRE account mentorship programs?"'</li></ul> | | LOAN AGAINST LEASE RENTALS | <ul><li>'"Online tracking for LEASE RENTALS loan status?"'</li><li>'"Penalties for delayed LEASE RENTALS repayments?"'</li><li>'"Legal verification steps for LEASE RENTALS?"'</li></ul> | | IB HARIT | <ul><li>'"HARIT current account grace period for minimum balance?"'</li><li>'"HARIT current account mentorship programs?"'</li><li>'"HARIT current account disaster relief funding?"'</li></ul> | | SMS Banking/ Missed Call Service | <ul><li>'"Missed Call service for account statements?"'</li><li>'"Missed Call Banking for card blocking?"'</li><li>'"Missed Call service for FD renewals?"'</li></ul> | | MSME LAP | <ul><li>'"MSME Property loan insurance coverage?"'</li><li>'"MSME Property loan foreclosure charges?"'</li><li>'"MSME LAP renewal criteria for existing loans?"'</li></ul> | | Head Office Departments | <ul><li>'"Head Office employee training programs?"'</li><li>'"Headquarters CSR initiative approvals?"'</li><li>'"Head Office customer complaint escalation?"'</li></ul> | | Current Account | <ul><li>'"Current account SMS alerts for transactions?"'</li><li>'"Current account tax benefits under Section 80G?"'</li><li>'"Current account joint holder norms?"'</li></ul> | | IND COURT | <ul><li>'"IND COURT escrow account management?"'</li><li>'"IND COURT decree implementation guidelines?"'</li><li>'"COURT verdict compliance certification?"'</li></ul> | | Amalgamation | <ul><li>'Corporate Amalgamation tax implications?'</li><li>'Amalgamation grievance redressal contacts?'</li><li>'Amalgamation process timeline for corporate accounts?'</li></ul> | | IND Professional Special | <ul><li>'"Professional Special foreclosure charges for early closure?"'</li><li>'"Professional Special loan eligibility for doctors?"'</li><li>'"Professional Special documentation for practice licenses?"'</li></ul> | | Foreign Branches | <ul><li>'"International Branches disaster relief funding?"'</li><li>'"Foreign Branches credit guarantee coverage?"'</li><li>'"International Branches documentation requirements?"'</li></ul> | | Regulatory Disclosures Section | <ul><li>'"Banking Disclosures quarterly filing calendar?"'</li><li>'"Regulatory Information GDPR compliance status?"'</li><li>'"Regulatory Information update frequency for public records?"'</li></ul> | | N R I / Foreign Exchange | <ul><li>'"Foreign Currency Exchange lock-in period policies?"'</li><li>'"Foreign Exchange third-party authorization process?"'</li><li>'"NRI forex card reload limits?"'</li></ul> | | NRI A/cs | <ul><li>'"NRI account digital banking features?"'</li><li>'"Non Resident Indian account legal compliance certifications?"'</li><li>'"Non Resident Indian account tax implications?"'</li></ul> | | Reverse Mortgage | <ul><li>'"Interest rate types for Reverse Mortgage?"'</li><li>'"Reverse 
Mortgage counseling sessions availability?"'</li><li>'"Tax implications of Reverse Mortgage proceeds?"'</li></ul> | | Block Lost Credit Card | <ul><li>'"block credit card charges applicable?"'</li><li>'"report credit card loss via mobile app?"'</li><li>'"Credit Blocking confirmation receipt generation?"'</li></ul> | | Death Claim Settlement | <ul><li>'"Death Claim Settlement legal heir claims?"'</li><li>'"Insurance Claim escalation matrix?"'</li><li>'"Death Claim customer education resources?"'</li></ul> | | Online Loans | <ul><li>'"Digital loan tax certificate generation?"'</li><li>'"Digital loan foreclosure process?"'</li><li>'"Digital loan application turnaround time?"'</li></ul> | | Service Charges | <ul><li>'"Fee Structure for loan processing applications?"'</li><li>'"Service Charges for cheque return penalties?"'</li><li>'"Service Fees for standing instruction setup?"'</li></ul> | | Notifications | <ul><li>'"Alerts frequency settings for balance updates?"'</li><li>'"Notifications spam filtering mechanisms?"'</li><li>'"Alerts user consent management?"'</li></ul> | | Corporate Blog | <ul><li>'"Official Blog archive access permissions?"'</li><li>'"Corporate Blog GDPR compliance measures?"'</li><li>'"Corporate Blog SEO best practices?"'</li></ul> | | Premium Current Account | <ul><li>'"Premium Current account grievance escalation matrix?"'</li><li>'"Premium Current account foreclosure charges for early closure?"'</li><li>'"Premium Current account minimum balance requirements?"'</li></ul> | | Pre Approved Business Loan | <ul><li>'"Pre Approved Business loan legal heir claims?"'</li><li>'"Pre Approved Business loan documentation checklist?"'</li><li>'"Pre Approved Business loan digital application steps?"'</li></ul> | | Doorstep Banking | <ul><li>'"Doorstep service availability in rural areas?"'</li><li>'"Doorstep service blackout dates calendar?"'</li><li>'"Doorstep banking agent verification steps?"'</li></ul> | | Point of Sale (PoS) | <ul><li>'PoS transaction decline troubleshooting?'</li><li>'PoS user access levels?'</li><li>'Contactless payment support on PoS?'</li></ul> | | ATM / Debit Cards | <ul><li>'"Debit Cards digital application steps?"'</li><li>'"ATM Debit Cards foreign card transaction fees?"'</li><li>'"ATM Cards maintenance schedules?"'</li></ul> | | BHIM Aadhaar Pay | <ul><li>'"BHIM Aadhaar Pay offline mode?"'</li><li>'"BHIM Aadhaar Pay audit trails?"'</li><li>'"Aadhaar authentication charges?"'</li></ul> | | Capital Gains | <ul><li>'"Capital Gains carry-forward rules?"'</li><li>'"Capital Gains Tax documentation for IT returns?"'</li><li>'"Capital Gains exemptions under Section 54EC?"'</li></ul> | | Indian Bank Financial Results | <ul><li>'"Annual Financial Results disaster relief funding?"'</li><li>'"Annual Financial Results media coverage archive?"'</li><li>'"Financial Performance comparison with industry benchmarks?"'</li></ul> | | Customer Centric Services | <ul><li>'"Customer-Oriented Banking priority queuing?"'</li><li>'"Customer Service multilingual support availability?"'</li><li>'"Customer Service complaint escalation matrix?"'</li></ul> | | Positive Pay System (PPS) | <ul><li>'"Positive Pay Fraud Prevention for high-value cheques?"'</li><li>'"Positive Pay System (PPS) digital onboarding?"'</li><li>'"PPS mobile app features?"'</li></ul> | | Motor Accident Claim Tribunal Deposit(MACAD) Scheme | <ul><li>'"Motor Accident Claims Tribunal Deposit tax implications?"'</li><li>'"MACAD dormant account reactivation steps?"'</li><li>'"MACAD account closure after claim 
settlement?"'</li></ul> | | IND HEALTH CARE | <ul><li>'"Health Care insurance-linked repayment plans?"'</li><li>'"Health Care research project grants?"'</li><li>'"Health Care legal compliance certification support?"'</li></ul> | | Loan / OD against NSC / KVP / Relief bonds of RBI / LIC policies | <ul><li>'LIC policy loan tax implications?'</li><li>'KVP overdraft renewal process?'</li><li>'NSC partial withdrawal while pledged?'</li></ul> | | Lab Grown Diamond Scheme | <ul><li>'"Lab Grown Diamond e-commerce integration support?"'</li><li>'"Lab Grown Diamond tax benefits under Section 80G?"'</li><li>'"Lab Grown Diamond credit guarantee coverage?"'</li></ul> | |  IB RERA Current Account | <ul><li>'"RERA Current account overdraft facilities?"'</li><li>'"RERA Current account documentation for RERA registration?"'</li><li>'"RERA Current account credit guarantee coverage?"'</li></ul> | | IND EQUIPMENT & WHEELS (CE/CV) scheme | <ul><li>'"CE CV credit guarantee coverage?"'</li><li>'"EQUIPMENT WHEELS collateral requirements for startups?"'</li><li>'"EQUIPMENT WHEELS digital application process?"'</li></ul> | | IMAGE Email Id’s | <ul><li>'"Proper format for addressing IMAGE Email Contacts?"'</li><li>'"IMAGE Email Contacts encryption standards?"'</li><li>'"IMAGE Email Contacts phishing detection protocols?"'</li></ul> | | General Managers | <ul><li>'"General Managers corporate governance roles?"'</li><li>'"Senior Managers training program oversight?"'</li><li>'"General Managers regional jurisdiction details?"'</li></ul> | | Money multiplier Deposits | <ul><li>'"Multiplier Deposit tax-saving benefits?"'</li><li>'"Multiplier Deposit auto-sweep facility thresholds?"'</li><li>'"Money multiplier Deposits grace period policies?"'</li></ul> | | IB Loan against Sovereign Gold Bond | <ul><li>'SGB loan top-up facility availability?'</li><li>'SGB loan repayment tenure options?'</li><li>'Sovereign Jewel Bond foreclosure charges?'</li></ul> | | Branch ATM | <ul><li>'"Branch ATM accessibility for wheelchair users?"'</li><li>'"Branch ATM cash recycling features?"'</li><li>'"Branch ATM contactless transaction limits?"'</li></ul> | | Financial Inclusion in India: Filling the Void | <ul><li>'"Financial Inclusion customer feedback mechanisms?"'</li><li>'"Bridging Financial Gap through women empowerment?"'</li><li>'"Filling Void in access to investment products?"'</li></ul> | | Jaffna Branch | <ul><li>'"Jaffna gold appraisal services?"'</li><li>'"Jaffna SME loan processing time?"'</li><li>'"Jaffna Branch Office forex rates?"'</li></ul> | | Financial Inclusion | <ul><li>'"Financial Inclusion financial literacy drives?"'</li><li>'"Financial Inclusion mobile van service areas?"'</li><li>'"Financial Access through Aadhaar-enabled payments?"'</li></ul> | | Featured Products / Services / Schemes | <ul><li>'Featured Services repayment holiday options?'</li><li>'Featured Products customer support channels?'</li><li>'Highlighted Products interest rate comparison tool?'</li></ul> | | Another Option for Pension | <ul><li>'"Pension Alternatives repayment tenure options?"'</li><li>'"Pension option collateral requirements?"'</li><li>'"Pension Alternatives grievance redressal process?"'</li></ul> | | Bank’s Profile | <ul><li>'"Profile page disaster recovery protocols?"'</li><li>'"Bank’s Profile regulatory compliance status?"'</li><li>'"Contact Us chatbot availability hours?"'</li></ul> | | SHISHU MUDRA | <ul><li>'"SHISHU MUDRA grace period for delayed payments?"'</li><li>'"SHISHU MUDRA documentation for business 
plans?"'</li><li>'"SHISHU MUDRA collateral-free limits for startups?"'</li></ul> | | e Payment of Indirect Taxes | <ul><li>'"Online Indirect Tax Payment challan generation?"'</li><li>'"Indirect Tax Payment TDS filing integration?"'</li><li>'"Digital Indirect Tax Payment grievance escalation process?"'</li></ul> | | IND Non-Callable Deposit | <ul><li>'"IND Non-Callable Deposit collateral usage for loans?"'</li><li>'"Non Callable Deposit cheque book issuance?"'</li><li>'"Non Callable Deposit nomination rules for minors?"'</li></ul> | | Services rendered free of charge | <ul><li>'"charge free Services insurance coverage?"'</li><li>'"Free Services interest rate structure?"'</li><li>'"No Charge Services repayment tenure options?"'</li></ul> | | Debit Cards | <ul><li>'"Debit card documentation requirements?"'</li><li>'"atm cardless cash withdrawal steps?"'</li><li>'"atm camera surveillance retention policies?"'</li></ul> | | Chief General Managers | <ul><li>"Chief Executives' media interaction policy?"</li><li>"Chief General Managers' policy exception approval?"</li><li>"Senior General Managers' operational targets?"</li></ul> | | Colombo Branch | <ul><li>'"Colombo SME banking specialists?"'</li><li>'"Colombo Branch Office CSR initiatives?"'</li><li>'"Colombo agricultural loan schemes?"'</li></ul> | | Ind Mortgage | <ul><li>'"Ind Mortgage processing timeline for salaried applicants?"'</li><li>'"Penalty clauses for Ind Mortgage prepayment?"'</li><li>'"Ind Mortgage loan restructuring during financial hardship?"'</li></ul> | | Recurring Deposit | <ul><li>'"Regular Recurring Deposit foreclosure charges?"'</li><li>'"Regular Recurring Deposit interest payout modes?"'</li><li>'"Regular Recurring Deposit interest rate tiers?"'</li></ul> | | e Payment of Direct Taxes | <ul><li>'"eTax Payment refund tracking system?"'</li><li>'"Direct Taxes Online Payment API integration?"'</li><li>'"Online Direct Tax Payment GST reconciliation?"'</li></ul> | | Lending rates | <ul><li>'"Lending rates negotiation options for corporates?"'</li><li>'"Lending rates disaster relief concessions?"'</li><li>'"Lending rates escalation matrix for disputes?"'</li></ul> | | Entity: Internet banking | <ul><li>'"Internet banking offline mode availability?"'</li><li>'"Online Banking disaster recovery protocols?"'</li><li>'"Digital Banking insurance policy management?"'</li></ul> | | Customer Complaints / Awareness | <ul><li>'"Customer Complaints mobile app features?"'</li><li>'"Customer Complaints multilingual support?"'</li><li>'"Complaint Management compensation policies?"'</li></ul> | | IND-SME E-VAAHAN | <ul><li>'"vehicle SME grace period for seasonal businesses?"'</li><li>'"vehicle SME grievance escalation matrix?"'</li><li>'"SME E-VAAHAN digital application process?"'</li></ul> | | Variable Recurring Deposit | <ul><li>'"Flexibility in Variable Recurring Deposit amounts?"'</li><li>'"Variable Recurring penalty for missed payments?"'</li><li>'"Variable Recurring Deposit digital statements?"'</li></ul> | ## Evaluation ### Metrics | Label | Accuracy | |:--------|:---------| | **all** | 0.9352 | ## Uses ### Direct Use for Inference First install the SetFit library: ```bash pip install setfit ``` Then you can load this model and run inference. 
```python from setfit import SetFitModel # Download from the 🤗 Hub model = SetFitModel.from_pretrained("kneau007/my-classifier-2") # Run inference preds = model("\"Current account disaster relief funding?\"") ``` <!-- ### Downstream Use *List how someone could finetune this model on their own dataset.* --> <!-- ### Out-of-Scope Use *List how the model may foreseeably be misused and address what users ought not to do with the model.* --> <!-- ## Bias, Risks and Limitations *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.* --> <!-- ### Recommendations *What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.* --> ## Training Details ### Training Set Metrics | Training set | Min | Median | Max | |:-------------|:----|:-------|:----| | Word count | 3 | 5.5968 | 11 | | Label | Training Sample Count | |:-----------------------------------------------------------------------------------------------------|:----------------------| | Corporate Blog | 11 | | Doorstep Banking | 17 | | ATM / Debit Cards | 15 | | Amalgamation | 17 | | Another Option for Pension | 17 | | Awards & Accolades | 18 | | Azadi Ka Amrit Mahotsav #TogetherforBiggerThings | 16 | | BHIM Aadhaar Pay | 12 | | Bancassurance | 17 | | Bank’s Profile | 16 | | Block Lost ATM Card | 16 | | Block Lost Credit Card | 17 | | Board of Directors | 15 | | Branch ATM | 14 | | CA FOR STATE /CENTRAL GOVT & CONSULAR & IND PFMS | 18 | | Capital Gains | 18 | | Careers | 15 | | Chief General Managers | 16 | | Chief Vigilance Officer | 19 | | Colombo Branch | 16 | | Corporate | 15 | | Corporate Social Responsibility | 17 | | Credit Cards | 19 | | Current Account | 16 | | Customer Centric Services | 15 | | Customer Complaints / Awareness | 13 | | Death Claim Settlement | 17 | | Debit Cards | 17 | | Deposit Rates | 12 | | Digitizing the Indian Banking Experience | 17 | | Disclaimer | 16 | | Entity: Internet banking | 15 | | Executive Director’s Profile | 17 | | F.A.Qs | 15 | | Featured Products / Services / Schemes | 16 | | Financial Inclusion | 15 | | Financial Inclusion in India: Filling the Void | 16 | | Fixed Deposit | 16 | | Foreign Branches | 15 | | General Managers | 16 | | Head Office Departments | 16 | | IB COMFORT- DOMESTIC AND NRE | 14 | | IB Golden Ager Term Deposit Product for Super Senior Citizens | 18 | | IB HARIT | 15 | | IB Loan against Sovereign Gold Bond | 16 | | IB MSME Jewel Loan | 13 | | IB Tax Saver Scheme | 18 | | IB Tradewell | 17 | | IMAGE | 18 | | IMAGE Email Id’s | 17 | | IMPORTANT TERMS AND CONDITIONS CA | 16 | | IND COURT | 17 | | IND EQUIPMENT & WHEELS (CE/CV) scheme | 12 | | IND HEALTH CARE | 17 | | IND MSME VEHICLE | 17 | | IND NAVYA | 17 | | IND Non-Callable Deposit | 14 | | IND Professional Special | 17 | | IND SME Secure | 18 | | IND SUPER 400 DAYS | 13 | | IND SUPREME 300 DAYS | 18 | | IND-SME E-VAAHAN | 15 | | IND-SME EASE | 12 | | IND-SURYA SHAKTI | 14 | | INDIAN BANK MUTUAL FUND | 18 | | Ind Advantage (Reward Program) | 17 | | Ind Mortgage | 14 | | IndSMART | 15 | | IndSMART: Indian Bank’s Omni channel Mobile App | 14 | | Indian Bank Financial Results | 18 | | Indian Bank, IFSC Banking Unit, GIFT City | 17 | | Interest Rates for Small Savings Schemes | 18 | | Investor Relations | 17 | | Jaffna Branch | 18 | | Jewel Loan | 17 | | LOAN AGAINST LEASE RENTALS | 14 | | Lab Grown Diamond Scheme | 14 | | Lending rates | 17 | | Loan / OD against NSC / KVP / Relief bonds of RBI / LIC 
policies | 20 | | Loan/OD against Deposit | 15 | | MSME AUTO RENEWAL | 19 | | MSME DIGI Jewel Loan and Jewel Loan (Re-Pledge) | 16 | | MSME LAP | 18 | | Managing Director & CEO’s Profile | 17 | | Merchant UPI QR Code | 14 | | Money multiplier Deposits | 14 | | Motor Accident Claim Tribunal Deposit(MACAD) Scheme | 14 | | N R I / Foreign Exchange | 14 | | NETC FASTag | 17 | | NRI A/cs | 17 | | NRI and Forex | 15 | | National Common Mobility Card (NCMC) | 16 | | Nodal Officers- Customer service | 16 | | Notifications | 16 | | Online Collection Products | 12 | | Online Loans | 15 | | PM Surya Ghar: Muft Bijli Yojna | 19 | | Point of Sale (PoS) | 17 | | Positive Pay System (PPS) | 15 | | Pre Approved Business Loan | 16 | | Premium Current Account | 18 | | Press Releases | 16 | | Quick Contact | 14 | | Re-KYC: Periodic Update of KYC Details | 17 | | Recurring Deposit | 17 | | Regulatory Disclosures Section | 15 | | Remittance To India | 14 | | Reverse Mortgage | 16 | | SHISHU MUDRA | 19 | | SMS Banking/ Missed Call Service | 15 | | Scheme for financing MSMEs for setting up Biomass Pellets manufacturing units | 15 | | Scheme on financing Manufacturers / Suppliers / Vendors of Solar panels and other ancillary products | 15 | | Service Charges | 19 | | Service Charges / Forex Rates | 16 | | Services rendered free of charge | 17 | | Short Term Deposits | 16 | | Sukanya Samriddhi Account | 13 | | Supply Chain Finance | 15 | | TERMS AND CONDITIONS-TERM DEPOSIT ACCOUNT | 18 | | Term Deposits | 16 | | Terms and Conditions Indian Bank Digital Rupee | 17 | | Variable Recurring Deposit | 16 | | Vision and Mission | 16 | | Weaver MUDRA | 18 | | Welcome aboard! | 16 | | Welfare of Minorities | 15 | | What is a Credit Score and How it Shapes Financial Health | 18 | | e Payment of Direct Taxes | 14 | | e Payment of Indirect Taxes | 17 | | e-Allahabad Bank Journey | 13 | |  Centralized Pension Processing Centre | 15 | |  IB RERA Current Account | 19 | |  IND GST ADVANTAGE | 16 | |  IND PM Vishwakarma | 15 | |  Loyalty Program Delights: Unlocking Special Rewards for Your Loyalty | 18 | ### Training Hyperparameters - batch_size: (16, 16) - num_epochs: (1, 1) - max_steps: -1 - sampling_strategy: oversampling - num_iterations: 20 - body_learning_rate: (2e-05, 2e-05) - head_learning_rate: 2e-05 - loss: CosineSimilarityLoss - distance_metric: cosine_distance - margin: 0.25 - end_to_end: False - use_amp: False - warmup_proportion: 0.1 - l2_weight: 0.01 - seed: 42 - eval_max_steps: -1 - load_best_model_at_end: False ### Training Results | Epoch | Step | Training Loss | Validation Loss | |:------:|:----:|:-------------:|:---------------:| | 0.0002 | 1 | 0.1566 | - | | 0.0093 | 50 | 0.146 | - | | 0.0185 | 100 | 0.1398 | - | | 0.0278 | 150 | 0.1027 | - | | 0.0371 | 200 | 0.0887 | - | | 0.0463 | 250 | 0.071 | - | | 0.0556 | 300 | 0.0644 | - | | 0.0649 | 350 | 0.0492 | - | | 0.0741 | 400 | 0.05 | - | | 0.0834 | 450 | 0.0464 | - | | 0.0927 | 500 | 0.0325 | - | | 0.1019 | 550 | 0.0312 | - | | 0.1112 | 600 | 0.033 | - | | 0.1205 | 650 | 0.0266 | - | | 0.1297 | 700 | 0.0225 | - | | 0.1390 | 750 | 0.0206 | - | | 0.1483 | 800 | 0.0188 | - | | 0.1576 | 850 | 0.0229 | - | | 0.1668 | 900 | 0.0164 | - | | 0.1761 | 950 | 0.0168 | - | | 0.1854 | 1000 | 0.0159 | - | | 0.1946 | 1050 | 0.0131 | - | | 0.2039 | 1100 | 0.0122 | - | | 0.2132 | 1150 | 0.0098 | - | | 0.2224 | 1200 | 0.0093 | - | | 0.2317 | 1250 | 0.0159 | - | | 0.2410 | 1300 | 0.0104 | - | | 0.2502 | 1350 | 0.014 | - | | 0.2595 | 1400 | 0.0074 | - | | 0.2688 | 1450 | 0.0089 | - 
| | 0.2780 | 1500 | 0.0176 | - | | 0.2873 | 1550 | 0.0118 | - | | 0.2966 | 1600 | 0.0068 | - | | 0.3058 | 1650 | 0.0105 | - | | 0.3151 | 1700 | 0.0076 | - | | 0.3244 | 1750 | 0.0122 | - | | 0.3336 | 1800 | 0.0134 | - | | 0.3429 | 1850 | 0.0088 | - | | 0.3522 | 1900 | 0.0134 | - | | 0.3614 | 1950 | 0.0052 | - | | 0.3707 | 2000 | 0.0074 | - | | 0.3800 | 2050 | 0.0069 | - | | 0.3892 | 2100 | 0.0056 | - | | 0.3985 | 2150 | 0.0059 | - | | 0.4078 | 2200 | 0.0064 | - | | 0.4171 | 2250 | 0.0075 | - | | 0.4263 | 2300 | 0.0064 | - | | 0.4356 | 2350 | 0.0042 | - | | 0.4449 | 2400 | 0.0053 | - | | 0.4541 | 2450 | 0.0061 | - | | 0.4634 | 2500 | 0.0062 | - | | 0.4727 | 2550 | 0.0076 | - | | 0.4819 | 2600 | 0.004 | - | | 0.4912 | 2650 | 0.009 | - | | 0.5005 | 2700 | 0.0096 | - | | 0.5097 | 2750 | 0.0066 | - | | 0.5190 | 2800 | 0.0084 | - | | 0.5283 | 2850 | 0.0052 | - | | 0.5375 | 2900 | 0.0079 | - | | 0.5468 | 2950 | 0.005 | - | | 0.5561 | 3000 | 0.0053 | - | | 0.5653 | 3050 | 0.0055 | - | | 0.5746 | 3100 | 0.0049 | - | | 0.5839 | 3150 | 0.0066 | - | | 0.5931 | 3200 | 0.0074 | - | | 0.6024 | 3250 | 0.0063 | - | | 0.6117 | 3300 | 0.0039 | - | | 0.6209 | 3350 | 0.0042 | - | | 0.6302 | 3400 | 0.0072 | - | | 0.6395 | 3450 | 0.0047 | - | | 0.6487 | 3500 | 0.0037 | - | | 0.6580 | 3550 | 0.0048 | - | | 0.6673 | 3600 | 0.0034 | - | | 0.6766 | 3650 | 0.0067 | - | | 0.6858 | 3700 | 0.0049 | - | | 0.6951 | 3750 | 0.0048 | - | | 0.7044 | 3800 | 0.0041 | - | | 0.7136 | 3850 | 0.0088 | - | | 0.7229 | 3900 | 0.0035 | - | | 0.7322 | 3950 | 0.0047 | - | | 0.7414 | 4000 | 0.005 | - | | 0.7507 | 4050 | 0.0047 | - | | 0.7600 | 4100 | 0.0051 | - | | 0.7692 | 4150 | 0.0035 | - | | 0.7785 | 4200 | 0.0043 | - | | 0.7878 | 4250 | 0.0062 | - | | 0.7970 | 4300 | 0.0029 | - | | 0.8063 | 4350 | 0.0076 | - | | 0.8156 | 4400 | 0.0027 | - | | 0.8248 | 4450 | 0.0026 | - | | 0.8341 | 4500 | 0.0037 | - | | 0.8434 | 4550 | 0.0069 | - | | 0.8526 | 4600 | 0.0037 | - | | 0.8619 | 4650 | 0.0027 | - | | 0.8712 | 4700 | 0.0025 | - | | 0.8804 | 4750 | 0.0024 | - | | 0.8897 | 4800 | 0.0076 | - | | 0.8990 | 4850 | 0.0036 | - | | 0.9082 | 4900 | 0.0028 | - | | 0.9175 | 4950 | 0.0027 | - | | 0.9268 | 5000 | 0.0028 | - | | 0.9361 | 5050 | 0.005 | - | | 0.9453 | 5100 | 0.0041 | - | | 0.9546 | 5150 | 0.0042 | - | | 0.9639 | 5200 | 0.004 | - | | 0.9731 | 5250 | 0.0027 | - | | 0.9824 | 5300 | 0.0049 | - | | 0.9917 | 5350 | 0.0044 | - | ### Framework Versions - Python: 3.11.11 - SetFit: 1.1.1 - Sentence Transformers: 3.4.1 - Transformers: 4.48.3 - PyTorch: 2.6.0+cu124 - Datasets: 3.4.0 - Tokenizers: 0.21.0 ## Citation ### BibTeX ```bibtex @article{https://doi.org/10.48550/arxiv.2209.11055, doi = {10.48550/ARXIV.2209.11055}, url = {https://arxiv.org/abs/2209.11055}, author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren}, keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences}, title = {Efficient Few-Shot Learning Without Prompts}, publisher = {arXiv}, year = {2022}, copyright = {Creative Commons Attribution 4.0 International} } ``` <!-- ## Glossary *Clearly define terms in order to be accessible across audiences.* --> <!-- ## Model Card Authors *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.* --> <!-- ## Model Card Contact *Provides a way for people who have updates to the Model Card, suggestions, or questions, to 
contact the Model Card authors.* -->
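The two-stage recipe described in this card (contrastive fine-tuning of the sentence-transformer body, then fitting the logistic-regression head) can be reproduced with the SetFit library. The snippet below is only a minimal sketch, not the exact training script for this checkpoint: the few-shot texts and labels are placeholders, and only a couple of the hyperparameters listed above are shown, with everything else left at SetFit defaults.

```python
from datasets import Dataset
from setfit import SetFitModel, Trainer, TrainingArguments

# Placeholder few-shot data; the real model was trained on 135 banking FAQ classes
train_dataset = Dataset.from_dict({
    "text": [
        "Current account disaster relief funding?",
        "Current account joint holder norms?",
        "FASTag recharge limits?",
        "NETC Toll Tag damaged replacement?",
    ],
    "label": ["Current Account", "Current Account", "NETC FASTag", "NETC FASTag"],
})

# Body: sentence-transformers/paraphrase-mpnet-base-v2; head: LogisticRegression (SetFit default)
model = SetFitModel.from_pretrained("sentence-transformers/paraphrase-mpnet-base-v2")

# A few of the hyperparameters listed in this card
args = TrainingArguments(batch_size=16, num_epochs=1)

trainer = Trainer(model=model, args=args, train_dataset=train_dataset)
trainer.train()  # step 1: contrastive fine-tuning of the body, step 2: fit the classification head

preds = model.predict(["MUTUAL FUND customer support channels?"])
print(preds)
```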
AinTziLLo/Reinforce-elicop-01
AinTziLLo
"2022-12-23T02:36:52Z"
0
0
null
[ "Pixelcopter-PLE-v0", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
"2022-12-23T02:36:42Z"
--- tags: - Pixelcopter-PLE-v0 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: Reinforce-elicop-01 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Pixelcopter-PLE-v0 type: Pixelcopter-PLE-v0 metrics: - type: mean_reward value: 7.30 +/- 5.31 name: mean_reward verified: false --- # **Reinforce** Agent playing **Pixelcopter-PLE-v0** This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0**. To learn to use this model and train your own, check out Unit 5 of the Deep Reinforcement Learning Class: https://github.com/huggingface/deep-rl-class/tree/main/unit5
bigmorning/distilgpt_new_0080
bigmorning
"2022-07-21T17:20:10Z"
5
0
transformers
[ "transformers", "tf", "gpt2", "text-generation", "generated_from_keras_callback", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
"2022-07-21T11:53:39Z"
--- tags: - generated_from_keras_callback model-index: - name: distilgpt_new_0080 results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # distilgpt_new_0080 This model was trained from scratch on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 2.7983 - Validation Loss: 2.6896 - Epoch: 79 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: float32 ### Training results | Train Loss | Validation Loss | Epoch | |:----------:|:---------------:|:-----:| | 5.6632 | 4.5153 | 0 | | 4.4292 | 4.0923 | 1 | | 4.1169 | 3.8723 | 2 | | 3.9326 | 3.7260 | 3 | | 3.8026 | 3.6281 | 4 | | 3.7045 | 3.5355 | 5 | | 3.6254 | 3.4645 | 6 | | 3.5604 | 3.4093 | 7 | | 3.5048 | 3.3587 | 8 | | 3.4569 | 3.3136 | 9 | | 3.4155 | 3.2778 | 10 | | 3.3791 | 3.2443 | 11 | | 3.3470 | 3.2157 | 12 | | 3.3183 | 3.1854 | 13 | | 3.2922 | 3.1642 | 14 | | 3.2685 | 3.1400 | 15 | | 3.2467 | 3.1193 | 16 | | 3.2267 | 3.1009 | 17 | | 3.2078 | 3.0838 | 18 | | 3.1904 | 3.0689 | 19 | | 3.1739 | 3.0520 | 20 | | 3.1584 | 3.0379 | 21 | | 3.1438 | 3.0255 | 22 | | 3.1300 | 3.0116 | 23 | | 3.1168 | 2.9965 | 24 | | 3.1044 | 2.9866 | 25 | | 3.0925 | 2.9752 | 26 | | 3.0812 | 2.9631 | 27 | | 3.0704 | 2.9539 | 28 | | 3.0601 | 2.9458 | 29 | | 3.0502 | 2.9340 | 30 | | 3.0408 | 2.9251 | 31 | | 3.0317 | 2.9179 | 32 | | 3.0230 | 2.9082 | 33 | | 3.0147 | 2.9002 | 34 | | 3.0065 | 2.8948 | 35 | | 2.9987 | 2.8855 | 36 | | 2.9911 | 2.8779 | 37 | | 2.9838 | 2.8706 | 38 | | 2.9767 | 2.8643 | 39 | | 2.9698 | 2.8570 | 40 | | 2.9632 | 2.8501 | 41 | | 2.9567 | 2.8441 | 42 | | 2.9505 | 2.8385 | 43 | | 2.9445 | 2.8327 | 44 | | 2.9385 | 2.8260 | 45 | | 2.9329 | 2.8213 | 46 | | 2.9272 | 2.8160 | 47 | | 2.9217 | 2.8107 | 48 | | 2.9162 | 2.8052 | 49 | | 2.9110 | 2.8020 | 50 | | 2.9060 | 2.7938 | 51 | | 2.9010 | 2.7896 | 52 | | 2.8962 | 2.7857 | 53 | | 2.8913 | 2.7827 | 54 | | 2.8866 | 2.7768 | 55 | | 2.8821 | 2.7724 | 56 | | 2.8776 | 2.7679 | 57 | | 2.8733 | 2.7642 | 58 | | 2.8691 | 2.7610 | 59 | | 2.8649 | 2.7556 | 60 | | 2.8607 | 2.7513 | 61 | | 2.8568 | 2.7485 | 62 | | 2.8529 | 2.7424 | 63 | | 2.8490 | 2.7395 | 64 | | 2.8452 | 2.7383 | 65 | | 2.8414 | 2.7325 | 66 | | 2.8378 | 2.7292 | 67 | | 2.8343 | 2.7251 | 68 | | 2.8307 | 2.7206 | 69 | | 2.8273 | 2.7177 | 70 | | 2.8237 | 2.7138 | 71 | | 2.8204 | 2.7093 | 72 | | 2.8171 | 2.7073 | 73 | | 2.8139 | 2.7057 | 74 | | 2.8106 | 2.7029 | 75 | | 2.8075 | 2.6991 | 76 | | 2.8043 | 2.6961 | 77 | | 2.8013 | 2.6929 | 78 | | 2.7983 | 2.6896 | 79 | ### Framework versions - Transformers 4.20.1 - TensorFlow 2.8.2 - Datasets 2.3.2 - Tokenizers 0.12.1
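This card does not include a usage example, so the following is a hedged sketch of loading the TensorFlow checkpoint for generation with the standard transformers API. It assumes the repository ships a compatible GPT-2 tokenizer alongside the TF weights, and the prompt string is purely illustrative.

```python
from transformers import AutoTokenizer, TFAutoModelForCausalLM

# Assumes a GPT-2-style tokenizer is available in the repository
tokenizer = AutoTokenizer.from_pretrained("bigmorning/distilgpt_new_0080")
model = TFAutoModelForCausalLM.from_pretrained("bigmorning/distilgpt_new_0080")

inputs = tokenizer("Once upon a time", return_tensors="tf")
outputs = model.generate(**inputs, max_new_tokens=20, do_sample=True)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```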
ybzxaa/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-rapid_hoarse_porcupine
ybzxaa
"2025-04-10T17:28:06Z"
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am rapid hoarse porcupine", "trl", "conversational", "arxiv:2402.03300", "base_model:Gensyn/Qwen2.5-0.5B-Instruct", "base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
"2025-04-10T08:07:30Z"
--- base_model: Gensyn/Qwen2.5-0.5B-Instruct library_name: transformers model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-rapid_hoarse_porcupine tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am rapid hoarse porcupine - trl licence: license --- # Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-rapid_hoarse_porcupine This model is a fine-tuned version of [Gensyn/Qwen2.5-0.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-0.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="ybzxaa/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-rapid_hoarse_porcupine", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.15.2 - Transformers: 4.51.1 - Pytorch: 2.6.0 - Datasets: 3.5.0 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
bilguun/mn-g2p-t5-small
bilguun
"2024-05-17T08:41:41Z"
118
0
transformers
[ "transformers", "safetensors", "t5", "text2text-generation", "G2P", "Grapheme-to-Phoneme", "mn", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
"2024-04-18T18:18:21Z"
--- library_name: transformers language: - mn metrics: - wer - phone-error-rate pipeline_tag: text2text-generation tags: - G2P - Grapheme-to-Phoneme --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. 
--> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results WER = 31.4% PER = 13.4% #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
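Since the "How to Get Started with the Model" section above is still a placeholder, here is a minimal, hedged sketch of running this grapheme-to-phoneme model through the standard text2text-generation pipeline. The expected input format (for example, a single Mongolian word in Cyrillic script, possibly with a task prefix) is an assumption and may differ from how the model was actually trained.

```python
from transformers import pipeline

# text2text-generation matches this model's pipeline tag
g2p = pipeline("text2text-generation", model="bilguun/mn-g2p-t5-small")

# Assumed input: a single Mongolian word in Cyrillic script
result = g2p("монгол", max_new_tokens=32)
print(result[0]["generated_text"])  # expected output: a phoneme sequence
```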
johaanm/test-planner-alpha-V6.4
johaanm
"2023-09-03T19:26:38Z"
0
0
peft
[ "peft", "region:us" ]
null
"2023-09-03T19:26:34Z"
--- library_name: peft --- ## Training procedure The following `bitsandbytes` quantization config was used during training: - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: False - bnb_4bit_compute_dtype: float16 ### Framework versions - PEFT 0.4.0
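The card only records the quantization config and PEFT version, so the sketch below is a hedged illustration of rebuilding that `bitsandbytes` config and attaching this adapter to a base model. The base model name is hypothetical — the card does not state which model the adapter was trained on — so it must be replaced with the correct one before use.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

# Mirrors the 4-bit settings listed in this card
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float16,
)

base_model_id = "meta-llama/Llama-2-7b-hf"  # hypothetical base model, not stated in the card
base = AutoModelForCausalLM.from_pretrained(
    base_model_id, quantization_config=bnb_config, device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(base_model_id)

# Attach the PEFT adapter weights from this repository
model = PeftModel.from_pretrained(base, "johaanm/test-planner-alpha-V6.4")
```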
mistysummit/bento
mistysummit
"2025-02-27T09:04:21Z"
0
0
diffusers
[ "diffusers", "flux", "lora", "replicate", "text-to-image", "en", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
"2025-02-27T08:44:41Z"
--- license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md language: - en tags: - flux - diffusers - lora - replicate base_model: "black-forest-labs/FLUX.1-dev" pipeline_tag: text-to-image # widget: # - text: >- # prompt # output: # url: https://... instance_prompt: Bento --- # Bento <Gallery /> Trained on Replicate using: https://replicate.com/ostris/flux-dev-lora-trainer/train ## Trigger words You should use `Bento` to trigger the image generation. ## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers) ```py from diffusers import AutoPipelineForText2Image import torch pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda') pipeline.load_lora_weights('mistysummit/bento', weight_name='lora.safetensors') image = pipeline('your prompt').images[0] ``` For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
efieditor/roni-alsheich
efieditor
"2025-02-12T14:28:37Z"
0
0
null
[ "license:other", "region:us" ]
null
"2025-02-12T13:27:42Z"
--- license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md ---
davidschulte/ESM_DBQ__Chanel.Product.prices.United.States_default
davidschulte
"2025-03-26T14:04:22Z"
15
0
null
[ "safetensors", "embedding_space_map", "BaseLM:bert-base-multilingual-uncased", "dataset:DBQ/Chanel.Product.prices.United.States", "base_model:google-bert/bert-base-multilingual-uncased", "base_model:finetune:google-bert/bert-base-multilingual-uncased", "license:apache-2.0", "region:us" ]
null
"2024-11-30T16:27:37Z"
--- base_model: bert-base-multilingual-uncased datasets: - DBQ/Chanel.Product.prices.United.States license: apache-2.0 tags: - embedding_space_map - BaseLM:bert-base-multilingual-uncased --- # ESM DBQ/Chanel.Product.prices.United.States <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> ESM - **Developed by:** David Schulte - **Model type:** ESM - **Base Model:** bert-base-multilingual-uncased - **Intermediate Task:** DBQ/Chanel.Product.prices.United.States - **ESM architecture:** linear - **ESM embedding dimension:** 768 - **Language(s) (NLP):** [More Information Needed] - **License:** Apache-2.0 license - **ESM version:** 0.1.0 ## Training Details ### Intermediate Task - **Task ID:** DBQ/Chanel.Product.prices.United.States - **Subset [optional]:** default - **Text Column:** title - **Label Column:** category1_code - **Dataset Split:** train - **Sample size [optional]:** 1452 - **Sample seed [optional]:** ### Training Procedure [optional] <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Language Model Training Hyperparameters [optional] - **Epochs:** 3 - **Batch size:** 32 - **Learning rate:** 2e-05 - **Weight Decay:** 0.01 - **Optimizer**: AdamW ### ESM Training Hyperparameters [optional] - **Epochs:** 10 - **Batch size:** 32 - **Learning rate:** 0.001 - **Weight Decay:** 0.01 - **Optimizer**: AdamW ### Additional training details [optional] ## Model evaluation ### Evaluation of fine-tuned language model [optional] ### Evaluation of ESM [optional] MSE: ### Additional evaluation details [optional] ## What are Embedding Space Maps used for? Embedding Space Maps are a part of ESM-LogME, an efficient method for finding intermediate datasets for transfer learning. There are two reasons to use ESM-LogME: ### You don't have enough training data for your problem If you don't have enough training data for your problem, just use ESM-LogME to find more. You can supplement model training by including publicly available datasets in the training process. 1. Fine-tune a language model on a suitable intermediate dataset. 2. Fine-tune the resulting model on your target dataset. This workflow is called intermediate task transfer learning and it can significantly improve the target performance. But what is a suitable dataset for your problem? ESM-LogME enables you to quickly rank thousands of datasets on the Hugging Face Hub by how well they are expected to transfer to your target task. ### You want to find similar datasets to your target dataset ESM-LogME can be used like a search engine on the Hugging Face Hub. You can find similar tasks to your target task without having to rely on heuristics. ESM-LogME estimates how language models fine-tuned on each intermediate task would benefit your target task. This quantitative approach combines the effects of domain similarity and task similarity. ## How can I use ESM-LogME / ESMs? [![PyPI version](https://img.shields.io/pypi/v/hf-dataset-selector.svg)](https://pypi.org/project/hf-dataset-selector) We release **hf-dataset-selector**, a Python package for intermediate task selection using Embedding Space Maps. **hf-dataset-selector** fetches ESMs for a given language model and uses them to find the best dataset for applying intermediate training to the target task. ESMs are found by their tags on the Huggingface Hub.
```python
from hfselect import Dataset, compute_task_ranking

# Load target dataset from the Hugging Face Hub
dataset = Dataset.from_hugging_face(
    name="stanfordnlp/imdb",
    split="train",
    text_col="text",
    label_col="label",
    is_regression=False,
    num_examples=1000,
    seed=42
)

# Fetch ESMs and rank tasks
task_ranking = compute_task_ranking(
    dataset=dataset,
    model_name="bert-base-multilingual-uncased"
)

# Display top 5 recommendations
print(task_ranking[:5])
```
```python
1. davanstrien/test_imdb_embedd2   Score: -0.618529
2. davanstrien/test_imdb_embedd    Score: -0.618644
3. davanstrien/test1               Score: -0.619334
4. stanfordnlp/imdb                Score: -0.619454
5. stanfordnlp/sst                 Score: -0.62995
```
| Rank | Task ID | Task Subset | Text Column | Label Column | Task Split | Num Examples | ESM Architecture | Score |
|-------:|:------------------------------|:----------------|:--------------|:---------------|:-------------|---------------:|:-------------------|----------:|
| 1 | davanstrien/test_imdb_embedd2 | default | text | label | train | 10000 | linear | -0.618529 |
| 2 | davanstrien/test_imdb_embedd | default | text | label | train | 10000 | linear | -0.618644 |
| 3 | davanstrien/test1 | default | text | label | train | 10000 | linear | -0.619334 |
| 4 | stanfordnlp/imdb | plain_text | text | label | train | 10000 | linear | -0.619454 |
| 5 | stanfordnlp/sst | dictionary | phrase | label | dictionary | 10000 | linear | -0.62995 |
| 6 | stanfordnlp/sst | default | sentence | label | train | 8544 | linear | -0.63312 |
| 7 | kuroneko5943/snap21 | CDs_and_Vinyl_5 | sentence | label | train | 6974 | linear | -0.634365 |
| 8 | kuroneko5943/snap21 | Video_Games_5 | sentence | label | train | 6997 | linear | -0.638787 |
| 9 | kuroneko5943/snap21 | Movies_and_TV_5 | sentence | label | train | 6989 | linear | -0.639068 |
| 10 | fancyzhx/amazon_polarity | amazon_polarity | content | label | train | 10000 | linear | -0.639718 |

For more information on how to use ESMs, please have a look at the [official Github repository](https://github.com/davidschulte/hf-dataset-selector). We provide further documentation and tutorials for finding intermediate datasets and training your own ESMs. ## How do Embedding Space Maps work? <!-- This section describes the evaluation protocols and provides the results. --> Embedding Space Maps (ESMs) are neural networks that approximate the effect of fine-tuning a language model on a task. They can be used to quickly transform embeddings from a base model to approximate how a fine-tuned model would embed the input text. ESMs can be used for intermediate task selection with the ESM-LogME workflow. ## How can I use Embedding Space Maps for Intermediate Task Selection? ## Citation <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> If you are using Embedding Space Maps, please cite our [paper](https://aclanthology.org/2024.emnlp-main.529/).
**BibTeX:** ``` @inproceedings{schulte-etal-2024-less, title = "Less is More: Parameter-Efficient Selection of Intermediate Tasks for Transfer Learning", author = "Schulte, David and Hamborg, Felix and Akbik, Alan", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.529/", doi = "10.18653/v1/2024.emnlp-main.529", pages = "9431--9442", abstract = "Intermediate task transfer learning can greatly improve model performance. If, for example, one has little training data for emotion detection, first fine-tuning a language model on a sentiment classification dataset may improve performance strongly. But which task to choose for transfer learning? Prior methods producing useful task rankings are infeasible for large source pools, as they require forward passes through all source language models. We overcome this by introducing Embedding Space Maps (ESMs), light-weight neural networks that approximate the effect of fine-tuning a language model. We conduct the largest study on NLP task transferability and task selection with 12k source-target pairs. We find that applying ESMs on a prior method reduces execution time and disk space usage by factors of 10 and 278, respectively, while retaining high selection performance (avg. regret@5 score of 2.95)." } ``` **APA:** ``` Schulte, D., Hamborg, F., & Akbik, A. (2024, November). Less is More: Parameter-Efficient Selection of Intermediate Tasks for Transfer Learning. In Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing (pp. 9431-9442). ``` ## Additional Information
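As additional context, the linear ESM described in this card is architecturally just a single 768-to-768 linear map applied to base-model embeddings. The sketch below is illustrative only and does not use the hf-dataset-selector API: the linear layer is a placeholder for the trained weights stored in this repository, and the example sentence is made up.

```python
# Minimal sketch (not the official hf-dataset-selector API): a linear ESM is a
# single 768 -> 768 linear layer applied to base-model embeddings.
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-multilingual-uncased")
base_model = AutoModel.from_pretrained("bert-base-multilingual-uncased")

# Placeholder: in practice the trained ESM weights from this repository would be loaded here.
esm = torch.nn.Linear(768, 768)

text = "Classic quilted leather flap bag"  # made-up product title
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    cls_embedding = base_model(**inputs).last_hidden_state[:, 0]  # [CLS] embedding of the base model
    approx_finetuned = esm(cls_embedding)                         # approximation of the fine-tuned embedding
print(approx_finetuned.shape)  # torch.Size([1, 768])
```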
9sub/llama3_10epoch
9sub
"2024-11-09T09:07:28Z"
75
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "trl", "sft", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "4-bit", "bitsandbytes", "region:us" ]
text-generation
"2024-11-09T09:04:12Z"
--- library_name: transformers tags: - trl - sft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
neilchien/realstate1_1
neilchien
"2025-02-25T10:46:13Z"
0
0
transformers
[ "transformers", "gguf", "llama", "text-generation-inference", "unsloth", "en", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
"2025-02-25T10:44:54Z"
--- base_model: unsloth/deepseek-r1-distill-llama-8b-unsloth-bnb-4bit tags: - text-generation-inference - transformers - unsloth - llama - gguf license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** neilchien - **License:** apache-2.0 - **Finetuned from model:** unsloth/deepseek-r1-distill-llama-8b-unsloth-bnb-4bit This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
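Since the repository ships GGUF weights, one common way to run them locally is llama-cpp-python. The snippet below is a hedged sketch rather than part of the original card: the GGUF file name, context size and prompt are all assumptions, so check the repository's file listing for the actual quant name.

```python
# Hedged sketch: running a GGUF export of this model with llama-cpp-python.
# The file name below is an assumption; use the actual *.gguf file from this repository.
from llama_cpp import Llama

llm = Llama(
    model_path="realstate1_1-Q4_K_M.gguf",  # hypothetical file name
    n_ctx=4096,                             # context window to allocate
)

response = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Summarize the key points of a real-estate listing."}],
    max_tokens=200,
)
print(response["choices"][0]["message"]["content"])
```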
mradermacher/Tess-2.0-Mixtral-GGUF
mradermacher
"2024-12-16T03:37:54Z"
24
0
transformers
[ "transformers", "gguf", "merge", "mergekit", "lazymergekit", "en", "base_model:migtissera/Tess-2.0-Mixtral-8x7B", "base_model:quantized:migtissera/Tess-2.0-Mixtral-8x7B", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
"2024-04-01T21:51:57Z"
--- base_model: migtissera/Tess-2.0-Mixtral-8x7B language: - en library_name: transformers license: apache-2.0 quantized_by: mradermacher tags: - merge - mergekit - lazymergekit --- ## About static quants of https://huggingface.co/migtissera/Tess-2.0-Mixtral-8x7B <!-- provided-files --> weighted/imatrix quants are available at https://huggingface.co/mradermacher/Tess-2.0-Mixtral-i1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Tess-2.0-Mixtral-GGUF/resolve/main/Tess-2.0-Mixtral.Q2_K.gguf) | Q2_K | 17.6 | | | [GGUF](https://huggingface.co/mradermacher/Tess-2.0-Mixtral-GGUF/resolve/main/Tess-2.0-Mixtral.IQ3_XS.gguf) | IQ3_XS | 19.5 | | | [GGUF](https://huggingface.co/mradermacher/Tess-2.0-Mixtral-GGUF/resolve/main/Tess-2.0-Mixtral.IQ3_S.gguf) | IQ3_S | 20.7 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Tess-2.0-Mixtral-GGUF/resolve/main/Tess-2.0-Mixtral.Q3_K_S.gguf) | Q3_K_S | 20.7 | | | [GGUF](https://huggingface.co/mradermacher/Tess-2.0-Mixtral-GGUF/resolve/main/Tess-2.0-Mixtral.IQ3_M.gguf) | IQ3_M | 21.7 | | | [GGUF](https://huggingface.co/mradermacher/Tess-2.0-Mixtral-GGUF/resolve/main/Tess-2.0-Mixtral.Q3_K_M.gguf) | Q3_K_M | 22.8 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Tess-2.0-Mixtral-GGUF/resolve/main/Tess-2.0-Mixtral.Q3_K_L.gguf) | Q3_K_L | 24.4 | | | [GGUF](https://huggingface.co/mradermacher/Tess-2.0-Mixtral-GGUF/resolve/main/Tess-2.0-Mixtral.IQ4_XS.gguf) | IQ4_XS | 25.6 | | | [GGUF](https://huggingface.co/mradermacher/Tess-2.0-Mixtral-GGUF/resolve/main/Tess-2.0-Mixtral.Q4_K_S.gguf) | Q4_K_S | 27.0 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Tess-2.0-Mixtral-GGUF/resolve/main/Tess-2.0-Mixtral.Q4_K_M.gguf) | Q4_K_M | 28.7 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Tess-2.0-Mixtral-GGUF/resolve/main/Tess-2.0-Mixtral.Q5_K_S.gguf) | Q5_K_S | 32.5 | | | [GGUF](https://huggingface.co/mradermacher/Tess-2.0-Mixtral-GGUF/resolve/main/Tess-2.0-Mixtral.Q5_K_M.gguf) | Q5_K_M | 33.5 | | | [GGUF](https://huggingface.co/mradermacher/Tess-2.0-Mixtral-GGUF/resolve/main/Tess-2.0-Mixtral.Q6_K.gguf) | Q6_K | 38.6 | very good quality | | [PART 1](https://huggingface.co/mradermacher/Tess-2.0-Mixtral-GGUF/resolve/main/Tess-2.0-Mixtral.Q8_0.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Tess-2.0-Mixtral-GGUF/resolve/main/Tess-2.0-Mixtral.Q8_0.gguf.part2of2) | Q8_0 | 49.8 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
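For the multi-part Q8_0 quant listed in the table above, the parts need to be concatenated into a single file before loading. A small, hedged Python sketch of that step (any equivalent file-concatenation approach works just as well):

```python
# Joining the two Q8_0 parts from the table above into one GGUF file.
# shutil.copyfileobj streams the data, so the ~50 GB file is never held in memory.
import shutil

parts = [
    "Tess-2.0-Mixtral.Q8_0.gguf.part1of2",
    "Tess-2.0-Mixtral.Q8_0.gguf.part2of2",
]
with open("Tess-2.0-Mixtral.Q8_0.gguf", "wb") as merged:
    for part in parts:
        with open(part, "rb") as chunk:
            shutil.copyfileobj(chunk, merged)
```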
LandCruiser/Orange_13
LandCruiser
"2025-03-19T08:53:14Z"
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
"2025-03-19T08:22:26Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
NLPProject2023Z/roberta-regression
NLPProject2023Z
"2024-01-09T14:03:13Z"
174
0
transformers
[ "transformers", "tensorboard", "safetensors", "generated_from_trainer", "endpoints_compatible", "region:us" ]
null
"2024-01-09T14:02:45Z"
--- tags: - generated_from_trainer model-index: - name: roberta-regression results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # roberta-regression This model is a fine-tuned version of [](https://huggingface.co/) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.5758 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 48 - eval_batch_size: 48 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | No log | 1.0 | 84 | 0.5758 | | No log | 2.0 | 168 | 0.5758 | | No log | 3.0 | 252 | 0.5758 | | No log | 4.0 | 336 | 0.5758 | | No log | 5.0 | 420 | 0.5758 | ### Framework versions - Transformers 4.35.2 - Pytorch 2.1.0+cu121 - Datasets 2.16.1 - Tokenizers 0.15.0
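For readers who want to reproduce a comparable setup, the hyperparameters listed above map roughly onto a 🤗 Transformers TrainingArguments object as sketched below. This is an illustration, not the original training script; the output directory is a placeholder and the base checkpoint is not documented in this card.

```python
# Hedged sketch: the listed hyperparameters expressed as TrainingArguments.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="roberta-regression",   # placeholder name
    learning_rate=2e-5,
    per_device_train_batch_size=48,
    per_device_eval_batch_size=48,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=5,
)
```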
unsloth/Qwen2.5-Coder-14B-bnb-4bit
unsloth
"2024-11-12T10:03:07Z"
780
2
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "unsloth", "code", "qwen", "qwen-coder", "codeqwen", "en", "arxiv:2409.12186", "arxiv:2407.10671", "base_model:Qwen/Qwen2.5-Coder-14B", "base_model:quantized:Qwen/Qwen2.5-Coder-14B", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "4-bit", "bitsandbytes", "region:us" ]
text-generation
"2024-11-12T03:05:13Z"
--- base_model: Qwen/Qwen2.5-Coder-14B language: - en library_name: transformers license: apache-2.0 tags: - unsloth - transformers - code - qwen - qwen-coder - codeqwen --- # Finetune Llama 3.2, Qwen2.5, Gemma 2, Mistral 2-5x faster with 70% less memory via Unsloth! We have a Qwen 2.5 (all model sizes) [free Google Colab Tesla T4 notebook](https://colab.research.google.com/drive/1Kose-ucXO1IBaZq5BvbwWieuubP7hxvQ?usp=sharing). Also a [Qwen 2.5 conversational style notebook](https://colab.research.google.com/drive/1qN1CEalC70EO1wGKhNxs1go1W9So61R5?usp=sharing). [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/Discord%20button.png" width="200"/>](https://discord.gg/unsloth) [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth) ## ✨ Finetune for Free All notebooks are **beginner friendly**! Add your dataset, click "Run All", and you'll get a 2x faster finetuned model which can be exported to GGUF, vLLM or uploaded to Hugging Face. | Unsloth supports | Free Notebooks | Performance | Memory use | |-----------------|--------------------------------------------------------------------------------------------------------------------------|-------------|----------| | **Llama-3.1 8b** | [▶️ Start on Colab](https://colab.research.google.com/drive/1Ys44kVvmeZtnICzWz0xgpRnrIOjZAuxp?usp=sharing) | 2.4x faster | 58% less | | **Phi-3.5 (mini)** | [▶️ Start on Colab](https://colab.research.google.com/drive/1lN6hPQveB_mHSnTOYifygFcrO8C1bxq4?usp=sharing) | 2x faster | 50% less | | **Gemma-2 9b** | [▶️ Start on Colab](https://colab.research.google.com/drive/1vIrqH5uYDQwsJ4-OO3DErvuv4pBgVwk4?usp=sharing) | 2.4x faster | 58% less | | **Mistral 7b** | [▶️ Start on Colab](https://colab.research.google.com/drive/1Dyauq4kTZoLewQ1cApceUQVNcnnNTzg_?usp=sharing) | 2.2x faster | 62% less | | **TinyLlama** | [▶️ Start on Colab](https://colab.research.google.com/drive/1AZghoNBQaMDgWJpi4RbffGM1h6raLUj9?usp=sharing) | 3.9x faster | 74% less | | **DPO - Zephyr** | [▶️ Start on Colab](https://colab.research.google.com/drive/15vttTpzzVXv_tJwEk-hIcQ0S9FcEWvwP?usp=sharing) | 1.9x faster | 19% less | - This [conversational notebook](https://colab.research.google.com/drive/1Aau3lgPzeZKQ-98h69CCu1UJcvIBLmy2?usp=sharing) is useful for ShareGPT ChatML / Vicuna templates. - This [text completion notebook](https://colab.research.google.com/drive/1ef-tab5bhkvWmBOObepl1WgJvfvSzn5Q?usp=sharing) is for raw text. This [DPO notebook](https://colab.research.google.com/drive/15vttTpzzVXv_tJwEk-hIcQ0S9FcEWvwP?usp=sharing) replicates Zephyr. - \* Kaggle has 2x T4s, but we use 1. Due to overhead, 1x T4 is 5x faster. # unsloth/Qwen2.5-Coder-14B-bnb-4bit ## Introduction Qwen2.5-Coder is the latest series of Code-Specific Qwen large language models (formerly known as CodeQwen). As of now, Qwen2.5-Coder has covered six mainstream model sizes (0.5, 1.5, 3, 7, 14 and 32 billion parameters) to meet the needs of different developers. Qwen2.5-Coder brings the following improvements upon CodeQwen1.5: - Significant improvements in **code generation**, **code reasoning** and **code fixing**. Based on the strong Qwen2.5, we scale up the training tokens to 5.5 trillion, including source code, text-code grounding, synthetic data, etc. Qwen2.5-Coder-32B has become the current state-of-the-art open-source code LLM, with its coding abilities matching those of GPT-4o.
- A more comprehensive foundation for real-world applications such as **Code Agents**. Not only enhancing coding capabilities but also maintaining its strengths in mathematics and general competencies. **This repo contains the 0.5B Qwen2.5-Coder model**, which has the following features: - Type: Causal Language Models - Training Stage: Pretraining - Architecture: transformers with RoPE, SwiGLU, RMSNorm, Attention QKV bias and tied word embeddings - Number of Parameters: 0.49B - Number of Parameters (Non-Embedding): 0.36B - Number of Layers: 24 - Number of Attention Heads (GQA): 14 for Q and 2 for KV - Context Length: Full 32,768 tokens **We do not recommend using base language models for conversations.** Instead, you can apply post-training, e.g., SFT, RLHF, continued pretraining, etc., or fill-in-the-middle tasks on this model. For more details, please refer to our [blog](https://qwenlm.github.io/blog/qwen2.5-coder-family/), [GitHub](https://github.com/QwenLM/Qwen2.5-Coder), [Documentation](https://qwen.readthedocs.io/en/latest/), [Arxiv](https://arxiv.org/abs/2409.12186). ## Requirements The code of Qwen2.5-Coder has been included in the latest Hugging Face `transformers`, and we advise you to use the latest version of `transformers`. With `transformers<4.37.0`, you will encounter the following error: ``` KeyError: 'qwen2' ``` ## Evaluation & Performance Detailed evaluation results are reported in this [📑 blog](https://qwenlm.github.io/blog/qwen2.5-coder-family/). For requirements on GPU memory and the respective throughput, see results [here](https://qwen.readthedocs.io/en/latest/benchmark/speed_benchmark.html). ## Citation If you find our work helpful, feel free to give us a cite. ``` @article{hui2024qwen2, title={Qwen2.5-Coder Technical Report}, author={Hui, Binyuan and Yang, Jian and Cui, Zeyu and Yang, Jiaxi and Liu, Dayiheng and Zhang, Lei and Liu, Tianyu and Zhang, Jiajun and Yu, Bowen and Dang, Kai and others}, journal={arXiv preprint arXiv:2409.12186}, year={2024} } @article{qwen2, title={Qwen2 Technical Report}, author={An Yang and Baosong Yang and Binyuan Hui and Bo Zheng and Bowen Yu and Chang Zhou and Chengpeng Li and Chengyuan Li and Dayiheng Liu and Fei Huang and Guanting Dong and Haoran Wei and Huan Lin and Jialong Tang and Jialin Wang and Jian Yang and Jianhong Tu and Jianwei Zhang and Jianxin Ma and Jin Xu and Jingren Zhou and Jinze Bai and Jinzheng He and Junyang Lin and Kai Dang and Keming Lu and Keqin Chen and Kexin Yang and Mei Li and Mingfeng Xue and Na Ni and Pei Zhang and Peng Wang and Ru Peng and Rui Men and Ruize Gao and Runji Lin and Shijie Wang and Shuai Bai and Sinan Tan and Tianhang Zhu and Tianhao Li and Tianyu Liu and Wenbin Ge and Xiaodong Deng and Xiaohuan Zhou and Xingzhang Ren and Xinyu Zhang and Xipin Wei and Xuancheng Ren and Yang Fan and Yang Yao and Yichang Zhang and Yu Wan and Yunfei Chu and Yuqiong Liu and Zeyu Cui and Zhenru Zhang and Zhihao Fan}, journal={arXiv preprint arXiv:2407.10671}, year={2024} } ```
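As a quick check that the version requirement above is met, the following hedged snippet loads this pre-quantized 4-bit checkpoint with `transformers >= 4.37.0` (plus bitsandbytes and accelerate) and runs a plain completion; the prompt is only an example, and chat-style use is avoided because this is a base model.

```python
# Hedged sketch: loading the pre-quantized 4-bit checkpoint (requires transformers >= 4.37.0,
# bitsandbytes and accelerate). The prompt is an arbitrary example; as a base model it is
# used for plain completion rather than with a chat template.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "unsloth/Qwen2.5-Coder-14B-bnb-4bit"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")  # weights are already 4-bit

prompt = "def quicksort(arr):"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```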
IDEA-CCNL/Taiyi-BLIP-750M-Chinese
IDEA-CCNL
"2023-06-06T03:17:45Z"
38
14
transformers
[ "transformers", "pytorch", "safetensors", "blip", "image-text-to-text", "image-to-text", "zh", "license:apache-2.0", "region:us" ]
image-to-text
"2023-02-17T07:41:27Z"
--- pipeline_tag: image-to-text tags: - image-to-text license: apache-2.0 language: zh inference: false ---
YakovElm/Apache20Classic_64
YakovElm
"2023-05-27T02:30:58Z"
61
0
transformers
[ "transformers", "tf", "bert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
"2023-05-27T02:30:21Z"
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: Apache20Classic_64 results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # Apache20Classic_64 This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.1374 - Train Accuracy: 0.9624 - Validation Loss: 0.3081 - Validation Accuracy: 0.9055 - Epoch: 2 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False} - training_precision: float32 ### Training results | Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch | |:----------:|:--------------:|:---------------:|:-------------------:|:-----:| | 0.1664 | 0.9620 | 0.3171 | 0.9055 | 0 | | 0.1522 | 0.9624 | 0.2966 | 0.9055 | 1 | | 0.1374 | 0.9624 | 0.3081 | 0.9055 | 2 | ### Framework versions - Transformers 4.29.2 - TensorFlow 2.12.0 - Datasets 2.12.0 - Tokenizers 0.13.3
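The optimizer configuration listed above can be reconstructed in tf.keras as sketched below. This is an illustration of the reported hyperparameters, not code taken from the original training script, and it assumes the checkpoint exposes a standard TF sequence-classification head, as its tags suggest.

```python
# Hedged sketch: the Adam configuration listed above, expressed with tf.keras,
# attached to the published TF checkpoint.
import tensorflow as tf
from transformers import TFAutoModelForSequenceClassification

optimizer = tf.keras.optimizers.Adam(
    learning_rate=3e-5,
    beta_1=0.9,
    beta_2=0.999,
    epsilon=1e-8,
    clipnorm=1.0,
)

model = TFAutoModelForSequenceClassification.from_pretrained("YakovElm/Apache20Classic_64")
model.compile(optimizer=optimizer)  # transformers TF models compute their loss internally
```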
tanganke/clip-vit-large-patch14_kmnist
tanganke
"2024-12-13T03:39:54Z"
40
0
null
[ "tensorboard", "safetensors", "clip_vision_model", "dataset:tanganke/kmnist", "base_model:openai/clip-vit-large-patch14", "base_model:finetune:openai/clip-vit-large-patch14", "region:us" ]
null
"2024-12-13T03:38:23Z"
--- base_model: - openai/clip-vit-large-patch14 datasets: - tanganke/kmnist metrics: - accuracy --- # Model Card ## Training Details Adam Optimizer with a constant learning rate 1e-5 for 4000 steps training (batch_size=128). Only the vision encoder is fine-tuned. ## Evaluation Results Test set accuracy: - pre-trained: 0.09709999710321426 - fine-tuned: 0.9829999804496765
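Since only the vision encoder was fine-tuned, the checkpoint can be loaded as a CLIP vision tower and used as a feature extractor, as in the hedged sketch below; the image path is a placeholder and the image processor is taken from the base model.

```python
# Hedged sketch: using the fine-tuned vision encoder as a feature extractor.
# The image path is a placeholder; the processor comes from the base CLIP model.
import torch
from PIL import Image
from transformers import CLIPVisionModel, CLIPImageProcessor

vision_encoder = CLIPVisionModel.from_pretrained("tanganke/clip-vit-large-patch14_kmnist")
processor = CLIPImageProcessor.from_pretrained("openai/clip-vit-large-patch14")

image = Image.open("kmnist_sample.png").convert("RGB")   # placeholder image file
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    features = vision_encoder(**inputs).pooler_output     # roughly (1, 1024) for ViT-L/14
print(features.shape)
```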
akunskripsiapillv1/finetuned-chartinstruct-llama2-statista
akunskripsiapillv1
"2025-03-31T08:02:21Z"
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
"2025-03-31T08:01:54Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
cst7/candle_flux_lora_500_style
cst7
"2025-04-04T19:28:19Z"
0
0
diffusers
[ "diffusers", "text-to-image", "diffusers-training", "lora", "flux", "flux-diffusers", "template:sd-lora", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
"2025-04-04T19:14:01Z"
--- base_model: black-forest-labs/FLUX.1-dev library_name: diffusers license: other instance_prompt: a photo of sks candle widget: [] tags: - text-to-image - diffusers-training - diffusers - lora - flux - flux-diffusers - template:sd-lora --- <!-- This model card has been generated automatically according to the information the training script had access to. You should probably proofread and complete it, then remove this comment. --> # Flux DreamBooth LoRA - cst7/candle_flux_lora_500_style <Gallery /> ## Model description These are cst7/candle_flux_lora_500_style DreamBooth LoRA weights for black-forest-labs/FLUX.1-dev. The weights were trained using [DreamBooth](https://dreambooth.github.io/) with the [Flux diffusers trainer](https://github.com/huggingface/diffusers/blob/main/examples/dreambooth/README_flux.md). Was LoRA for the text encoder enabled? True. ## Trigger words You should use `a photo of sks candle` to trigger the image generation. ## Download model [Download the *.safetensors LoRA](cst7/candle_flux_lora_500_style/tree/main) in the Files & versions tab. ## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers) ```py from diffusers import AutoPipelineForText2Image import torch pipeline = AutoPipelineForText2Image.from_pretrained("black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16).to('cuda') pipeline.load_lora_weights('cst7/candle_flux_lora_500_style', weight_name='pytorch_lora_weights.safetensors') image = pipeline('a photo of sks candle').images[0] ``` For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters) ## License Please adhere to the licensing terms as described [here](https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md). ## Intended uses & limitations #### How to use ```python # TODO: add an example code snippet for running this diffusion pipeline ``` #### Limitations and bias [TODO: provide examples of latent issues and potential remediations] ## Training details [TODO: describe the data used to train the model]
rxmha125/my-scratch-transformer-colab
rxmha125
"2025-04-06T17:10:58Z"
0
0
null
[ "pytorch", "Seq2SeqTransformer_Scratch", "region:us" ]
null
"2025-04-06T17:10:48Z"
# Test README for my-scratch-transformer-colab Model files for rxmha125/my-scratch-transformer-colab
shelby-I/medical-chatbot
shelby-I
"2025-03-12T06:45:53Z"
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
"2025-03-12T06:45:36Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
DLBDAlkemy/Meta-Llama-3-8B_continual_kb_all_chunks_AMPLIFON_systemPromptNone_15_v0
DLBDAlkemy
"2025-03-05T14:06:38Z"
99
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
"2025-01-31T14:13:52Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
legacy107/adapter-flan-t5-large-bottleneck-adapter-covidqa-100-true
legacy107
"2023-10-03T03:44:34Z"
0
0
adapter-transformers
[ "adapter-transformers", "t5", "dataset:minh21/COVID-QA-sentence-Question-Answering-Transformer-data", "region:us" ]
null
"2023-10-03T03:07:58Z"
--- tags: - t5 - adapter-transformers datasets: - minh21/COVID-QA-sentence-Question-Answering-Transformer-data --- # Adapter `legacy107/adapter-flan-t5-large-bottleneck-adapter-covidqa-100-true` for google/flan-t5-large An [adapter](https://adapterhub.ml) for the `google/flan-t5-large` model that was trained on the [minh21/COVID-QA-sentence-Question-Answering-Transformer-data](https://huggingface.co/datasets/minh21/COVID-QA-sentence-Question-Answering-Transformer-data/) dataset. This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library. ## Usage First, install `adapter-transformers`: ``` pip install -U adapter-transformers ``` _Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_ Now, the adapter can be loaded and activated like this: ```python from transformers import AutoAdapterModel model = AutoAdapterModel.from_pretrained("google/flan-t5-large") adapter_name = model.load_adapter("legacy107/adapter-flan-t5-large-bottleneck-adapter-covidqa-100-true", source="hf", set_active=True) ``` ## Architecture & Training <!-- Add some description here --> ## Evaluation results <!-- Add some description here --> ## Citation <!-- Add some description here -->
sivan22/frozenLake1
sivan22
"2023-03-21T04:51:38Z"
0
0
null
[ "FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
"2023-03-21T04:51:36Z"
--- tags: - FrozenLake-v1-4x4-no_slippery - q-learning - reinforcement-learning - custom-implementation model-index: - name: frozenLake1 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: FrozenLake-v1-4x4-no_slippery type: FrozenLake-v1-4x4-no_slippery metrics: - type: mean_reward value: 1.00 +/- 0.00 name: mean_reward verified: false --- # **Q-Learning** Agent playing **FrozenLake-v1** This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**. ## Usage
```python
model = load_from_hub(repo_id="sivan22/frozenLake1", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
davera-017/ppo-SnowballTarget
davera-017
"2023-08-31T18:28:41Z"
0
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "SnowballTarget", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-SnowballTarget", "region:us" ]
reinforcement-learning
"2023-08-31T18:28:34Z"
--- library_name: ml-agents tags: - SnowballTarget - deep-reinforcement-learning - reinforcement-learning - ML-Agents-SnowballTarget --- # **ppo** Agent playing **SnowballTarget** This is a trained model of a **ppo** agent playing **SnowballTarget** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/ We wrote a complete tutorial that teaches you to train your first agent using ML-Agents and publish it to the Hub: - A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction - A *longer tutorial* to understand how ML-Agents works: https://huggingface.co/learn/deep-rl-course/unit5/introduction ### Resume the training ```bash mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser**: 1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity 2. Find your model_id: davera-017/ppo-SnowballTarget 3. Select your *.nn /*.onnx file 4. Click on Watch the agent play 👀
JoAmps/littledataset
JoAmps
"2022-10-26T21:53:03Z"
4
0
transformers
[ "transformers", "pytorch", "roberta", "fill-mask", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
"2022-10-26T21:39:48Z"
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: littledataset results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # littledataset This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.0000 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | No log | 1.0 | 169 | 0.0001 | | No log | 2.0 | 338 | 0.0000 | | 0.0036 | 3.0 | 507 | 0.0001 | | 0.0036 | 4.0 | 676 | 0.0000 | | 0.0036 | 5.0 | 845 | 0.0000 | ### Framework versions - Transformers 4.20.1 - Pytorch 1.12.1 - Datasets 2.5.1 - Tokenizers 0.12.1
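Although the card gives no usage snippet, the checkpoint is a distilroberta-base masked-language model, so it can presumably be queried with the standard fill-mask pipeline; the example sentence below is arbitrary.

```python
# Hedged sketch: querying the fine-tuned masked-language model with the fill-mask pipeline.
# RoBERTa-style models use "<mask>" as the mask token; the sentence is an arbitrary example.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="JoAmps/littledataset")
for prediction in fill_mask("The report was filed on <mask>."):
    print(prediction["token_str"], round(prediction["score"], 4))
```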
syberkrime99/Chaeryeong
syberkrime99
"2023-04-22T05:54:41Z"
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
"2023-04-22T05:50:47Z"
--- license: creativeml-openrail-m ---
mbeukman/xlm-roberta-base-finetuned-igbo-finetuned-ner-swahili
mbeukman
"2021-11-25T09:04:02Z"
6
1
transformers
[ "transformers", "pytorch", "xlm-roberta", "token-classification", "NER", "sw", "dataset:masakhaner", "arxiv:2103.11811", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
"2022-03-02T23:29:05Z"
--- language: - sw tags: - NER datasets: - masakhaner metrics: - f1 - precision - recall widget: - text: "Wizara ya afya ya Tanzania imeripoti Jumatatu kuwa , watu takriban 14 zaidi wamepata maambukizi ya Covid - 19 ." --- # xlm-roberta-base-finetuned-igbo-finetuned-ner-swahili This is a token classification (specifically NER) model obtained by fine-tuning [xlm-roberta-base-finetuned-igbo](https://huggingface.co/Davlan/xlm-roberta-base-finetuned-igbo) on the [MasakhaNER](https://arxiv.org/abs/2103.11811) dataset, specifically the Swahili part. More information, and other similar models, can be found in the [main Github repository](https://github.com/Michael-Beukman/NERTransfer). ## About This model is transformer-based and was fine-tuned on the MasakhaNER dataset. MasakhaNER is a named entity recognition dataset containing mostly news articles in 10 different African languages. The model was fine-tuned for 50 epochs, with a maximum sequence length of 200, a batch size of 32 and a learning rate of 5e-5. This process was repeated 5 times (with different random seeds), and this uploaded model performed the best out of those 5 seeds (aggregate F1 on test set). This model was fine-tuned by me, Michael Beukman, while doing a project at the University of the Witwatersrand, Johannesburg. This is version 1, as of 20 November 2021. This model is licensed under the [Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0). ### Contact & More information For more information about the models, including training scripts, detailed results and further resources, you can visit the [main Github repository](https://github.com/Michael-Beukman/NERTransfer). You can contact me by filing an issue on this repository. ### Training Resources In the interest of openness, and reporting resources used, we list here how long the training process took, as well as what the minimum resources would be to reproduce this. Fine-tuning each model on the NER dataset took between 10 and 30 minutes, and was performed on an NVIDIA RTX3090 GPU. To use a batch size of 32, at least 14GB of GPU memory was required, although it was just possible to fit these models in around 6.5GB of VRAM when using a batch size of 1. ## Data The train, evaluation and test datasets were taken directly from the MasakhaNER [Github](https://github.com/masakhane-io/masakhane-ner) repository, with minimal to no preprocessing, as the original dataset is already of high quality. The motivation for the use of this data is that it is the "first large, publicly available, high-quality dataset for named entity recognition (NER) in ten African languages" ([source](https://arxiv.org/pdf/2103.11811.pdf)). The high-quality data, as well as the groundwork laid by the paper introducing it, are some more reasons why this dataset was used. For evaluation, the dedicated test split was used, which is from the same distribution as the training data, so this model may not generalise to other distributions, and further testing would need to be done to investigate this. The exact distribution of the data is covered in detail [here](https://arxiv.org/abs/2103.11811). ## Intended Use This model is intended to be used for NLP research into e.g. interpretability or transfer learning. Using this model in production is not supported, as generalisability and overall performance is limited. In particular, this is not designed to be used in any important downstream task that could affect people, as harm could be caused by the limitations of the model, described next.
## Limitations This model was only trained on one (relatively small) dataset, covering one task (NER) in one domain (news articles) and in a set span of time. The results may not generalise, and the model may perform badly, or in an unfair / biased way, if used on other tasks. Although the purpose of this project was to investigate transfer learning, the performance on languages that the model was not trained for does suffer. Because this model used xlm-roberta-base as its starting point (potentially with domain adaptive fine-tuning on specific languages), the limitations of that base model can also apply here. These can include being biased towards the hegemonic viewpoint of most of its training data, being ungrounded and having subpar results on other languages (possibly due to unbalanced training data). As [Adelani et al. (2021)](https://arxiv.org/abs/2103.11811) showed, the models in general struggled with entities that were either longer than 3 words or not contained in the training data. This could bias the models towards not finding, e.g. names of people that have many words, possibly leading to a misrepresentation in the results. Similarly, names that are uncommon, and may not have been found in the training data (due to e.g. different languages) would also be predicted less often. Additionally, this model has not been verified in practice, and other, more subtle problems may become prevalent if used without any verification that it does what it is supposed to. ### Privacy & Ethical Considerations The data comes only from publicly available news sources, so it should cover only public figures and those who agreed to be reported on. See the original MasakhaNER paper for more details. No explicit ethical considerations or adjustments were made during fine-tuning of this model. ## Metrics The language adaptive models achieve (mostly) superior performance over starting with xlm-roberta-base. Our main metric was the aggregate F1 score for all NER categories. These metrics are on the test set for MasakhaNER, so the data distribution is similar to the training set; these results therefore do not directly indicate how well these models generalise. We do find large variation in transfer results when starting from different seeds (5 different seeds were tested), indicating that the fine-tuning process for transfer might be unstable. The metrics used were chosen to be consistent with previous work, and to facilitate research. Other metrics may be more appropriate for other purposes. ## Caveats and Recommendations In general, this model performed worse on the 'date' category compared to others, so if dates are a critical factor, then that might need to be taken into account and addressed, by for example collecting and annotating more data. ## Model Structure Here are some performance details on this specific model, compared to others we trained. All of these metrics were calculated on the test set, and the seed that gave the best overall F1 score was chosen. The first three result columns are averaged over all categories, and the last 4 provide performance broken down by category.
This model can predict the following labels for a token ([source](https://huggingface.co/Davlan/xlm-roberta-large-masakhaner)): Abbreviation|Description -|- O|Outside of a named entity B-DATE |Beginning of a DATE entity right after another DATE entity I-DATE |DATE entity B-PER |Beginning of a person’s name right after another person’s name I-PER |Person’s name B-ORG |Beginning of an organisation right after another organisation I-ORG |Organisation B-LOC |Beginning of a location right after another location I-LOC |Location | Model Name | Starting point | Evaluation / Fine-tune Language | F1 | Precision | Recall | F1 (DATE) | F1 (LOC) | F1 (ORG) | F1 (PER) | | -------------------------------------------------- | -------------------- | -------------------- | -------------- | -------------- | -------------- | -------------- | -------------- | -------------- | -------------- | | [xlm-roberta-base-finetuned-igbo-finetuned-ner-swahili](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-igbo-finetuned-ner-swahili) (This model) | [ibo](https://huggingface.co/Davlan/xlm-roberta-base-finetuned-igbo) | swa | 87.75 | 86.55 | 88.97 | 85.00 | 92.00 | 77.00 | 91.00 | | [xlm-roberta-base-finetuned-hausa-finetuned-ner-swahili](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-hausa-finetuned-ner-swahili) | [hau](https://huggingface.co/Davlan/xlm-roberta-base-finetuned-hausa) | swa | 88.36 | 86.95 | 89.82 | 86.00 | 91.00 | 77.00 | 94.00 | | [xlm-roberta-base-finetuned-kinyarwanda-finetuned-ner-swahili](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-kinyarwanda-finetuned-ner-swahili) | [kin](https://huggingface.co/Davlan/xlm-roberta-base-finetuned-kinyarwanda) | swa | 87.26 | 85.15 | 89.48 | 83.00 | 91.00 | 75.00 | 93.00 | | [xlm-roberta-base-finetuned-luganda-finetuned-ner-swahili](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-luganda-finetuned-ner-swahili) | [lug](https://huggingface.co/Davlan/xlm-roberta-base-finetuned-luganda) | swa | 88.93 | 87.64 | 90.25 | 83.00 | 92.00 | 79.00 | 95.00 | | [xlm-roberta-base-finetuned-luo-finetuned-ner-swahili](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-luo-finetuned-ner-swahili) | [luo](https://huggingface.co/Davlan/xlm-roberta-base-finetuned-luo) | swa | 87.93 | 86.91 | 88.97 | 83.00 | 91.00 | 76.00 | 94.00 | | [xlm-roberta-base-finetuned-naija-finetuned-ner-swahili](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-naija-finetuned-ner-swahili) | [pcm](https://huggingface.co/Davlan/xlm-roberta-base-finetuned-naija) | swa | 87.26 | 85.15 | 89.48 | 83.00 | 91.00 | 75.00 | 93.00 | | [xlm-roberta-base-finetuned-swahili-finetuned-ner-swahili](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-swahili-finetuned-ner-swahili) | [swa](https://huggingface.co/Davlan/xlm-roberta-base-finetuned-swahili) | swa | 90.36 | 88.59 | 92.20 | 86.00 | 93.00 | 79.00 | 96.00 | | [xlm-roberta-base-finetuned-wolof-finetuned-ner-swahili](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-wolof-finetuned-ner-swahili) | [wol](https://huggingface.co/Davlan/xlm-roberta-base-finetuned-wolof) | swa | 87.80 | 86.50 | 89.14 | 86.00 | 90.00 | 78.00 | 93.00 | | [xlm-roberta-base-finetuned-yoruba-finetuned-ner-swahili](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-yoruba-finetuned-ner-swahili) | [yor](https://huggingface.co/Davlan/xlm-roberta-base-finetuned-yoruba) | swa | 87.73 | 86.67 | 88.80 | 85.00 | 91.00 | 75.00 | 93.00 | | 
[xlm-roberta-base-finetuned-ner-swahili](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-ner-swahili) | [base](https://huggingface.co/xlm-roberta-base) | swa | 88.71 | 86.84 | 90.67 | 83.00 | 91.00 | 79.00 | 95.00 | ## Usage To use this model (or others), you can do the following, just changing the model name ([source](https://huggingface.co/dslim/bert-base-NER)): ``` from transformers import AutoTokenizer, AutoModelForTokenClassification from transformers import pipeline model_name = 'mbeukman/xlm-roberta-base-finetuned-igbo-finetuned-ner-swahili' tokenizer = AutoTokenizer.from_pretrained(model_name) model = AutoModelForTokenClassification.from_pretrained(model_name) nlp = pipeline("ner", model=model, tokenizer=tokenizer) example = "Wizara ya afya ya Tanzania imeripoti Jumatatu kuwa , watu takriban 14 zaidi wamepata maambukizi ya Covid - 19 ." ner_results = nlp(example) print(ner_results) ```
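Building on the usage snippet in the card above, the sketch below (an illustrative addition, not from the original card) passes `aggregation_strategy="simple"` so that the B-/I- sub-word predictions are merged into whole entity spans.

```python
from transformers import AutoTokenizer, AutoModelForTokenClassification, pipeline

model_name = "mbeukman/xlm-roberta-base-finetuned-igbo-finetuned-ner-swahili"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForTokenClassification.from_pretrained(model_name)

# "simple" aggregation merges consecutive sub-word predictions into whole entities.
nlp = pipeline("ner", model=model, tokenizer=tokenizer, aggregation_strategy="simple")

example = "Wizara ya afya ya Tanzania imeripoti Jumatatu kuwa , watu takriban 14 zaidi wamepata maambukizi ya Covid - 19 ."
for entity in nlp(example):
    print(entity["entity_group"], entity["word"], round(float(entity["score"]), 3))
```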
butlert/Llama-2-7b-chat-hf-sharded-bf16-fine-tuned-adapters
butlert
"2023-11-21T19:31:40Z"
0
0
peft
[ "peft", "arxiv:1910.09700", "base_model:Trelis/Llama-2-7b-chat-hf-sharded-bf16", "base_model:adapter:Trelis/Llama-2-7b-chat-hf-sharded-bf16", "region:us" ]
null
"2023-11-21T19:31:38Z"
--- library_name: peft base_model: Trelis/Llama-2-7b-chat-hf-sharded-bf16 --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ## Training procedure The following `bitsandbytes` quantization config was used during training: - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: True - bnb_4bit_compute_dtype: bfloat16 ### Framework versions - PEFT 0.6.3.dev0
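The card lists the `bitsandbytes` quantization config used during training but no loading code; the snippet below is a hedged sketch (assuming a CUDA GPU with `bitsandbytes` installed) of reconstructing that config and attaching the LoRA adapter to the quantized base model.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

base_id = "Trelis/Llama-2-7b-chat-hf-sharded-bf16"
adapter_id = "butlert/Llama-2-7b-chat-hf-sharded-bf16-fine-tuned-adapters"

# 4-bit NF4 quantization mirroring the values listed in the card.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(
    base_id, quantization_config=bnb_config, device_map="auto"
)

# Attach the trained LoRA adapter weights on top of the quantized base model.
model = PeftModel.from_pretrained(base_model, adapter_id)
```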
isspek/xlnet-base-cased_ebola_chatgpt_2_2e-5_16
isspek
"2024-11-23T15:58:39Z"
117
0
transformers
[ "transformers", "safetensors", "xlnet", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
"2024-11-23T15:58:13Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
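Since the card above is an unfilled template, the snippet below is only an assumed usage sketch: the repository tags indicate an XLNet text-classification checkpoint, but the label set is not documented, so inspect `model.config.id2label` before relying on the outputs.

```python
from transformers import pipeline

# The label names/meaning are not documented in the card; check config.id2label.
classifier = pipeline(
    "text-classification",
    model="isspek/xlnet-base-cased_ebola_chatgpt_2_2e-5_16",
)
print(classifier.model.config.id2label)
print(classifier("Example news claim about an Ebola outbreak."))
```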
TFOCUS/mjmoney
TFOCUS
"2025-03-10T13:16:51Z"
0
0
null
[ "safetensors", "any-to-any", "omega", "omegalabs", "bittensor", "agi", "license:mit", "region:us" ]
any-to-any
"2025-03-10T13:13:32Z"
--- license: mit tags: - any-to-any - omega - omegalabs - bittensor - agi --- This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet. Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
facebook/mms-tts-azj-script_cyrillic
facebook
"2023-09-01T13:50:19Z"
116
0
transformers
[ "transformers", "pytorch", "safetensors", "vits", "text-to-audio", "mms", "text-to-speech", "arxiv:2305.13516", "license:cc-by-nc-4.0", "endpoints_compatible", "region:us" ]
text-to-speech
"2023-09-01T13:50:04Z"
--- license: cc-by-nc-4.0 tags: - mms - vits pipeline_tag: text-to-speech --- # Massively Multilingual Speech (MMS): Azerbaijani, North Text-to-Speech This repository contains the **Azerbaijani, North (azj-script_cyrillic)** language text-to-speech (TTS) model checkpoint. This model is part of Facebook's [Massively Multilingual Speech](https://arxiv.org/abs/2305.13516) project, aiming to provide speech technology across a diverse range of languages. You can find more details about the supported languages and their ISO 639-3 codes in the [MMS Language Coverage Overview](https://dl.fbaipublicfiles.com/mms/misc/language_coverage_mms.html), and see all MMS-TTS checkpoints on the Hugging Face Hub: [facebook/mms-tts](https://huggingface.co/models?sort=trending&search=facebook%2Fmms-tts). MMS-TTS is available in the 🤗 Transformers library from version 4.33 onwards. ## Model Details VITS (**V**ariational **I**nference with adversarial learning for end-to-end **T**ext-to-**S**peech) is an end-to-end speech synthesis model that predicts a speech waveform conditional on an input text sequence. It is a conditional variational autoencoder (VAE) comprised of a posterior encoder, decoder, and conditional prior. A set of spectrogram-based acoustic features is predicted by the flow-based module, which is formed of a Transformer-based text encoder and multiple coupling layers. The spectrogram is decoded using a stack of transposed convolutional layers, much in the same style as the HiFi-GAN vocoder. Motivated by the one-to-many nature of the TTS problem, where the same text input can be spoken in multiple ways, the model also includes a stochastic duration predictor, which allows the model to synthesise speech with different rhythms from the same input text. The model is trained end-to-end with a combination of losses derived from the variational lower bound and adversarial training. To improve the expressiveness of the model, normalizing flows are applied to the conditional prior distribution. During inference, the text encodings are up-sampled based on the duration prediction module, and then mapped into the waveform using a cascade of the flow module and HiFi-GAN decoder. Due to the stochastic nature of the duration predictor, the model is non-deterministic, and thus requires a fixed seed to generate the same speech waveform. For the MMS project, a separate VITS checkpoint is trained on each language. ## Usage MMS-TTS is available in the 🤗 Transformers library from version 4.33 onwards. To use this checkpoint, first install the latest version of the library: ``` pip install --upgrade transformers accelerate ``` Then, run inference with the following code snippet: ```python from transformers import VitsModel, AutoTokenizer import torch model = VitsModel.from_pretrained("facebook/mms-tts-azj-script_cyrillic") tokenizer = AutoTokenizer.from_pretrained("facebook/mms-tts-azj-script_cyrillic") text = "some example text in the Azerbaijani, North language" inputs = tokenizer(text, return_tensors="pt") with torch.no_grad(): output = model(**inputs).waveform ``` The resulting waveform can be saved as a `.wav` file: ```python import scipy scipy.io.wavfile.write("techno.wav", rate=model.config.sampling_rate, data=output) ``` Or displayed in a Jupyter Notebook / Google Colab: ```python from IPython.display import Audio Audio(output, rate=model.config.sampling_rate) ``` ## BibTex citation This model was developed by Vineel Pratap et al. from Meta AI. 
If you use the model, consider citing the MMS paper: ``` @article{pratap2023mms, title={Scaling Speech Technology to 1,000+ Languages}, author={Vineel Pratap and Andros Tjandra and Bowen Shi and Paden Tomasello and Arun Babu and Sayani Kundu and Ali Elkahky and Zhaoheng Ni and Apoorv Vyas and Maryam Fazel-Zarandi and Alexei Baevski and Yossi Adi and Xiaohui Zhang and Wei-Ning Hsu and Alexis Conneau and Michael Auli}, journal={arXiv}, year={2023} } ``` ## License The model is licensed as **CC-BY-NC 4.0**.
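Because the card notes that the stochastic duration predictor makes generation non-deterministic, the sketch below (an illustrative addition, not from the original card) fixes the seed so repeated runs produce the same waveform; the seed value itself is arbitrary.

```python
import torch
from transformers import VitsModel, AutoTokenizer, set_seed

model = VitsModel.from_pretrained("facebook/mms-tts-azj-script_cyrillic")
tokenizer = AutoTokenizer.from_pretrained("facebook/mms-tts-azj-script_cyrillic")

inputs = tokenizer("some example text in the Azerbaijani, North language", return_tensors="pt")

# Fixing the seed makes the stochastic duration predictor reproducible.
set_seed(555)  # arbitrary seed value
with torch.no_grad():
    waveform = model(**inputs).waveform
```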
DreadPoor/Heart_Stolen-8B-Model_Stock
DreadPoor
"2025-01-19T15:54:53Z"
75
5
transformers
[ "transformers", "safetensors", "llama", "text-generation", "mergekit", "merge", "conversational", "arxiv:2403.19522", "base_model:DreadPoor/ScaduTorrent1.1-8b-model_stock", "base_model:merge:DreadPoor/ScaduTorrent1.1-8b-model_stock", "base_model:NeverSleep/Lumimaid-v0.2-8B", "base_model:merge:NeverSleep/Lumimaid-v0.2-8B", "base_model:Orenguteng/Llama-3.1-8B-Lexi-Uncensored-V2", "base_model:merge:Orenguteng/Llama-3.1-8B-Lexi-Uncensored-V2", "base_model:ResplendentAI/NoWarning_Llama3", "base_model:merge:ResplendentAI/NoWarning_Llama3", "base_model:ResplendentAI/Smarts_Llama3", "base_model:merge:ResplendentAI/Smarts_Llama3", "base_model:ResplendentAI/Theory_of_Mind_Llama3", "base_model:merge:ResplendentAI/Theory_of_Mind_Llama3", "base_model:Sao10K/L3-8B-Stheno-v3.2", "base_model:merge:Sao10K/L3-8B-Stheno-v3.2", "base_model:akjindal53244/Llama-3.1-Storm-8B", "base_model:merge:akjindal53244/Llama-3.1-Storm-8B", "base_model:kloodia/lora-8b-bio", "base_model:merge:kloodia/lora-8b-bio", "base_model:kloodia/lora-8b-medic", "base_model:merge:kloodia/lora-8b-medic", "base_model:kloodia/lora-8b-physic", "base_model:merge:kloodia/lora-8b-physic", "base_model:mlabonne/Hermes-3-Llama-3.1-8B-lorablated", "base_model:merge:mlabonne/Hermes-3-Llama-3.1-8B-lorablated", "license:cc-by-nc-4.0", "model-index", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
"2024-09-09T16:14:37Z"
--- license: cc-by-nc-4.0 library_name: transformers tags: - mergekit - merge base_model: - akjindal53244/Llama-3.1-Storm-8B - ResplendentAI/NoWarning_Llama3 - Orenguteng/Llama-3.1-8B-Lexi-Uncensored-V2 - kloodia/lora-8b-physic - Sao10K/L3-8B-Stheno-v3.2 - ResplendentAI/Smarts_Llama3 - NeverSleep/Lumimaid-v0.2-8B - ResplendentAI/Theory_of_Mind_Llama3 - DreadPoor/ScaduTorrent1.1-8b-model_stock - kloodia/lora-8b-medic - mlabonne/Hermes-3-Llama-3.1-8B-lorablated - kloodia/lora-8b-bio model-index: - name: Heart_Stolen-8B-Model_Stock results: - task: type: text-generation name: Text Generation dataset: name: IFEval (0-Shot) type: HuggingFaceH4/ifeval args: num_few_shot: 0 metrics: - type: inst_level_strict_acc and prompt_level_strict_acc value: 72.45 name: strict accuracy source: url: >- https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=DreadPoor/Heart_Stolen-8B-Model_Stock name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: BBH (3-Shot) type: BBH args: num_few_shot: 3 metrics: - type: acc_norm value: 34.44 name: normalized accuracy source: url: >- https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=DreadPoor/Heart_Stolen-8B-Model_Stock name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MATH Lvl 5 (4-Shot) type: hendrycks/competition_math args: num_few_shot: 4 metrics: - type: exact_match value: 14.65 name: exact match source: url: >- https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=DreadPoor/Heart_Stolen-8B-Model_Stock name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: GPQA (0-shot) type: Idavidrein/gpqa args: num_few_shot: 0 metrics: - type: acc_norm value: 8.95 name: acc_norm source: url: >- https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=DreadPoor/Heart_Stolen-8B-Model_Stock name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MuSR (0-shot) type: TAUR-Lab/MuSR args: num_few_shot: 0 metrics: - type: acc_norm value: 12.36 name: acc_norm source: url: >- https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=DreadPoor/Heart_Stolen-8B-Model_Stock name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MMLU-PRO (5-shot) type: TIGER-Lab/MMLU-Pro config: main split: test args: num_few_shot: 5 metrics: - type: acc value: 31.04 name: accuracy source: url: >- https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=DreadPoor/Heart_Stolen-8B-Model_Stock name: Open LLM Leaderboard --- # merge This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ![image/png](https://cdn-uploads.huggingface.co/production/uploads/632149f88c0da827c72dccde/UVOwT7Gsqu3My_2TMGqb1.png) ## Merge Details ### Merge Method This model was merged using the [Model Stock](https://arxiv.org/abs/2403.19522) merge method using [Sao10K/L3-8B-Stheno-v3.2](https://huggingface.co/Sao10K/L3-8B-Stheno-v3.2) + [ResplendentAI/Smarts_Llama3](https://huggingface.co/ResplendentAI/Smarts_Llama3) as a base. 
### Models Merged The following models were included in the merge: * [akjindal53244/Llama-3.1-Storm-8B](https://huggingface.co/akjindal53244/Llama-3.1-Storm-8B) + [ResplendentAI/NoWarning_Llama3](https://huggingface.co/ResplendentAI/NoWarning_Llama3) * [Orenguteng/Llama-3.1-8B-Lexi-Uncensored-V2](https://huggingface.co/Orenguteng/Llama-3.1-8B-Lexi-Uncensored-V2) + [kloodia/lora-8b-physic](https://huggingface.co/kloodia/lora-8b-physic) * [NeverSleep/Lumimaid-v0.2-8B](https://huggingface.co/NeverSleep/Lumimaid-v0.2-8B) + [ResplendentAI/Theory_of_Mind_Llama3](https://huggingface.co/ResplendentAI/Theory_of_Mind_Llama3) * [DreadPoor/ScaduTorrent1.1-8b-model_stock](https://huggingface.co/DreadPoor/ScaduTorrent1.1-8b-model_stock) + [kloodia/lora-8b-medic](https://huggingface.co/kloodia/lora-8b-medic) * [mlabonne/Hermes-3-Llama-3.1-8B-lorablated](https://huggingface.co/mlabonne/Hermes-3-Llama-3.1-8B-lorablated) + [kloodia/lora-8b-bio](https://huggingface.co/kloodia/lora-8b-bio) ### Configuration The following YAML configuration was used to produce this model: ```yaml models: - model: Orenguteng/Llama-3.1-8B-Lexi-Uncensored-V2+kloodia/lora-8b-physic - model: akjindal53244/Llama-3.1-Storm-8B+ResplendentAI/NoWarning_Llama3 - model: DreadPoor/ScaduTorrent1.1-8b-model_stock+kloodia/lora-8b-medic - model: NeverSleep/Lumimaid-v0.2-8B+ResplendentAI/Theory_of_Mind_Llama3 - model: mlabonne/Hermes-3-Llama-3.1-8B-lorablated+kloodia/lora-8b-bio merge_method: model_stock base_model: Sao10K/L3-8B-Stheno-v3.2+ResplendentAI/Smarts_Llama3 normalize: false int8_mask: true dtype: bfloat16 ``` # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_DreadPoor__Heart_Stolen-8B-Model_Stock) | Metric |Value| |-------------------|----:| |Avg. |28.98| |IFEval (0-Shot) |72.45| |BBH (3-Shot) |34.44| |MATH Lvl 5 (4-Shot)|14.65| |GPQA (0-shot) | 8.95| |MuSR (0-shot) |12.36| |MMLU-PRO (5-shot) |31.04|
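The card above documents the merge recipe but no inference code; the snippet below is a hedged sketch of loading the merged checkpoint for chat-style generation, assuming it inherits the Llama 3.1 chat template from its constituent models (this is not stated in the card).

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "DreadPoor/Heart_Stolen-8B-Model_Stock"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

messages = [{"role": "user", "content": "Write a short haiku about merged language models."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```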
manuth/whisper-base-khmer
manuth
"2025-02-20T17:23:24Z"
1
0
transformers
[ "transformers", "tensorboard", "safetensors", "whisper", "automatic-speech-recognition", "generated_from_trainer", "dataset:fleurs", "base_model:openai/whisper-base", "base_model:finetune:openai/whisper-base", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
"2025-02-19T04:40:03Z"
--- library_name: transformers license: apache-2.0 base_model: openai/whisper-base tags: - generated_from_trainer datasets: - fleurs metrics: - wer model-index: - name: whisper-base-khmer results: - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: fleurs type: fleurs config: km_kh split: test args: km_kh metrics: - name: Wer type: wer value: 0.9567538446468802 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # whisper-base-khmer This model is a fine-tuned version of [openai/whisper-base](https://huggingface.co/openai/whisper-base) on the fleurs dataset. It achieves the following results on the evaluation set: - Loss: 0.6861 - Wer: 0.9568 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 128 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 1.1913 | 1.0 | 158 | 1.1945 | 1.0348 | | 0.8548 | 2.0 | 316 | 0.8276 | 0.9761 | | 0.6434 | 3.0 | 474 | 0.6861 | 0.9568 | ### Framework versions - Transformers 4.48.3 - Pytorch 2.5.1+cu124 - Datasets 3.3.1 - Tokenizers 0.21.0
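The card above reports training details but no inference example; the snippet below is a minimal sketch (the audio file path is a placeholder) of transcribing Khmer speech with the fine-tuned checkpoint via the ASR pipeline.

```python
from transformers import pipeline

# Transcribe a Khmer recording with the fine-tuned Whisper checkpoint.
asr = pipeline(
    "automatic-speech-recognition",
    model="manuth/whisper-base-khmer",
    chunk_length_s=30,  # chunk long recordings into 30-second windows
)
result = asr("speech_sample.wav")  # placeholder path to a local audio file
print(result["text"])
```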
mlx-community/starcoder2-15b-instruct-v0.1
mlx-community
"2024-04-30T14:46:24Z"
10
0
transformers
[ "transformers", "safetensors", "starcoder2", "text-generation", "code", "mlx", "conversational", "dataset:bigcode/self-oss-instruct-sc2-exec-filter-50k", "base_model:bigcode/starcoder2-15b", "base_model:finetune:bigcode/starcoder2-15b", "license:bigcode-openrail-m", "model-index", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
"2024-04-30T14:09:19Z"
--- license: bigcode-openrail-m library_name: transformers tags: - code - mlx base_model: bigcode/starcoder2-15b datasets: - bigcode/self-oss-instruct-sc2-exec-filter-50k pipeline_tag: text-generation model-index: - name: starcoder2-15b-instruct-v0.1 results: - task: type: text-generation dataset: name: LiveCodeBench (code generation) type: livecodebench-codegeneration metrics: - type: pass@1 value: 20.4 - task: type: text-generation dataset: name: LiveCodeBench (self repair) type: livecodebench-selfrepair metrics: - type: pass@1 value: 20.9 - task: type: text-generation dataset: name: LiveCodeBench (test output prediction) type: livecodebench-testoutputprediction metrics: - type: pass@1 value: 29.8 - task: type: text-generation dataset: name: LiveCodeBench (code execution) type: livecodebench-codeexecution metrics: - type: pass@1 value: 28.1 - task: type: text-generation dataset: name: HumanEval type: humaneval metrics: - type: pass@1 value: 72.6 - task: type: text-generation dataset: name: HumanEval+ type: humanevalplus metrics: - type: pass@1 value: 63.4 - task: type: text-generation dataset: name: MBPP type: mbpp metrics: - type: pass@1 value: 75.2 - task: type: text-generation dataset: name: MBPP+ type: mbppplus metrics: - type: pass@1 value: 61.2 - task: type: text-generation dataset: name: DS-1000 type: ds-1000 metrics: - type: pass@1 value: 40.6 --- # mlx-community/starcoder2-15b-instruct-v0.1 This model was converted to MLX format from [`bigcode/starcoder2-15b-instruct-v0.1`]() using mlx-lm version **0.12.1**. Refer to the [original model card](https://huggingface.co/bigcode/starcoder2-15b-instruct-v0.1) for more details on the model. ## Use with mlx ```bash pip install mlx-lm ``` ```python from mlx_lm import load, generate model, tokenizer = load("mlx-community/starcoder2-15b-instruct-v0.1") response = generate(model, tokenizer, prompt="hello", verbose=True) ```
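Since this is an instruction-tuned model, the raw `prompt="hello"` example above may under-use it; the sketch below is an assumed variant that applies the tokenizer's chat template before generation (if the converted tokenizer does not ship a chat template, format the prompt manually as described in the original model card).

```python
from mlx_lm import load, generate

model, tokenizer = load("mlx-community/starcoder2-15b-instruct-v0.1")

# Wrap the request in the instruct chat format before generating.
messages = [{"role": "user", "content": "Write a Python function that checks whether a number is prime."}]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

response = generate(model, tokenizer, prompt=prompt, verbose=True)
```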
haedahae/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-huge_domestic_cow
haedahae
"2025-04-10T17:06:31Z"
3
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am huge domestic cow", "trl", "conversational", "arxiv:2402.03300", "base_model:Gensyn/Qwen2.5-0.5B-Instruct", "base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
"2025-04-08T12:46:26Z"
--- base_model: Gensyn/Qwen2.5-0.5B-Instruct library_name: transformers model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-huge_domestic_cow tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am huge domestic cow - trl licence: license --- # Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-huge_domestic_cow This model is a fine-tuned version of [Gensyn/Qwen2.5-0.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-0.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="haedahae/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-huge_domestic_cow", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.15.2 - Transformers: 4.51.1 - Pytorch: 2.5.1 - Datasets: 3.5.0 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```