Dataset columns and observed value ranges:

| Column | Type | Observed range / values |
|---|---|---|
| modelId | string | length 5 to 139 |
| author | string | length 2 to 42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 to 2025-06-29 06:27:49 |
| downloads | int64 | 0 to 223M |
| likes | int64 | 0 to 11.7k |
| library_name | string (categorical) | 502 distinct values |
| tags | sequence | length 1 to 4.05k |
| pipeline_tag | string (categorical) | 54 distinct values |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 to 2025-06-29 06:23:06 |
| card | string | length 11 to 1.01M |
MinkyuRamen/facebook_bart-base_en2ko_translation_PREFIX_TUNING_v2
MinkyuRamen
2024-03-17T19:22:37Z
2
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:facebook/bart-base", "base_model:adapter:facebook/bart-base", "region:us" ]
null
2024-03-17T19:22:34Z
--- library_name: peft base_model: facebook/bart-base --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.9.0
bartowski/Mistral-Evolved-11b-v0.1-GGUF
bartowski
2024-03-17T19:20:25Z
457
7
null
[ "gguf", "text-generation", "license:apache-2.0", "endpoints_compatible", "region:us" ]
text-generation
2024-03-17T14:27:35Z
--- license: apache-2.0 quantized_by: bartowski pipeline_tag: text-generation --- ## Llamacpp Quantizations of Mistral-Evolved-11b-v0.1 Using <a href="https://github.com/ggerganov/llama.cpp/">llama.cpp</a> release <a href="https://github.com/ggerganov/llama.cpp/releases/tag/b2440">b2440</a> for quantization. Original model: https://huggingface.co/Replete-AI/Mistral-Evolved-11b-v0.1 Download a file (not the whole branch) from below: | Filename | Quant type | File Size | Description | | -------- | ---------- | --------- | ----------- | | [Mistral-Evolved-11b-v0.1-Q8_0.gguf](https://huggingface.co/bartowski/Mistral-Evolved-11b-v0.1-GGUF/blob/main/Mistral-Evolved-11b-v0.1-Q8_0.gguf) | Q8_0 | 11.86GB | Extremely high quality, generally unneeded but max available quant. | | [Mistral-Evolved-11b-v0.1-Q6_K.gguf](https://huggingface.co/bartowski/Mistral-Evolved-11b-v0.1-GGUF/blob/main/Mistral-Evolved-11b-v0.1-Q6_K.gguf) | Q6_K | 9.16GB | Very high quality, near perfect, *recommended*. | | [Mistral-Evolved-11b-v0.1-Q5_K_M.gguf](https://huggingface.co/bartowski/Mistral-Evolved-11b-v0.1-GGUF/blob/main/Mistral-Evolved-11b-v0.1-Q5_K_M.gguf) | Q5_K_M | 7.90GB | High quality, very usable. | | [Mistral-Evolved-11b-v0.1-Q5_K_S.gguf](https://huggingface.co/bartowski/Mistral-Evolved-11b-v0.1-GGUF/blob/main/Mistral-Evolved-11b-v0.1-Q5_K_S.gguf) | Q5_K_S | 7.69GB | High quality, very usable. | | [Mistral-Evolved-11b-v0.1-Q5_0.gguf](https://huggingface.co/bartowski/Mistral-Evolved-11b-v0.1-GGUF/blob/main/Mistral-Evolved-11b-v0.1-Q5_0.gguf) | Q5_0 | 7.69GB | High quality, older format, generally not recommended. | | [Mistral-Evolved-11b-v0.1-Q4_K_M.gguf](https://huggingface.co/bartowski/Mistral-Evolved-11b-v0.1-GGUF/blob/main/Mistral-Evolved-11b-v0.1-Q4_K_M.gguf) | Q4_K_M | 6.72GB | Good quality, similar to 4.25 bpw. | | [Mistral-Evolved-11b-v0.1-Q4_K_S.gguf](https://huggingface.co/bartowski/Mistral-Evolved-11b-v0.1-GGUF/blob/main/Mistral-Evolved-11b-v0.1-Q4_K_S.gguf) | Q4_K_S | 6.36GB | Slightly lower quality with small space savings. | | [Mistral-Evolved-11b-v0.1-Q4_0.gguf](https://huggingface.co/bartowski/Mistral-Evolved-11b-v0.1-GGUF/blob/main/Mistral-Evolved-11b-v0.1-Q4_0.gguf) | Q4_0 | 6.31GB | Decent quality, older format, generally not recommended. | | [Mistral-Evolved-11b-v0.1-Q3_K_L.gguf](https://huggingface.co/bartowski/Mistral-Evolved-11b-v0.1-GGUF/blob/main/Mistral-Evolved-11b-v0.1-Q3_K_L.gguf) | Q3_K_L | 5.87GB | Lower quality but usable, good for low RAM availability. | | [Mistral-Evolved-11b-v0.1-Q3_K_M.gguf](https://huggingface.co/bartowski/Mistral-Evolved-11b-v0.1-GGUF/blob/main/Mistral-Evolved-11b-v0.1-Q3_K_M.gguf) | Q3_K_M | 5.40GB | Even lower quality. | | [Mistral-Evolved-11b-v0.1-Q3_K_S.gguf](https://huggingface.co/bartowski/Mistral-Evolved-11b-v0.1-GGUF/blob/main/Mistral-Evolved-11b-v0.1-Q3_K_S.gguf) | Q3_K_S | 4.85GB | Low quality, not recommended. | | [Mistral-Evolved-11b-v0.1-Q2_K.gguf](https://huggingface.co/bartowski/Mistral-Evolved-11b-v0.1-GGUF/blob/main/Mistral-Evolved-11b-v0.1-Q2_K.gguf) | Q2_K | 4.16GB | Extremely low quality, *not* recommended. Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski
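The card above tells readers to download a single quantized file rather than the whole branch; a minimal sketch of doing that with the `huggingface_hub` client (the Q4_K_M filename is picked from the table as an example):

```python
# Minimal sketch: download one GGUF quant from the repo with huggingface_hub.
# The Q4_K_M file is used as an example; swap in any filename from the table above.
from huggingface_hub import hf_hub_download

local_path = hf_hub_download(
    repo_id="bartowski/Mistral-Evolved-11b-v0.1-GGUF",
    filename="Mistral-Evolved-11b-v0.1-Q4_K_M.gguf",
)
print(local_path)  # path to the cached GGUF file, ready for llama.cpp
```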
automerger/AlloyingotneoyExperiment28-7B
automerger
2024-03-17T19:09:24Z
5
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "merge", "mergekit", "lazymergekit", "automerger", "base_model:nlpguy/AlloyIngotNeoY", "base_model:merge:nlpguy/AlloyIngotNeoY", "base_model:yam-peleg/Experiment28-7B", "base_model:merge:yam-peleg/Experiment28-7B", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-03-17T19:08:31Z
--- license: apache-2.0 tags: - merge - mergekit - lazymergekit - automerger base_model: - nlpguy/AlloyIngotNeoY - yam-peleg/Experiment28-7B --- # AlloyingotneoyExperiment28-7B AlloyingotneoyExperiment28-7B is an automated merge created by [Maxime Labonne](https://huggingface.co/mlabonne) using the following configuration. * [nlpguy/AlloyIngotNeoY](https://huggingface.co/nlpguy/AlloyIngotNeoY) * [yam-peleg/Experiment28-7B](https://huggingface.co/yam-peleg/Experiment28-7B) ## 🧩 Configuration ```yaml slices: - sources: - model: nlpguy/AlloyIngotNeoY layer_range: [0, 32] - model: yam-peleg/Experiment28-7B layer_range: [0, 32] merge_method: slerp base_model: nlpguy/AlloyIngotNeoY parameters: t: - filter: self_attn value: [0, 0.5, 0.3, 0.7, 1] - filter: mlp value: [1, 0.5, 0.7, 0.3, 0] - value: 0.5 dtype: bfloat16 random_seed: 0 ``` ## 💻 Usage ```python !pip install -qU transformers accelerate from transformers import AutoTokenizer import transformers import torch model = "automerger/AlloyingotneoyExperiment28-7B" messages = [{"role": "user", "content": "What is a large language model?"}] tokenizer = AutoTokenizer.from_pretrained(model) prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) pipeline = transformers.pipeline( "text-generation", model=model, torch_dtype=torch.float16, device_map="auto", ) outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95) print(outputs[0]["generated_text"]) ```
santhoshmlops/Mistral-7B-Instruct-v0.2-SFT
santhoshmlops
2024-03-17T19:02:42Z
0
0
peft
[ "peft", "tensorboard", "safetensors", "trl", "sft", "generated_from_trainer", "base_model:mistralai/Mistral-7B-Instruct-v0.2", "base_model:adapter:mistralai/Mistral-7B-Instruct-v0.2", "license:apache-2.0", "region:us" ]
null
2024-03-12T04:02:12Z
--- license: apache-2.0 library_name: peft tags: - trl - sft - generated_from_trainer base_model: mistralai/Mistral-7B-Instruct-v0.2 model-index: - name: Mistral-7B-Instruct-v0.2-SFT results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Mistral-7B-Instruct-v0.2-SFT This model is a fine-tuned version of [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 3 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 3 - total_train_batch_size: 9 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.03 - training_steps: 500 - mixed_precision_training: Native AMP ### Training results ### Framework versions - PEFT 0.9.0 - Transformers 4.38.2 - Pytorch 2.2.1+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
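The card above documents only the training setup; a minimal, hedged sketch of loading the resulting PEFT adapter on top of the base checkpoint it names follows (full-precision fp16 loading is assumed, quantized loading is omitted, and the prompt is illustrative):

```python
# Minimal sketch: attach the PEFT/LoRA adapter to its Mistral base model.
# Full-precision fp16 weights are assumed; use quantization if memory is tight.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "mistralai/Mistral-7B-Instruct-v0.2"
adapter_id = "santhoshmlops/Mistral-7B-Instruct-v0.2-SFT"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.float16, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)

inputs = tokenizer("[INST] Summarize what supervised fine-tuning does. [/INST]", return_tensors="pt").to(base.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```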
jgibb/t-5_base_extractive_512_375
jgibb
2024-03-17T18:57:45Z
108
0
transformers
[ "transformers", "tensorboard", "safetensors", "t5", "text2text-generation", "generated_from_trainer", "base_model:google-t5/t5-base", "base_model:finetune:google-t5/t5-base", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2024-03-17T17:59:51Z
--- license: apache-2.0 base_model: t5-base tags: - generated_from_trainer metrics: - rouge - wer model-index: - name: t-5_base_extractive_512_375 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # t-5_base_extractive_512_375 This model is a fine-tuned version of [t5-base](https://huggingface.co/t5-base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.2983 - Rouge1: 0.6512 - Rouge2: 0.3701 - Rougel: 0.5849 - Rougelsum: 0.585 - Wer: 0.5233 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 6 - eval_batch_size: 6 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:------:| | No log | 0.13 | 250 | 1.4099 | 0.6341 | 0.3466 | 0.5641 | 0.5641 | 0.5468 | | 1.898 | 0.27 | 500 | 1.3537 | 0.6413 | 0.3565 | 0.5736 | 0.5737 | 0.5367 | | 1.898 | 0.4 | 750 | 1.3306 | 0.645 | 0.3619 | 0.5779 | 0.578 | 0.5309 | | 1.4411 | 0.53 | 1000 | 1.3141 | 0.6481 | 0.3659 | 0.581 | 0.5812 | 0.5275 | | 1.4411 | 0.66 | 1250 | 1.3056 | 0.6502 | 0.3684 | 0.5833 | 0.5835 | 0.525 | | 1.4014 | 0.8 | 1500 | 1.3004 | 0.6507 | 0.3694 | 0.5843 | 0.5845 | 0.5235 | | 1.4014 | 0.93 | 1750 | 1.2983 | 0.6512 | 0.3701 | 0.5849 | 0.585 | 0.5233 | ### Framework versions - Transformers 4.38.2 - Pytorch 2.2.1+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
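The card above reports ROUGE/WER scores but no usage snippet; a minimal inference sketch is below (the input prefix used during fine-tuning is not documented in the card, so plain article text is assumed):

```python
# Minimal sketch: run the fine-tuned T5 checkpoint through a text2text pipeline.
# The training-time input format is undocumented, so plain text input is assumed.
from transformers import pipeline

summarizer = pipeline("text2text-generation", model="jgibb/t-5_base_extractive_512_375")
article = "Replace this with the document you want an extractive summary of."
print(summarizer(article, max_new_tokens=128)[0]["generated_text"])
```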
selmamalak/chest-vit-base-finetuned
selmamalak
2024-03-17T18:56:53Z
4
0
peft
[ "peft", "safetensors", "generated_from_trainer", "base_model:google/vit-base-patch16-224-in21k", "base_model:adapter:google/vit-base-patch16-224-in21k", "license:apache-2.0", "region:us" ]
null
2024-03-17T18:44:36Z
--- license: apache-2.0 library_name: peft tags: - generated_from_trainer metrics: - accuracy - precision - recall - f1 base_model: google/vit-base-patch16-224-in21k model-index: - name: chest-vit-base-finetuned results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # chest-vit-base-finetuned This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.1026 - Accuracy: 0.9622 - Precision: 0.9506 - Recall: 0.9596 - F1: 0.9549 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.005 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:| | 0.211 | 0.99 | 63 | 0.1140 | 0.9605 | 0.9401 | 0.9616 | 0.9501 | | 0.1911 | 1.99 | 127 | 0.1517 | 0.9330 | 0.8989 | 0.9483 | 0.9186 | | 0.1695 | 3.0 | 191 | 0.1163 | 0.9579 | 0.9354 | 0.9609 | 0.9471 | | 0.1556 | 4.0 | 255 | 0.1159 | 0.9571 | 0.9669 | 0.9220 | 0.9417 | | 0.173 | 4.99 | 318 | 0.1166 | 0.9502 | 0.9229 | 0.9578 | 0.9381 | | 0.1485 | 5.99 | 382 | 0.0825 | 0.9717 | 0.9578 | 0.9702 | 0.9638 | | 0.1854 | 7.0 | 446 | 0.0878 | 0.9717 | 0.9578 | 0.9702 | 0.9638 | | 0.1353 | 8.0 | 510 | 0.1060 | 0.9588 | 0.9351 | 0.9647 | 0.9484 | | 0.1196 | 8.99 | 573 | 0.0882 | 0.9691 | 0.9527 | 0.9695 | 0.9607 | | 0.1218 | 9.88 | 630 | 0.0982 | 0.9639 | 0.9419 | 0.9703 | 0.9548 | ### Framework versions - PEFT 0.9.0 - Transformers 4.38.2 - Pytorch 2.2.1+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
Samvardhan777/gemma-7b-unsloth-german-to-English-v1
Samvardhan777
2024-03-17T18:29:33Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "gemma", "trl", "en", "base_model:unsloth/gemma-7b-bnb-4bit", "base_model:finetune:unsloth/gemma-7b-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-03-17T18:29:07Z
--- language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - gemma - trl base_model: unsloth/gemma-7b-bnb-4bit --- # Uploaded model - **Developed by:** Samvardhan777 - **License:** apache-2.0 - **Finetuned from model :** unsloth/gemma-7b-bnb-4bit This gemma model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
Samvardhan777/gemma-7b-unsloth-german-to-English-v1-merged
Samvardhan777
2024-03-17T18:28:29Z
2
0
transformers
[ "transformers", "pytorch", "gemma", "text-generation", "text-generation-inference", "unsloth", "trl", "sft", "conversational", "en", "base_model:unsloth/gemma-7b-bnb-4bit", "base_model:finetune:unsloth/gemma-7b-bnb-4bit", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2024-03-17T18:21:03Z
--- language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - gemma - trl - sft base_model: unsloth/gemma-7b-bnb-4bit --- # Uploaded model - **Developed by:** Samvardhan777 - **License:** apache-2.0 - **Finetuned from model :** unsloth/gemma-7b-bnb-4bit This gemma model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
Atnafu/English-Amharic-MT
Atnafu
2024-03-17T18:27:24Z
112
0
transformers
[ "transformers", "pytorch", "m2m_100", "text2text-generation", "translation", "am", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2023-09-22T23:29:38Z
--- license: apache-2.0 language: - am - en metrics: - bleu pipeline_tag: translation ---
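The card above contains only metadata (license, languages, BLEU metric, translation pipeline tag); a minimal inference sketch for the M2M100-based checkpoint follows. The `en`/`am` language codes follow the usual M2M100 convention and are an assumption here, not something the card states.

```python
# Minimal sketch: English -> Amharic translation with the M2M100-based checkpoint.
# The "en"/"am" codes follow the standard M2M100 convention (assumed, not stated in the card).
from transformers import pipeline

translator = pipeline("translation", model="Atnafu/English-Amharic-MT", src_lang="en", tgt_lang="am")
print(translator("Good morning, how are you?")[0]["translation_text"])
```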
Holarissun/REP17X2_weightx2.0_zephyr3b_aisft_gsm8k_rand_alphalinear_beta0.95_epoch2-subset7000
Holarissun
2024-03-17T18:25:39Z
0
0
peft
[ "peft", "safetensors", "trl", "sft", "generated_from_trainer", "base_model:stabilityai/stablelm-zephyr-3b", "base_model:adapter:stabilityai/stablelm-zephyr-3b", "license:other", "region:us" ]
null
2024-03-17T18:25:35Z
--- license: other library_name: peft tags: - trl - sft - generated_from_trainer base_model: stabilityai/stablelm-zephyr-3b model-index: - name: REP17X2_weightx2.0_zephyr3b_aisft_gsm8k_rand_alphalinear_beta0.95_epoch2-subset7000 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # REP17X2_weightx2.0_zephyr3b_aisft_gsm8k_rand_alphalinear_beta0.95_epoch2-subset7000 This model is a fine-tuned version of [stabilityai/stablelm-zephyr-3b](https://huggingface.co/stabilityai/stablelm-zephyr-3b) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 1 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results ### Framework versions - PEFT 0.9.0 - Transformers 4.38.2 - Pytorch 2.2.1+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
datajose/shawgpt-ft
datajose
2024-03-17T18:25:21Z
3
0
peft
[ "peft", "tensorboard", "safetensors", "generated_from_trainer", "base_model:TheBloke/Mistral-7B-Instruct-v0.2-GPTQ", "base_model:adapter:TheBloke/Mistral-7B-Instruct-v0.2-GPTQ", "license:apache-2.0", "region:us" ]
null
2024-03-10T23:50:40Z
--- license: apache-2.0 library_name: peft tags: - generated_from_trainer base_model: TheBloke/Mistral-7B-Instruct-v0.2-GPTQ model-index: - name: shawgpt-ft results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # shawgpt-ft This model is a fine-tuned version of [TheBloke/Mistral-7B-Instruct-v0.2-GPTQ](https://huggingface.co/TheBloke/Mistral-7B-Instruct-v0.2-GPTQ) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 3.3011 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-06 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 2 - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 3.8126 | 1.0 | 135 | 3.6444 | | 3.4888 | 2.0 | 271 | 3.3927 | | 3.3257 | 2.99 | 405 | 3.3011 | ### Framework versions - PEFT 0.9.0 - Transformers 4.38.2 - Pytorch 2.1.0+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
datajose/pruebas-ft
datajose
2024-03-17T18:25:18Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-03-10T23:33:43Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
vinothkumar1996/AI
vinothkumar1996
2024-03-17T18:22:35Z
5
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "merge", "mergekit", "lazymergekit", "NousResearch/Hermes-2-Pro-Mistral-7B", "mistralai/Mistral-7B-Instruct-v0.2", "conversational", "base_model:NousResearch/Hermes-2-Pro-Mistral-7B", "base_model:merge:NousResearch/Hermes-2-Pro-Mistral-7B", "base_model:mistralai/Mistral-7B-Instruct-v0.2", "base_model:merge:mistralai/Mistral-7B-Instruct-v0.2", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-03-17T18:18:23Z
--- tags: - merge - mergekit - lazymergekit - NousResearch/Hermes-2-Pro-Mistral-7B - mistralai/Mistral-7B-Instruct-v0.2 base_model: - NousResearch/Hermes-2-Pro-Mistral-7B - mistralai/Mistral-7B-Instruct-v0.2 --- # AI AI is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing): * [NousResearch/Hermes-2-Pro-Mistral-7B](https://huggingface.co/NousResearch/Hermes-2-Pro-Mistral-7B) * [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) ## 🧩 Configuration ```yaml slices: - sources: - model: NousResearch/Hermes-2-Pro-Mistral-7B layer_range: [0, 32] - model: mistralai/Mistral-7B-Instruct-v0.2 layer_range: [0, 32] merge_method: slerp base_model: NousResearch/Hermes-2-Pro-Mistral-7B parameters: t: - filter: self_attn value: [0, 0.5, 0.3, 0.7, 1] - filter: mlp value: [1, 0.5, 0.7, 0.3, 0] - value: 0.5 dtype: bfloat16 ``` ## 💻 Usage ```python !pip install -qU transformers accelerate from transformers import AutoTokenizer import transformers import torch model = "vinothkumar1996/AI" messages = [{"role": "user", "content": "What is a large language model?"}] tokenizer = AutoTokenizer.from_pretrained(model) prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) pipeline = transformers.pipeline( "text-generation", model=model, torch_dtype=torch.float16, device_map="auto", ) outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95) print(outputs[0]["generated_text"]) ```
Holarissun/REP17X2_weightx2.0_zephyr3b_aisft_gsm8k_seq_alphalinear_beta0.95_epoch2-subset7000
Holarissun
2024-03-17T18:21:53Z
0
0
peft
[ "peft", "safetensors", "trl", "sft", "generated_from_trainer", "base_model:stabilityai/stablelm-zephyr-3b", "base_model:adapter:stabilityai/stablelm-zephyr-3b", "license:other", "region:us" ]
null
2024-03-17T18:21:50Z
--- license: other library_name: peft tags: - trl - sft - generated_from_trainer base_model: stabilityai/stablelm-zephyr-3b model-index: - name: REP17X2_weightx2.0_zephyr3b_aisft_gsm8k_seq_alphalinear_beta0.95_epoch2-subset7000 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # REP17X2_weightx2.0_zephyr3b_aisft_gsm8k_seq_alphalinear_beta0.95_epoch2-subset7000 This model is a fine-tuned version of [stabilityai/stablelm-zephyr-3b](https://huggingface.co/stabilityai/stablelm-zephyr-3b) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 1 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results ### Framework versions - PEFT 0.9.0 - Transformers 4.38.2 - Pytorch 2.2.1+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
Holarissun/REP17X2_weightx2.0_zephyr3b_aisft_gsm8k_seq_alphaorig_beta1.0_epoch2-subset7000
Holarissun
2024-03-17T18:21:47Z
2
0
peft
[ "peft", "safetensors", "trl", "sft", "generated_from_trainer", "base_model:stabilityai/stablelm-zephyr-3b", "base_model:adapter:stabilityai/stablelm-zephyr-3b", "license:other", "region:us" ]
null
2024-03-17T18:21:43Z
--- license: other library_name: peft tags: - trl - sft - generated_from_trainer base_model: stabilityai/stablelm-zephyr-3b model-index: - name: REP17X2_weightx2.0_zephyr3b_aisft_gsm8k_seq_alphaorig_beta1.0_epoch2-subset7000 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # REP17X2_weightx2.0_zephyr3b_aisft_gsm8k_seq_alphaorig_beta1.0_epoch2-subset7000 This model is a fine-tuned version of [stabilityai/stablelm-zephyr-3b](https://huggingface.co/stabilityai/stablelm-zephyr-3b) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 1 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results ### Framework versions - PEFT 0.9.0 - Transformers 4.38.2 - Pytorch 2.2.1+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
michaelcheung/bert-finetuned-ner
michaelcheung
2024-03-17T18:17:58Z
107
0
transformers
[ "transformers", "tensorboard", "safetensors", "bert", "token-classification", "generated_from_trainer", "base_model:google-bert/bert-base-cased", "base_model:finetune:google-bert/bert-base-cased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2024-03-17T11:43:35Z
--- license: apache-2.0 base_model: bert-base-cased tags: - generated_from_trainer metrics: - precision - recall - f1 - accuracy model-index: - name: bert-finetuned-ner results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-finetuned-ner This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.0640 - Precision: 0.9330 - Recall: 0.9488 - F1: 0.9408 - Accuracy: 0.9857 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.0751 | 1.0 | 1756 | 0.0716 | 0.9029 | 0.9325 | 0.9175 | 0.9803 | | 0.0342 | 2.0 | 3512 | 0.0726 | 0.9327 | 0.9443 | 0.9385 | 0.9846 | | 0.0208 | 3.0 | 5268 | 0.0640 | 0.9330 | 0.9488 | 0.9408 | 0.9857 | ### Framework versions - Transformers 4.38.2 - Pytorch 2.2.1+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
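A minimal inference sketch for the NER fine-tune described above (the entity label set is not documented in the card, so whatever labels the checkpoint carries are returned as-is):

```python
# Minimal sketch: run the fine-tuned BERT NER model with token aggregation.
# The label scheme is not documented in the card; labels come straight from the checkpoint.
from transformers import pipeline

ner = pipeline("token-classification", model="michaelcheung/bert-finetuned-ner", aggregation_strategy="simple")
print(ner("Hugging Face was founded in New York by Clement Delangue."))
```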
AlignmentResearch/robust_llm_pythia-imdb-2.8b-mz-ada-v3
AlignmentResearch
2024-03-17T18:12:09Z
3
0
transformers
[ "transformers", "safetensors", "gpt_neox", "text-classification", "generated_from_trainer", "base_model:EleutherAI/pythia-2.8b-deduped", "base_model:finetune:EleutherAI/pythia-2.8b-deduped", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-classification
2024-03-17T18:10:22Z
--- license: apache-2.0 tags: - generated_from_trainer base_model: EleutherAI/pythia-2.8b-deduped model-index: - name: robust_llm_pythia-imdb-2.8b-mz-ada-v3 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # robust_llm_pythia-imdb-2.8b-mz-ada-v3 This model is a fine-tuned version of [EleutherAI/pythia-2.8b-deduped](https://huggingface.co/EleutherAI/pythia-2.8b-deduped) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 4 - eval_batch_size: 32 - seed: 42 - distributed_type: multi-GPU - num_devices: 2 - total_train_batch_size: 8 - total_eval_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Training results ### Framework versions - Transformers 4.37.2 - Pytorch 2.2.0 - Datasets 2.17.0 - Tokenizers 0.15.2
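A minimal inference sketch for the classifier described above. The card does not document the label names or the IMDB preprocessing, so the checkpoint's raw labels are assumed; note the 2.8B backbone needs substantial memory.

```python
# Minimal sketch: sentiment-style classification with the fine-tuned Pythia model.
# Label names are not documented in the card, so the checkpoint's raw labels are printed.
from transformers import pipeline

classifier = pipeline("text-classification", model="AlignmentResearch/robust_llm_pythia-imdb-2.8b-mz-ada-v3")
print(classifier("This movie was a complete waste of time."))
```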
Mattizza/PPO-LunarLander-v2_v0__DeepRLCourse
Mattizza
2024-03-17T18:08:33Z
3
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "gymnasium", "en", "license:mit", "model-index", "region:us" ]
reinforcement-learning
2024-03-17T17:41:11Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 - gymnasium model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 264.37 +/- 27.14 name: mean_reward verified: false license: mit language: - en pipeline_tag: reinforcement-learning --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). It also represents my first attempt to effectively train a RL agent using **StableBaselines3** and **Gymnasium**, done during the 🤗 Deep Reinforcement Learning Course. ## Usage (with Stable-baselines3) ```python import gymnasium as gym from huggingface_sb3 import load_from_hub from stable_baselines3 import PPO from stable_baselines3.common.monitor import Monitor from stable_baselines3.common.evaluation import evaluate_policy repo_id = "Mattizza/PPO-LunarLander-v2_v0__DeepRLCourse" filename = "ppo-LunarLander-v2_v0.zip" checkpoint = load_from_hub(repo_id, filename) model = PPO.load(checkpoint, print_system_info=True) # Evaluate the agent eval_env = Monitor(gym.make("LunarLander-v2")) mean_reward, std_reward = evaluate_policy(model, eval_env, n_eval_episodes=10, deterministic=True) print(f"mean_reward={mean_reward:.2f} +/- {std_reward}") ```
digiplay/snowpear_anime
digiplay
2024-03-17T18:00:57Z
323
5
diffusers
[ "diffusers", "safetensors", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "license:other", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-11-15T20:28:06Z
--- license: other tags: - stable-diffusion - stable-diffusion-diffusers - text-to-image - diffusers inference: true --- Model info: https://civitai.com/models/114679/snowpearanime
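The card above only links the Civitai page; a minimal text-to-image sketch with diffusers follows (a CUDA GPU and fp16 weights are assumed, and the prompt is illustrative):

```python
# Minimal sketch: generate an image with the Stable Diffusion checkpoint above.
# Assumes a CUDA GPU; drop torch_dtype and .to("cuda") to run (slowly) on CPU.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("digiplay/snowpear_anime", torch_dtype=torch.float16).to("cuda")
image = pipe("a snow pear tree in blossom, anime style, soft lighting").images[0]
image.save("snowpear_anime.png")
```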
Soyeda10/BanglaTextSummarization
Soyeda10
2024-03-17T18:00:17Z
161
1
transformers
[ "transformers", "safetensors", "t5", "text2text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2024-03-17T17:59:42Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
GorBatchLab/bert-finetuned-squad
GorBatchLab
2024-03-17T17:59:57Z
134
0
transformers
[ "transformers", "tensorboard", "safetensors", "bert", "question-answering", "generated_from_trainer", "dataset:squad", "base_model:google-bert/bert-base-cased", "base_model:finetune:google-bert/bert-base-cased", "license:apache-2.0", "endpoints_compatible", "region:us" ]
question-answering
2024-03-16T15:18:00Z
--- license: apache-2.0 base_model: bert-base-cased tags: - generated_from_trainer datasets: - squad model-index: - name: bert-finetuned-squad results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-finetuned-squad This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the squad dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.38.1 - Pytorch 2.1.2 - Datasets 2.1.0 - Tokenizers 0.15.2
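A minimal extractive question-answering sketch for the SQuAD fine-tune described above (the question/context pair is illustrative):

```python
# Minimal sketch: extractive QA with the BERT model fine-tuned on SQuAD.
from transformers import pipeline

qa = pipeline("question-answering", model="GorBatchLab/bert-finetuned-squad")
result = qa(
    question="Where is the Eiffel Tower located?",
    context="The Eiffel Tower is a wrought-iron lattice tower on the Champ de Mars in Paris, France.",
)
print(result["answer"], result["score"])
```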
JeanL-0/AnalysisIA
JeanL-0
2024-03-17T17:51:33Z
0
0
null
[ "region:us" ]
null
2024-03-17T17:04:34Z
# Bitcoin Price Prediction Model This model is a RandomForestClassifier trained to predict whether the price of Bitcoin will rise or fall based on technical indicators. ## How to Use To make predictions with this model, instantiate the `BitcoinPricePredictor` class and call its `predict` method with the input data. ## Training The model was trained on historical Bitcoin data with technical indicators such as SMA and RSI. Training involved hyperparameter optimization using GridSearchCV.
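The card above describes the workflow only in prose; a hedged sketch of that workflow follows. The feature names, the parameter grid, the synthetic data, and the `BitcoinPricePredictor` wrapper below are illustrative assumptions, not the author's actual code.

```python
# Hedged sketch of the described workflow: a RandomForestClassifier over technical
# indicators (SMA, RSI), tuned with GridSearchCV. Feature names, the grid, and the
# wrapper class are illustrative assumptions, not the author's published code.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV


class BitcoinPricePredictor:
    """Thin wrapper exposing predict(), as the card describes (hypothetical)."""

    def __init__(self, model):
        self.model = model

    def predict(self, features: pd.DataFrame) -> np.ndarray:
        # Assumed encoding: 1 = price expected to rise, 0 = expected to fall.
        return self.model.predict(features)


# Synthetic stand-in for historical Bitcoin data with SMA/RSI features.
rng = np.random.default_rng(0)
X = pd.DataFrame({"sma_14": rng.normal(100.0, 5.0, 200), "rsi_14": rng.uniform(20.0, 80.0, 200)})
y = rng.integers(0, 2, 200)

search = GridSearchCV(
    RandomForestClassifier(random_state=0),
    param_grid={"n_estimators": [100, 300], "max_depth": [None, 10]},
    cv=3,
)
search.fit(X, y)

predictor = BitcoinPricePredictor(search.best_estimator_)
print(predictor.predict(X.head()))
```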
selmamalak/chest-swin-large-finetuned
selmamalak
2024-03-17T17:41:46Z
0
0
peft
[ "peft", "safetensors", "generated_from_trainer", "base_model:microsoft/swin-large-patch4-window7-224-in22k", "base_model:adapter:microsoft/swin-large-patch4-window7-224-in22k", "license:apache-2.0", "region:us" ]
null
2024-03-17T17:19:42Z
--- license: apache-2.0 library_name: peft tags: - generated_from_trainer metrics: - accuracy - precision - recall - f1 base_model: microsoft/swin-large-patch4-window7-224-in22k model-index: - name: chest-swin-large-finetuned results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # chest-swin-large-finetuned This model is a fine-tuned version of [microsoft/swin-large-patch4-window7-224-in22k](https://huggingface.co/microsoft/swin-large-patch4-window7-224-in22k) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.1159 - Accuracy: 0.9588 - Precision: 0.9599 - Recall: 0.9401 - F1: 0.9492 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.005 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:| | 0.3305 | 0.99 | 63 | 0.1600 | 0.9365 | 0.9478 | 0.8868 | 0.9119 | | 0.2335 | 1.99 | 127 | 0.1552 | 0.9313 | 0.8968 | 0.9472 | 0.9166 | | 0.1977 | 3.0 | 191 | 0.0855 | 0.9734 | 0.9608 | 0.9714 | 0.9659 | | 0.1746 | 4.0 | 255 | 0.0870 | 0.9794 | 0.9794 | 0.9669 | 0.9729 | | 0.1797 | 4.99 | 318 | 0.0829 | 0.9700 | 0.9549 | 0.9690 | 0.9617 | | 0.1436 | 5.99 | 382 | 0.0797 | 0.9708 | 0.9556 | 0.9707 | 0.9628 | | 0.1632 | 7.0 | 446 | 0.0816 | 0.9700 | 0.9508 | 0.9754 | 0.9621 | | 0.1125 | 8.0 | 510 | 0.1007 | 0.9614 | 0.9365 | 0.9717 | 0.9519 | | 0.1076 | 8.99 | 573 | 0.0900 | 0.9691 | 0.9482 | 0.9770 | 0.9612 | | 0.1188 | 9.88 | 630 | 0.1064 | 0.9622 | 0.9377 | 0.9723 | 0.9530 | ### Framework versions - PEFT 0.9.0 - Transformers 4.38.2 - Pytorch 2.2.1+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
Owhslp/nous_researcher_tuning_2_82
Owhslp
2024-03-17T17:38:42Z
3
0
transformers
[ "transformers", "safetensors", "gemma", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-03-17T17:19:40Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
ajibawa-2023/OpenHermes-2.5-Code-290k-13B
ajibawa-2023
2024-03-17T17:32:41Z
130
11
transformers
[ "transformers", "pytorch", "safetensors", "llama", "text-generation", "code", "finetune", "synthetic data", "text-generation-inference", "conversational", "en", "dataset:ajibawa-2023/OpenHermes-2.5-Code-290k", "dataset:teknium/OpenHermes-2.5", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2024-03-01T09:15:13Z
--- language: - en license: apache-2.0 tags: - code - finetune - synthetic data - text-generation-inference - conversational datasets: - ajibawa-2023/OpenHermes-2.5-Code-290k - teknium/OpenHermes-2.5 model-index: - name: OpenHermes-2.5-Code-290k-13B results: - task: type: text-generation name: Text Generation dataset: name: AI2 Reasoning Challenge (25-Shot) type: ai2_arc config: ARC-Challenge split: test args: num_few_shot: 25 metrics: - type: acc_norm value: 57.34 name: normalized accuracy source: url: >- https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ajibawa-2023/OpenHermes-2.5-Code-290k-13B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: HellaSwag (10-Shot) type: hellaswag split: validation args: num_few_shot: 10 metrics: - type: acc_norm value: 80.48 name: normalized accuracy source: url: >- https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ajibawa-2023/OpenHermes-2.5-Code-290k-13B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MMLU (5-Shot) type: cais/mmlu config: all split: test args: num_few_shot: 5 metrics: - type: acc value: 56.53 name: accuracy source: url: >- https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ajibawa-2023/OpenHermes-2.5-Code-290k-13B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: TruthfulQA (0-shot) type: truthful_qa config: multiple_choice split: validation args: num_few_shot: 0 metrics: - type: mc2 value: 52.5 source: url: >- https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ajibawa-2023/OpenHermes-2.5-Code-290k-13B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: Winogrande (5-shot) type: winogrande config: winogrande_xl split: validation args: num_few_shot: 5 metrics: - type: acc value: 74.82 name: accuracy source: url: >- https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ajibawa-2023/OpenHermes-2.5-Code-290k-13B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: GSM8k (5-shot) type: gsm8k config: main split: test args: num_few_shot: 5 metrics: - type: acc value: 58.3 name: accuracy source: url: >- https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ajibawa-2023/OpenHermes-2.5-Code-290k-13B name: Open LLM Leaderboard --- **OpenHermes-2.5-Code-290k-13B** OpenHermes-2.5-Code-290k-13B is a state of the art Llama-2 Fine-tune, which is trained on additional code dataset. This Model is much better than teknium's [model](https://huggingface.co/teknium/OpenHermes-2.5-Mistral-7B). You can check the **Eval results** below. This model is trained on my existing dataset [OpenHermes-2.5-Code-290k](https://huggingface.co/datasets/ajibawa-2023/OpenHermes-2.5-Code-290k). This dataset is amalgamation of two datasets. I have used [OpenHermes-2.5](https://huggingface.co/datasets/teknium/OpenHermes-2.5) a super quality dataset made avaliable by teknium. Other datset is my own [Code-290k-ShareGPT](https://huggingface.co/datasets/ajibawa-2023/Code-290k-ShareGPT). Dataset is in Vicuna/ShareGPT format. There are around **1.29 million** set of conversations. I have cleaned the dataset provided by Teknium and removed metadata such as "source" & "category" etc. This dataset has primarily synthetically generated instruction and chat samples. 
This model has enhanced coding capabilities besides other capabilities such as **blogging, story generation, Q&A and many more**. **Training:** The entire model was trained on 4 x A100 80GB. Training for 2 epochs took **21 days**. The FastChat & DeepSpeed codebases were used for training. The base model is Llama-2 by Meta, and this is a fully fine-tuned model. Links for quantized models will be updated soon. **GPTQ, GGUF, AWQ & Exllama** GPTQ: TBA GGUF: [Link](https://huggingface.co/LoneStriker/OpenHermes-2.5-Code-290k-13B-GGUF) AWQ: TBA Exllama v2: [Link](https://huggingface.co/bartowski/OpenHermes-2.5-Code-290k-13B-exl2) Special thanks to [LoneStriker](https://huggingface.co/LoneStriker) and [bartowski](https://huggingface.co/bartowski/) for quantising. **Example Prompt:** ``` This is a conversation with your helpful AI assistant. AI assistant can generate Code in various Programming Languages along with necessary explanation. It can generate Story, Blogs ..... Context You are a helpful AI assistant. USER: <prompt> ASSISTANT: ``` You can modify the above prompt as per your requirements. I have used the ShareGPT/Vicuna v1.1 format. Special thanks to the open-source community for helping & guiding me to better understand AI/model development. Thank you for your love & support. **Example Output** I will update soon. # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_ajibawa-2023__OpenHermes-2.5-Code-290k-13B) | Metric |Value| |---------------------------------|----:| |Avg. |63.33| |AI2 Reasoning Challenge (25-Shot)|57.34| |HellaSwag (10-Shot) |80.48| |MMLU (5-Shot) |56.53| |TruthfulQA (0-shot) |52.50| |Winogrande (5-shot) |74.82| |GSM8k (5-shot) |58.30|
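The card gives the prompt format but no inference snippet; below is a minimal sketch using the standard `transformers` causal-LM API and the ShareGPT/Vicuna-style prompt shown above. The generation settings and the example user question are illustrative assumptions, not values from the card.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ajibawa-2023/OpenHermes-2.5-Code-290k-13B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

# Build the ShareGPT/Vicuna-style prompt described in the card.
prompt = (
    "This is a conversation with your helpful AI assistant. "
    "AI assistant can generate Code in various Programming Languages "
    "along with necessary explanation.\n\n"
    "Context\nYou are a helpful AI assistant.\n\n"
    "USER: Write a Python function that checks whether a number is prime.\n"
    "ASSISTANT:"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.7)

# Decode only the newly generated tokens.
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```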
jtaj/donut-base-sroie
jtaj
2024-03-17T17:31:47Z
4
0
transformers
[ "transformers", "tensorboard", "safetensors", "vision-encoder-decoder", "image-text-to-text", "generated_from_trainer", "dataset:imagefolder", "base_model:naver-clova-ix/donut-base", "base_model:finetune:naver-clova-ix/donut-base", "license:mit", "endpoints_compatible", "region:us" ]
image-text-to-text
2024-03-17T04:24:37Z
--- license: mit base_model: naver-clova-ix/donut-base tags: - generated_from_trainer datasets: - imagefolder model-index: - name: donut-base-sroie results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # donut-base-sroie This model is a fine-tuned version of [naver-clova-ix/donut-base](https://huggingface.co/naver-clova-ix/donut-base) on the imagefolder dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 2 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.38.2 - Pytorch 2.0.0+cu117 - Datasets 2.18.0 - Tokenizers 0.15.0
hemantpande/test
hemantpande
2024-03-17T17:30:48Z
0
0
fastai
[ "fastai", "question-answering", "en", "dataset:microsoft/orca-math-word-problems-200k", "license:apache-2.0", "region:us" ]
question-answering
2024-03-17T17:28:31Z
--- license: apache-2.0 datasets: - microsoft/orca-math-word-problems-200k language: - en library_name: fastai pipeline_tag: question-answering ---
AathifMohammed/longt5
AathifMohammed
2024-03-17T17:26:02Z
1
0
peft
[ "peft", "tensorboard", "safetensors", "generated_from_trainer", "base_model:google/flan-t5-base", "base_model:adapter:google/flan-t5-base", "license:apache-2.0", "region:us" ]
null
2024-03-17T07:44:18Z
--- license: apache-2.0 library_name: peft tags: - generated_from_trainer metrics: - rouge base_model: google/flan-t5-base model-index: - name: longt5 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # longt5 This model is a fine-tuned version of [google/flan-t5-base](https://huggingface.co/google/flan-t5-base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.8770 - Rouge1: 8.7519 - Rouge2: 4.0961 - Rougel: 7.4152 - Rougelsum: 8.2583 - Gen Len: 19.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 4e-05 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:| | 2.1398 | 1.0 | 1029 | 1.9004 | 8.6206 | 4.0658 | 7.3531 | 8.1538 | 19.0 | | 2.0706 | 2.0 | 2058 | 1.8770 | 8.7519 | 4.0961 | 7.4152 | 8.2583 | 19.0 | ### Framework versions - PEFT 0.9.1.dev0 - Transformers 4.39.0.dev0 - Pytorch 2.2.1+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
gdevakumar/mistral-7b-qlora-finance-alpaca
gdevakumar
2024-03-17T17:17:24Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-03-17T17:16:42Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Gordon119/TAT_TD-openai-whisper-large-v2-mix-tag_v1-epoch5-total5epoch
Gordon119
2024-03-17T17:09:36Z
0
0
transformers
[ "transformers", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-03-17T17:09:30Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Balab2021/phi2-chat-g
Balab2021
2024-03-17T17:03:32Z
7
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "Cricket", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-03-05T20:11:22Z
--- library_name: transformers tags: - Cricket --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
dhanesh123in/tst-summarization
dhanesh123in
2024-03-17T17:01:13Z
3
0
transformers
[ "transformers", "safetensors", "t5", "text2text-generation", "generated_from_trainer", "base_model:google-t5/t5-small", "base_model:finetune:google-t5/t5-small", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2024-03-17T14:00:12Z
--- license: apache-2.0 base_model: google-t5/t5-small tags: - generated_from_trainer model-index: - name: tst-summarization results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # tst-summarization This model is a fine-tuned version of [google-t5/t5-small](https://huggingface.co/google-t5/t5-small) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 0.0 ### Training results ### Framework versions - Transformers 4.38.2 - Pytorch 2.2.1+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
mond45/my-clf
mond45
2024-03-17T16:57:36Z
5
0
transformers
[ "transformers", "safetensors", "bert", "text-classification", "generated_from_trainer", "base_model:avsolatorio/GIST-large-Embedding-v0", "base_model:finetune:avsolatorio/GIST-large-Embedding-v0", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-03-09T20:35:41Z
--- license: mit base_model: avsolatorio/GIST-large-Embedding-v0 tags: - generated_from_trainer model-index: - name: my-clf results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # my-clf This model is a fine-tuned version of [avsolatorio/GIST-large-Embedding-v0](https://huggingface.co/avsolatorio/GIST-large-Embedding-v0) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 6.445e-05 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 35 - num_epochs: 10 ### Training results ### Framework versions - Transformers 4.38.2 - Pytorch 2.1.0+cu118 - Datasets 2.18.0 - Tokenizers 0.15.2
HiImKing1509/anime-sdxl-v3-180imgs-2000steps
HiImKing1509
2024-03-17T16:49:06Z
4
1
diffusers
[ "diffusers", "text-to-image", "diffusers-training", "lora", "template:sd-lora", "stable-diffusion-xl", "stable-diffusion-xl-diffusers", "base_model:cagliostrolab/animagine-xl-3.0", "base_model:adapter:cagliostrolab/animagine-xl-3.0", "license:openrail++", "region:us" ]
text-to-image
2024-03-17T15:38:16Z
--- license: openrail++ library_name: diffusers tags: - text-to-image - text-to-image - diffusers-training - diffusers - lora - template:sd-lora - stable-diffusion-xl - stable-diffusion-xl-diffusers base_model: cagliostrolab/animagine-xl-3.0 instance_prompt: a Kenji man widget: [] --- <!-- This model card has been generated automatically according to the information the training script had access to. You should probably proofread and complete it, then remove this comment. --> # SDXL LoRA DreamBooth - HiImKing1509/anime-sdxl-v3-180imgs-2000steps <Gallery /> ## Model description These are HiImKing1509/anime-sdxl-v3-180imgs-2000steps LoRA adaption weights for cagliostrolab/animagine-xl-3.0. The weights were trained using [DreamBooth](https://dreambooth.github.io/). LoRA for the text encoder was enabled: True. Special VAE used for training: None. ## Trigger words You should use a Kenji man to trigger the image generation. ## Download model Weights for this model are available in Safetensors format. [Download](HiImKing1509/anime-sdxl-v3-180imgs-2000steps/tree/main) them in the Files & versions tab. ## Intended uses & limitations #### How to use ```python # TODO: add an example code snippet for running this diffusion pipeline ``` #### Limitations and bias [TODO: provide examples of latent issues and potential remediations] ## Training details [TODO: describe the data used to train the model]
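The card's usage section is still a TODO; the snippet below is a minimal sketch of how such a LoRA would typically be loaded with `diffusers`, assuming the adapter weights are stored in the standard format that `load_lora_weights` can resolve from the repository id. The prompt and step count are illustrative.

```python
import torch
from diffusers import StableDiffusionXLPipeline

# Load the base model the LoRA was trained against.
pipe = StableDiffusionXLPipeline.from_pretrained(
    "cagliostrolab/animagine-xl-3.0", torch_dtype=torch.float16
).to("cuda")

# Attach the LoRA adaptation weights from this repository.
pipe.load_lora_weights("HiImKing1509/anime-sdxl-v3-180imgs-2000steps")

# "a Kenji man" is the trigger phrase reported in the card.
image = pipe(
    "a Kenji man standing under cherry blossoms, anime style",
    num_inference_steps=28,
).images[0]
image.save("kenji_man.png")
```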
rishikasrinivas/bert-genre-pred
rishikasrinivas
2024-03-17T16:46:53Z
9
0
transformers
[ "transformers", "safetensors", "text-classification", "en", "endpoints_compatible", "region:us" ]
text-classification
2024-03-17T15:23:49Z
--- language: - en metrics: - accuracy pipeline_tag: text-classification ---
Replete-AI/Phi-Stoma
Replete-AI
2024-03-17T16:44:24Z
49
2
transformers
[ "transformers", "safetensors", "phi", "text-generation", "conversational", "en", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-03-12T21:46:12Z
--- license: mit language: - en thumbnail: "https://cdn-uploads.huggingface.co/production/uploads/6589d7e6586088fd2784a12c/frA6uuzjfyLWgo2JE3IoA.png" --- # You answer nature's call? ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6589d7e6586088fd2784a12c/frA6uuzjfyLWgo2JE3IoA.png) ### Named after the method used to create it: the layers of its predecessor were interleaved to make it far larger, giving it much more potential. [Stoma](https://wowpedia.fandom.com/wiki/Stoma_the_Ancient) was an ancient treant of lore, and I couldn't think of a better naming convention for a model that was created using the passthrough method. By concatenating layers from different LLMs, this method can produce models with an exotic number of parameters (e.g., 9B from two 7B-parameter models). These models are often referred to as "frankenmerges" or "Frankenstein models" by the community. Many thanks to [Abacaj](https://huggingface.co/abacaj) for providing the [fine-tuned weights](https://huggingface.co/abacaj/phi-2-super) that were used in the creation of this base model. You can find the full script for how the model was merged [here](https://huggingface.co/Kquant03/Phi-Stoma/blob/main/mergekit_config.yml). Thanks to [KatyTheCutie](https://huggingface.co/KatyTheCutie) for inspiring me to test out this script. ## This idea was brought to me by [The Face of Goonery](https://huggingface.co/The-Face-Of-Goonery), also known as Caleb Morgan. I have him to thank if fine-tuning this model turns out to be a success; he also helped me to make this model even larger than the prior one. # How to run inference: ```python import transformers import torch if __name__ == "__main__": model_name = "Replete-AI/Phi-Stoma" tokenizer = transformers.AutoTokenizer.from_pretrained(model_name) model = ( transformers.AutoModelForCausalLM.from_pretrained( model_name, ) .to("cuda:0") .eval() ) messages = [ {"role": "user", "content": "Hello, who are you?"} ] inputs = tokenizer.apply_chat_template(messages, return_tensors="pt").to(model.device) input_ids_cutoff = inputs.size(dim=1) with torch.no_grad(): generated_ids = model.generate( input_ids=inputs, use_cache=True, max_new_tokens=512, temperature=0.2, top_p=0.95, do_sample=True, eos_token_id=tokenizer.eos_token_id, pad_token_id=tokenizer.pad_token_id, ) completion = tokenizer.decode( generated_ids[0][input_ids_cutoff:], skip_special_tokens=True, ) print(completion) ``` # Chat template The model uses the same chat template as found in Mistral instruct models. # [Join the Replete AI Discord here!](https://discord.gg/tG5aY4EX4T)
wywang/ppo-LunarLander-v2-trained-with-wind-env
wywang
2024-03-17T16:43:31Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2024-03-17T16:37:14Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 203.02 +/- 112.78 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
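The usage section above is still a TODO; here is a minimal sketch based on the standard `huggingface_sb3` workflow. The checkpoint filename is an assumption (check the Files tab), and since the agent was trained in a windy environment, the evaluation environment below enables wind to match.

```python
import gymnasium as gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy
from stable_baselines3.common.monitor import Monitor

# Download the checkpoint; the filename is an assumption.
checkpoint = load_from_hub(
    repo_id="wywang/ppo-LunarLander-v2-trained-with-wind-env",
    filename="ppo-LunarLander-v2.zip",
)
model = PPO.load(checkpoint)

# Evaluate on a windy LunarLander to match the training setup.
eval_env = Monitor(gym.make("LunarLander-v2", enable_wind=True))
mean_reward, std_reward = evaluate_policy(model, eval_env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```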
bartowski/NeuralKybalion-7B-slerp-v3-exl2
bartowski
2024-03-17T16:37:40Z
5
2
null
[ "merge", "mergekit", "lazymergekit", "Kukedlc/NeuralKybalion-7B-slerp", "Kukedlc/NeuralKybalion-7B-slerp-v2", "rwitz/experiment26-truthy-iter-0", "text-generation", "base_model:Kukedlc/NeuralKybalion-7B-slerp", "base_model:merge:Kukedlc/NeuralKybalion-7B-slerp", "base_model:Kukedlc/NeuralKybalion-7B-slerp-v2", "base_model:merge:Kukedlc/NeuralKybalion-7B-slerp-v2", "license:apache-2.0", "region:us" ]
text-generation
2024-03-17T16:19:24Z
--- tags: - merge - mergekit - lazymergekit - Kukedlc/NeuralKybalion-7B-slerp - Kukedlc/NeuralKybalion-7B-slerp-v2 - rwitz/experiment26-truthy-iter-0 base_model: - Kukedlc/NeuralKybalion-7B-slerp - Kukedlc/NeuralKybalion-7B-slerp-v2 - rwitz/experiment26-truthy-iter-0 license: apache-2.0 quantized_by: bartowski pipeline_tag: text-generation --- ## Exllama v2 Quantizations of NeuralKybalion-7B-slerp-v3 Using <a href="https://github.com/turboderp/exllamav2/releases/tag/v0.0.15">turboderp's ExLlamaV2 v0.0.15</a> for quantization. <b>The "main" branch only contains the measurement.json, download one of the other branches for the model (see below)</b> Each branch contains an individual bits-per-weight quantization, with the main branch containing only the measurement.json for further conversions. Original model: https://huggingface.co/Kukedlc/NeuralKybalion-7B-slerp-v3 | Branch | Bits | lm_head bits | VRAM (4k) | VRAM (16k) | VRAM (32k) | Description | | ----- | ---- | ------- | ------ | ------ | ------ | ------------ | | [8_0](https://huggingface.co/bartowski/NeuralKybalion-7B-slerp-v3-exl2/tree/8_0) | 8.0 | 8.0 | 8.4 GB | 9.8 GB | 11.8 GB | Maximum quality that ExLlamaV2 can produce, near unquantized performance. | | [6_5](https://huggingface.co/bartowski/NeuralKybalion-7B-slerp-v3-exl2/tree/6_5) | 6.5 | 8.0 | 7.2 GB | 8.6 GB | 10.6 GB | Very similar to 8.0, good tradeoff of size vs performance, **recommended**. | | [5_0](https://huggingface.co/bartowski/NeuralKybalion-7B-slerp-v3-exl2/tree/5_0) | 5.0 | 6.0 | 6.0 GB | 7.4 GB | 9.4 GB | Slightly lower quality vs 6.5, but usable on 8GB cards. | | [4_25](https://huggingface.co/bartowski/NeuralKybalion-7B-slerp-v3-exl2/tree/4_25) | 4.25 | 6.0 | 5.3 GB | 6.7 GB | 8.7 GB | GPTQ equivalent bits per weight, slightly higher quality. | | [3_5](https://huggingface.co/bartowski/NeuralKybalion-7B-slerp-v3-exl2/tree/3_5) | 3.5 | 6.0 | 4.7 GB | 6.1 GB | 8.1 GB | Lower quality, only use if you have to. | ## Download instructions With git: ```shell git clone --single-branch --branch 6_5 https://huggingface.co/bartowski/NeuralKybalion-7B-slerp-v3-exl2 NeuralKybalion-7B-slerp-v3-exl2-6_5 ``` With huggingface hub (credit to TheBloke for instructions): ```shell pip3 install huggingface-hub ``` To download the `main` branch (only useful if you only care about the measurement.json) to a folder called `NeuralKybalion-7B-slerp-v3-exl2`: ```shell mkdir NeuralKybalion-7B-slerp-v3-exl2 huggingface-cli download bartowski/NeuralKybalion-7B-slerp-v3-exl2 --local-dir NeuralKybalion-7B-slerp-v3-exl2 --local-dir-use-symlinks False ``` To download from a different branch, add the `--revision` parameter: Linux: ```shell mkdir NeuralKybalion-7B-slerp-v3-exl2-6_5 huggingface-cli download bartowski/NeuralKybalion-7B-slerp-v3-exl2 --revision 6_5 --local-dir NeuralKybalion-7B-slerp-v3-exl2-6_5 --local-dir-use-symlinks False ``` Windows (which apparently doesn't like _ in folders sometimes?): ```shell mkdir NeuralKybalion-7B-slerp-v3-exl2-6.5 huggingface-cli download bartowski/NeuralKybalion-7B-slerp-v3-exl2 --revision 6_5 --local-dir NeuralKybalion-7B-slerp-v3-exl2-6.5 --local-dir-use-symlinks False ``` Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski
somosnlp/gemma-FULL-RAC-Colombia
somosnlp
2024-03-17T16:35:33Z
10
15
transformers
[ "transformers", "safetensors", "gemma", "text-generation", "LLMs-Aviation", "AI-Regulatory-Compliance", "RAC-AI-Colombia", "conversational", "es", "dataset:somosnlp/Reglamento_aeronautico_Colombiano_FULL_ChatML_format_gemma", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-03-17T13:01:44Z
--- library_name: transformers tags: - LLMs-Aviation - AI-Regulatory-Compliance - RAC-AI-Colombia license: apache-2.0 datasets: - somosnlp/Reglamento_aeronautico_Colombiano_FULL_ChatML_format_gemma language: - es widget: - text: | <bos><start_of_turn>system You are a helpful AI assistant. Responde en formato json. Eres un agente experto en la normativa aeronautica Colombiana.<end_of_turn> <start_of_turn>user ¿Qué sucede con las empresas de servicios aéreos comerciales que no hayan actualizado su permiso de operación después del 31 de marzo de 2024?<end_of_turn> <start_of_turn>model --- # Model Card for Gemma-FULL-RAC-Colombia 🛫 <p align="center"> <img src="https://cdn-uploads.huggingface.co/production/uploads/6419c2f6b4adb0e101b17b6c/JlVz94BKdUkRN8xU00EgT.png" alt="Model Illustration" width="500"> </p> 🌟 This model card proudly presents Gemma-FULL-RAC-Colombia, a groundbreaking language model fine-tuned with precision on the Colombian Aeronautical Regulations (Reglamento Aeronáutico Colombiano, RAC) dataset. Harnessing the advanced capabilities of Google's original Gemma model, this adaptation focuses keenly on the aviation regulatory sphere within Colombia. ## Table of Contents - [Model Card for Gemma-FULL-RAC-Colombia 🛫](#model-card-for-gemma-full-rac-colombia-) * [Model Details 📈](#model-details-) + [Model Description 📘](#model-description-) * [Uses 🛠️](#uses-) + [Direct Use 🎯](#direct-use-) + [Downstream Use [optional] 🔄](#downstream-use-optional-) + [Out-of-Scope Use 🚫](#out-of-scope-use-) * [Bias, Risks, and Limitations ⚠️](#bias-risks-and-limitations-) + [Recommendations 📝](#recommendations-) * [How to Get Started with the Model 🚀](#how-to-get-started-with-the-model-) * [Training Details 🧠](#training-details-) + [Environment and Libraries 🖥️](#environment-and-libraries-) + [Model Architecture 🏗️](#model-architecture-) + [Fine-tuning with LoRA 🔧](#fine-tuning-with-lora-) + [Training Process 🚀](#training-process-) + [Training Data 📚](#training-data-) + [Training Procedure](#training-procedure) * [Environmental Impact](#environmental-impact) ## Model Details 📈 ### Model Description 📘 🚀 The Gemma-FULL-RAC-Colombia model marks an innovative leap in deploying language models for niche domains like aviation law and regulation. Developed by Edison Bejarano and Nicolai Potes, it's finely tuned using LoRA (Low-Rank Adaptation), ensuring minimal adjustment to the model's weights while preserving the extensive knowledge and prowess of Google's Gemma. This meticulous approach guarantees not just the retention of broad language comprehension and generation skills but also a profound grasp of the terminologies, concepts, and regulatory frameworks particular to Colombian aviation. - **Developed by:** [Edison Bejarano](https://huggingface.co/ejbejaranos) and [Nicolai Potes](https://huggingface.co/NickyNicky) ✨ - **Funded by:** Fundación Universitaria Los Libertadores 🏛️ - **Shared by [optional]:** This project is a collective endeavor of passionate NLP and aviation regulation specialists, united by a vision to make aviation laws more accessible and understandable. 👥 - **Model type:** Causal Language Model, specially fine-tuned with LoRA for the distinct area of Colombian Aeronautical Regulations. 📊 - **Language(s) (NLP):** Spanish, tailored for the Colombian legal and regulatory context. 🇨🇴 - **License:** Apache License. This open-source license ensures that the model can be freely used, modified, and distributed. Please check the Hugging Face model page for specific licensing details. 
🔓 - **Finetuned from model [optional]:** Derived from Google's Gemma model, utilizing versions such as `gemma-2b-it` for comprehensive training. 🧠 ### Loss Function through Epochs 📉 <p align="center"> <img src="https://cdn-uploads.huggingface.co/production/uploads/6419c2f6b4adb0e101b17b6c/vufJ_u_5ZrdzNcSrqwjBx.png" alt="Loss Function Graph" width="500"> </p> ## Uses 🛠️ The Gemma-FULL-RAC-Colombia model is designed to enhance the understanding and application of the Colombian Aeronautical Regulations (RAC) through natural language processing. It's tailored for professionals and enthusiasts in the aviation industry, regulatory agencies, legal experts, and AI researchers with an interest in domain-specific language model applications. ### Direct Use 🎯 The model can be directly used to generate text, answer questions, and provide insights related to the Colombian Aeronautical Regulations without further fine-tuning. It's ideal for creating educational content, simplifying legal language, and assisting in regulatory compliance efforts. ### Downstream Use [optional] 🔄 When fine-tuned, Gemma-FULL-RAC-Colombia can be integrated into larger systems for automated compliance checks, document summarization, and even training simulators for pilots and air traffic controllers, offering a deeper, contextual understanding of regulations. ### Out-of-Scope Use 🚫 Misuse includes any application that promotes unethical practices, misinterprets aviation law, or uses the model for malicious purposes. The model is not designed for navigational purposes or to replace professional legal advice. ## Bias, Risks, and Limitations ⚠️ The model, while powerful, has limitations inherent to AI, including biases present in the training data. It may not cover all nuances of aviation regulations outside of Colombia or adapt to changes in law without updates. ### Recommendations 📝 Users should verify model outputs against current regulations and consult with professionals for critical applications. Awareness of the model's scope and limitations is crucial for effective use. ## How to Get Started with the Model 🚀 ```python # Assuming Hugging Face's Transformers and Datasets are installed from transformers import AutoModelForCausalLM, AutoTokenizer model_name = "somosnlp/gemma-FULL-RAC-Colombia" tokenizer = AutoTokenizer.from_pretrained(model_name) model = AutoModelForCausalLM.from_pretrained(model_name) # Example usage input_text = "Explain the regulations for drone flights in Colombia." input_ids = tokenizer.encode(input_text, return_tensors="pt") output = model.generate(input_ids, max_length=100) print(tokenizer.decode(output[0], skip_special_tokens=True)) ``` ## Training Details 🧠 The Gemma-FULL-RAC-Colombia model was specifically fine-tuned on the "Reglamento Aeronáutico Colombiano FULL ChatML format gemma" dataset available through Hugging Face Datasets. This dataset encompasses a comprehensive collection of the Colombian Aeronautical Regulations (RAC), tailored to train the model for understanding and generating responses related to aviation regulations in Colombia. ### Environment and Libraries 🖥️ The training was executed in a Python environment leveraging the following main libraries: - `transformers` for model loading and fine-tuning facilities. - `datasets` for handling the dataset efficiently. - `torch` for the underlying deep learning framework. - `peft` for applying the Low-Rank Adaptation (LoRA) adjustments to the model. 
### Model Architecture 🏗️ The base model is Google's Gemma, an advanced language model with billions of parameters, capable of understanding context and generating human-like text. For this project, specific versions of Gemma (e.g., `gemma-2b-it`) were used as starting points. ### Fine-tuning with LoRA 🔧 LoRA (Low-Rank Adaptation) was employed to adapt the Gemma model to the specific domain of Colombian Aeronautical Regulations without extensively retraining the entire model. This approach allows for the efficient update of the model's weights, focusing on the adaptation layers to integrate the domain-specific knowledge effectively. - **LoRA Configurations:** r=64, lora_alpha=32, among other settings tailored to balance performance and efficiency. ### Training Process 🚀 Training was carried out considering the following key parameters: - Batch size: Adapted based on the available computational resources. - Learning rate: Fine-tuned for optimal convergence without overfitting. - Epochs: The model was trained for a sufficient number of epochs to ensure that it adequately learned from the dataset, with early stopping criteria to prevent overfitting. ### Evaluation and Performance 📊 '''Pending''' ### Model Merging and Saving 💾 After fine-tuning, the LoRA-adjusted weights were merged back with the base Gemma model to create the final version of Gemma-FULL-RAC-Colombia. The model was then saved and made available through Hugging Face, ensuring easy access for further use and development. The training process was meticulously documented to ensure reproducibility and facilitate further research and development in the domain of specialized language models for regulatory compliance and legal tech applications. ### Training Data 📚 The training data utilized for fine-tuning the Gemma-FULL-RAC-Colombia model is available on Hugging Face Datasets under the dataset identifier: [somosnlp/Reglamento_aeronautico_Colombiano_FULL_ChatML_format_gemma](https://huggingface.co/datasets/somosnlp/Reglamento_aeronautico_Colombiano_FULL_ChatML_format_gemma) This dataset comprises a targeted collection of the Colombian Aeronautical Regulations (RAC), specifically including regulations from RAC 1 to RAC 5. The selection is optimized for chat and language modeling tasks, aiming to train the model to understand and generate accurate responses related to the initial segments of Colombian aviation regulations. ### Training Procedure The Gemma-FULL-RAC-Colombia model underwent a meticulous training process, leveraging the NVIDIA V100 GPU for its computational needs. The procedure was designed to fine-tune the model on a dataset comprising sections RAC 1 to RAC 5 of the Colombian Aeronautical Regulations. #### Preprocessing The dataset was preprocessed to convert the regulatory texts into a format suitable for the model, utilizing the tokenizer associated with the `google/gemma-2b-it` model. This step ensured that the input data was optimized for training efficiency and model comprehension. #### Training Hyperparameters The training utilized a mixed precision regime to balance computational efficiency and model accuracy. Specific hyperparameters included: - **Training regime:** Mixed precision (FP16) #### Speeds, Sizes, Times The model was trained over a span of approximately 4.67 hours (runtime: 16,020 seconds), with checkpoints indicating progress at various intervals. The training demonstrated efficient utilization of the V100 GPU, maintaining a steady throughput. 
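As a rough illustration of the LoRA setup described in the fine-tuning section above, the following sketch shows how the reported r=64 and lora_alpha=32 would look with the `peft` library; the target modules, dropout, and other fields are assumptions and are not taken from the card.

```python
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained("google/gemma-2b-it")

# r and lora_alpha match the values reported in the card; the remaining
# fields (target modules, dropout) are illustrative assumptions.
lora_config = LoraConfig(
    r=64,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)

model = get_peft_model(base, lora_config)
model.print_trainable_parameters()
```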
#### Factors Training focused on the model's ability to understand and generate text related to the specific domain of Colombian Aeronautical Regulations. #### Metrics The primary metric for evaluating the model's performance was the training loss, which was observed to decrease steadily over time, indicating effective learning and adaptation by the model. ### Results The training process resulted in a model capable of accurately interpreting and responding to queries related to the Colombian Aeronautical Regulations, with a notable final training loss of 0.6204 at the last checkpoint. ## Environmental Impact Given the use of an NVIDIA V100 GPU for approximately 4.67 hours, the carbon emissions can be estimated using the Machine Learning Impact calculator. This tool accounts for the hardware type, runtime, and other factors to provide a comprehensive view of the environmental impact of training large AI models. - **Hardware Type:** NVIDIA V100 GPU - **Hours used:** ~4.67 - **Carbon Emitted:** ~554.56 g CO2eq, estimated as follows:

```python
# Constants
power_consumption_kW = 0.25  # 250 W in kW (assumed draw of the V100)
runtime_hours = 4.67
carbon_intensity_gCO2eq_per_kWh = 475  # Global average carbon intensity

# Calculate carbon emissions
carbon_emitted_gCO2eq = power_consumption_kW * runtime_hours * carbon_intensity_gCO2eq_per_kWh
# carbon_emitted_gCO2eq == 554.5625
```
Dricz/gun-obj-detection-5
Dricz
2024-03-17T16:33:44Z
4
0
ultralytics
[ "ultralytics", "tensorboard", "v5", "ultralyticsplus", "yolov8", "yolo", "vision", "object-detection", "pytorch", "visdrone", "uav", "license:openrail", "model-index", "region:us" ]
object-detection
2024-03-17T14:21:12Z
--- tags: - ultralyticsplus - yolov8 - ultralytics - yolo - vision - object-detection - pytorch - visdrone - uav library_name: ultralytics library_version: 8.0.43 inference: false model-index: - name: mshamrai/yolov8x-visdrone results: - task: type: object-detection metrics: - type: precision value: 0.47047 name: [email protected](box) license: openrail pipeline_tag: object-detection ---
aLadPat/Taxi-v3
aLadPat
2024-03-17T16:33:15Z
0
0
null
[ "Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2024-03-17T16:21:58Z
--- tags: - Taxi-v3 - q-learning - reinforcement-learning - custom-implementation model-index: - name: Taxi-v3 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Taxi-v3 type: Taxi-v3 metrics: - type: mean_reward value: 7.56 +/- 2.71 name: mean_reward verified: false --- # **Q-Learning** Agent playing **Taxi-v3** This is a trained model of a **Q-Learning** agent playing **Taxi-v3**. ## Usage ```python model = load_from_hub(repo_id="aLadPat/Taxi-v3", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
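Expanding on the stub above, the sketch below shows one way to download the pickled Q-table and run a greedy episode. The dictionary keys (`qtable`, `env_id`) follow the Hugging Face Deep RL course convention and are assumptions about this particular pickle.

```python
import pickle

import gymnasium as gym
import numpy as np
from huggingface_hub import hf_hub_download


def load_from_hub(repo_id: str, filename: str) -> dict:
    """Download and unpickle the model dictionary from the Hub."""
    path = hf_hub_download(repo_id=repo_id, filename=filename)
    with open(path, "rb") as f:
        return pickle.load(f)


model = load_from_hub(repo_id="aLadPat/Taxi-v3", filename="q-learning.pkl")
qtable = model["qtable"]          # assumed key, per the course convention
env = gym.make(model["env_id"])   # "Taxi-v3"

state, _ = env.reset()
done = False
total_reward = 0.0
while not done:
    action = int(np.argmax(qtable[state]))  # greedy action from the Q-table
    state, reward, terminated, truncated, _ = env.step(action)
    total_reward += reward
    done = terminated or truncated
print(f"Episode return: {total_reward}")
```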
oakela/starcoder2_dev_data
oakela
2024-03-17T16:30:31Z
4
0
transformers
[ "transformers", "tensorboard", "safetensors", "starcoder2", "text-generation", "trl", "sft", "generated_from_trainer", "base_model:bigcode/starcoder2-3b", "base_model:finetune:bigcode/starcoder2-3b", "license:bigcode-openrail-m", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-03-17T16:25:32Z
--- license: bigcode-openrail-m base_model: bigcode/starcoder2-3b tags: - trl - sft - generated_from_trainer model-index: - name: starcoder2_dev_data results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # starcoder2_dev_data This model is a fine-tuned version of [bigcode/starcoder2-3b](https://huggingface.co/bigcode/starcoder2-3b) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 1 - eval_batch_size: 8 - seed: 0 - gradient_accumulation_steps: 4 - total_train_batch_size: 4 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 20 - training_steps: 1000 ### Training results ### Framework versions - Transformers 4.39.0.dev0 - Pytorch 2.2.1+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
RizwanRizwan/gpt2-124M-qlora-chat-support
RizwanRizwan
2024-03-17T16:25:41Z
0
0
peft
[ "peft", "arxiv:1910.09700", "base_model:openai-community/gpt2", "base_model:adapter:openai-community/gpt2", "region:us" ]
null
2024-03-17T16:25:39Z
--- library_name: peft base_model: gpt2 --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.9.1.dev0
Goekdeniz-Guelmez/J.O.S.I.E.3-Beta6-slerp-gguf
Goekdeniz-Guelmez
2024-03-17T16:06:07Z
10
0
null
[ "gguf", "merge", "mergekit", "lazymergekit", "Weyaxi/Einstein-v4-7B", "cognitivecomputations/dolphin-2.6-mistral-7b-dpo-laser", "base_model:Weyaxi/Einstein-v4-7B", "base_model:merge:Weyaxi/Einstein-v4-7B", "base_model:cognitivecomputations/dolphin-2.6-mistral-7b-dpo-laser", "base_model:merge:cognitivecomputations/dolphin-2.6-mistral-7b-dpo-laser", "endpoints_compatible", "region:us", "conversational" ]
null
2024-03-17T15:50:55Z
--- tags: - merge - mergekit - lazymergekit - Weyaxi/Einstein-v4-7B - cognitivecomputations/dolphin-2.6-mistral-7b-dpo-laser base_model: - Weyaxi/Einstein-v4-7B - cognitivecomputations/dolphin-2.6-mistral-7b-dpo-laser --- # JOSIE_Beta-6-7B-slerp JOSIE_Beta-6-7B-slerp is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing): * [Weyaxi/Einstein-v4-7B](https://huggingface.co/Weyaxi/Einstein-v4-7B) * [cognitivecomputations/dolphin-2.6-mistral-7b-dpo-laser](https://huggingface.co/cognitivecomputations/dolphin-2.6-mistral-7b-dpo-laser) ## 🧩 Configuration ```yaml slices: - sources: - model: Weyaxi/Einstein-v4-7B layer_range: [0, 32] - model: cognitivecomputations/dolphin-2.6-mistral-7b-dpo-laser layer_range: [0, 32] merge_method: slerp base_model: Weyaxi/Einstein-v4-7B parameters: t: - filter: self_attn value: [0, 0.5, 0.3, 0.7, 1] - filter: mlp value: [1, 0.5, 0.7, 0.3, 0] - value: 0.5 dtype: bfloat16 ``` ## 💻 Usage ```python !pip install -qU transformers accelerate from transformers import AutoTokenizer import transformers import torch model = "Isaak-Carter/JOSIE_Beta-6-7B-slerp" messages = [{"role": "user", "content": "What is a large language model?"}] tokenizer = AutoTokenizer.from_pretrained(model) prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) pipeline = transformers.pipeline( "text-generation", model=model, torch_dtype=torch.float16, device_map="auto", ) outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95) print(outputs[0]["generated_text"]) ```
jdorairaj/Bert-uncased-adapter-mnli
jdorairaj
2024-03-17T16:02:28Z
0
0
adapter-transformers
[ "adapter-transformers", "bert", "dataset:mnli", "region:us" ]
null
2024-03-17T16:02:24Z
--- tags: - adapter-transformers - bert datasets: - mnli --- # Adapter `jdorairaj/Bert-uncased-adapter-mnli` for bert-base-uncased An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [mnli](https://huggingface.co/datasets/mnli/) dataset and includes a prediction head for classification. This adapter was created for usage with the **[Adapters](https://github.com/Adapter-Hub/adapters)** library. ## Usage First, install `adapters`: ``` pip install -U adapters ``` Now, the adapter can be loaded and activated like this: ```python from adapters import AutoAdapterModel model = AutoAdapterModel.from_pretrained("bert-base-uncased") adapter_name = model.load_adapter("jdorairaj/Bert-uncased-adapter-mnli", source="hf", set_active=True) ``` ## Architecture & Training <!-- Add some description here --> ## Evaluation results <!-- Add some description here --> ## Citation <!-- Add some description here -->
Goekdeniz-Guelmez/J.O.S.I.E.3-Beta5-slerp-gguf
Goekdeniz-Guelmez
2024-03-17T16:01:27Z
33
0
null
[ "gguf", "merge", "mergekit", "lazymergekit", "Weyaxi/Einstein-v4-7B", "Locutusque/Hercules-3.1-Mistral-7B", "base_model:Locutusque/Hercules-3.1-Mistral-7B", "base_model:merge:Locutusque/Hercules-3.1-Mistral-7B", "base_model:Weyaxi/Einstein-v4-7B", "base_model:merge:Weyaxi/Einstein-v4-7B", "endpoints_compatible", "region:us", "conversational" ]
null
2024-03-16T11:46:49Z
--- tags: - merge - mergekit - lazymergekit - Weyaxi/Einstein-v4-7B - Locutusque/Hercules-3.1-Mistral-7B base_model: - Weyaxi/Einstein-v4-7B - Locutusque/Hercules-3.1-Mistral-7B --- # JOSIE_Beta-5-7B-slerp JOSIE_Beta-5-7B-slerp is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing): * [Weyaxi/Einstein-v4-7B](https://huggingface.co/Weyaxi/Einstein-v4-7B) * [Locutusque/Hercules-3.1-Mistral-7B](https://huggingface.co/Locutusque/Hercules-3.1-Mistral-7B) ## 🧩 Configuration ```yaml slices: - sources: - model: Weyaxi/Einstein-v4-7B layer_range: [0, 32] - model: Locutusque/Hercules-3.1-Mistral-7B layer_range: [0, 32] merge_method: slerp base_model: Weyaxi/Einstein-v4-7B parameters: t: - filter: self_attn value: [0, 0.5, 0.3, 0.7, 1] - filter: mlp value: [1, 0.5, 0.7, 0.3, 0] - value: 0.5 dtype: bfloat16 ``` ## 💻 Usage ```python !pip install -qU transformers accelerate from transformers import AutoTokenizer import transformers import torch model = "Isaak-Carter/JOSIE_Beta-5-7B-slerp" messages = [{"role": "user", "content": "What is a large language model?"}] tokenizer = AutoTokenizer.from_pretrained(model) prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) pipeline = transformers.pipeline( "text-generation", model=model, torch_dtype=torch.float16, device_map="auto", ) outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95) print(outputs[0]["generated_text"]) ```
sr5434/swift-codellama-34b-lora
sr5434
2024-03-17T15:58:18Z
0
0
null
[ "safetensors", "code", "license:mit", "region:us" ]
null
2024-03-17T15:56:56Z
--- license: mit tags: - code --- CodeLlama 34b (base) finetuned on Swift code. I used Stanford CRFM's Levanter and a TPU v4-8 provided by the TensorFlow Research Cloud to build this.
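## Example usage (sketch)

A minimal loading sketch, assuming this repository holds a PEFT/LoRA adapter for CodeLlama-34b; the base model id and the prompt are illustrative assumptions, not taken from the card.

```python
# Minimal sketch, assuming this repo is a PEFT/LoRA adapter for CodeLlama-34b.
# The base model id and the prompt below are assumptions, not taken from the card.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "codellama/CodeLlama-34b-hf"          # assumed base model
adapter_id = "sr5434/swift-codellama-34b-lora"  # this repository

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.float16, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)  # attach the LoRA weights

prompt = "// A Swift function that reverses a string\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```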
mengdili/sft_openassistant-guanaco
mengdili
2024-03-17T15:56:36Z
4
0
transformers
[ "transformers", "safetensors", "opt", "text-generation", "trl", "sft", "generated_from_trainer", "dataset:generator", "base_model:facebook/opt-350m", "base_model:finetune:facebook/opt-350m", "license:other", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-03-16T18:58:25Z
--- license: other base_model: facebook/opt-350m tags: - trl - sft - generated_from_trainer datasets: - generator model-index: - name: sft_openassistant-guanaco results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # sft_openassistant-guanaco This model is a fine-tuned version of [facebook/opt-350m](https://huggingface.co/facebook/opt-350m) on the generator dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1.41e-05 - train_batch_size: 32 - eval_batch_size: 8 - seed: 42 - distributed_type: multi-GPU - num_devices: 4 - gradient_accumulation_steps: 16 - total_train_batch_size: 2048 - total_eval_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results ### Framework versions - Transformers 4.38.2 - Pytorch 2.2.1+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
milotix/reinforce-cartPole
milotix
2024-03-17T15:48:58Z
0
0
null
[ "CartPole-v1", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
2024-03-17T15:48:48Z
--- tags: - CartPole-v1 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: reinforce-cartPole results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: CartPole-v1 type: CartPole-v1 metrics: - type: mean_reward value: 500.00 +/- 0.00 name: mean_reward verified: false --- # **Reinforce** Agent playing **CartPole-v1** This is a trained model of a **Reinforce** agent playing **CartPole-v1** . To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
PwNzDust/vit_cub_early_stop
PwNzDust
2024-03-17T15:35:06Z
5
0
transformers
[ "transformers", "tensorboard", "safetensors", "vit", "image-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2024-03-17T14:45:32Z
## Evaluation results (epoch 9.33) | Split | Accuracy | Loss | Runtime | Samples/s | Steps/s | |:----------:|:--------:|:------:|:----------:|:---------:|:-------:| | Validation | 0.8349 | 0.6872 | 0:00:18.37 | 65.252 | 8.163 | | Test | 0.8274 | 0.7111 | 0:01:30.91 | 63.733 | 7.975 |
Holarissun/vrep17_weightx2.0_zephyr3b_aisft_gsm8k_rand_alphalinear_beta0.4_epoch2-subset7000
Holarissun
2024-03-17T15:33:14Z
0
0
peft
[ "peft", "safetensors", "trl", "sft", "generated_from_trainer", "base_model:stabilityai/stablelm-zephyr-3b", "base_model:adapter:stabilityai/stablelm-zephyr-3b", "license:other", "region:us" ]
null
2024-03-17T15:33:11Z
--- license: other library_name: peft tags: - trl - sft - generated_from_trainer base_model: stabilityai/stablelm-zephyr-3b model-index: - name: vrep17_weightx2.0_zephyr3b_aisft_gsm8k_rand_alphalinear_beta0.4_epoch2-subset7000 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vrep17_weightx2.0_zephyr3b_aisft_gsm8k_rand_alphalinear_beta0.4_epoch2-subset7000 This model is a fine-tuned version of [stabilityai/stablelm-zephyr-3b](https://huggingface.co/stabilityai/stablelm-zephyr-3b) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 1 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results ### Framework versions - PEFT 0.9.0 - Transformers 4.38.2 - Pytorch 2.2.1+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
AvitalBY/distilbert-base-uncased-finetuned-squad
AvitalBY
2024-03-17T15:31:58Z
4
0
transformers
[ "transformers", "tensorboard", "safetensors", "distilbert", "question-answering", "generated_from_trainer", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "endpoints_compatible", "region:us" ]
question-answering
2024-03-14T15:17:07Z
--- license: apache-2.0 base_model: distilbert-base-uncased tags: - generated_from_trainer model-index: - name: distilbert-base-uncased-finetuned-squad results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-squad This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 5.6353 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | No log | 1.0 | 1 | 5.7577 | | No log | 2.0 | 2 | 5.6771 | | No log | 3.0 | 3 | 5.6353 | ### Framework versions - Transformers 4.38.2 - Pytorch 2.2.1+cpu - Datasets 2.18.0 - Tokenizers 0.15.2
FelixChao/Gemma-10.2B-Coder
FelixChao
2024-03-17T15:25:56Z
47
0
transformers
[ "transformers", "safetensors", "gemma", "text-generation", "arxiv:1910.09700", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-03-17T14:24:37Z
--- library_name: transformers license: apache-2.0 --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
itayl/Hebrew-gemma-GGUF
itayl
2024-03-17T15:19:54Z
10
0
null
[ "gguf", "text2text-generation", "he", "en", "endpoints_compatible", "region:us", "conversational" ]
text2text-generation
2024-03-07T19:43:58Z
--- language: - he - en pipeline_tag: text2text-generation --- # Authors: Trained by Yam Peleg. In collaboration with Jonathan Rouach and Arjeo, inc.
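## Example usage (sketch)

A minimal sketch for running the GGUF files with llama-cpp-python; the filename below is a placeholder and should be replaced with an actual .gguf file from this repository, and the prompt is illustrative.

```python
# Minimal sketch, assuming llama-cpp-python is installed; the .gguf filename is a
# placeholder -- replace it with a real file from this repository.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

gguf_path = hf_hub_download(
    repo_id="itayl/Hebrew-gemma-GGUF",
    filename="hebrew-gemma.Q4_K_M.gguf",  # placeholder filename
)
llm = Llama(model_path=gguf_path, n_ctx=2048)
out = llm("Write one sentence about Hebrew language models.", max_tokens=128)
print(out["choices"][0]["text"])
```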
itayl/Hebrew-Gemma-11B-V2-mlx-4bit
itayl
2024-03-17T15:19:05Z
3
0
mlx
[ "mlx", "safetensors", "gemma", "he", "en", "region:us" ]
null
2024-03-17T14:33:45Z
--- language: - he - en library_name: mlx widget: - text: '<bos><start_of_turn>user How does the brain work?<end_of_turn> <start_of_turn>model ' inference: parameters: max_new_tokens: 200 extra_gated_heading: Access Gemma on Hugging Face extra_gated_prompt: To access Gemma on Hugging Face, you’re required to review and agree to Google’s usage license. To do this, please ensure you’re logged-in to Hugging Face and click below. Requests are processed immediately. extra_gated_button_content: Acknowledge license license_name: gemma-terms-of-use license_link: https://ai.google.dev/gemma/terms --- ## Use with mlx ```bash pip install mlx-lm ``` ```python from mlx_lm import load, generate model, tokenizer = load("itayl/Hebrew-Gemma-11B-V2-mlx-4bit") response = generate(model, tokenizer, prompt="hello", verbose=True) ``` Authors: Trained by Yam Peleg, in collaboration with Jonathan Rouach and Arjeo, inc.
Holarissun/vrep17_weightx2.0_zephyr3b_aisft_gsm8k_seq_alphalinear_beta0.4_epoch2-subset7000
Holarissun
2024-03-17T15:18:45Z
0
0
peft
[ "peft", "safetensors", "trl", "sft", "generated_from_trainer", "base_model:stabilityai/stablelm-zephyr-3b", "base_model:adapter:stabilityai/stablelm-zephyr-3b", "license:other", "region:us" ]
null
2024-03-17T15:18:39Z
--- license: other library_name: peft tags: - trl - sft - generated_from_trainer base_model: stabilityai/stablelm-zephyr-3b model-index: - name: vrep17_weightx2.0_zephyr3b_aisft_gsm8k_seq_alphalinear_beta0.4_epoch2-subset7000 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vrep17_weightx2.0_zephyr3b_aisft_gsm8k_seq_alphalinear_beta0.4_epoch2-subset7000 This model is a fine-tuned version of [stabilityai/stablelm-zephyr-3b](https://huggingface.co/stabilityai/stablelm-zephyr-3b) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 1 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results ### Framework versions - PEFT 0.9.0 - Transformers 4.38.2 - Pytorch 2.2.1+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
wildzzz/ArtUniverse-SDXL-1.0
wildzzz
2024-03-17T15:17:32Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2024-03-17T15:17:32Z
--- license: creativeml-openrail-m ---
EbanLee/kobart-summary-v1
EbanLee
2024-03-17T15:15:56Z
10
1
transformers
[ "transformers", "pytorch", "bart", "text2text-generation", "summarization", "ko", "autotrain_compatible", "endpoints_compatible", "region:us" ]
summarization
2024-03-04T07:17:48Z
--- language: - ko metrics: - rouge library_name: transformers pipeline_tag: summarization tags: - bart --- # kobart-summary - This model is the [kobart model](https://huggingface.co/ainize/kobart-news) fine-tuned on the [book material summarization](https://www.aihub.or.kr/aihubdata/data/view.do?currMenu=115&topMenu=100&aihubDataSe=data&dataSetSn=93) dataset. It summarizes a passage into a few sentences. ## How to use ```python from transformers import PreTrainedTokenizerFast, BartForConditionalGeneration # Load Model and Tokenizer tokenizer = PreTrainedTokenizerFast.from_pretrained("EbanLee/kobart-summary-v1") model = BartForConditionalGeneration.from_pretrained("EbanLee/kobart-summary-v1") # Encoding input_text = "일반적으로 로컬푸드 농산물은 도매시장 경매가보다 높고 소매가보다 낮은 수준에서 가격이 결정된다. 농가들이 자율적으로 가격을 결정하고 있지만, 시장가격을 참고하기 때문에 대체로 적정한 가격이 설정된다. 문제는 태풍, 폭우, 폭염 또는 공급과잉에 따른 시장가격 등락이 심해질 때이다. 완주군 로컬푸드 협동조합은 위와 같이 외부요인에 의해 농산물 가격에 급등락이 발생했을 때에도 연중 일정 수준의 가격으로 판매되도록 유통 안정 기금을 운용하고 있다. 그리고 농산물 출하자에게 사전에 동의를 구해 일부 관리 품목은 가격 상하한 및 판매량을 조절해 납품하도록 교육하고 있다." input_ids = tokenizer.encode(input_text, return_tensors="pt", padding="max_length", truncation=True, max_length=1026) # Generate Summary Text Ids summary_text_ids = model.generate( input_ids=input_ids, bos_token_id=model.config.bos_token_id, eos_token_id=model.config.eos_token_id, length_penalty=1.5, max_length=256, min_length=12, num_beams=6, repetition_penalty=2.0, ) # Decoding Text Ids print(tokenizer.decode(summary_text_ids[0], skip_special_tokens=True)) ```
Owhslp/nous_researcher_tuning_2_81
Owhslp
2024-03-17T15:10:57Z
4
0
transformers
[ "transformers", "safetensors", "gemma", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-03-17T14:51:24Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
malikkurosaki/chat
malikkurosaki
2024-03-17T15:09:50Z
0
0
adapter-transformers
[ "adapter-transformers", "legal", "code", "id", "dataset:mad/IndonesiaNewsDataset", "license:mit", "region:us" ]
null
2024-03-17T15:05:00Z
--- license: mit datasets: - mad/IndonesiaNewsDataset language: - id metrics: - accuracy library_name: adapter-transformers tags: - legal - code ---
rk68/phi-1_5-finetuned-aqua-rat-teacher-1k
rk68
2024-03-17T15:06:01Z
0
0
peft
[ "peft", "tensorboard", "safetensors", "generated_from_trainer", "base_model:microsoft/phi-1_5", "base_model:adapter:microsoft/phi-1_5", "license:mit", "region:us" ]
null
2024-03-17T15:03:34Z
--- license: mit library_name: peft tags: - generated_from_trainer base_model: microsoft/phi-1_5 model-index: - name: phi-1_5-finetuned-aqua-rat-teacher-1k results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # phi-1_5-finetuned-aqua-rat-teacher-1k This model is a fine-tuned version of [microsoft/phi-1_5](https://huggingface.co/microsoft/phi-1_5) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - num_epochs: 2 ### Training results ### Framework versions - PEFT 0.9.0 - Transformers 4.38.2 - Pytorch 2.2.1+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
taodaotada/695e8ac
taodaotada
2024-03-17T15:05:58Z
4
0
transformers
[ "transformers", "safetensors", "gemma", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-03-17T14:49:51Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
alfredplpl/suzume-poc
alfredplpl
2024-03-17T15:05:20Z
9
8
transformers
[ "transformers", "safetensors", "gemma", "text-generation", "ja", "en", "license:other", "autotrain_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-03-14T09:51:38Z
--- language: - ja - en license: other library_name: transformers license_name: gemma-terms-of-use license_link: https://www.kaggle.com/models/google/gemma/license/consent inference: false --- # Introduction This is a commercially usable base model built by continually pretraining Google's Gemma-2B for Japanese. Because it is small, it is well suited to smartphones, home appliances, and similar devices. **However, instruction tuning may prove difficult.** [Try it on Colab](https://colab.research.google.com/drive/1g4739fY-n4RiS9-BwwiPJI92sRahxGuO?usp=sharing) [Try the lightweight version](https://huggingface.co/mmnga/alfredplpl-suzume-poc-gguf) built by mmnga [on Colab](https://colab.research.google.com/drive/18vsNSpWx0Ydp3KYTnUqlChTct_OvuToO?usp=sharing) ![suzume](suzume.webp) # Usage ```python from transformers import AutoTokenizer, AutoModelForCausalLM tokenizer = AutoTokenizer.from_pretrained("alfredplpl/suzume-poc") model = AutoModelForCausalLM.from_pretrained("alfredplpl/suzume-poc") input_text = """人工知能とは""" input_ids = tokenizer(input_text, return_tensors="pt") outputs = model.generate(**input_ids, max_new_tokens=64) print(tokenizer.decode(outputs[0])) ``` ## Result ```bash <bos>人工知能とは、「人工生物」に基づいた知能、意図、計画、感情などの概念を構築し、人間のような思考・意思・判断・判断などを備えた機体とする試み。 本稿では、人工知能やAIに関する動向をまとめる。なお、「人工知能」と ``` # Base model - google/gemma-2b # Dataset for continual pretraining - izumi-lab/wikipedia-ja-20230720 # How to make this model [<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
Ermira/al-roberta1
Ermira
2024-03-17T15:04:47Z
20
0
transformers
[ "transformers", "tensorboard", "safetensors", "roberta", "question-answering", "generated_from_trainer", "base_model:macedonizer/al-roberta-base", "base_model:finetune:macedonizer/al-roberta-base", "license:apache-2.0", "endpoints_compatible", "region:us" ]
question-answering
2024-03-17T14:27:44Z
--- license: apache-2.0 base_model: macedonizer/al-roberta-base tags: - generated_from_trainer model-index: - name: al-roberta1 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # al-roberta1 This model is a fine-tuned version of [macedonizer/al-roberta-base](https://huggingface.co/macedonizer/al-roberta-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 3.0438 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 2.9068 | 1.0 | 5475 | 3.0438 | ### Framework versions - Transformers 4.38.1 - Pytorch 2.1.2 - Datasets 2.1.0 - Tokenizers 0.15.2
sagravela/Reinforce-Pixelcopter-PLE-v0
sagravela
2024-03-17T15:03:28Z
0
0
null
[ "Pixelcopter-PLE-v0", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
2024-03-17T15:03:25Z
--- tags: - Pixelcopter-PLE-v0 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: Reinforce-Pixelcopter-PLE-v0 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Pixelcopter-PLE-v0 type: Pixelcopter-PLE-v0 metrics: - type: mean_reward value: 38.60 +/- 32.07 name: mean_reward verified: false --- # **Reinforce** Agent playing **Pixelcopter-PLE-v0** This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** . To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
vhrvmk/code-llama-7b-text-to-sql
vhrvmk
2024-03-17T14:58:24Z
0
0
peft
[ "peft", "tensorboard", "safetensors", "trl", "sft", "generated_from_trainer", "dataset:generator", "base_model:codellama/CodeLlama-7b-hf", "base_model:adapter:codellama/CodeLlama-7b-hf", "license:llama2", "region:us" ]
null
2024-03-17T14:01:50Z
--- license: llama2 library_name: peft tags: - trl - sft - generated_from_trainer base_model: codellama/CodeLlama-7b-hf datasets: - generator model-index: - name: code-llama-7b-text-to-sql results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # code-llama-7b-text-to-sql This model is a fine-tuned version of [codellama/CodeLlama-7b-hf](https://huggingface.co/codellama/CodeLlama-7b-hf) on the generator dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 3 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 6 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - lr_scheduler_warmup_ratio: 0.03 - num_epochs: 3 ### Training results ### Framework versions - PEFT 0.7.2.dev0 - Transformers 4.36.2 - Pytorch 2.2.1 - Datasets 2.16.1 - Tokenizers 0.15.2
sr5434/function-call-llama-2-13b-lora
sr5434
2024-03-17T14:54:23Z
0
0
null
[ "safetensors", "en", "dataset:rizerphe/glaive-function-calling-v2-llama", "license:mit", "region:us" ]
null
2024-03-17T14:52:17Z
--- license: mit datasets: - rizerphe/glaive-function-calling-v2-llama language: - en --- Llama 2 13b finetuned to call functions. I used Stanford's Levanter library to train the model on a TPU v4-8 (courtesy of the TensorFlow Research Cloud).
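## Example usage (sketch)

A minimal loading sketch, assuming this repository is a PEFT/LoRA adapter for Llama 2 13b; the base model id and the prompt are illustrative assumptions (see the glaive-function-calling-v2-llama dataset card for the exact function-calling format).

```python
# Minimal sketch, assuming this repo is a PEFT/LoRA adapter for Llama 2 13b.
# The base model id and the prompt below are assumptions, not taken from the card;
# check the glaive-function-calling-v2-llama dataset for the exact prompt format.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "meta-llama/Llama-2-13b-hf"                 # assumed base model
adapter_id = "sr5434/function-call-llama-2-13b-lora"  # this repository

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.float16, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)  # attach the LoRA weights

prompt = "USER: What's the weather like in Paris today?\nASSISTANT:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```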
oakela/finetune_starcoder2_nate_continue_600steps
oakela
2024-03-17T14:53:50Z
7
0
transformers
[ "transformers", "tensorboard", "safetensors", "starcoder2", "text-generation", "trl", "sft", "generated_from_trainer", "base_model:bigcode/starcoder2-3b", "base_model:finetune:bigcode/starcoder2-3b", "license:bigcode-openrail-m", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-03-15T15:58:31Z
--- license: bigcode-openrail-m base_model: bigcode/starcoder2-3b tags: - trl - sft - generated_from_trainer model-index: - name: finetune_starcoder2_cleaned results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # finetune_starcoder2_cleaned This model is a fine-tuned version of [bigcode/starcoder2-3b](https://huggingface.co/bigcode/starcoder2-3b) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 1 - eval_batch_size: 8 - seed: 0 - gradient_accumulation_steps: 4 - total_train_batch_size: 4 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 20 - training_steps: 600 ### Training results ### Framework versions - Transformers 4.39.0.dev0 - Pytorch 2.2.1+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
rumeysacelik/turkishReviews-ds-mini
rumeysacelik
2024-03-17T14:53:39Z
1
0
transformers
[ "transformers", "tf", "gpt2", "text-generation", "generated_from_keras_callback", "base_model:openai-community/gpt2", "base_model:finetune:openai-community/gpt2", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2024-03-17T14:47:36Z
--- license: mit base_model: gpt2 tags: - generated_from_keras_callback model-index: - name: rumeysacelik/turkishReviews-ds-mini results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # rumeysacelik/turkishReviews-ds-mini This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 9.1695 - Validation Loss: 9.2563 - Epoch: 2 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'module': 'transformers.optimization_tf', 'class_name': 'WarmUp', 'config': {'initial_learning_rate': 5e-05, 'decay_schedule_fn': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5e-05, 'decay_steps': -896, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'warmup_steps': 1000, 'power': 1.0, 'name': None}, 'registered_name': 'WarmUp'}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: float32 ### Training results | Train Loss | Validation Loss | Epoch | |:----------:|:---------------:|:-----:| | 10.2780 | 9.9719 | 0 | | 9.6473 | 9.6273 | 1 | | 9.1695 | 9.2563 | 2 | ### Framework versions - Transformers 4.38.2 - TensorFlow 2.15.0 - Tokenizers 0.15.2
rk68/phi-1_5-finetuned-aqua-rat-10k
rk68
2024-03-17T14:50:04Z
2
0
peft
[ "peft", "tensorboard", "safetensors", "generated_from_trainer", "base_model:microsoft/phi-1_5", "base_model:adapter:microsoft/phi-1_5", "license:mit", "region:us" ]
null
2024-03-17T14:09:23Z
--- license: mit library_name: peft tags: - generated_from_trainer base_model: microsoft/phi-1_5 model-index: - name: phi-1_5-finetuned-aqua-rat-10k results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # phi-1_5-finetuned-aqua-rat-10k This model is a fine-tuned version of [microsoft/phi-1_5](https://huggingface.co/microsoft/phi-1_5) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - num_epochs: 2 ### Training results ### Framework versions - PEFT 0.9.0 - Transformers 4.38.2 - Pytorch 2.2.1+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
casque/Baltimore_Fans_Clothing
casque
2024-03-17T14:46:49Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2024-03-17T14:46:00Z
--- license: creativeml-openrail-m ---
rhysjones/phi-2-orange-v2
rhysjones
2024-03-17T14:44:15Z
236
26
transformers
[ "transformers", "safetensors", "phi", "text-generation", "conversational", "dataset:Open-Orca/SlimOrca-Dedup", "dataset:migtissera/Synthia-v1.3", "dataset:LDJnr/Verified-Camel", "dataset:LDJnr/Pure-Dove", "dataset:LDJnr/Capybara", "dataset:meta-math/MetaMathQA", "dataset:Intel/orca_dpo_pairs", "dataset:argilla/ultrafeedback-binarized-preferences-cleaned", "license:mit", "model-index", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-03-04T16:21:08Z
--- license: mit datasets: - Open-Orca/SlimOrca-Dedup - migtissera/Synthia-v1.3 - LDJnr/Verified-Camel - LDJnr/Pure-Dove - LDJnr/Capybara - meta-math/MetaMathQA - Intel/orca_dpo_pairs - argilla/ultrafeedback-binarized-preferences-cleaned widget: - example_title: "Example interaction" text: "Why is the sky blue?" inference: parameters: do_sample: True temperature: 0.1 model-index: - name: phi-2-orange-v2 results: - task: type: text-generation name: Text Generation dataset: name: AI2 Reasoning Challenge (25-Shot) type: ai2_arc config: ARC-Challenge split: test args: num_few_shot: 25 metrics: - type: acc_norm value: 61.86 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=rhysjones/phi-2-orange-v2 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: HellaSwag (10-Shot) type: hellaswag split: validation args: num_few_shot: 10 metrics: - type: acc_norm value: 76.32 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=rhysjones/phi-2-orange-v2 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MMLU (5-Shot) type: cais/mmlu config: all split: test args: num_few_shot: 5 metrics: - type: acc value: 55.72 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=rhysjones/phi-2-orange-v2 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: TruthfulQA (0-shot) type: truthful_qa config: multiple_choice split: validation args: num_few_shot: 0 metrics: - type: mc2 value: 54.84 source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=rhysjones/phi-2-orange-v2 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: Winogrande (5-shot) type: winogrande config: winogrande_xl split: validation args: num_few_shot: 5 metrics: - type: acc value: 75.69 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=rhysjones/phi-2-orange-v2 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: GSM8k (5-shot) type: gsm8k config: main split: test args: num_few_shot: 5 metrics: - type: acc value: 57.62 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=rhysjones/phi-2-orange-v2 name: Open LLM Leaderboard --- ![Phi-2 Orange](https://huggingface.co/rhysjones/phi-2-orange-v2/resolve/main/phi-2-orange.jpg) # Phi-2 Orange Version 2 A two-step finetune of Phi-2, with a bit more zest. This is an improved version of the original [Phi-2-Orange](https://huggingface.co/rhysjones/phi-2-orange) that uses an updated training process on the same datasets. It also uses the latest updated model from Microsoft's [Phi-2](https://huggingface.co/microsoft/phi-2), making it directly usable within Hugging Face's Transformers library (without the need for trust remote code). # Prompt Format Phi-2 Orange v2 uses ChatML as the prompt format. 
(Update 12th March 2024: fixed eos_token issue) It's recommended to always prompt with a system instruction (use whatever system prompt you like): ``` <|im_start|>system You are a helpful assistant for Python which outputs in Markdown format.<|im_end|> <|im_start|>user Write a function to calculate the Fibonacci sequence<|im_end|> <|im_start|>assistant ``` For example, if you find the model's output to be overly verbose, instruct it to be short and concise: ``` <|im_start|>system You are a helpful assistant. Be short and direct in your answers.<|im_end|> <|im_start|>user Was Tom Hanks in the movie Forrest Gump? If so, who did he play and give details of the plot.<|im_end|> <|im_start|>assistant ``` # Evaluations [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_rhysjones__phi-2-orange-v2) | Metric |Value| |---------------------------------|----:| |Average |63.67| |AI2 Reasoning Challenge (25-Shot)|61.86| |HellaSwag (10-Shot) |76.32| |MMLU (5-Shot) |55.72| |TruthfulQA (0-shot) |54.84| |Winogrande (5-shot) |75.69| |GSM8k (5-shot) |57.62| [YALL - Yet Another LLM Leaderboard](https://huggingface.co/spaces/mlabonne/Yet_Another_LLM_Leaderboard) Evaluation from [mlabonne](https://huggingface.co/mlabonne)'s alternative LLM leaderboard: | Metric |Value| |---------------------------------|----:| |Average |49.64| |AGIEval |34.55| |GPT4All |70.96| |TruthfulQA |54.87| |Bigbench |38.17| # Limitations This model shares the same limitations as the underlying Phi-2 model, details of which are found [here](https://huggingface.co/microsoft/phi-2#limitations-of-phi-2).
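# Usage example (sketch)

A minimal Transformers sketch using the ChatML format above; it assumes the tokenizer ships a matching chat template, and the generation settings mirror the widget defaults in this card (do_sample, temperature 0.1).

```python
# Minimal sketch (assumes the tokenizer provides a ChatML chat template);
# generation settings follow the widget defaults in this card (do_sample, temperature 0.1).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "rhysjones/phi-2-orange-v2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto")

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Why is the sky blue?"},
]
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
output = model.generate(input_ids, max_new_tokens=256, do_sample=True, temperature=0.1)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```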
nluai/Llama-2-7b-chat-hf-vn
nluai
2024-03-17T14:43:19Z
5
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "vi", "arxiv:1910.09700", "base_model:meta-llama/Llama-2-7b-chat-hf", "base_model:quantized:meta-llama/Llama-2-7b-chat-hf", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "8-bit", "bitsandbytes", "region:us" ]
text-generation
2024-03-15T05:49:29Z
--- library_name: transformers base_model: meta-llama/Llama-2-7b-chat-hf language: - vi --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ## Training procedure The following `bitsandbytes` quantization config was used during training: - quant_method: bitsandbytes - load_in_8bit: True - load_in_4bit: False - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: fp4 - bnb_4bit_use_double_quant: False - bnb_4bit_compute_dtype: float32 ### Framework versions - PEFT 0.6.3.dev0
Holarissun/vrep17_weightx2.0_zephyr3b_aisft_gsm8k_rand_alphalinear_beta0.8_epoch2-subset7000
Holarissun
2024-03-17T14:35:37Z
1
0
peft
[ "peft", "safetensors", "trl", "sft", "generated_from_trainer", "base_model:stabilityai/stablelm-zephyr-3b", "base_model:adapter:stabilityai/stablelm-zephyr-3b", "license:other", "region:us" ]
null
2024-03-17T14:35:33Z
--- license: other library_name: peft tags: - trl - sft - generated_from_trainer base_model: stabilityai/stablelm-zephyr-3b model-index: - name: vrep17_weightx2.0_zephyr3b_aisft_gsm8k_rand_alphalinear_beta0.8_epoch2-subset7000 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vrep17_weightx2.0_zephyr3b_aisft_gsm8k_rand_alphalinear_beta0.8_epoch2-subset7000 This model is a fine-tuned version of [stabilityai/stablelm-zephyr-3b](https://huggingface.co/stabilityai/stablelm-zephyr-3b) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 1 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results ### Framework versions - PEFT 0.9.0 - Transformers 4.38.2 - Pytorch 2.2.1+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
MinkyuRamen/facebook_bart-base_en2ko_translation_PROMPT_TUNING
MinkyuRamen
2024-03-17T14:34:32Z
4
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:facebook/bart-base", "base_model:adapter:facebook/bart-base", "region:us" ]
null
2024-03-17T12:26:31Z
--- library_name: peft base_model: facebook/bart-base --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.9.0
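The card above leaves the getting-started section as [More Information Needed]. A minimal, unofficial sketch (assuming the repository holds a seq2seq prompt-tuning adapter for `facebook/bart-base`, as its name suggests, and that it loads through the standard PEFT auto classes) might look like this:

```python
from peft import AutoPeftModelForSeq2SeqLM
from transformers import AutoTokenizer

# Load the prompt-tuning adapter together with its facebook/bart-base backbone.
model = AutoPeftModelForSeq2SeqLM.from_pretrained(
    "MinkyuRamen/facebook_bart-base_en2ko_translation_PROMPT_TUNING"
)
tokenizer = AutoTokenizer.from_pretrained("facebook/bart-base")

# Illustrative English input; the repo name suggests English-to-Korean translation.
inputs = tokenizer("The weather is nice today.", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True)[0])
```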
codeart/model
codeart
2024-03-17T14:33:50Z
0
0
allennlp
[ "allennlp", "music", "tr", "license:apache-2.0", "region:us" ]
null
2024-03-17T14:30:53Z
--- license: apache-2.0 language: - tr library_name: allennlp tags: - music ---
taodaotada/tadam2
taodaotada
2024-03-17T14:30:23Z
3
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-03-17T13:40:24Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
zzttbrdd/sn6_040
zzttbrdd
2024-03-17T14:28:21Z
96
0
transformers
[ "transformers", "safetensors", "gemma", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-03-17T14:26:24Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Holarissun/vrep17_weightx2.0_zephyr3b_aisft_gsm8k_rand_alphalinear_beta0.9_epoch2-subset7000
Holarissun
2024-03-17T14:22:51Z
0
0
peft
[ "peft", "safetensors", "trl", "sft", "generated_from_trainer", "base_model:stabilityai/stablelm-zephyr-3b", "base_model:adapter:stabilityai/stablelm-zephyr-3b", "license:other", "region:us" ]
null
2024-03-17T14:22:46Z
--- license: other library_name: peft tags: - trl - sft - generated_from_trainer base_model: stabilityai/stablelm-zephyr-3b model-index: - name: vrep17_weightx2.0_zephyr3b_aisft_gsm8k_rand_alphalinear_beta0.9_epoch2-subset7000 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vrep17_weightx2.0_zephyr3b_aisft_gsm8k_rand_alphalinear_beta0.9_epoch2-subset7000 This model is a fine-tuned version of [stabilityai/stablelm-zephyr-3b](https://huggingface.co/stabilityai/stablelm-zephyr-3b) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 1 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results ### Framework versions - PEFT 0.9.0 - Transformers 4.38.2 - Pytorch 2.2.1+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
Holarissun/vrep17_weightx2.0_zephyr3b_aisft_gsm8k_seq_alphalinear_beta0.8_epoch2-subset7000
Holarissun
2024-03-17T14:21:55Z
0
0
peft
[ "peft", "safetensors", "trl", "sft", "generated_from_trainer", "base_model:stabilityai/stablelm-zephyr-3b", "base_model:adapter:stabilityai/stablelm-zephyr-3b", "license:other", "region:us" ]
null
2024-03-17T14:21:52Z
--- license: other library_name: peft tags: - trl - sft - generated_from_trainer base_model: stabilityai/stablelm-zephyr-3b model-index: - name: vrep17_weightx2.0_zephyr3b_aisft_gsm8k_seq_alphalinear_beta0.8_epoch2-subset7000 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vrep17_weightx2.0_zephyr3b_aisft_gsm8k_seq_alphalinear_beta0.8_epoch2-subset7000 This model is a fine-tuned version of [stabilityai/stablelm-zephyr-3b](https://huggingface.co/stabilityai/stablelm-zephyr-3b) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 1 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results ### Framework versions - PEFT 0.9.0 - Transformers 4.38.2 - Pytorch 2.2.1+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
Mantis-VL/mllava_debug_nlvr2_v1_4096
Mantis-VL
2024-03-17T14:17:09Z
5
0
transformers
[ "transformers", "safetensors", "llava", "image-text-to-text", "generated_from_trainer", "base_model:llava-hf/llava-1.5-7b-hf", "base_model:finetune:llava-hf/llava-1.5-7b-hf", "endpoints_compatible", "region:us" ]
image-text-to-text
2024-03-16T23:49:20Z
--- base_model: llava-hf/llava-1.5-7b-hf tags: - generated_from_trainer model-index: - name: mllava_debug_nlvr2_v1_4096 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mllava_debug_nlvr2_v1_4096 This model is a fine-tuned version of [llava-hf/llava-1.5-7b-hf](https://huggingface.co/llava-hf/llava-1.5-7b-hf) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 1 - eval_batch_size: 1 - seed: 42 - distributed_type: multi-GPU - num_devices: 16 - gradient_accumulation_steps: 4 - total_train_batch_size: 64 - total_eval_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.03 - num_epochs: 3.0 ### Training results ### Framework versions - Transformers 4.37.0 - Pytorch 2.2.1 - Datasets 2.17.1 - Tokenizers 0.15.2
Holarissun/vrep17_weightx2.0_zephyr3b_aisft_gsm8k_seq_alphalinear_beta0.9_epoch2-subset7000
Holarissun
2024-03-17T14:16:53Z
0
0
peft
[ "peft", "safetensors", "trl", "sft", "generated_from_trainer", "base_model:stabilityai/stablelm-zephyr-3b", "base_model:adapter:stabilityai/stablelm-zephyr-3b", "license:other", "region:us" ]
null
2024-03-17T14:16:49Z
--- license: other library_name: peft tags: - trl - sft - generated_from_trainer base_model: stabilityai/stablelm-zephyr-3b model-index: - name: vrep17_weightx2.0_zephyr3b_aisft_gsm8k_seq_alphalinear_beta0.9_epoch2-subset7000 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vrep17_weightx2.0_zephyr3b_aisft_gsm8k_seq_alphalinear_beta0.9_epoch2-subset7000 This model is a fine-tuned version of [stabilityai/stablelm-zephyr-3b](https://huggingface.co/stabilityai/stablelm-zephyr-3b) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 1 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results ### Framework versions - PEFT 0.9.0 - Transformers 4.38.2 - Pytorch 2.2.1+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
HiImKing1509/anime-sdxl-v2-180imgs-3000steps-KenjiYumekoSatoshi
HiImKing1509
2024-03-17T14:15:55Z
3
2
diffusers
[ "diffusers", "text-to-image", "diffusers-training", "lora", "template:sd-lora", "stable-diffusion-xl", "stable-diffusion-xl-diffusers", "base_model:Linaqruf/animagine-xl-2.0", "base_model:adapter:Linaqruf/animagine-xl-2.0", "license:openrail++", "region:us" ]
text-to-image
2024-03-17T12:48:59Z
--- license: openrail++ library_name: diffusers tags: - text-to-image - text-to-image - diffusers-training - diffusers - lora - template:sd-lora - stable-diffusion-xl - stable-diffusion-xl-diffusers base_model: Linaqruf/animagine-xl-2.0 instance_prompt: a Kenji man widget: [] --- <!-- This model card has been generated automatically according to the information the training script had access to. You should probably proofread and complete it, then remove this comment. --> # SDXL LoRA DreamBooth - HiImKing1509/anime-sdxl-v2-180imgs-3000steps <Gallery /> ## Model description These are HiImKing1509/anime-sdxl-v2-180imgs-3000steps LoRA adaptation weights for Linaqruf/animagine-xl-2.0. The weights were trained using [DreamBooth](https://dreambooth.github.io/). LoRA for the text encoder was enabled: True. Special VAE used for training: None. ## Trigger words You should use a Kenji man to trigger the image generation. ## Download model Weights for this model are available in Safetensors format. [Download](HiImKing1509/anime-sdxl-v2-180imgs-3000steps/tree/main) them in the Files & versions tab. ## Intended uses & limitations #### How to use ```python # TODO: add an example code snippet for running this diffusion pipeline ``` #### Limitations and bias [TODO: provide examples of latent issues and potential remediations] ## Training details [TODO: describe the data used to train the model]
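The "How to use" section above is still a TODO. Below is a minimal, unofficial sketch of loading such SDXL LoRA weights with diffusers; the base model and trigger phrase come from this card, the repository id is the one printed in the card (it may differ from the actual Hub id), and the prompt text is purely illustrative:

```python
import torch
from diffusers import AutoPipelineForText2Image

# Load the base model named in the card, then attach the LoRA weights on top.
pipe = AutoPipelineForText2Image.from_pretrained(
    "Linaqruf/animagine-xl-2.0", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("HiImKing1509/anime-sdxl-v2-180imgs-3000steps")

# Include the trigger phrase "a Kenji man" from the card in the prompt.
image = pipe(
    "a Kenji man standing in a neon-lit street, anime style",
    num_inference_steps=28,
).images[0]
image.save("kenji.png")
```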
taodaotada/tadam1
taodaotada
2024-03-17T14:13:47Z
4
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-03-14T18:23:28Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
asafaya/kanarya-750m
asafaya
2024-03-17T14:10:37Z
2797
9
transformers
[ "transformers", "pytorch", "jax", "safetensors", "gptj", "text-generation", "tr", "dataset:oscar", "dataset:mc4", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2024-01-17T11:24:49Z
--- license: apache-2.0 datasets: - oscar - mc4 language: - tr pipeline_tag: text-generation widget: - text: "Benim adım Zeynep, ve en sevdiğim kitabın adı:" example_title: "Benim adım Zeynep" - text: "Bugünkü yemeğimiz" example_title: "Bugünkü yemeğimiz" --- # Kanarya-750M: Turkish Language Model <img src="https://asafaya.me/images/kanarya.webp" alt="Kanarya Logo" style="width:600px;"/> **Kanarya** is a pre-trained Turkish GPT-J 750M model. Released as part of [Turkish Data Depository](https://tdd.ai/) efforts, the Kanarya family has two versions (Kanarya-2B, Kanarya-0.7B). Kanarya-2B is the larger version and Kanarya-0.7B is the smaller version. Both models are trained on a large-scale Turkish text corpus, filtered from OSCAR and mC4 datasets. The training data is collected from various sources, including news, articles, and websites, to create a diverse and high-quality dataset. The models are trained using a JAX/Flax implementation of the [GPT-J](https://github.com/kingoflolz/mesh-transformer-jax) architecture. The models are only pre-trained and are intended to be fine-tuned on a wide range of Turkish NLP tasks. ## Model Details - Model Name: Kanarya-750M - Model Size: 750M parameters - Training Data: OSCAR, mC4 - Language: Turkish - Layers: 12 - Hidden Size: 2048 - Number of Heads: 16 - Context Size: 2048 - Positional Embeddings: Rotary - Vocabulary Size: 32,768 ## Intended Use This model is only pre-trained on Turkish text data and is intended to be fine-tuned on a wide range of Turkish NLP tasks. The model can be used for various Turkish NLP tasks, including text generation, translation, summarization, and other Turkish NLP tasks. This model is not intended to be used for any downstream tasks without fine-tuning. ## Limitations and Ethical Considerations The model is trained on a diverse and high-quality Turkish text corpus, but it may still generate toxic, biased, or unethical content. It is highly recommended to use the model responsibly and make sure that the generated content is appropriate for the use case. Please use the model responsibly and report any issues. ## License: Apache 2.0 The model is licensed under the Apache 2.0 License. It is free to use for any purpose, including commercial use. We encourage users to contribute to the model and report any issues. However, the model is provided "as is" without warranty of any kind. ## Citation If you use the model, please cite the following paper: ```bibtex @inproceedings{safaya-etal-2022-mukayese, title = "Mukayese: {T}urkish {NLP} Strikes Back", author = "Safaya, Ali and Kurtulu{\c{s}}, Emirhan and Goktogan, Arda and Yuret, Deniz", editor = "Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline", booktitle = "Findings of the Association for Computational Linguistics: ACL 2022", month = may, year = "2022", address = "Dublin, Ireland", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2022.findings-acl.69", doi = "10.18653/v1/2022.findings-acl.69", pages = "846--863", } ``` ## Acknowledgments During this work, Ali Safaya was supported by [KUIS AI Center](https://ai.ku.edu.tr/) fellowship. Moreover, the pre-training of these models were performed at TUBITAK ULAKBIM, High Performance and Grid Computing Center ([TRUBA](https://www.truba.gov.tr/index.php/en/main-page/) resources).
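The card above does not include a code snippet. As a minimal sketch, assuming the standard `transformers` causal-LM API and reusing the widget prompt from the card, generation could look like this (the sampling settings are illustrative):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("asafaya/kanarya-750m")
model = AutoModelForCausalLM.from_pretrained("asafaya/kanarya-750m")

# Prompt taken from the widget example in this card.
inputs = tokenizer("Benim adım Zeynep, ve en sevdiğim kitabın adı:", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=40, do_sample=True, top_p=0.95)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

As the card itself notes, the checkpoint is only pre-trained, so expect to fine-tune it before relying on it for downstream tasks.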
Chuanming/Gemma_lora_model
Chuanming
2024-03-17T14:03:17Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "gemma", "trl", "en", "base_model:unsloth/gemma-7b-bnb-4bit", "base_model:finetune:unsloth/gemma-7b-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-03-17T14:02:56Z
--- language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - gemma - trl base_model: unsloth/gemma-7b-bnb-4bit --- # Uploaded model - **Developed by:** Chuanming - **License:** apache-2.0 - **Finetuned from model :** unsloth/gemma-7b-bnb-4bit This gemma model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
wzChen/my_awesome_wnut_model
wzChen
2024-03-17T14:01:00Z
107
0
transformers
[ "transformers", "tensorboard", "safetensors", "distilbert", "token-classification", "generated_from_trainer", "dataset:wnut_17", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2024-03-17T13:43:26Z
--- license: apache-2.0 base_model: distilbert/distilbert-base-uncased tags: - generated_from_trainer datasets: - wnut_17 metrics: - precision - recall - f1 - accuracy model-index: - name: my_awesome_wnut_model results: - task: name: Token Classification type: token-classification dataset: name: wnut_17 type: wnut_17 config: wnut_17 split: test args: wnut_17 metrics: - name: Precision type: precision value: 0.558252427184466 - name: Recall type: recall value: 0.4263206672845227 - name: F1 type: f1 value: 0.48344718864950076 - name: Accuracy type: accuracy value: 0.9477576845795391 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # my_awesome_wnut_model This model is a fine-tuned version of [distilbert/distilbert-base-uncased](https://huggingface.co/distilbert/distilbert-base-uncased) on the wnut_17 dataset. It achieves the following results on the evaluation set: - Loss: 0.4207 - Precision: 0.5583 - Recall: 0.4263 - F1: 0.4834 - Accuracy: 0.9478 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 20 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | No log | 1.0 | 213 | 0.3267 | 0.5351 | 0.4235 | 0.4728 | 0.9472 | | No log | 2.0 | 426 | 0.3741 | 0.4730 | 0.3818 | 0.4226 | 0.9428 | | 0.0126 | 3.0 | 639 | 0.3431 | 0.5336 | 0.4189 | 0.4694 | 0.9466 | | 0.0126 | 4.0 | 852 | 0.3790 | 0.5983 | 0.3920 | 0.4737 | 0.9477 | | 0.008 | 5.0 | 1065 | 0.3610 | 0.5289 | 0.4328 | 0.4760 | 0.9472 | | 0.008 | 6.0 | 1278 | 0.3580 | 0.5637 | 0.4347 | 0.4908 | 0.9477 | | 0.008 | 7.0 | 1491 | 0.3569 | 0.5339 | 0.4458 | 0.4859 | 0.9474 | | 0.0049 | 8.0 | 1704 | 0.3988 | 0.5602 | 0.4013 | 0.4676 | 0.9470 | | 0.0049 | 9.0 | 1917 | 0.4180 | 0.5901 | 0.3976 | 0.4751 | 0.9471 | | 0.0032 | 10.0 | 2130 | 0.3969 | 0.5320 | 0.4161 | 0.4670 | 0.9468 | | 0.0032 | 11.0 | 2343 | 0.4265 | 0.5851 | 0.4013 | 0.4761 | 0.9473 | | 0.003 | 12.0 | 2556 | 0.4003 | 0.5569 | 0.4263 | 0.4829 | 0.9475 | | 0.003 | 13.0 | 2769 | 0.4234 | 0.5936 | 0.3967 | 0.4756 | 0.9480 | | 0.003 | 14.0 | 2982 | 0.4016 | 0.5482 | 0.4272 | 0.4802 | 0.9482 | | 0.002 | 15.0 | 3195 | 0.4312 | 0.5655 | 0.4041 | 0.4714 | 0.9471 | | 0.002 | 16.0 | 3408 | 0.4310 | 0.5611 | 0.4087 | 0.4729 | 0.9470 | | 0.0014 | 17.0 | 3621 | 0.4287 | 0.5556 | 0.4124 | 0.4734 | 0.9471 | | 0.0014 | 18.0 | 3834 | 0.4193 | 0.5572 | 0.4198 | 0.4789 | 0.9475 | | 0.0014 | 19.0 | 4047 | 0.4188 | 0.5583 | 0.4263 | 0.4834 | 0.9478 | | 0.0014 | 20.0 | 4260 | 0.4207 | 0.5583 | 0.4263 | 0.4834 | 0.9478 | ### Framework versions - Transformers 4.38.1 - Pytorch 2.1.2 - Datasets 2.1.0 - Tokenizers 0.15.2
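A short, unofficial usage sketch for this token-classification checkpoint, assuming the standard `transformers` pipeline API (the example sentence is illustrative):

```python
from transformers import pipeline

# aggregation_strategy="simple" groups sub-token predictions into whole entities.
ner = pipeline(
    "token-classification",
    model="wzChen/my_awesome_wnut_model",
    aggregation_strategy="simple",
)
print(ner("The Golden State Warriors are playing in San Francisco tonight."))
```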
iulik-pisik/vreme_model_base
iulik-pisik
2024-03-17T13:53:30Z
76
0
transformers
[ "transformers", "tensorboard", "safetensors", "whisper", "automatic-speech-recognition", "hf-asr-leaderboard", "generated_from_trainer", "ro", "dataset:iulik-pisik/audio_vreme", "base_model:openai/whisper-base", "base_model:finetune:openai/whisper-base", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2024-03-16T18:55:59Z
--- language: - ro license: apache-2.0 base_model: openai/whisper-base tags: - hf-asr-leaderboard - generated_from_trainer datasets: - iulik-pisik/audio_vreme metrics: - wer model-index: - name: Whisper Base Romanian - Vreme results: - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: Vreme ProTv type: iulik-pisik/audio_vreme config: default split: None args: 'config: ro, split: test' metrics: - name: Wer type: wer value: 11.570078092889437 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Whisper Base Romanian - Vreme This model is a fine-tuned version of [openai/whisper-base](https://huggingface.co/openai/whisper-base) on the Vreme ProTv dataset. It achieves the following results on the evaluation set: - Loss: 0.2275 - Wer: 11.5701 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - training_steps: 4000 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:-------:| | 0.0284 | 6.02 | 1000 | 0.1553 | 12.2483 | | 0.0013 | 12.05 | 2000 | 0.2104 | 11.8783 | | 0.0006 | 18.07 | 3000 | 0.2232 | 11.5290 | | 0.0004 | 24.1 | 4000 | 0.2275 | 11.5701 | ### Framework versions - Transformers 4.38.2 - Pytorch 2.2.1+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
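A minimal, unofficial transcription sketch for this checkpoint, assuming the standard `transformers` ASR pipeline (the audio path is a hypothetical local file, not something shipped with the model):

```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="iulik-pisik/vreme_model_base",
)
# "weather_report.wav" stands in for any local Romanian audio file.
result = asr("weather_report.wav")
print(result["text"])
```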
iulik-pisik/vreme_model_tiny
iulik-pisik
2024-03-17T13:52:30Z
76
0
transformers
[ "transformers", "tensorboard", "safetensors", "whisper", "automatic-speech-recognition", "hf-asr-leaderboard", "generated_from_trainer", "ro", "dataset:iulik-pisik/audio_vreme", "base_model:openai/whisper-tiny", "base_model:finetune:openai/whisper-tiny", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2024-03-16T16:15:08Z
--- language: - ro license: apache-2.0 base_model: openai/whisper-tiny tags: - hf-asr-leaderboard - generated_from_trainer datasets: - iulik-pisik/audio_vreme metrics: - wer model-index: - name: Whisper Tiny Romanian - Vreme results: - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: Vreme ProTv type: iulik-pisik/audio_vreme config: default split: None args: 'config: ro, split: test' metrics: - name: Wer type: wer value: 13.912864775996711 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Whisper Tiny Romanian - Vreme This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on the Vreme ProTv dataset. It achieves the following results on the evaluation set: - Loss: 0.2668 - Wer: 13.9129 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - training_steps: 4000 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:-------:| | 0.0494 | 6.02 | 1000 | 0.1717 | 14.1184 | | 0.0033 | 12.05 | 2000 | 0.2362 | 14.3855 | | 0.0011 | 18.07 | 3000 | 0.2596 | 13.9540 | | 0.0008 | 24.1 | 4000 | 0.2668 | 13.9129 | ### Framework versions - Transformers 4.38.2 - Pytorch 2.2.1+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
kazssym/stablelm-3b-4e1t-onnx
kazssym
2024-03-17T13:49:26Z
6
0
transformers
[ "transformers", "onnx", "stablelm", "text-generation", "causal-lm", "license:cc-by-sa-4.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2024-02-24T14:23:36Z
--- license: cc-by-sa-4.0 tags: - causal-lm --- # Description This is an ONNX model for [our test scripts](https://github.com/kazssym/stablelm-study-2). It was exported from [stabilityai/stablelm-3b-4e1t](https://huggingface.co/stabilityai/stablelm-3b-4e1t) to ONNX with a [modified version](https://github.com/huggingface/optimum/pull/1719) of Hugging Face Optimum. It may therefore still have problems. # Exporting This model was exported with [our test scripts](https://github.com/kazssym/stablelm-study-2). The source model requires [Transformers](https://github.com/huggingface/transformers) 4.38 or later to export.
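A hedged sketch of loading this export through `optimum.onnxruntime` follows. Because the card says the export used a modified Optimum branch, support in stock Optimum for this architecture is an assumption, as is the presence of tokenizer files in the repository; the snippet may not run as-is:

```python
from optimum.onnxruntime import ORTModelForCausalLM
from transformers import AutoTokenizer

# Assumes the ONNX weights and tokenizer in this repo load through the stock Optimum API.
model = ORTModelForCausalLM.from_pretrained("kazssym/stablelm-3b-4e1t-onnx")
tokenizer = AutoTokenizer.from_pretrained("kazssym/stablelm-3b-4e1t-onnx")

inputs = tokenizer("The weather today is", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True)[0])
```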
Telugu-LLM-Labs/Telugu-gemma-7b-finetuned-sft
Telugu-LLM-Labs
2024-03-17T13:45:29Z
0
14
transformers
[ "transformers", "safetensors", "text-generation", "te", "en", "dataset:Telugu-LLM-Labs/telugu_teknium_GPTeacher_general_instruct_filtered_romanized", "dataset:Telugu-LLM-Labs/telugu_alpaca_yahma_cleaned_filtered_romanized", "base_model:google/gemma-7b", "base_model:finetune:google/gemma-7b", "license:other", "endpoints_compatible", "region:us" ]
text-generation
2024-02-29T03:42:01Z
--- license: other license_name: gemma-terms-of-use license_link: https://ai.google.dev/gemma/terms base_model: google/gemma-7b datasets: - Telugu-LLM-Labs/telugu_teknium_GPTeacher_general_instruct_filtered_romanized - Telugu-LLM-Labs/telugu_alpaca_yahma_cleaned_filtered_romanized language: - te - en library_name: transformers pipeline_tag: text-generation --- # Telugu-gemma-7b-finetuned-sft This model is based on [google/gemma-7b](https://huggingface.co/google/gemma-7b) and hase been LoRA finetuned on instruction datasets: 1. [yahma_alpaca_cleaned_telugu_filtered_and_romanized](https://huggingface.co/datasets/Telugu-LLM-Labs/yahma_alpaca_cleaned_telugu_filtered_and_romanized) 2. [teknium_GPTeacher_general_instruct_telugu_filtered_and_romanized](https://huggingface.co/datasets/Telugu-LLM-Labs/teknium_GPTeacher_general_instruct_telugu_filtered_and_romanized) The model is finetuned using [unsloth](https://github.com/unslothai/unsloth) library and we provide inference code using the same for faster inference. Alternatively you can use HuggingFace Library for inference. The model is finetuned only on native telugu SFT data from above datasets and we will update the model with transliteration in upcoming days. # Installation `!pip install "unsloth[colab-ampere] @git+https://github.com/unslothai/unsloth.git"` # Input Text Format ``` ### Instruction: {instruction} ### Input: {input} ## Response: {response} ``` # Inference With Unsloth ```python3 from unsloth import FastLanguageModel import torch max_seq_length = 2048 dtype = None # None for auto detection. Float16 for Tesla T4, V100, Bfloat16 for Ampere+ load_in_4bit = False model, tokenizer = FastLanguageModel.from_pretrained( model_name = "Telugu-LLM-Labs/Telugu-gemma-7b-finetuned-sft", max_seq_length = max_seq_length, dtype = dtype, load_in_4bit = load_in_4bit, device_map="auto" ) FastLanguageModel.for_inference(model) # Enable native 2x faster inference input_prompt = """ ### Instruction: {} ### Input: {} ### Response: {}""" input_text = input_prompt.format( "కింది వచనాన్ని రెండు పాయింట్లలో సంగ్రహించండి.", # instruction "Google వార్తలు అనేది Google ద్వారా అభివృద్ధి చేయబడిన వార్తా అగ్రిగేటర్ సేవ. ఇది వేలకొద్దీ ప్రచురణకర్తలు మరియు మ్యాగజైన్‌ల నుండి నిర్వహించబడిన కథనాలకు నిరంతర లింక్‌లను అందిస్తుంది. Google వార్తలు Android, iOS మరియు వెబ్‌లో యాప్‌గా అందుబాటులో ఉన్నాయి. గూగుల్ సెప్టెంబరు 2002లో బీటా వెర్షన్‌ను మరియు జనవరి 2006లో అధికారిక యాప్‌ను విడుదల చేసింది.", # input "", # output - leave this blank for generation! ) inputs = tokenizer([input_text], return_tensors = "pt").to("cuda") outputs = model.generate(**inputs, max_new_tokens = 300, use_cache = True) response = tokenizer.batch_decode(outputs) ``` # Inference with HuggingFace ```python3 from peft import AutoPeftModelForCausalLM from transformers import AutoTokenizer model = AutoPeftModelForCausalLM.from_pretrained( "Telugu-LLM-Labs/Telugu-gemma-7b-finetuned-sft", load_in_4bit = False, token = hf_token ) tokenizer = AutoTokenizer.from_pretrained("Telugu-LLM-Labs/Telugu-gemma-7b-finetuned-sft") input_prompt = """ ### Instruction: {} ### Input: {} ### Response: {}""" input_text = input_prompt.format( "కింది వచనాన్ని రెండు పాయింట్లలో సంగ్రహించండి.", # instruction "Google వార్తలు అనేది Google ద్వారా అభివృద్ధి చేయబడిన వార్తా అగ్రిగేటర్ సేవ. ఇది వేలకొద్దీ ప్రచురణకర్తలు మరియు మ్యాగజైన్‌ల నుండి నిర్వహించబడిన కథనాలకు నిరంతర లింక్‌లను అందిస్తుంది. Google వార్తలు Android, iOS మరియు వెబ్‌లో యాప్‌గా అందుబాటులో ఉన్నాయి. 
గూగుల్ సెప్టెంబరు 2002లో బీటా వెర్షన్‌ను మరియు జనవరి 2006లో అధికారిక యాప్‌ను విడుదల చేసింది.", # input "", # output - leave this blank for generation! ) inputs = tokenizer([input_text], return_tensors = "pt").to("cuda") outputs = model.generate(**inputs, max_new_tokens = 300, use_cache = True) response = tokenizer.batch_decode(outputs)[0] ``` # Sample Questions and Responses ## Reasoning ### Instruction: (5+4)*(2+1)? నాకు దశల వారీ వివరణ ఇవ్వండి. ### Response: 1. పారెంథసిస్ లోపల వ్యక్తీకరణలను పరిష్కరించండి: (5+4) మరియు (2+1). 2. పరిష్కారాలను గుణించండి: (5+4) * (2+1). 3. ఫలితాన్ని లెక్కించండి: 9 * 3. 4. తుది సమాధానాన్ని పొందడానికి 9ని 3 తో గుణించండి: 27. ## Content Generation ### Instruction: కాకి మీద చాలా చిన్న కథ రాయండి ### Response: ఒకప్పుడు ఒక పల్లెటూరులో అందమైన, పచ్చని పొలాల మధ్య ఒక పెద్ద కాకి ఉండేది. అక్కడి పిల్లలు తరచూ ఆకాశంలోకి ఎగురుతూ, పైకి ఎగురుతూ కనిపించేవారు. ఒక రోజు, పల్లెటూరులోని ఒక పిల్లవాడు అక్కడికి వచ్చి, కాకిని చూసి ఆశ్చర్యపోయాడు. "కాకి, మీరు అందమైన పక్షి, మీరు ఎక్కడ నుండి వచ్చారు?" అని అడిగాడు. కాకి కిటికీలోకి దూకింది, తన పక్కనే పడుకుంది మరియు "నాకు ఇక్కడ ఎవరికీ తెలియదు. నాకు ఇక్కడ కూడా ఇష్టం." కాకికి పిల్లవాడు ఆశ్చర్యపోయాడు మరియు "నాకు ఇక్కడ కూడా ఇష్టం. మీరు అందమైన పక్షి, మీరు ఇక్కడికి వచ్చినందుకు నాకు సంతోషంగా ఉంది." కాకి కిటికీలోకి దూకింది మరియు పిల్లవాడు వెంటనే పాట వింటాడు. "నాకు ఇక్కడ ఇష్టం. నాకు ఇక్కడికి వచ్చినందుకు ధన్యవాదాలు." అప్పటి నుండి, పిల్లవాడు తరచూ కాకిని చూడవచ్చు, అది పచ్చని పొలాల పైన ఎగురుతుంది. కానీ అది ఎక్కడి నుంచి వచ్చిందో ఎప్పుడూ అర్థం కాలేదు. ## Question Answering with Context ### Instruction: స్నోఫ్లేక్ సీఈవో ఎవరు? ### Input: స్నోఫ్లేక్ (NYSE: SNOW), డేటా క్లౌడ్ కంపెనీ, ఫ్రాంక్ స్లూట్‌మన్ చీఫ్ ఎగ్జిక్యూటివ్ ఆఫీసర్‌గా పదవీ విరమణ చేయాలని నిర్ణయించుకున్నారని మరియు శ్రీధర్ రామస్వామి చీఫ్ ఎగ్జిక్యూటివ్ ఆఫీసర్‌గా మరియు డైరెక్టర్ల బోర్డు సభ్యునిగా వెంటనే అమలులోకి వచ్చినట్లు ఈరోజు ప్రకటించింది. శ్రీధర్ రామస్వామి గతంలో స్నోఫ్లేక్‌లో AI సీనియర్ వైస్ ప్రెసిడెంట్‌గా ఉన్నారు. ### Response: స్నోఫ్లేక్ సీఈవో (క్రింది పదవిలో) శ్రీధర్ రామస్వామి ## General Question Answering ### Instruction: Share the strategy to grow audience on Twitter in 3 short points? ### Response: 1. **కంటెంట్ సృష్టి మరియు పంపిణీని పెంపొందించండి:** మీరు మీ ప్రేక్షకులకు విలువను అందించే అధిక-నాణ్యత, సంబంధిత మరియు ఆకర్షణీయమైన కంటెంట్ను సృష్టించడం ద్వారా ప్రారంభించండి. మీరు మీ కంటెంట్ను పంచుకునే విధానం గురించి కూడా చాలా అవగాహన పొందాలి. మీరు మీ కంటెంట్ను పంచుకోవడానికి అనుకూలమైన పద్ధతులలో ట్విట్టర్ లైవ్, వీడియోలు, ఇన్ఫోగ్రాఫిక్స్ మరియు మరెన్నో ఉన్నాయి. 2. **ఇతరులతో నిమగ్నం అవ్వండి:** ఇతరులతో నిమగ్నం కావడం అంటే వ్యాఖ్యలు మరియు సందేశాలకు ప్రతిస్పందించడం, ఇతర ట్విట్టర్ వినియోగదారులతో సహకరించడం మరియు మీ పరిధిని పెంచడానికి ఇతర ప్రసిద్ధ ట్విట్టర్ ఖాతాలను పంచుకోవడం. ఇది సంభావ్య అనుచరులను ఆకర్షించడానికి మరియు మీ ప్రస్తుత అనుచరులను నిలుపుకోవటానికి సహాయపడుతుంది. 3. **మీ ప్రేక్షకులను విశ్లేషించండి:** మీ ట్విట్టర్ ఖాతాను పర్యవేక్షించడానికి మరియు మీ పోస్ట్ ల పనితీరును ట్రాక్ చేయడానికి ట్విట్టర్ యొక్క విశ్లేషణ సాధనాలను ఉపయోగించండి. ఇది మీ కంటెంట్ను మెరుగుపరచడానికి మరియు మీ ప్రేక్షకులను మరింత సమర్థవంతంగా లక్ష్యంగా చేసుకోవడానికి మీకు సహాయపడుతుంది. మీ పోస్ట్ ల పనితీరును పర్యవేక్షించడం ద్వారా మరియు అత్యంత పనితీరు కనబరిచిన కంటెంట్ ను పునరుత్పత్తి చేయడం ద్వారా, మీరు మీ ఆన్ లైన్ ఉనికిని పెంచుకోవచ్చు మరియు మీ ప్రేక్షకులను పెంచుకోవచ్చు. ------------------------------------------------------------------------------------------------------------------------------------ # Developers: The model is a collaborative effort by [Ravi Theja](https://twitter.com/ravithejads) and [Ramsri Goutham](https://twitter.com/ramsri_goutham). 
Feel free to DM either of us if you have any questions.
Telugu-LLM-Labs/Telugu-gemma-2b-finetuned-sft
Telugu-LLM-Labs
2024-03-17T13:44:45Z
0
3
transformers
[ "transformers", "safetensors", "text-generation", "te", "en", "dataset:Telugu-LLM-Labs/telugu_teknium_GPTeacher_general_instruct_filtered_romanized", "dataset:Telugu-LLM-Labs/telugu_alpaca_yahma_cleaned_filtered_romanized", "base_model:google/gemma-2b", "base_model:finetune:google/gemma-2b", "license:other", "endpoints_compatible", "region:us" ]
text-generation
2024-02-29T07:19:07Z
--- license: other license_name: gemma-terms-of-use license_link: https://ai.google.dev/gemma/terms base_model: google/gemma-2b datasets: - Telugu-LLM-Labs/telugu_teknium_GPTeacher_general_instruct_filtered_romanized - Telugu-LLM-Labs/telugu_alpaca_yahma_cleaned_filtered_romanized language: - te - en library_name: transformers pipeline_tag: text-generation --- # Telugu-gemma-2b-finetuned-sft This model is based on [google/gemma-2b](https://huggingface.co/google/gemma-2b) and hase been LoRA finetuned on instruction datasets: 1. [yahma_alpaca_cleaned_telugu_filtered_and_romanized](https://huggingface.co/datasets/Telugu-LLM-Labs/yahma_alpaca_cleaned_telugu_filtered_and_romanized) 2. [teknium_GPTeacher_general_instruct_telugu_filtered_and_romanized](https://huggingface.co/datasets/Telugu-LLM-Labs/teknium_GPTeacher_general_instruct_telugu_filtered_and_romanized) The model is finetuned using [unsloth](https://github.com/unslothai/unsloth) library and we provide inference code using the same for faster inference. Alternatively you can use HuggingFace Library for inference. The model is finetuned only on native telugu SFT data from above datasets and we will update the model with transliteration in upcoming days. # Installation `!pip install "unsloth[colab-ampere] @git+https://github.com/unslothai/unsloth.git"` # Input Text Format ``` ### Instruction: {instruction} ### Input: {input} ## Response: {response} ``` # Inference with Unsloth ```python3 from unsloth import FastLanguageModel import torch max_seq_length = 2048 dtype = None # None for auto detection. Float16 for Tesla T4, V100, Bfloat16 for Ampere+ load_in_4bit = False model, tokenizer = FastLanguageModel.from_pretrained( model_name = "Telugu-LLM-Labs/Telugu-gemma-2b-finetuned-sft", max_seq_length = max_seq_length, dtype = dtype, load_in_4bit = load_in_4bit, device_map="auto" ) FastLanguageModel.for_inference(model) # Enable native 2x faster inference input_prompt = """ ### Instruction: {} ### Input: {} ### Response: {}""" input_text = input_prompt.format( "కింది వచనాన్ని రెండు పాయింట్లలో సంగ్రహించండి.", # instruction "Google వార్తలు అనేది Google ద్వారా అభివృద్ధి చేయబడిన వార్తా అగ్రిగేటర్ సేవ. ఇది వేలకొద్దీ ప్రచురణకర్తలు మరియు మ్యాగజైన్‌ల నుండి నిర్వహించబడిన కథనాలకు నిరంతర లింక్‌లను అందిస్తుంది. Google వార్తలు Android, iOS మరియు వెబ్‌లో యాప్‌గా అందుబాటులో ఉన్నాయి. గూగుల్ సెప్టెంబరు 2002లో బీటా వెర్షన్‌ను మరియు జనవరి 2006లో అధికారిక యాప్‌ను విడుదల చేసింది.", # input "", # output - leave this blank for generation! ) inputs = tokenizer([input_text], return_tensors = "pt").to("cuda") outputs = model.generate(**inputs, max_new_tokens = 300, use_cache = True) response = tokenizer.batch_decode(outputs) ``` # Inference with HuggingFace ```python3 from peft import AutoPeftModelForCausalLM from transformers import AutoTokenizer model = AutoPeftModelForCausalLM.from_pretrained( "Telugu-LLM-Labs/Telugu-gemma-2b-finetuned-sft", load_in_4bit = False, token = hf_token ) tokenizer = AutoTokenizer.from_pretrained("Telugu-LLM-Labs/Telugu-gemma-2b-finetuned-sft") input_prompt = """ ### Instruction: {} ### Input: {} ### Response: {}""" input_text = input_prompt.format( "కింది వచనాన్ని రెండు పాయింట్లలో సంగ్రహించండి.", # instruction "Google వార్తలు అనేది Google ద్వారా అభివృద్ధి చేయబడిన వార్తా అగ్రిగేటర్ సేవ. ఇది వేలకొద్దీ ప్రచురణకర్తలు మరియు మ్యాగజైన్‌ల నుండి నిర్వహించబడిన కథనాలకు నిరంతర లింక్‌లను అందిస్తుంది. Google వార్తలు Android, iOS మరియు వెబ్‌లో యాప్‌గా అందుబాటులో ఉన్నాయి. 
గూగుల్ సెప్టెంబరు 2002లో బీటా వెర్షన్‌ను మరియు జనవరి 2006లో అధికారిక యాప్‌ను విడుదల చేసింది.", # input "", # output - leave this blank for generation! ) inputs = tokenizer([input_text], return_tensors = "pt").to("cuda") outputs = model.generate(**inputs, max_new_tokens = 300, use_cache = True) response = tokenizer.batch_decode(outputs)[0] ``` # Sample Questions and Responses ## Reasoning ### Instruction: (5+4)*(2+1)? నాకు దశల వారీ వివరణ ఇవ్వండి. ### Response: మీరు మార్కెట్ నుండి 100 యాపిల్స్ కొన్నారు, ఇంటికి వెళ్ళేటప్పుడు మీ అమ్మకు 10 ఆపిల్స్, మీ స్నేహితుడికి 5 ఆపిల్స్ ఇచ్చారు. మీరు ఇప్పుడు మిగిలిన ఆపిల్స్ యొక్క సంఖ్యను కనుగొనాలనుకుంటున్నారు. మొదట, మీరు అమ్మకు ఇచ్చిన ఆపిల్స్ సంఖ్యను జోడించండి: 10 + 5 = 15. తరువాత, మీరు ఇప్పుడు మిగిలిన ఆపిల్స్ యొక్క సంఖ్యను కనుగొనాలి. ఇది మొత్తం ఆపిల్స్ సంఖ్య నుండి అమ్మకు ఇచ్చిన ఆపిల్స్ సంఖ్యను తీసివేయడం ద్వారా లెక్కించబడుతుంది: 100 - 15 = 85. అందువల్ల, మీరు ఇప్పుడు మిగిలిన 85 ఆపిల్స్ ఉన్నారు. ## Content Generation ### Instruction: కాకి మీద చాలా చిన్న కథ రాయండి ### Response: ఒకప్పుడు ఒక చిన్న పల్లెటూరులో ఒక కాకి ఉండేది. దానికి మంచి ఆహారం, ఆశ్రయం, ఆశ్రయం ఇవన్నీ ఉన్నాయి. అయితే, అది ఎప్పుడూ తన స్వంత ఆహారాన్ని వండడం లేదని, తన స్వంత ఆశ్రయాన్ని నిర్మించడం లేదని తెలుసు. ఒక రోజు, కాకి తన స్వంత ఆహారాన్ని వండడానికి ఒక కుండను తయారు చేసింది, మరియు అది తన స్వంత ఆశ్రయాన్ని నిర్మించడానికి ఒక గుడ్డను తయారు చేసింది. అది తన స్వంత ఆహారాన్ని వండడం ప్రారంభించింది, మరియు అది తన స్వంత ఆశ్రయాన్ని నిర్మించడం ప్రారంభించింది. ## Question Answering with Context ### Instruction: స్నోఫ్లేక్ సీఈవో ఎవరు? ### Input: స్నోఫ్లేక్ (NYSE: SNOW), డేటా క్లౌడ్ కంపెనీ, ఫ్రాంక్ స్లూట్‌మన్ చీఫ్ ఎగ్జిక్యూటివ్ ఆఫీసర్‌గా పదవీ విరమణ చేయాలని నిర్ణయించుకున్నారని మరియు శ్రీధర్ రామస్వామి చీఫ్ ఎగ్జిక్యూటివ్ ఆఫీసర్‌గా మరియు డైరెక్టర్ల బోర్డు సభ్యునిగా వెంటనే అమలులోకి వచ్చినట్లు ఈరోజు ప్రకటించింది. శ్రీధర్ రామస్వామి గతంలో స్నోఫ్లేక్‌లో AI సీనియర్ వైస్ ప్రెసిడెంట్‌గా ఉన్నారు. ### Response: స్నోఫ్లేక్ సీఈవో శ్రీధర్ రామస్వామి ## General Question Answering ### Instruction: Share the strategy to grow audience on Twitter in 3 short points? ### Response: 1. కంటెంట్ క్రియేట్ చేయండి: మీ టార్గెట్ ఆడియన్స్ కోసం సమాచారాత్మక మరియు ఆకర్షణీయమైన కంటెంట్ను సృష్టించడం ద్వారా మీరు మీ ట్రాఫిక్ను పెంచవచ్చు. ఇందులో వ్యాసాలు, చిత్రాలు, వీడియోలు మరియు ఇతర రకాల కంటెంట్ ఉండవచ్చు. 2. ఛాలెంజ్ సెట్ చేయండి: మీరు మీ ప్రేక్షకులతో నిమగ్నం కావడానికి మరియు సంబంధాలను పెంపొందించడానికి సవాలుతో కూడిన పరిస్థితులను సృష్టించండి. ఇందులో ట్రెండ్ లను గుర్తించడం, కొత్త అవకాశాలను అన్వేషించడం మరియు మీ టార్గెట్ ఆడియన్స్ నుండి విలువైన సమాచారాన్ని పొందడం ఉండవచ్చు. 3. నిమగ్నతను పెంచండి: మీ టార్గెట్ ఆడియన్స్తో నిమగ్నం కావడానికి మరియు నిమగ్నం కావడానికి మీరు వారితో నిమగ్నం కావడానికి ప్రోత్సహించండి. ఇందులో ట్యాగ్ లు, ట్వీట్ లు, పోస్ట్ లు లేదా ఇతర రకాల కంటెంట్ ను ఉపయోగించడం ఉండవచ్చు. --------------------------------------------------------------------------------------------------------------------------------------------- # Developers: The model is a collaborative effort by [Ravi Theja](https://twitter.com/ravithejads) and [Ramsri Goutham](https://twitter.com/ramsri_goutham). Feel free to DM either of us if you have any questions.
Telugu-LLM-Labs/Indic-gemma-2b-finetuned-sft-Navarasa
Telugu-LLM-Labs
2024-03-17T13:41:29Z
0
6
transformers
[ "transformers", "safetensors", "text-generation", "te", "en", "ta", "ml", "hi", "kn", "gu", "bn", "pa", "or", "dataset:ravithejads/samvaad-hi-filtered", "dataset:Telugu-LLM-Labs/telugu_teknium_GPTeacher_general_instruct_filtered_romanized", "dataset:Telugu-LLM-Labs/telugu_alpaca_yahma_cleaned_filtered_romanized", "dataset:abhinand/tamil-alpaca", "dataset:Tensoic/airoboros-3.2_kn", "dataset:Tensoic/gpt-teacher_kn", "dataset:VishnuPJ/Alpaca_Instruct_Malayalam", "dataset:Tensoic/Alpaca-Gujarati", "dataset:HydraIndicLM/punjabi_alpaca_52K", "dataset:HydraIndicLM/bengali_alpaca_dolly_67k", "dataset:OdiaGenAI/Odia_Alpaca_instructions_52k", "dataset:yahma/alpaca-cleaned", "base_model:google/gemma-2b", "base_model:finetune:google/gemma-2b", "license:other", "endpoints_compatible", "region:us" ]
text-generation
2024-03-04T23:17:01Z
--- license: other license_name: gemma-terms-of-use license_link: https://ai.google.dev/gemma/terms base_model: google/gemma-2b datasets: - ravithejads/samvaad-hi-filtered - Telugu-LLM-Labs/telugu_teknium_GPTeacher_general_instruct_filtered_romanized - Telugu-LLM-Labs/telugu_alpaca_yahma_cleaned_filtered_romanized - abhinand/tamil-alpaca - Tensoic/airoboros-3.2_kn - Tensoic/gpt-teacher_kn - VishnuPJ/Alpaca_Instruct_Malayalam - Tensoic/Alpaca-Gujarati - HydraIndicLM/punjabi_alpaca_52K - HydraIndicLM/bengali_alpaca_dolly_67k - OdiaGenAI/Odia_Alpaca_instructions_52k - yahma/alpaca-cleaned language: - te - en - ta - ml - hi - kn - gu - bn - pa - or library_name: transformers pipeline_tag: text-generation --- # Indic-gemma-2b-finetuned-sft-Navarasa This model is based on [google/gemma-2b](https://huggingface.co/google/gemma-2b) and has been LoRA finetuned on instruction datasets in 9 Indian languages and English: 1. #### Hindi - [ravithejads/samvaad-hi-filtered](https://huggingface.co/datasets/ravithejads/samvaad-hi-filtered), [HydraIndicLM/hindi_alpaca_dolly_67k](https://huggingface.co/datasets/HydraIndicLM/hindi_alpaca_dolly_67k) (sampled) 2. #### Telugu - [Telugu-LLM-Labs/yahma_alpaca_cleaned_telugu_filtered_and_romanized](https://huggingface.co/datasets/Telugu-LLM-Labs/yahma_alpaca_cleaned_telugu_filtered_and_romanized), [Telugu-LLM-Labs/teknium_GPTeacher_general_instruct_telugu_filtered_and_romanized](https://huggingface.co/datasets/Telugu-LLM-Labs/teknium_GPTeacher_general_instruct_telugu_filtered_and_romanized) 3. #### Tamil - [abhinand/tamil-alpaca](https://huggingface.co/datasets/abhinand/tamil-alpaca) 4. #### Kannada - [Tensoic/airoboros-3.2_kn](https://huggingface.co/datasets/Tensoic/airoboros-3.2_kn), [Tensoic/gpt-teacher_kn](https://huggingface.co/datasets/Tensoic/gpt-teacher_kn) 5. #### Malayalam - [VishnuPJ/Alpaca_Instruct_Malayalam](https://huggingface.co/datasets/VishnuPJ/Alpaca_Instruct_Malayalam) 6. #### Gujarati - [Tensoic/Alpaca-Gujarati](https://huggingface.co/datasets/Tensoic/Alpaca-Gujarati) 7. #### Punjabi - [HydraIndicLM/punjabi_alpaca_52K](https://huggingface.co/datasets/HydraIndicLM/punjabi_alpaca_52K) 8. #### Bengali - [HydraIndicLM/bengali_alpaca_dolly_67k](https://huggingface.co/datasets/HydraIndicLM/bengali_alpaca_dolly_67k) (alpaca filtered) 9. #### Odia - [OdiaGenAI/Odia_Alpaca_instructions_52k](https://huggingface.co/datasets/OdiaGenAI/Odia_Alpaca_instructions_52k), [OdiaGenAI/gpt-teacher-roleplay-odia-3k](https://huggingface.co/datasets/OdiaGenAI/gpt-teacher-roleplay-odia-3k) 10. #### English - [yahma/alpaca-cleaned](https://huggingface.co/datasets/yahma/alpaca-cleaned) The model is finetuned using the [unsloth](https://github.com/unslothai/unsloth) library, and we provide inference code using the same library for faster inference. Alternatively, you can use the HuggingFace library for inference. # Training Details: The model is trained on approximately 500K instruction samples. 1. GPU: 1 A100, 80GB 2. Time: 36.5 Hours 3. Platform: [E2E Networks](https://www.e2enetworks.com/) # Installation `!pip install "unsloth[colab-ampere] @git+https://github.com/unslothai/unsloth.git"` # Input Text Format ``` ### Instruction: {instruction} ### Input: {input} ### Response: {response} ``` # Inference with Unsloth ```python3 from unsloth import FastLanguageModel import torch max_seq_length = 2048 dtype = None # None for auto detection. 
Float16 for Tesla T4, V100, Bfloat16 for Ampere+ load_in_4bit = False model, tokenizer = FastLanguageModel.from_pretrained( model_name = "Telugu-LLM-Labs/Indic-gemma-2b-finetuned-sft-Navarasa", max_seq_length = max_seq_length, dtype = dtype, load_in_4bit = load_in_4bit, device_map="auto" ) FastLanguageModel.for_inference(model) # Enable native 2x faster inference input_prompt = """ ### Instruction: {} ### Input: {} ### Response: {}""" input_text = input_prompt.format( "Translate the following sentence to Hindi.", # instruction "This model is developed by Telugu LLM Labs", # input "", # output - leave this blank for generation! ) inputs = tokenizer([input_text], return_tensors = "pt").to("cuda") outputs = model.generate(**inputs, max_new_tokens = 300, use_cache = True) response = tokenizer.batch_decode(outputs) ``` # Inference with HuggingFace ```python3 from peft import AutoPeftModelForCausalLM from transformers import AutoTokenizer model = AutoPeftModelForCausalLM.from_pretrained( "Telugu-LLM-Labs/Indic-gemma-2b-finetuned-sft-Navarasa", load_in_4bit = False, token = hf_token ) tokenizer = AutoTokenizer.from_pretrained("Telugu-LLM-Labs/Indic-gemma-2b-finetuned-sft-Navarasa") input_prompt = """ ### Instruction: {} ### Input: {} ### Response: {}""" input_text = input_prompt.format( "Translate the following sentence to Hindi.", # instruction "This model is developed by Telugu LLM Labs", # input "", # output - leave this blank for generation! ) inputs = tokenizer([input_text], return_tensors = "pt").to("cuda") outputs = model.generate(**inputs, max_new_tokens = 300, use_cache = True) response = tokenizer.batch_decode(outputs)[0] ``` Refer to the [blog post](https://ravidesetty.medium.com/introducing-indic-gemma-7b-2b-instruction-tuned-model-on-9-indian-languages-navarasa-86bc81b4a282) for sample examples. Please check our [Code Repository](https://github.com/TeluguLLMLabs/Indic-gemma-7b-Navarasa) for training and inference scripts. # Developers: The model is a collaborative effort by [Ravi Theja](https://twitter.com/ravithejads) and [Ramsri Goutham](https://twitter.com/ramsri_goutham). Feel free to DM either of us if you have any questions.
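For GPUs with limited memory, the same checkpoint can in principle also be loaded in 4-bit by flipping the `load_in_4bit` flag already shown in the HuggingFace example above. This is an untested sketch, and 4-bit quantization may affect generation quality across the supported languages.

```python3
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

# Hedged sketch: 4-bit loading via bitsandbytes to reduce GPU memory.
# Requires the `bitsandbytes` and `accelerate` packages to be installed.
model = AutoPeftModelForCausalLM.from_pretrained(
    "Telugu-LLM-Labs/Indic-gemma-2b-finetuned-sft-Navarasa",
    load_in_4bit = True,
    device_map = "auto",
)
tokenizer = AutoTokenizer.from_pretrained(
    "Telugu-LLM-Labs/Indic-gemma-2b-finetuned-sft-Navarasa"
)
```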
chnyda/lora_model
chnyda
2024-03-17T13:31:01Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "mistral", "trl", "en", "base_model:mistralai/Mistral-7B-Instruct-v0.2", "base_model:finetune:mistralai/Mistral-7B-Instruct-v0.2", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-03-17T12:26:29Z
--- language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - mistral - trl base_model: mistralai/Mistral-7B-Instruct-v0.2 --- # Uploaded model - **Developed by:** chnyda - **License:** apache-2.0 - **Finetuned from model :** mistralai/Mistral-7B-Instruct-v0.2 This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
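The card does not include inference code. A minimal, non-authoritative loading sketch with Unsloth is given below, mirroring the loader used for other Unsloth-trained models in this collection; whether the repository holds merged weights or only a LoRA adapter is not stated, so treat the call as an assumption.

```python3
from unsloth import FastLanguageModel

# Hedged sketch: load the uploaded checkpoint for inference with Unsloth.
# The repository layout (merged weights vs. LoRA adapter) is an assumption.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name = "chnyda/lora_model",
    max_seq_length = 2048,
    dtype = None,          # auto-detect; bfloat16 on Ampere+ GPUs
    load_in_4bit = False,
)
FastLanguageModel.for_inference(model)  # enable Unsloth's faster inference path
```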
ferrazzipietro/Mistral-7B-Instruct-v0.2__adapters_en.layer1_NoQuant_torch.bfloat16_64_32_0.01_8_0.0002
ferrazzipietro
2024-03-17T13:29:03Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-03-17T13:28:22Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Holarissun/vrep17_weightx2.0_zephyr3b_aisft_gsm8k_seq_alphalinear_beta0.95_epoch2-subset7000
Holarissun
2024-03-17T13:26:30Z
0
0
peft
[ "peft", "safetensors", "trl", "sft", "generated_from_trainer", "base_model:stabilityai/stablelm-zephyr-3b", "base_model:adapter:stabilityai/stablelm-zephyr-3b", "license:other", "region:us" ]
null
2024-03-17T13:26:26Z
--- license: other library_name: peft tags: - trl - sft - generated_from_trainer base_model: stabilityai/stablelm-zephyr-3b model-index: - name: vrep17_weightx2.0_zephyr3b_aisft_gsm8k_seq_alphalinear_beta0.95_epoch2-subset7000 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vrep17_weightx2.0_zephyr3b_aisft_gsm8k_seq_alphalinear_beta0.95_epoch2-subset7000 This model is a fine-tuned version of [stabilityai/stablelm-zephyr-3b](https://huggingface.co/stabilityai/stablelm-zephyr-3b) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 1 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results ### Framework versions - PEFT 0.9.0 - Transformers 4.38.2 - Pytorch 2.2.1+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
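The card documents training hyperparameters but no usage snippet. A minimal sketch of loading the adapter on top of its stabilityai/stablelm-zephyr-3b base with PEFT is shown below; the prompt format expected by the fine-tune is not documented, so this covers loading only and is an assumption rather than an official recipe.

```python3
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

# Hedged sketch: load the LoRA adapter together with its base model.
# Recent transformers versions support the StableLM architecture natively;
# older ones may additionally need trust_remote_code=True.
model = AutoPeftModelForCausalLM.from_pretrained(
    "Holarissun/vrep17_weightx2.0_zephyr3b_aisft_gsm8k_seq_alphalinear_beta0.95_epoch2-subset7000",
    device_map = "auto",
)
tokenizer = AutoTokenizer.from_pretrained("stabilityai/stablelm-zephyr-3b")
```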
chihoonlee10/T3Q-KO-SOLAR-MR-v1.0
chihoonlee10
2024-03-17T13:24:50Z
2,242
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-03-17T13:07:01Z
--- license: apache-2.0 --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed]
Ermira/al-roberta
Ermira
2024-03-17T13:22:17Z
138
0
transformers
[ "transformers", "tensorboard", "safetensors", "roberta", "question-answering", "generated_from_trainer", "dataset:squad", "base_model:macedonizer/al-roberta-base", "base_model:finetune:macedonizer/al-roberta-base", "license:apache-2.0", "endpoints_compatible", "region:us" ]
question-answering
2024-03-17T12:51:44Z
--- license: apache-2.0 base_model: macedonizer/al-roberta-base tags: - generated_from_trainer datasets: - squad model-index: - name: al-roberta results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # al-roberta This model is a fine-tuned version of [macedonizer/al-roberta-base](https://huggingface.co/macedonizer/al-roberta-base) on the squad dataset. It achieves the following results on the evaluation set: - Loss: 3.4780 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 3.5442 | 1.0 | 4380 | 3.4780 | ### Framework versions - Transformers 4.38.1 - Pytorch 2.1.2 - Datasets 2.1.0 - Tokenizers 0.15.2
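As a usage illustration (not part of the original card), the fine-tuned checkpoint should be queryable through the standard question-answering pipeline; the question and context below are invented for the example.

```python3
from transformers import pipeline

# Hedged sketch: extractive QA with the fine-tuned RoBERTa checkpoint.
qa = pipeline("question-answering", model="Ermira/al-roberta")

result = qa(
    question="What dataset was the model fine-tuned on?",
    context="al-roberta is a version of macedonizer/al-roberta-base fine-tuned on the SQuAD dataset.",
)
print(result["answer"], result["score"])
```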
neopolita/yi-34b-gguf
neopolita
2024-03-17T13:15:18Z
10
0
null
[ "gguf", "endpoints_compatible", "region:us" ]
null
2024-03-17T10:14:49Z
--- {} --- # GGUF quants for [**01-ai/Yi-34B**](https://huggingface.co/01-ai/Yi-34B) using [llama.cpp](https://github.com/ggerganov/llama.cpp) **Terms of Use**: Please check the [**original model**](https://huggingface.co/01-ai/Yi-34B) <picture> <img alt="cthulhu" src="https://huggingface.co/neopolita/common/resolve/main/profile.png"> </picture> ## Quants * `q2_k`: Uses Q4_K for the attention.wv and feed_forward.w2 tensors, Q2_K for the other tensors. * `q3_k_s`: Uses Q3_K for all tensors * `q3_k_m`: Uses Q4_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else Q3_K * `q3_k_l`: Uses Q5_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else Q3_K * `q4_0`: Original quant method, 4-bit. * `q4_1`: Higher accuracy than q4_0 but not as high as q5_0. However, it has quicker inference than the q5 models. * `q4_k_s`: Uses Q4_K for all tensors * `q4_k_m`: Uses Q6_K for half of the attention.wv and feed_forward.w2 tensors, else Q4_K * `q5_0`: Higher accuracy, higher resource usage and slower inference. * `q5_1`: Even higher accuracy, resource usage and slower inference. * `q5_k_s`: Uses Q5_K for all tensors * `q5_k_m`: Uses Q6_K for half of the attention.wv and feed_forward.w2 tensors, else Q5_K * `q6_k`: Uses Q8_K for all tensors * `q8_0`: Almost indistinguishable from float16. High resource use and slow. Not recommended for most users.
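As a usage note (not from the original card), the quantized files can typically be run with llama.cpp or its Python bindings. The sketch below uses llama-cpp-python; the filename is a placeholder, so substitute the quant you actually downloaded from this repository.

```python3
from llama_cpp import Llama

# Hedged sketch: plain text completion with a downloaded GGUF quant.
# "yi-34b_q4_k_m.gguf" is a placeholder filename -- point it at the file
# you fetched from this repository (for example with `huggingface-cli download`).
llm = Llama(model_path="yi-34b_q4_k_m.gguf", n_ctx=2048)

out = llm("Once upon a time", max_tokens=64)
print(out["choices"][0]["text"])
```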
jahnvik13/test123
jahnvik13
2024-03-17T13:08:22Z
117
0
transformers
[ "transformers", "safetensors", "gemma", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-03-17T13:04:06Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]