Column schema of the dump (dtype and observed value range per column):

| Column | Dtype | Min | Max |
|:--|:--|:--|:--|
| `modelId` | string (length) | 5 | 138 |
| `author` | string (length) | 2 | 42 |
| `last_modified` | date | 2020-02-15 11:33:14 | 2025-04-12 00:37:19 |
| `downloads` | int64 | 0 | 223M |
| `likes` | int64 | 0 | 11.7k |
| `library_name` | string (422 classes) | | |
| `tags` | sequence (length) | 1 | 4.05k |
| `pipeline_tag` | string (54 classes) | | |
| `createdAt` | date | 2022-03-02 23:29:04 | 2025-04-12 00:35:12 |
| `card` | string (length) | 11 | 1.01M |
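For reference, a minimal sketch of querying this dump with the `datasets` library; the Hub ID below is a hypothetical placeholder for wherever the dump is hosted:

```python
# Sketch: filter rows of the dump by the schema above. The dataset ID is hypothetical.
from datasets import load_dataset

ds = load_dataset("your-org/model-cards-dump", split="train")  # hypothetical ID

# Keep transformers models with at least 100 downloads
popular = ds.filter(
    lambda row: row["library_name"] == "transformers" and row["downloads"] >= 100
)
print(popular.num_rows, popular[0]["modelId"], popular[0]["pipeline_tag"])
```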
kacieli/bart-large-cnn
kacieli
"2024-02-23T04:14:19Z"
5
0
transformers
[ "transformers", "tensorboard", "safetensors", "bart", "text2text-generation", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
"2024-02-22T06:04:59Z"
--- tags: - generated_from_trainer metrics: - rouge model-index: - name: bart-large-cnn results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bart-large-cnn This model was trained from scratch on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 3.2783 - Rouge1: 30.8678 - Rouge2: 14.3602 - Rougel: 25.3974 - Rougelsum: 28.9927 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5.6e-05 - train_batch_size: 1 - eval_batch_size: 1 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | |:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:| | No log | 1.0 | 50 | 3.3345 | 27.661 | 11.692 | 22.9317 | 25.4531 | | No log | 2.0 | 100 | 3.1512 | 31.6643 | 13.9022 | 24.8236 | 30.3283 | | No log | 3.0 | 150 | 3.2783 | 30.8678 | 14.3602 | 25.3974 | 28.9927 | ### Framework versions - Transformers 4.38.1 - Pytorch 2.1.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
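The card above leaves usage unspecified; a minimal inference sketch, assuming the checkpoint behaves like a standard BART summarization model (which its ROUGE evaluation suggests):

```python
# Hedged sketch: standard seq2seq summarization usage for a BART checkpoint.
from transformers import pipeline

summarizer = pipeline("summarization", model="kacieli/bart-large-cnn")
article = "The tower is 324 metres tall, about the same height as an 81-storey building, and is the tallest structure in Paris."
print(summarizer(article, max_length=60, min_length=10)[0]["summary_text"])
```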
somnathsingh31/llava-1.5-7b-hf-ft-museum
somnathsingh31
"2024-05-03T04:15:26Z"
4
0
peft
[ "peft", "tensorboard", "safetensors", "trl", "sft", "generated_from_trainer", "base_model:llava-hf/llava-1.5-7b-hf", "base_model:adapter:llava-hf/llava-1.5-7b-hf", "region:us" ]
null
"2024-04-30T12:09:56Z"
--- library_name: peft tags: - trl - sft - generated_from_trainer base_model: llava-hf/llava-1.5-7b-hf model-index: - name: llava-1.5-7b-hf-ft-museum-updated results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # llava-1.5-7b-hf-ft-museum-updated This model is a fine-tuned version of [llava-hf/llava-1.5-7b-hf](https://huggingface.co/llava-hf/llava-1.5-7b-hf) on the [Metropolitan Museum of Art](https://www.metmuseum.org/art/collection) dataset. ## Model description This project is designed to enrich museum experiences by enabling users to explore and comprehend museum artifacts through image-based queries. ## Intended uses & limitations **Intended uses:** 1. Enhance museum visitor experiences through interactive exploration via image-based queries. 2. Aid researchers and curators in retrieving information from museum artifacts. 3. Democratize access to museum knowledge by catering to a broader audience. **Limitations:** The model is fine-tuned on a subset of data from the Met's collection that pertains to Indian heritage. ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1.4e-05 - train_batch_size: 2 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 - mixed_precision_training: Native AMP ### Training results ### Framework versions - PEFT 0.10.0 - Transformers 4.40.1 - Pytorch 2.1.2 - Datasets 2.18.0 - Tokenizers 0.19.1
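The repo above ships only adapter weights; a minimal loading sketch, assuming standard PEFT adapter usage on the stated base model:

```python
# Hedged sketch: attach the PEFT adapter to the llava-1.5-7b-hf base model.
import torch
from peft import PeftModel
from transformers import AutoProcessor, LlavaForConditionalGeneration

base = LlavaForConditionalGeneration.from_pretrained(
    "llava-hf/llava-1.5-7b-hf", torch_dtype=torch.float16, device_map="auto"
)
model = PeftModel.from_pretrained(base, "somnathsingh31/llava-1.5-7b-hf-ft-museum")
processor = AutoProcessor.from_pretrained("llava-hf/llava-1.5-7b-hf")
```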
facebook/dpr-ctx_encoder-single-nq-base
facebook
"2022-12-21T15:16:53Z"
77,578
24
transformers
[ "transformers", "pytorch", "tf", "dpr", "en", "dataset:nq_open", "arxiv:2004.04906", "arxiv:1702.08734", "arxiv:1910.09700", "license:cc-by-nc-4.0", "region:us" ]
null
"2022-03-02T23:29:05Z"
--- language: en license: cc-by-nc-4.0 tags: - dpr datasets: - nq_open inference: false --- # `dpr-ctx_encoder-single-nq-base` ## Table of Contents - [Model Details](#model-details) - [How To Get Started With the Model](#how-to-get-started-with-the-model) - [Uses](#uses) - [Risks, Limitations and Biases](#risks-limitations-and-biases) - [Training](#training) - [Evaluation](#evaluation-results) - [Environmental Impact](#environmental-impact) - [Technical Specifications](#technical-specifications) - [Citation Information](#citation-information) - [Model Card Authors](#model-card-authors) ## Model Details **Model Description:** [Dense Passage Retrieval (DPR)](https://github.com/facebookresearch/DPR) is a set of tools and models for state-of-the-art open-domain Q&A research. `dpr-ctx_encoder-single-nq-base` is the Context Encoder trained using the [Natural Questions (NQ) dataset](https://huggingface.co/datasets/nq_open) ([Lee et al., 2019](https://aclanthology.org/P19-1612/); [Kwiatkowski et al., 2019](https://aclanthology.org/Q19-1026/)). - **Developed by:** See [GitHub repo](https://github.com/facebookresearch/DPR) for model developers - **Model Type:** BERT-based encoder - **Language(s):** English - **License:** [CC-BY-NC-4.0](https://github.com/facebookresearch/DPR/blob/main/LICENSE), also see [Code of Conduct](https://github.com/facebookresearch/DPR/blob/main/CODE_OF_CONDUCT.md) - **Related Models:** - [`dpr-question-encoder-single-nq-base`](https://huggingface.co/facebook/dpr-question_encoder-single-nq-base) - [`dpr-reader-single-nq-base`](https://huggingface.co/facebook/dpr-reader-single-nq-base) - [`dpr-ctx_encoder-multiset-base`](https://huggingface.co/facebook/dpr-ctx_encoder-multiset-base) - [`dpr-question_encoder-multiset-base`](https://huggingface.co/facebook/dpr-question_encoder-multiset-base) - [`dpr-reader-multiset-base`](https://huggingface.co/facebook/dpr-reader-multiset-base) - **Resources for more information:** - [Research Paper](https://arxiv.org/abs/2004.04906) - [GitHub Repo](https://github.com/facebookresearch/DPR) - [Hugging Face DPR docs](https://huggingface.co/docs/transformers/main/en/model_doc/dpr) - [BERT Base Uncased Model Card](https://huggingface.co/bert-base-uncased) ## How to Get Started with the Model Use the code below to get started with the model. ```python >>> from transformers import DPRContextEncoder, DPRContextEncoderTokenizer >>> tokenizer = DPRContextEncoderTokenizer.from_pretrained("facebook/dpr-ctx_encoder-single-nq-base") >>> model = DPRContextEncoder.from_pretrained("facebook/dpr-ctx_encoder-single-nq-base") >>> input_ids = tokenizer("Hello, is my dog cute ?", return_tensors="pt")["input_ids"] >>> embeddings = model(input_ids).pooler_output ``` ## Uses #### Direct Use `dpr-ctx_encoder-single-nq-base`, [`dpr-question-encoder-single-nq-base`](https://huggingface.co/facebook/dpr-question_encoder-single-nq-base), and [`dpr-reader-single-nq-base`](https://huggingface.co/facebook/dpr-reader-single-nq-base) can be used for the task of open-domain question answering. #### Misuse and Out-of-scope Use The model should not be used to intentionally create hostile or alienating environments for people. In addition, the set of DPR models was not trained to be factual or true representations of people or events, and therefore using the models to generate such content is out-of-scope for the abilities of this model.
## Risks, Limitations and Biases **CONTENT WARNING: Readers should be aware this section may contain content that is disturbing, offensive, and can propagate historical and current stereotypes.** Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)). Predictions generated by the model can include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups. ## Training #### Training Data This model was trained using the [Natural Questions (NQ) dataset](https://huggingface.co/datasets/nq_open) ([Lee et al., 2019](https://aclanthology.org/P19-1612/); [Kwiatkowski et al., 2019](https://aclanthology.org/Q19-1026/)). The model authors write that: > [The dataset] was designed for end-to-end question answering. The questions were mined from real Google search queries and the answers were spans in Wikipedia articles identified by annotators. #### Training Procedure The training procedure is described in the [associated paper](https://arxiv.org/pdf/2004.04906.pdf): > Given a collection of M text passages, the goal of our dense passage retriever (DPR) is to index all the passages in a low-dimensional and continuous space, such that it can retrieve efficiently the top k passages relevant to the input question for the reader at run-time. > Our dense passage retriever (DPR) uses a dense encoder EP(·) which maps any text passage to a d-dimensional real-valued vectors and builds an index for all the M passages that we will use for retrieval. At run-time, DPR applies a different encoder EQ(·) that maps the input question to a d-dimensional vector, and retrieves k passages of which vectors are the closest to the question vector. The authors report that for encoders, they used two independent BERT ([Devlin et al., 2019](https://aclanthology.org/N19-1423/)) networks (base, un-cased) and use FAISS ([Johnson et al., 2017](https://arxiv.org/abs/1702.08734)) during inference time to encode and index passages. See the paper for further details on training, including encoders, inference, positive and negative passages, and in-batch negatives. ## Evaluation The following evaluation information is extracted from the [associated paper](https://arxiv.org/pdf/2004.04906.pdf). #### Testing Data, Factors and Metrics The model developers report the performance of the model on five QA datasets, using the top-k accuracy (k ∈ {20, 100}). The datasets were [NQ](https://huggingface.co/datasets/nq_open), [TriviaQA](https://huggingface.co/datasets/trivia_qa), [WebQuestions (WQ)](https://huggingface.co/datasets/web_questions), [CuratedTREC (TREC)](https://huggingface.co/datasets/trec), and [SQuAD v1.1](https://huggingface.co/datasets/squad). #### Results | | Top 20 | | | | | Top 100| | | | | |:----:|:------:|:---------:|:--:|:----:|:-----:|:------:|:---------:|:--:|:----:|:-----:| | | NQ | TriviaQA | WQ | TREC | SQuAD | NQ | TriviaQA | WQ | TREC | SQuAD | | | 78.4 | 79.4 |73.2| 79.8 | 63.2 | 85.4 | 85.0 |81.4| 89.1 | 77.2 | ## Environmental Impact Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). We present the hardware type based on the [associated paper](https://arxiv.org/abs/2004.04906).
- **Hardware Type:** 8 32GB GPUs - **Hours used:** Unknown - **Cloud Provider:** Unknown - **Compute Region:** Unknown - **Carbon Emitted:** Unknown ## Technical Specifications See the [associated paper](https://arxiv.org/abs/2004.04906) for details on the modeling architecture, objective, compute infrastructure, and training details. ## Citation Information ```bibtex @inproceedings{karpukhin-etal-2020-dense, title = "Dense Passage Retrieval for Open-Domain Question Answering", author = "Karpukhin, Vladimir and Oguz, Barlas and Min, Sewon and Lewis, Patrick and Wu, Ledell and Edunov, Sergey and Chen, Danqi and Yih, Wen-tau", booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)", month = nov, year = "2020", address = "Online", publisher = "Association for Computational Linguistics", url = "https://www.aclweb.org/anthology/2020.emnlp-main.550", doi = "10.18653/v1/2020.emnlp-main.550", pages = "6769--6781", } ``` ## Model Card Authors This model card was written by the team at Hugging Face.
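To complement the context-encoder snippet in the card above, here is a sketch of the retrieval scoring the paper describes, pairing this model with its related question encoder (the example sentences are illustrative):

```python
# Hedged sketch: score a passage against a question via dot product, as in DPR.
import torch
from transformers import (
    DPRContextEncoder, DPRContextEncoderTokenizer,
    DPRQuestionEncoder, DPRQuestionEncoderTokenizer,
)

ctx_tok = DPRContextEncoderTokenizer.from_pretrained("facebook/dpr-ctx_encoder-single-nq-base")
ctx_enc = DPRContextEncoder.from_pretrained("facebook/dpr-ctx_encoder-single-nq-base")
q_tok = DPRQuestionEncoderTokenizer.from_pretrained("facebook/dpr-question_encoder-single-nq-base")
q_enc = DPRQuestionEncoder.from_pretrained("facebook/dpr-question_encoder-single-nq-base")

passage_emb = ctx_enc(**ctx_tok("Paris is the capital of France.", return_tensors="pt")).pooler_output
question_emb = q_enc(**q_tok("What is the capital of France?", return_tensors="pt")).pooler_output
print(torch.matmul(question_emb, passage_emb.T))  # higher score = more relevant passage
```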
TheSkullery/BA-Zephyria-39b
TheSkullery
"2024-09-18T08:55:37Z"
6
2
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "mergekit", "merge", "conversational", "base_model:unsloth/Mistral-Small-Instruct-2409", "base_model:finetune:unsloth/Mistral-Small-Instruct-2409", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
"2024-09-18T08:26:26Z"
--- base_model: - unsloth/Mistral-Small-Instruct-2409 library_name: transformers tags: - mergekit - merge --- <!DOCTYPE html> <html lang="en"> <head> <meta charset="UTF-8"> <meta name="viewport" content="width=device-width, initial-scale=1.0"> <title>BA-Zephyria-39b Data Card</title> <link href="https://fonts.googleapis.com/css2?family=Quicksand:wght@400;500;600&display=swap" rel="stylesheet"> <style> body, html { height: 100%; margin: 0; padding: 0; font-family: 'Quicksand', sans-serif; background: linear-gradient(135deg, #0a1128 0%, #1c2541 100%); color: #e0e1dd; font-size: 16px; } .container { width: 100%; height: 100%; padding: 20px; margin: 0; background-color: rgba(255, 255, 255, 0.05); border-radius: 12px; box-shadow: 0 4px 10px rgba(0, 0, 0, 0.3); backdrop-filter: blur(10px); border: 1px solid rgba(255, 255, 255, 0.1); } .header h1 { font-size: 28px; color: #4cc9f0; margin: 0 0 20px 0; text-shadow: 2px 2px 4px rgba(0, 0, 0, 0.3); } .update-section h2 { font-size: 24px; color: #7209b7; } .update-section p { font-size: 16px; line-height: 1.6; color: #e0e1dd; } .info img { width: 100%; border-radius: 10px; margin-bottom: 15px; } a { color: #4cc9f0; text-decoration: none; } a:hover { color: #f72585; } .button { display: inline-block; background-color: #3a0ca3; color: #e0e1dd; padding: 10px 20px; border-radius: 5px; cursor: pointer; text-decoration: none; } .button:hover { background-color: #7209b7; } pre { background-color: #1c2541; padding: 10px; border-radius: 5px; overflow-x: auto; } code { font-family: 'Courier New', monospace; color: #e0e1dd; } </style> </head> <body> <div class="container"> <div class="header"> <h1>BA-Zephyria-39b [EXPERIMENTAL]</h1> </div> <div class="info"> <img src="https://cdn-uploads.huggingface.co/production/uploads/64545af5ec40bbbd01242ca6/6W3orrbf8A68l-3p_JxN1.png"> <h2>Model Information</h2> <p><strong>Base Model:</strong> unsloth/Mistral-Small-Instruct-2409</p> <p><strong>Strategy:</strong> Balanced Approach</p> <p><strong>Total Layers:</strong> 55</p> <p><strong>Duplication Start:</strong> Layer 19 (34.5% of model)</p> <p><strong>Duplicated Layers:</strong> 23 (41.8% of model)</p> <p><strong>Unique Final Layers:</strong> 14 (25.5% of model)</p> <h2>Model Characteristics</h2> <ul> <li>Models down_proj and o_proj layers have been nulled and will require healing</li> <li>Combines benefits of early and mid duplication strategies</li> <li>Balanced between unique initial layers, duplicated middle layers, and unique final layers</li> <li>Versatile approach suitable for a wide range of tasks</li> <li>Provides substantial unique layers at the end for task-specific adaptations</li> </ul> <h2>Configuration Visualization</h2> <pre><code> [ Unique ][ Duplicated ][ Unique ] 0 ----------- 18 19 ------------ 41 42 ---------- 54 34.5% 41.8% 23.7% </code></pre> </div> </div> </body> </html>
ryan0712/llama-3-8b-slow-DUS-layer-SLERP
ryan0712
"2024-05-23T11:06:36Z"
145
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "merge", "mergekit", "lazymergekit", "NousResearch/Meta-Llama-3-8B", "base_model:NousResearch/Meta-Llama-3-8B", "base_model:finetune:NousResearch/Meta-Llama-3-8B", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
"2024-05-23T11:05:52Z"
--- tags: - merge - mergekit - lazymergekit - NousResearch/Meta-Llama-3-8B base_model: - NousResearch/Meta-Llama-3-8B - NousResearch/Meta-Llama-3-8B --- # llama-3-8b-slow-DUS-layer-SLERP llama-3-8b-slow-DUS-layer-SLERP is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing): * [NousResearch/Meta-Llama-3-8B](https://huggingface.co/NousResearch/Meta-Llama-3-8B) * [NousResearch/Meta-Llama-3-8B](https://huggingface.co/NousResearch/Meta-Llama-3-8B) ## 🧩 Configuration ```yaml slices: - sources: - model: NousResearch/Meta-Llama-3-8B layer_range: [5, 6] - model: NousResearch/Meta-Llama-3-8B layer_range: [20, 21] merge_method: slerp base_model: NousResearch/Meta-Llama-3-8B parameters: t: - filter: self_attn value: [0, 0.5, 0.3, 0.7, 1] - filter: mlp value: [1, 0.5, 0.7, 0.3, 0] - value: 0.5 dtype: bfloat16 ``` ## 💻 Usage ```python !pip install -qU transformers accelerate from transformers import AutoTokenizer import transformers import torch model = "ryan0712/llama-3-8b-slow-DUS-layer-SLERP" messages = [{"role": "user", "content": "What is a large language model?"}] tokenizer = AutoTokenizer.from_pretrained(model) prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) pipeline = transformers.pipeline( "text-generation", model=model, torch_dtype=torch.float16, device_map="auto", ) outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95) print(outputs[0]["generated_text"]) ```
mradermacher/hammerllm-1.4b-522k-GGUF
mradermacher
"2025-01-22T04:20:46Z"
50
0
transformers
[ "transformers", "gguf", "en", "base_model:DataHammer/hammerllm-1.4b-522k", "base_model:quantized:DataHammer/hammerllm-1.4b-522k", "license:mit", "endpoints_compatible", "region:us" ]
null
"2024-08-01T06:08:29Z"
--- base_model: DataHammer/hammerllm-1.4b-522k language: - en library_name: transformers license: mit quantized_by: mradermacher --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/DataHammer/hammerllm-1.4b-522k <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/hammerllm-1.4b-522k-GGUF/resolve/main/hammerllm-1.4b-522k.Q2_K.gguf) | Q2_K | 0.7 | | | [GGUF](https://huggingface.co/mradermacher/hammerllm-1.4b-522k-GGUF/resolve/main/hammerllm-1.4b-522k.IQ3_XS.gguf) | IQ3_XS | 0.8 | | | [GGUF](https://huggingface.co/mradermacher/hammerllm-1.4b-522k-GGUF/resolve/main/hammerllm-1.4b-522k.Q3_K_S.gguf) | Q3_K_S | 0.8 | | | [GGUF](https://huggingface.co/mradermacher/hammerllm-1.4b-522k-GGUF/resolve/main/hammerllm-1.4b-522k.IQ3_S.gguf) | IQ3_S | 0.8 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/hammerllm-1.4b-522k-GGUF/resolve/main/hammerllm-1.4b-522k.IQ3_M.gguf) | IQ3_M | 0.8 | | | [GGUF](https://huggingface.co/mradermacher/hammerllm-1.4b-522k-GGUF/resolve/main/hammerllm-1.4b-522k.Q3_K_M.gguf) | Q3_K_M | 0.8 | lower quality | | [GGUF](https://huggingface.co/mradermacher/hammerllm-1.4b-522k-GGUF/resolve/main/hammerllm-1.4b-522k.Q3_K_L.gguf) | Q3_K_L | 0.9 | | | [GGUF](https://huggingface.co/mradermacher/hammerllm-1.4b-522k-GGUF/resolve/main/hammerllm-1.4b-522k.IQ4_XS.gguf) | IQ4_XS | 0.9 | | | [GGUF](https://huggingface.co/mradermacher/hammerllm-1.4b-522k-GGUF/resolve/main/hammerllm-1.4b-522k.Q4_K_S.gguf) | Q4_K_S | 1.0 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/hammerllm-1.4b-522k-GGUF/resolve/main/hammerllm-1.4b-522k.Q4_K_M.gguf) | Q4_K_M | 1.0 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/hammerllm-1.4b-522k-GGUF/resolve/main/hammerllm-1.4b-522k.Q5_K_S.gguf) | Q5_K_S | 1.1 | | | [GGUF](https://huggingface.co/mradermacher/hammerllm-1.4b-522k-GGUF/resolve/main/hammerllm-1.4b-522k.Q5_K_M.gguf) | Q5_K_M | 1.1 | | | [GGUF](https://huggingface.co/mradermacher/hammerllm-1.4b-522k-GGUF/resolve/main/hammerllm-1.4b-522k.Q6_K.gguf) | Q6_K | 1.3 | very good quality | | [GGUF](https://huggingface.co/mradermacher/hammerllm-1.4b-522k-GGUF/resolve/main/hammerllm-1.4b-522k.Q8_0.gguf) | Q8_0 | 1.6 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/hammerllm-1.4b-522k-GGUF/resolve/main/hammerllm-1.4b-522k.f16.gguf) | f16 | 2.9 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have 
and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
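For readers new to GGUF, a minimal inference sketch with `llama-cpp-python` (not part of the card), assuming one of the quant files listed above has been downloaded locally:

```python
# Hedged sketch: run a downloaded GGUF quant with llama-cpp-python.
from llama_cpp import Llama

llm = Llama(model_path="hammerllm-1.4b-522k.Q4_K_M.gguf", n_ctx=2048)
out = llm("Once upon a time,", max_tokens=64)
print(out["choices"][0]["text"])
```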
WIS/dqn-SpaceInvadersNoFrameskip-v4
WIS
"2023-10-04T06:52:31Z"
0
0
stable-baselines3
[ "stable-baselines3", "SpaceInvadersNoFrameskip-v4", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
"2023-10-04T06:51:51Z"
--- library_name: stable-baselines3 tags: - SpaceInvadersNoFrameskip-v4 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: DQN results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: SpaceInvadersNoFrameskip-v4 type: SpaceInvadersNoFrameskip-v4 metrics: - type: mean_reward value: 678.50 +/- 227.26 name: mean_reward verified: false --- # **DQN** Agent playing **SpaceInvadersNoFrameskip-v4** This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3) and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo). The RL Zoo is a training framework for Stable Baselines3 reinforcement learning agents, with hyperparameter optimization and pre-trained agents included. ## Usage (with SB3 RL Zoo) RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/> SB3: https://github.com/DLR-RM/stable-baselines3<br/> SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib Install the RL Zoo (with SB3 and SB3-Contrib): ```bash pip install rl_zoo3 ``` ``` # Download model and save it into the logs/ folder python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga WIS -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do: ``` python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga WIS -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` ## Training (with the RL Zoo) ``` python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ # Upload the model and generate video (when possible) python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga WIS ``` ## Hyperparameters ```python OrderedDict([('batch_size', 32), ('buffer_size', 100000), ('env_wrapper', ['stable_baselines3.common.atari_wrappers.AtariWrapper']), ('exploration_final_eps', 0.01), ('exploration_fraction', 0.1), ('frame_stack', 4), ('gradient_steps', 1), ('learning_rate', 0.0001), ('learning_starts', 100000), ('n_timesteps', 1000000.0), ('optimize_memory_usage', False), ('policy', 'CnnPolicy'), ('target_update_interval', 1000), ('train_freq', 4), ('normalize', False)]) ``` # Environment Arguments ```python {'render_mode': 'rgb_array'} ```
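Alongside the RL Zoo CLI shown above, a Python-level loading sketch using the `huggingface_sb3` helper; the checkpoint filename is an assumption based on the RL Zoo's usual `dqn-<env>.zip` naming:

```python
# Hedged sketch: load the checkpoint directly with huggingface_sb3 + SB3.
from huggingface_sb3 import load_from_hub
from stable_baselines3 import DQN

checkpoint = load_from_hub(
    repo_id="WIS/dqn-SpaceInvadersNoFrameskip-v4",
    filename="dqn-SpaceInvadersNoFrameskip-v4.zip",  # assumed RL Zoo naming
)
model = DQN.load(checkpoint)
```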
jim33282007/test3
jim33282007
"2024-03-19T09:59:37Z"
104
0
transformers
[ "transformers", "safetensors", "bert", "feature-extraction", "arxiv:1910.09700", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
feature-extraction
"2024-03-19T09:32:44Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
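Since the card's "How to Get Started" section is empty, a minimal feature-extraction sketch, assuming only what the repo tags state (a BERT checkpoint used for feature extraction):

```python
# Hedged sketch: extract token embeddings from the checkpoint.
import torch
from transformers import AutoModel, AutoTokenizer

tok = AutoTokenizer.from_pretrained("jim33282007/test3")
model = AutoModel.from_pretrained("jim33282007/test3")

inputs = tok("Hello world", return_tensors="pt")
with torch.no_grad():
    hidden = model(**inputs).last_hidden_state  # (batch, seq_len, hidden_size)
print(hidden.shape)
```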
rach405/test_trainer3
rach405
"2022-11-23T12:34:47Z"
104
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
"2022-11-21T12:22:38Z"
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: test_trainer3 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # test_trainer3 This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 125 | 1.8785 | 0.396 | ### Framework versions - Transformers 4.17.0 - Pytorch 1.11.0+cpu - Tokenizers 0.11.6
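The card above omits usage; a minimal sketch, assuming the fine-tuned checkpoint works with the standard text-classification pipeline (its tags suggest it does):

```python
# Hedged sketch: classify text with the fine-tuned BERT checkpoint.
from transformers import pipeline

clf = pipeline("text-classification", model="rach405/test_trainer3")
print(clf("An example sentence to classify."))
```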
whiteapple8222/02c3b443-64cf-429b-ac48-b958b88afbd7
whiteapple8222
"2025-02-08T02:10:44Z"
6
0
peft
[ "peft", "safetensors", "llama", "axolotl", "generated_from_trainer", "base_model:Casual-Autopsy/L3-Umbral-Mind-RP-v3.0-8B", "base_model:adapter:Casual-Autopsy/L3-Umbral-Mind-RP-v3.0-8B", "region:us" ]
null
"2025-02-07T23:32:02Z"
--- library_name: peft base_model: Casual-Autopsy/L3-Umbral-Mind-RP-v3.0-8B tags: - axolotl - generated_from_trainer model-index: - name: 02c3b443-64cf-429b-ac48-b958b88afbd7 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: Casual-Autopsy/L3-Umbral-Mind-RP-v3.0-8B bf16: auto chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - 3e78dfc9899e3252_train_data.json ds_type: json format: custom path: /workspace/input_data/3e78dfc9899e3252_train_data.json type: field_input: hints_text field_instruction: problem_statement field_output: patch format: '{instruction} {input}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null device_map: auto do_eval: true early_stopping_patience: null eval_max_new_tokens: 128 eval_steps: null eval_table_size: null evals_per_epoch: null flash_attention: true fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 4 gradient_checkpointing: true group_by_length: true hub_model_id: whiteapple8222/02c3b443-64cf-429b-ac48-b958b88afbd7 hub_repo: null hub_strategy: checkpoint hub_token: null learning_rate: 0.0001 load_in_4bit: false load_in_8bit: false local_rank: null logging_steps: 1 lora_alpha: 16 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 8 lora_target_linear: true lr_scheduler: cosine max_grad_norm: 1.0 max_steps: 1545 micro_batch_size: 4 mlflow_experiment_name: /tmp/3e78dfc9899e3252_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false saves_per_epoch: 4 sequence_len: 1024 special_tokens: pad_token: <|eot_id|> strict: false tf32: true tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: c0662766-062f-4298-8647-89358ab88d97 wandb_project: Gradients-On-Demand wandb_run: your_name wandb_runid: c0662766-062f-4298-8647-89358ab88d97 warmup_steps: 5 weight_decay: 0.0 xformers_attention: null ``` </details><br> # 02c3b443-64cf-429b-ac48-b958b88afbd7 This model is a fine-tuned version of [Casual-Autopsy/L3-Umbral-Mind-RP-v3.0-8B](https://huggingface.co/Casual-Autopsy/L3-Umbral-Mind-RP-v3.0-8B) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 0.8819 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 16 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 5 - training_steps: 1267 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 0.9232 | 0.9998 | 1266 | 0.8855 | | 1.7258 | 1.0006 | 1267 | 0.8819 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
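The repo above stores a LoRA adapter, so inference requires attaching it to the recorded base model; a minimal sketch using PEFT's auto class, which reads the base model ID from the adapter config:

```python
# Hedged sketch: load base model + adapter in one call via AutoPeftModelForCausalLM.
import torch
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

model = AutoPeftModelForCausalLM.from_pretrained(
    "whiteapple8222/02c3b443-64cf-429b-ac48-b958b88afbd7",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("Casual-Autopsy/L3-Umbral-Mind-RP-v3.0-8B")
```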
jackhhhh/q-FrozenLake-v1-4x4-noSlippery
jackhhhh
"2023-03-18T09:09:04Z"
0
0
null
[ "FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
"2023-03-18T09:09:02Z"
--- tags: - FrozenLake-v1-4x4-no_slippery - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-FrozenLake-v1-4x4-noSlippery results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: FrozenLake-v1-4x4-no_slippery type: FrozenLake-v1-4x4-no_slippery metrics: - type: mean_reward value: 1.00 +/- 0.00 name: mean_reward verified: false --- # **Q-Learning** Agent playing **FrozenLake-v1** This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**. ## Usage ```python import gym model = load_from_hub(repo_id="jackhhhh/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
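The snippet above assumes a `load_from_hub` helper (as used in the Hugging Face Deep RL course); a minimal sketch of such a helper, built on `hf_hub_download`:

```python
# Hedged sketch: fetch and unpickle the saved Q-table dict used by the card's snippet.
import pickle
from huggingface_hub import hf_hub_download

def load_from_hub(repo_id: str, filename: str) -> dict:
    path = hf_hub_download(repo_id=repo_id, filename=filename)
    with open(path, "rb") as f:
        return pickle.load(f)
```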
proxectonos/Nos_D2T-gl
proxectonos
"2024-04-10T09:00:31Z"
106
0
transformers
[ "transformers", "pytorch", "t5", "text2text-generation", "gl", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
"2023-02-23T09:13:42Z"
--- license: mit language: - gl --- <div align="justify"> **English text [here](https://huggingface.co/proxectonos/Nos_D2T-gl/edit/main/README_EN.md)** **Model Description** Fine-tuning of a text-to-text model to perform the data-to-text task in Galician. Starting from the multilingual [MT5-base](https://huggingface.co/google/mt5-base) model pre-trained for text-to-text tasks, fine-tuning is applied to obtain a model that performs data-to-text generation in Galician. **How to generate texts from data tables** + Open a bash terminal + Install [Python 3.10](https://www.python.org/downloads/release/python-3104/) + To generate a text from one of the 568 test cases in our dataset, use the following command: ```bash python generate_text.py -i <data_id> -o <output_path> ``` + The **-i** argument indicates the ID of the data table from which to generate a text. IDs from 0 to 569 are valid, as these are the available test cases the model was not trained on. + The **-o** argument indicates the path where a file with the generated text will be created. If it is not provided, the file is created in the current directory. **Training** To train the model on the data-to-text task, we used the first public data-to-text dataset for Galician, released by the Nós Project and available in the following repository: https://zenodo.org/record/7661650#.ZA8aZ3bMJro. The dataset consists of 3,302 records of tabular data from real weather forecasts together with textual descriptions written by experts in Galician. Because the base model takes textual input and our input data are in table format, we applied a "linearization" process to the structured data. This process transforms the data tables into labeled textual data as follows: <img src="https://huggingface.co/proxectonos/Nos_D2T-gl/resolve/main/exemplo-linearizacion.png" width="75%" style="margin:auto;"> In this way, the text-to-text model can interpret the "linearized" data tables and their associated texts, and thus learn to generate texts that describe the structured data provided as input. The following hyperparameters were used to fine-tune the model: + Batch size: 8 + Optimizer: Adam + Learning rate: 1e-5 + Training epochs: 1000 **Model License** MIT License Copyright (c) 2023 Proxecto Nós Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
**Funding** This research was funded by the project "Nós: Galician in the society and economy of artificial intelligence", the result of an agreement between the Xunta de Galicia and the University of Santiago de Compostela, and by grants ED431G2019/04 and ED431C2022/19 from the Consellería de Educación, Universidade e Formación Profesional, and by the European Regional Development Fund (ERDF/FEDER program). </div> **Citing this work** If you use this model in your work, please cite it as follows: González Corbelle, Javier; Bugarín Diz, Alberto. 2023 Nos_D2T-gl. URL: https://huggingface.co/proxectonos/Nos_D2T-gl/
ariakhosh/adapter4
ariakhosh
"2024-05-13T16:12:57Z"
0
0
null
[ "safetensors", "arxiv:2305.14314", "arxiv:2302.13971", "region:us" ]
null
"2024-05-13T16:11:47Z"
# QLoRA Instruction Tuned Models | [Paper](https://arxiv.org/abs/2305.14314) | [Code](https://github.com/artidoro/qlora) | [Demo](https://huggingface.co/spaces/uwnlp/guanaco-playground-tgi) | **The `QLoRA Instruction Tuned Models` are open-source models obtained through 4-bit QLoRA tuning of LLaMA base models on various instruction tuning datasets. They are available in 7B, 13B, 33B, and 65B parameter sizes.** **Note: The best performing chatbot models are named [Guanaco](https://huggingface.co/datasets/timdettmers/openassistant-guanaco) and finetuned on OASST1. This model card is for the other models finetuned on other instruction tuning datasets.** ⚠️ These models are purely intended for research purposes and could produce problematic outputs. ## What are QLoRA Instruction Tuned Models and why use them? - **Strong performance on MMLU** following the QLoRA instruction tuning. - **Replicable and efficient instruction tuning procedure** that can be extended to new use cases. QLoRA training scripts are available in the [QLoRA repo](https://github.com/artidoro/qlora). - **Rigorous comparison to 16-bit methods** (both 16-bit full-finetuning and LoRA) in [our paper](https://arxiv.org/abs/2305.14314) demonstrates the effectiveness of 4-bit QLoRA finetuning. - **Lightweight** checkpoints which only contain adapter weights. ## License and Intended Use QLoRA Instruction Tuned adapter weights are available under the Apache 2 license. Note that use of these adapter weights requires access to the LLaMA model weights, and they should therefore be used according to the LLaMA license. ## Usage Here is an example of how you would load Flan v2 7B in 4-bits: ```python import torch from peft import PeftModel from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig model_name = "huggyllama/llama-7b" adapters_name = 'timdettmers/qlora-flan-7b' model = AutoModelForCausalLM.from_pretrained( model_name, load_in_4bit=True, torch_dtype=torch.bfloat16, device_map="auto", max_memory= {i: '24000MB' for i in range(torch.cuda.device_count())}, quantization_config=BitsAndBytesConfig( load_in_4bit=True, bnb_4bit_compute_dtype=torch.bfloat16, bnb_4bit_use_double_quant=True, bnb_4bit_quant_type='nf4' ), ) model = PeftModel.from_pretrained(model, adapters_name) tokenizer = AutoTokenizer.from_pretrained(model_name) ``` Inference can then be performed as usual with HF models as follows: ```python prompt = "Introduce yourself" formatted_prompt = ( f"A chat between a curious human and an artificial intelligence assistant." f"The assistant gives helpful, detailed, and polite answers to the user's questions.\n" f"### Human: {prompt} ### Assistant:" ) inputs = tokenizer(formatted_prompt, return_tensors="pt").to("cuda:0") outputs = model.generate(inputs=inputs.input_ids, max_new_tokens=20) print(tokenizer.decode(outputs[0], skip_special_tokens=True)) ``` Expected output similar to the following: ``` A chat between a curious human and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions. ### Human: Introduce yourself ### Assistant: I am an artificial intelligence assistant. I am here to help you with any questions you may have. ``` ## Current Inference Limitations Currently, 4-bit inference is slow. We recommend loading in 16 bits if inference speed is a concern. We are actively working on releasing efficient 4-bit inference kernels.
Below is how you would load the model in 16 bits: ```python model_name = "huggyllama/llama-7b" adapters_name = 'timdettmers/qlora-flan-7b' model = AutoModelForCausalLM.from_pretrained( model_name, torch_dtype=torch.bfloat16, device_map="auto", max_memory= {i: '24000MB' for i in range(torch.cuda.device_count())}, ) model = PeftModel.from_pretrained(model, adapters_name) tokenizer = AutoTokenizer.from_pretrained(model_name) ``` ## Model Card **Architecture**: The models released here are LoRA adapters to be used on top of LLaMA models. They are added to all layers. For all model sizes, we use $r=64$. **Base Model**: These models use LLaMA as base model with sizes 7B, 13B, 33B, 65B. LLaMA is a causal language model pretrained on a large corpus of text. See [LLaMA paper](https://arxiv.org/abs/2302.13971) for more details. Note that these models can inherit biases and limitations of the base model. **Finetuning Data**: These models are finetuned on various instruction tuning datasets. The datasets used are: Alpaca, HH-RLHF, Unnatural Instr., Chip2, Longform, Self-Instruct, FLAN v2. **Languages**: The different datasets cover different languages. We direct to the various papers and resources describing the datasets for more details. Next, we describe Training and Evaluation details. ### Training QLoRA Instruction Tuned Models are the result of 4-bit QLoRA supervised finetuning on different instruction tuning datasets. All models use NormalFloat4 datatype for the base model and LoRA adapters on all linear layers with BFloat16 as computation datatype. We set LoRA $r=64$, $\alpha=16$. We also use Adam beta2 of 0.999, max grad norm of 0.3 and LoRA dropout of 0.1 for models up to 13B and 0.05 for 33B and 65B models. For the finetuning process, we use constant learning rate schedule and paged AdamW optimizer. ### Training hyperparameters | Parameters | Dataset | Batch size | LR | Steps | Source Length | Target Length | |------------|----------|------------|------|-------|---------------|---------------| | 7B | All | 16 | 2e-4 | 10000 | 384 | 128 | | 7B | OASST1 | 16 | 2e-4 | 1875 | - | 512 | | 7B | HH-RLHF | 16 | 2e-4 | 10000 | - | 768 | | 7B | Longform | 16 | 2e-4 | 4000 | 512 | 1024 | | 13B | All | 16 | 2e-4 | 10000 | 384 | 128 | | 13B | OASST1 | 16 | 2e-4 | 1875 | - | 512 | | 13B | HH-RLHF | 16 | 2e-4 | 10000 | - | 768 | | 13B | Longform | 16 | 2e-4 | 4000 | 512 | 1024 | | 33B | All | 32 | 1e-4 | 5000 | 384 | 128 | | 33B | OASST1 | 16 | 1e-4 | 1875 | - | 512 | | 33B | HH-RLHF | 32 | 1e-4 | 5000 | - | 768 | | 33B | Longform | 32 | 1e-4 | 2343 | 512 | 1024 | | 65B | All | 64 | 1e-4 | 2500 | 384 | 128 | | 65B | OASST1 | 16 | 1e-4 | 1875 | - | 512 | | 65B | HH-RLHF | 64 | 1e-4 | 2500 | - | 768 | | 65B | Longform | 32 | 1e-4 | 2343 | 512 | 1024 | ### Evaluation We use the MMLU benchmark to measure performance on a range of language understanding tasks. This is a multiple-choice benchmark covering 57 tasks including elementary mathematics, US history, computer science, law, and more. We report 5-shot test accuracy. 
Dataset | 7B | 13B | 33B | 65B ---|---|---|---|--- LLaMA no tuning | 35.1 | 46.9 | 57.8 | 63.4 Self-Instruct | 36.4 | 33.3 | 53.0 | 56.7 Longform | 32.1 | 43.2 | 56.6 | 59.7 Chip2 | 34.5 | 41.6 | 53.6 | 59.8 HH-RLHF | 34.9 | 44.6 | 55.8 | 60.1 Unnatural Instruct | 41.9 | 48.1 | 57.3 | 61.3 OASST1 (Guanaco) | 36.6 | 46.4 | 57.0 | 62.2 Alpaca | 38.8 | 47.8 | 57.3 | 62.5 FLAN v2 | 44.5 | 51.4 | 59.2 | 63.9 We evaluate the generative language capabilities through automated evaluations on the Vicuna benchmark. We report the score of the QLoRA Instruction Finetuned Models relative to the score obtained by ChatGPT. The rater in this case is GPT-4 which is tasked to assign a score out of 10 to both ChatGPT and the model outputs for each prompt. We report scores for models ranging from 7B to 65B and compare them to both academic and commercial baselines. | Model / Dataset | Params | Model bits | Memory | ChatGPT vs Sys | Sys vs ChatGPT | Mean | 95\% CI | |------------------|--------|------------|--------|----------------|----------------|------------------|---------| | GPT-4 | - | - | - | 119.4\% | 110.1\% | **114.5**\% | 2.6\% | | Bard | - | - | - | 93.2\% | 96.4\% | 94.8\% | 4.1\% | | Guanaco | 65B | 4-bit | 41 GB | 96.7\% | 101.9\% | **99.3**\% | 4.4\% | | Alpaca | 65B | 4-bit | 41 GB | 63.0\% | 77.9\% | 70.7\% | 4.3\% | | FLAN v2 | 65B | 4-bit | 41 GB | 37.0\% | 59.6\% | 48.4\% | 4.6\% | | Guanaco | 33B | 4-bit | 21 GB | 96.5\% | 99.2\% | **97.8**\% | 4.4\% | | Open Assistant | 33B | 16-bit | 66 GB | 73.4\% | 85.7\% | 78.1\% | 5.3\% | | Alpaca | 33B | 4-bit | 21 GB | 67.2\% | 79.7\% | 73.6\% | 4.2\% | | FLAN v2 | 33B | 4-bit | 21 GB | 26.3\% | 49.7\% | 38.0\% | 3.9\% | | Vicuna | 13B | 16-bit | 26 GB | 91.2\% | 98.7\% | **94.9**\% | 4.5\% | | Guanaco | 13B | 4-bit | 10 GB | 87.3\% | 93.4\% | 90.4\% | 5.2\% | | Alpaca | 13B | 4-bit | 10 GB | 63.8\% | 76.7\% | 69.4\% | 4.2\% | | HH-RLHF | 13B | 4-bit | 10 GB | 55.5\% | 69.1\% | 62.5\% | 4.7\% | | Unnatural Instr. | 13B | 4-bit | 10 GB | 50.6\% | 69.8\% | 60.5\% | 4.2\% | | Chip2 | 13B | 4-bit | 10 GB | 49.2\% | 69.3\% | 59.5\% | 4.7\% | | Longform | 13B | 4-bit | 10 GB | 44.9\% | 62.0\% | 53.6\% | 5.2\% | | Self-Instruct | 13B | 4-bit | 10 GB | 38.0\% | 60.5\% | 49.1\% | 4.6\% | | FLAN v2 | 13B | 4-bit | 10 GB | 32.4\% | 61.2\% | 47.0\% | 3.6\% | | Guanaco | 7B | 4-bit | 5 GB | 84.1\% | 89.8\% | **87.0**\% | 5.4\% | | Alpaca | 7B | 4-bit | 5 GB | 57.3\% | 71.2\% | 64.4\% | 5.0\% | | FLAN v2 | 7B | 4-bit | 5 GB | 33.3\% | 56.1\% | 44.8\% | 4.0\% | ## Citation ```bibtex @article{dettmers2023qlora, title={QLoRA: Efficient Finetuning of Quantized LLMs}, author={Dettmers, Tim and Pagnoni, Artidoro and Holtzman, Ari and Zettlemoyer, Luke}, journal={arXiv preprint arXiv:2305.14314}, year={2023} } ```
Jovie/GothicNiji
Jovie
"2025-03-16T09:00:56Z"
0
0
diffusers
[ "diffusers", "text-to-image", "stable-diffusion", "lora", "template:sd-lora", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "region:us" ]
text-to-image
"2025-03-16T09:00:12Z"
--- tags: - text-to-image - stable-diffusion - lora - diffusers - template:sd-lora base_model: black-forest-labs/FLUX.1-dev instance_prompt: anime widget: - text: >- "symmetry!! portrait of curvaceous alien in the style of horizon zero dawn, machine face, intricate, elegant, highly detailed, digital painting, artstation, concept art, smooth, sharp focus, illustration, art by artgerm and greg rutkowski and alphonse mucha output: url: images/example_rndh30ve1.png --- # GothicNiji model style <Gallery /> ## Model description ## Trigger words You should use `` to trigger the image generation. ## Download model Weights for this model are available in Safetensors format. [Download](/Jovie/GothicNiji/tree/main) them in the Files & versions tab.
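The card above gives no loading code; a minimal sketch, assuming standard diffusers LoRA loading on the stated FLUX.1-dev base model (the prompt uses the card's `anime` instance prompt):

```python
# Hedged sketch: apply the GothicNiji LoRA to the FLUX.1-dev base pipeline.
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained("black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16)
pipe.load_lora_weights("Jovie/GothicNiji")
pipe.to("cuda")

image = pipe("anime, gothic portrait, intricate, highly detailed", num_inference_steps=28).images[0]
image.save("gothic.png")
```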
michaelw37/sc34
michaelw37
"2024-04-18T19:06:41Z"
6
0
transformers
[ "transformers", "safetensors", "stablelm", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
"2024-04-18T19:05:07Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
kardosdrur/dfm-sentence-encoder-small-distil_zephyr
kardosdrur
"2023-12-13T13:49:39Z"
6
0
sentence-transformers
[ "sentence-transformers", "safetensors", "electra", "feature-extraction", "sentence-similarity", "transformers", "autotrain_compatible", "endpoints_compatible", "region:us" ]
sentence-similarity
"2023-12-12T10:50:59Z"
--- pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity - transformers --- # kardosdrur/dfm-sentence-encoder-small-distil_zephyr This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences & paragraphs to a 256-dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model is easy once you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python
from sentence_transformers import SentenceTransformer

sentences = ["This is an example sentence", "Each sentence is converted"]

model = SentenceTransformer('kardosdrur/dfm-sentence-encoder-small-distil_zephyr')
embeddings = model.encode(sentences)
print(embeddings)
``` ## Usage (HuggingFace Transformers) Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings. ```python
from transformers import AutoTokenizer, AutoModel
import torch

# Mean pooling: take the attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
    token_embeddings = model_output[0]  # First element of model_output contains all token embeddings
    input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)

# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']

# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('kardosdrur/dfm-sentence-encoder-small-distil_zephyr')
model = AutoModel.from_pretrained('kardosdrur/dfm-sentence-encoder-small-distil_zephyr')

# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')

# Compute token embeddings
with torch.no_grad():
    model_output = model(**encoded_input)

# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])

print("Sentence embeddings:")
print(sentence_embeddings)
``` ## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=kardosdrur/dfm-sentence-encoder-small-distil_zephyr) ## Training The model was trained with the parameters: **DataLoader**: `torch.utils.data.dataloader.DataLoader` of length 213 with parameters: ``` {'batch_size': 42, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'} ``` **Loss**: `sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters: ``` {'scale': 20.0, 'similarity_fct': 'cos_sim'} ``` Parameters of the fit()-Method: ``` { "epochs": 30, "evaluation_steps": 0, "evaluator": "dfm_sentence_trf.evaluation.task_evaluator.TaskListEvaluator", "max_grad_norm": 1, "optimizer_class": "<class 'torch.optim.adamw.AdamW'>", "optimizer_params": { "lr": 2e-05 }, "scheduler": "WarmupLinear", "steps_per_epoch": 600, "warmup_steps": 100, "weight_decay": 0.01 } ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: ElectraModel (1): Pooling({'word_embedding_dimension': 256, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False}) ) ``` ## Citing & Authors <!--- Describe where people can find more information -->
paarth-sachan/ppo-LunarLander-v2
paarth-sachan
"2023-08-15T12:55:45Z"
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
"2023-08-15T12:55:23Z"
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 254.22 +/- 26.38 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) A minimal loading sketch (the checkpoint filename below is an assumption; check the repository's file list): ```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# NOTE: the filename is an assumption; use the .zip actually stored in this repo.
checkpoint = load_from_hub("paarth-sachan/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
mradermacher/Gemma-2b-it-FT-Code-v2-GGUF
mradermacher
"2024-12-19T20:19:01Z"
25
0
transformers
[ "transformers", "gguf", "en", "base_model:Kukedlc/Gemma-2b-it-FT-Code-v2", "base_model:quantized:Kukedlc/Gemma-2b-it-FT-Code-v2", "endpoints_compatible", "region:us", "conversational" ]
null
"2024-12-19T20:06:02Z"
--- base_model: Kukedlc/Gemma-2b-it-FT-Code-v2 language: - en library_name: transformers quantized_by: mradermacher --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/Kukedlc/Gemma-2b-it-FT-Code-v2 <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Gemma-2b-it-FT-Code-v2-GGUF/resolve/main/Gemma-2b-it-FT-Code-v2.Q2_K.gguf) | Q2_K | 1.3 | | | [GGUF](https://huggingface.co/mradermacher/Gemma-2b-it-FT-Code-v2-GGUF/resolve/main/Gemma-2b-it-FT-Code-v2.Q3_K_S.gguf) | Q3_K_S | 1.4 | | | [GGUF](https://huggingface.co/mradermacher/Gemma-2b-it-FT-Code-v2-GGUF/resolve/main/Gemma-2b-it-FT-Code-v2.Q3_K_M.gguf) | Q3_K_M | 1.5 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Gemma-2b-it-FT-Code-v2-GGUF/resolve/main/Gemma-2b-it-FT-Code-v2.Q3_K_L.gguf) | Q3_K_L | 1.6 | | | [GGUF](https://huggingface.co/mradermacher/Gemma-2b-it-FT-Code-v2-GGUF/resolve/main/Gemma-2b-it-FT-Code-v2.IQ4_XS.gguf) | IQ4_XS | 1.6 | | | [GGUF](https://huggingface.co/mradermacher/Gemma-2b-it-FT-Code-v2-GGUF/resolve/main/Gemma-2b-it-FT-Code-v2.Q4_K_S.gguf) | Q4_K_S | 1.7 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Gemma-2b-it-FT-Code-v2-GGUF/resolve/main/Gemma-2b-it-FT-Code-v2.Q4_K_M.gguf) | Q4_K_M | 1.7 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Gemma-2b-it-FT-Code-v2-GGUF/resolve/main/Gemma-2b-it-FT-Code-v2.Q5_K_S.gguf) | Q5_K_S | 1.9 | | | [GGUF](https://huggingface.co/mradermacher/Gemma-2b-it-FT-Code-v2-GGUF/resolve/main/Gemma-2b-it-FT-Code-v2.Q5_K_M.gguf) | Q5_K_M | 1.9 | | | [GGUF](https://huggingface.co/mradermacher/Gemma-2b-it-FT-Code-v2-GGUF/resolve/main/Gemma-2b-it-FT-Code-v2.Q6_K.gguf) | Q6_K | 2.2 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Gemma-2b-it-FT-Code-v2-GGUF/resolve/main/Gemma-2b-it-FT-Code-v2.Q8_0.gguf) | Q8_0 | 2.8 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/Gemma-2b-it-FT-Code-v2-GGUF/resolve/main/Gemma-2b-it-FT-Code-v2.f16.gguf) | f16 | 5.1 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
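For a quick start with llama.cpp, a minimal sketch (assumes a recent llama.cpp build; the file name matches the Q4_K_M entry in the table above, and the prompt is illustrative):

```bash
# Streams the quantized model straight from the Hub and runs a single prompt.
llama-cli --hf-repo mradermacher/Gemma-2b-it-FT-Code-v2-GGUF --hf-file Gemma-2b-it-FT-Code-v2.Q4_K_M.gguf -p "Write a Python function that reverses a string."
```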
danielhanchen/lora_model4_21022024
danielhanchen
"2024-02-20T13:59:48Z"
0
0
transformers
[ "transformers", "safetensors", "unsloth", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
"2024-02-20T13:59:33Z"
--- library_name: transformers tags: - unsloth --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
boosiebadass43/Titus-shorthair-lora
boosiebadass43
"2025-01-19T23:04:43Z"
37
1
diffusers
[ "diffusers", "flux", "lora", "replicate", "text-to-image", "en", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
"2025-01-19T22:43:10Z"
--- license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md language: - en tags: - flux - diffusers - lora - replicate base_model: "black-forest-labs/FLUX.1-dev" pipeline_tag: text-to-image # widget: # - text: >- # prompt # output: # url: https://... instance_prompt: titus --- # Titus Shorthair Lora <Gallery /> Trained on Replicate using: https://replicate.com/ostris/flux-dev-lora-trainer/train ## Trigger words You should use `titus` to trigger the image generation. ## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers) ```py from diffusers import AutoPipelineForText2Image import torch pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda') pipeline.load_lora_weights('boosiebadass43/Titus-shorthair-lora', weight_name='lora.safetensors') image = pipeline('your prompt').images[0] ``` For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
amyy78/unit4
amyy78
"2023-11-03T17:14:50Z"
0
0
null
[ "Pixelcopter-PLE-v0", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
"2023-11-03T01:44:13Z"
--- tags: - Pixelcopter-PLE-v0 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: unit4 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Pixelcopter-PLE-v0 type: Pixelcopter-PLE-v0 metrics: - type: mean_reward value: 19.10 +/- 14.41 name: mean_reward verified: false --- # **Reinforce** Agent playing **Pixelcopter-PLE-v0** This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** . To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
joaomarcelom12/ino
joaomarcelom12
"2025-02-26T14:32:14Z"
0
0
null
[ "license:other", "region:us" ]
null
"2025-02-26T13:48:28Z"
--- license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md ---
ZeroXClem/L3.1-Pneuma-Allades-8B-Q4_K_M-GGUF
ZeroXClem
"2024-10-23T13:54:23Z"
6
2
null
[ "gguf", "merge", "model_stock", "Llama3", "Pneuma", "Allades", "storytelling", "roleplaying", "instruction-following", "creative-writing", "reasoning", "long-form-generation", "text-generation", "lazymergekit", "nbeerbower/Llama3.1-Allades-8B", "mergekit-community/L3.1-Pneuma-8B-v1", "llama-cpp", "gguf-my-repo", "base_model:ZeroXClem/L3.1-Pneuma-Allades-8B", "base_model:quantized:ZeroXClem/L3.1-Pneuma-Allades-8B", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
text-generation
"2024-10-23T13:53:59Z"
--- license: apache-2.0 tags: - merge - model_stock - Llama3 - Pneuma - Allades - storytelling - roleplaying - instruction-following - creative-writing - reasoning - long-form-generation - text-generation - lazymergekit - nbeerbower/Llama3.1-Allades-8B - mergekit-community/L3.1-Pneuma-8B-v1 - llama-cpp - gguf-my-repo base_model: ZeroXClem/L3.1-Pneuma-Allades-8B pipeline_tag: text-generation --- # ZeroXClem/L3.1-Pneuma-Allades-8B-Q4_K_M-GGUF This model was converted to GGUF format from [`ZeroXClem/L3.1-Pneuma-Allades-8B`](https://huggingface.co/ZeroXClem/L3.1-Pneuma-Allades-8B) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/ZeroXClem/L3.1-Pneuma-Allades-8B) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux) ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo ZeroXClem/L3.1-Pneuma-Allades-8B-Q4_K_M-GGUF --hf-file l3.1-pneuma-allades-8b-q4_k_m.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo ZeroXClem/L3.1-Pneuma-Allades-8B-Q4_K_M-GGUF --hf-file l3.1-pneuma-allades-8b-q4_k_m.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo ZeroXClem/L3.1-Pneuma-Allades-8B-Q4_K_M-GGUF --hf-file l3.1-pneuma-allades-8b-q4_k_m.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo ZeroXClem/L3.1-Pneuma-Allades-8B-Q4_K_M-GGUF --hf-file l3.1-pneuma-allades-8b-q4_k_m.gguf -c 2048 ```
nlparabic/test-egy_aragpt
nlparabic
"2024-08-25T11:46:25Z"
18
0
null
[ "safetensors", "gpt2", "generated_from_trainer", "base_model:riotu-lab/ArabianGPT-01B", "base_model:finetune:riotu-lab/ArabianGPT-01B", "license:apache-2.0", "region:us" ]
null
"2024-08-24T12:57:25Z"
--- license: apache-2.0 base_model: riotu-lab/ArabianGPT-01B tags: - generated_from_trainer metrics: - bleu - rouge model-index: - name: test-egy_aragpt results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # test-egy_aragpt This model is a fine-tuned version of [riotu-lab/ArabianGPT-01B](https://huggingface.co/riotu-lab/ArabianGPT-01B) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 3.4630 - Bleu: 0.0984 - Rouge1: 0.3093 - Rouge2: 0.0718 - Rougel: 0.2296 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 1.0 ### Training results ### Framework versions - Transformers 4.45.0.dev0 - Pytorch 2.3.1+cu121 - Datasets 2.19.2 - Tokenizers 0.19.1
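A minimal inference sketch, assuming the checkpoint loads with the standard causal-LM classes (the prompt is illustrative):

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "nlparabic/test-egy_aragpt"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# "مرحبا" ("hello") is just an example; any Arabic prompt works here.
inputs = tokenizer("مرحبا", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```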
JiaxiJiang/textual_inversion_ring
JiaxiJiang
"2024-03-21T21:27:38Z"
34
0
diffusers
[ "diffusers", "tensorboard", "safetensors", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "textual_inversion", "diffusers-training", "base_model:runwayml/stable-diffusion-v1-5", "base_model:adapter:runwayml/stable-diffusion-v1-5", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
"2024-03-21T20:44:45Z"
--- license: creativeml-openrail-m library_name: diffusers tags: - stable-diffusion - stable-diffusion-diffusers - text-to-image - diffusers - textual_inversion - diffusers-training base_model: runwayml/stable-diffusion-v1-5 inference: true --- <!-- This model card has been generated automatically according to the information the training script had access to. You should probably proofread and complete it, then remove this comment. --> # Textual inversion text2image fine-tuning - JiaxiJiang/textual_inversion_ring These are textual inversion adaption weights for runwayml/stable-diffusion-v1-5. You can find some example images in the following. ## Intended uses & limitations #### How to use ```python # TODO: add an example code snippet for running this diffusion pipeline ``` #### Limitations and bias [TODO: provide examples of latent issues and potential remediations] ## Training details [TODO: describe the data used to train the model]
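Pending the official snippet above, a minimal usage sketch (the placeholder token is an assumption; check the repository for the actual learned token):

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
# Load the learned embedding from this repository.
pipe.load_textual_inversion("JiaxiJiang/textual_inversion_ring")

# "<ring>" is a hypothetical placeholder token; substitute the token this embedding was trained with.
image = pipe("a photo of a <ring> on a wooden table").images[0]
image.save("ring.png")
```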
multidefmod/dore-mbart
multidefmod
"2024-03-12T15:47:57Z"
3
0
transformers
[ "transformers", "pytorch", "mbart", "text2text-generation", "dataset:multidefmod/dore", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
"2024-03-12T14:23:29Z"
--- datasets: - multidefmod/dore ---
Ahmedkkh/homg
Ahmedkkh
"2024-04-22T10:11:29Z"
0
0
null
[ "doi:10.57967/hf/2103", "license:creativeml-openrail-m", "region:us" ]
null
"2024-04-22T10:06:12Z"
--- license: creativeml-openrail-m ---
deepnet111/sn9-14b-star-004
deepnet111
"2025-01-19T08:41:05Z"
71
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
"2025-01-19T08:37:27Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
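In lieu of the missing getting-started snippet, a minimal loading sketch (assumes the checkpoint works with the standard causal-LM classes; prompt and generation settings are illustrative):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "deepnet111/sn9-14b-star-004"
tokenizer = AutoTokenizer.from_pretrained(model_id)
# device_map="auto" requires the accelerate package.
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer("Hello, world!", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```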
sail-rvc/wanderer-jp
sail-rvc
"2023-07-14T07:45:04Z"
3
1
transformers
[ "transformers", "rvc", "sail-rvc", "audio-to-audio", "endpoints_compatible", "region:us" ]
audio-to-audio
"2023-07-14T07:44:45Z"
--- pipeline_tag: audio-to-audio tags: - rvc - sail-rvc --- # wanderer-jp ## RVC Model ![banner](https://i.imgur.com/xocCjhH.jpg) This model repo was automatically generated. Date: 2023-07-14 07:45:04 Bot Name: juuxnscrap Model Type: RVC Source: https://huggingface.co/juuxn/RVCModels/ Reason: Converting into loadable format for https://github.com/chavinlo/rvc-runpod
riccardoebene/t5-small_level_1
riccardoebene
"2024-11-27T12:35:34Z"
115
0
transformers
[ "transformers", "safetensors", "t5", "text2text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
"2024-11-27T10:35:57Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
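In lieu of the missing getting-started snippet, a minimal sketch (assumes the checkpoint works with the standard seq2seq classes; the task format is undocumented, so the input below is purely illustrative):

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "riccardoebene/t5-small_level_1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# The expected input format for this fine-tune is undocumented; adjust accordingly.
inputs = tokenizer("summarize: The quick brown fox jumps over the lazy dog.", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```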
wAT-CH-Sophie-Rain-Spiderman-Updatesstt/new.Sophie.Rain.Spider-Man.Video.Tutorial.Official.clips
wAT-CH-Sophie-Rain-Spiderman-Updatesstt
"2025-03-24T04:14:26Z"
0
0
null
[ "region:us" ]
null
"2025-03-24T04:14:06Z"
<animated-image data-catalyst=""><a href="https://alltvsteam.com/viral-video/?v=news-es-tvdf" rel="nofollow" data-target="animated-image.originalLink"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="Foo" data-canonical-src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" style="max-width: 100%; display: inline-block;" data-target="animated-image.originalImage"></a>

MayBashendy/ArabicNewSplits4_WithDuplicationsForScore5_FineTuningAraBERT_run2_AugV5_k4_task1_organization
MayBashendy
"2024-12-10T15:31:26Z"
162
0
transformers
[ "transformers", "safetensors", "bert", "text-classification", "generated_from_trainer", "base_model:aubmindlab/bert-base-arabertv02", "base_model:finetune:aubmindlab/bert-base-arabertv02", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
"2024-12-10T15:26:02Z"
--- library_name: transformers base_model: aubmindlab/bert-base-arabertv02 tags: - generated_from_trainer model-index: - name: ArabicNewSplits4_WithDuplicationsForScore5_FineTuningAraBERT_run2_AugV5_k4_task1_organization results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # ArabicNewSplits4_WithDuplicationsForScore5_FineTuningAraBERT_run2_AugV5_k4_task1_organization This model is a fine-tuned version of [aubmindlab/bert-base-arabertv02](https://huggingface.co/aubmindlab/bert-base-arabertv02) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.6090 - Qwk: 0.7563 - Mse: 0.6090 - Rmse: 0.7804 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Qwk | Mse | Rmse | |:-------------:|:------:|:----:|:---------------:|:-------:|:------:|:------:| | No log | 0.0833 | 2 | 5.3762 | -0.0220 | 5.3762 | 2.3187 | | No log | 0.1667 | 4 | 3.8802 | 0.0563 | 3.8802 | 1.9698 | | No log | 0.25 | 6 | 2.3347 | 0.1829 | 2.3347 | 1.5280 | | No log | 0.3333 | 8 | 1.3715 | 0.1774 | 1.3715 | 1.1711 | | No log | 0.4167 | 10 | 1.3335 | 0.1249 | 1.3335 | 1.1548 | | No log | 0.5 | 12 | 1.1603 | 0.2242 | 1.1603 | 1.0772 | | No log | 0.5833 | 14 | 1.1841 | 0.2113 | 1.1841 | 1.0882 | | No log | 0.6667 | 16 | 1.2055 | 0.2035 | 1.2055 | 1.0980 | | No log | 0.75 | 18 | 1.2137 | 0.2001 | 1.2137 | 1.1017 | | No log | 0.8333 | 20 | 1.3045 | 0.1051 | 1.3045 | 1.1421 | | No log | 0.9167 | 22 | 1.2046 | 0.1568 | 1.2046 | 1.0975 | | No log | 1.0 | 24 | 1.0344 | 0.3039 | 1.0344 | 1.0170 | | No log | 1.0833 | 26 | 1.1691 | 0.3276 | 1.1691 | 1.0813 | | No log | 1.1667 | 28 | 1.1240 | 0.3856 | 1.1240 | 1.0602 | | No log | 1.25 | 30 | 0.8334 | 0.4855 | 0.8334 | 0.9129 | | No log | 1.3333 | 32 | 0.7171 | 0.6153 | 0.7171 | 0.8468 | | No log | 1.4167 | 34 | 0.6706 | 0.6554 | 0.6706 | 0.8189 | | No log | 1.5 | 36 | 0.7853 | 0.5976 | 0.7853 | 0.8861 | | No log | 1.5833 | 38 | 1.6236 | 0.4161 | 1.6236 | 1.2742 | | No log | 1.6667 | 40 | 2.4552 | 0.2439 | 2.4552 | 1.5669 | | No log | 1.75 | 42 | 2.5266 | 0.2222 | 2.5266 | 1.5895 | | No log | 1.8333 | 44 | 1.7137 | 0.4064 | 1.7137 | 1.3091 | | No log | 1.9167 | 46 | 0.7476 | 0.6902 | 0.7476 | 0.8646 | | No log | 2.0 | 48 | 0.6504 | 0.7029 | 0.6504 | 0.8065 | | No log | 2.0833 | 50 | 1.0265 | 0.5195 | 1.0265 | 1.0132 | | No log | 2.1667 | 52 | 1.0506 | 0.5218 | 1.0506 | 1.0250 | | No log | 2.25 | 54 | 0.7072 | 0.6467 | 0.7072 | 0.8410 | | No log | 2.3333 | 56 | 0.6052 | 0.6370 | 0.6052 | 0.7779 | | No log | 2.4167 | 58 | 0.7005 | 0.6208 | 0.7005 | 0.8370 | | No log | 2.5 | 60 | 0.6646 | 0.6306 | 0.6646 | 0.8152 | | No log | 2.5833 | 62 | 0.5764 | 0.7360 | 0.5764 | 0.7592 | | No log | 2.6667 | 64 | 0.5329 | 0.7364 | 0.5329 | 0.7300 | | No log | 2.75 | 66 | 0.5537 | 0.7544 | 0.5537 | 0.7441 | | No log | 2.8333 | 68 | 0.5508 | 0.7850 | 0.5508 | 0.7422 | | No log | 2.9167 | 70 | 0.5815 | 0.7429 | 0.5815 | 0.7625 | | No log | 3.0 | 72 | 0.6529 | 0.7482 | 0.6529 | 
0.8080 | | No log | 3.0833 | 74 | 0.6394 | 0.7617 | 0.6394 | 0.7997 | | No log | 3.1667 | 76 | 0.7211 | 0.7331 | 0.7211 | 0.8492 | | No log | 3.25 | 78 | 1.0962 | 0.6270 | 1.0962 | 1.0470 | | No log | 3.3333 | 80 | 1.2740 | 0.5797 | 1.2740 | 1.1287 | | No log | 3.4167 | 82 | 1.0804 | 0.6244 | 1.0804 | 1.0394 | | No log | 3.5 | 84 | 0.6766 | 0.7281 | 0.6766 | 0.8225 | | No log | 3.5833 | 86 | 0.6030 | 0.7181 | 0.6030 | 0.7765 | | No log | 3.6667 | 88 | 0.6094 | 0.7150 | 0.6094 | 0.7806 | | No log | 3.75 | 90 | 0.7383 | 0.7020 | 0.7383 | 0.8592 | | No log | 3.8333 | 92 | 0.7821 | 0.6969 | 0.7821 | 0.8843 | | No log | 3.9167 | 94 | 0.6858 | 0.7090 | 0.6858 | 0.8282 | | No log | 4.0 | 96 | 0.5933 | 0.7262 | 0.5933 | 0.7703 | | No log | 4.0833 | 98 | 0.5960 | 0.7130 | 0.5960 | 0.7720 | | No log | 4.1667 | 100 | 0.6685 | 0.7349 | 0.6685 | 0.8176 | | No log | 4.25 | 102 | 0.7797 | 0.7496 | 0.7797 | 0.8830 | | No log | 4.3333 | 104 | 0.8027 | 0.7306 | 0.8027 | 0.8959 | | No log | 4.4167 | 106 | 0.6749 | 0.7445 | 0.6749 | 0.8215 | | No log | 4.5 | 108 | 0.5843 | 0.7200 | 0.5843 | 0.7644 | | No log | 4.5833 | 110 | 0.5690 | 0.7224 | 0.5690 | 0.7543 | | No log | 4.6667 | 112 | 0.6232 | 0.7392 | 0.6232 | 0.7895 | | No log | 4.75 | 114 | 0.6740 | 0.7469 | 0.6740 | 0.8210 | | No log | 4.8333 | 116 | 0.7154 | 0.7426 | 0.7154 | 0.8458 | | No log | 4.9167 | 118 | 0.6706 | 0.7427 | 0.6706 | 0.8189 | | No log | 5.0 | 120 | 0.5964 | 0.7463 | 0.5964 | 0.7723 | | No log | 5.0833 | 122 | 0.5896 | 0.7557 | 0.5896 | 0.7679 | | No log | 5.1667 | 124 | 0.5932 | 0.7355 | 0.5932 | 0.7702 | | No log | 5.25 | 126 | 0.5830 | 0.7607 | 0.5830 | 0.7635 | | No log | 5.3333 | 128 | 0.5772 | 0.7558 | 0.5772 | 0.7597 | | No log | 5.4167 | 130 | 0.5919 | 0.7557 | 0.5919 | 0.7693 | | No log | 5.5 | 132 | 0.6590 | 0.7170 | 0.6590 | 0.8118 | | No log | 5.5833 | 134 | 0.6510 | 0.7168 | 0.6510 | 0.8069 | | No log | 5.6667 | 136 | 0.5856 | 0.7475 | 0.5856 | 0.7653 | | No log | 5.75 | 138 | 0.5805 | 0.7471 | 0.5805 | 0.7619 | | No log | 5.8333 | 140 | 0.5962 | 0.7371 | 0.5962 | 0.7721 | | No log | 5.9167 | 142 | 0.6857 | 0.7315 | 0.6857 | 0.8280 | | No log | 6.0 | 144 | 0.7613 | 0.7314 | 0.7613 | 0.8725 | | No log | 6.0833 | 146 | 0.7675 | 0.7314 | 0.7675 | 0.8760 | | No log | 6.1667 | 148 | 0.7081 | 0.7397 | 0.7081 | 0.8415 | | No log | 6.25 | 150 | 0.6247 | 0.7509 | 0.6247 | 0.7904 | | No log | 6.3333 | 152 | 0.5988 | 0.7480 | 0.5988 | 0.7738 | | No log | 6.4167 | 154 | 0.5966 | 0.7480 | 0.5966 | 0.7724 | | No log | 6.5 | 156 | 0.6412 | 0.7510 | 0.6412 | 0.8007 | | No log | 6.5833 | 158 | 0.7069 | 0.7254 | 0.7069 | 0.8408 | | No log | 6.6667 | 160 | 0.7229 | 0.7037 | 0.7229 | 0.8502 | | No log | 6.75 | 162 | 0.6576 | 0.7505 | 0.6576 | 0.8109 | | No log | 6.8333 | 164 | 0.5886 | 0.7434 | 0.5886 | 0.7672 | | No log | 6.9167 | 166 | 0.5663 | 0.7328 | 0.5663 | 0.7526 | | No log | 7.0 | 168 | 0.5638 | 0.7445 | 0.5638 | 0.7509 | | No log | 7.0833 | 170 | 0.5742 | 0.7560 | 0.5742 | 0.7577 | | No log | 7.1667 | 172 | 0.5728 | 0.7632 | 0.5728 | 0.7569 | | No log | 7.25 | 174 | 0.5577 | 0.7638 | 0.5577 | 0.7468 | | No log | 7.3333 | 176 | 0.5582 | 0.7638 | 0.5582 | 0.7471 | | No log | 7.4167 | 178 | 0.5611 | 0.7692 | 0.5611 | 0.7490 | | No log | 7.5 | 180 | 0.5697 | 0.7677 | 0.5697 | 0.7548 | | No log | 7.5833 | 182 | 0.5736 | 0.7650 | 0.5736 | 0.7574 | | No log | 7.6667 | 184 | 0.5755 | 0.7650 | 0.5755 | 0.7586 | | No log | 7.75 | 186 | 0.5930 | 0.7586 | 0.5930 | 0.7700 | | No log | 7.8333 | 188 | 0.6130 | 0.7486 | 0.6130 | 0.7829 | | No log 
| 7.9167 | 190 | 0.6274 | 0.7486 | 0.6274 | 0.7921 | | No log | 8.0 | 192 | 0.6189 | 0.7486 | 0.6189 | 0.7867 | | No log | 8.0833 | 194 | 0.6072 | 0.7486 | 0.6072 | 0.7792 | | No log | 8.1667 | 196 | 0.5949 | 0.7640 | 0.5949 | 0.7713 | | No log | 8.25 | 198 | 0.5917 | 0.7588 | 0.5917 | 0.7692 | | No log | 8.3333 | 200 | 0.5976 | 0.7488 | 0.5976 | 0.7730 | | No log | 8.4167 | 202 | 0.6069 | 0.7488 | 0.6069 | 0.7790 | | No log | 8.5 | 204 | 0.5995 | 0.7488 | 0.5995 | 0.7743 | | No log | 8.5833 | 206 | 0.5805 | 0.7498 | 0.5805 | 0.7619 | | No log | 8.6667 | 208 | 0.5757 | 0.7580 | 0.5757 | 0.7587 | | No log | 8.75 | 210 | 0.5869 | 0.7552 | 0.5869 | 0.7661 | | No log | 8.8333 | 212 | 0.6057 | 0.7522 | 0.6057 | 0.7783 | | No log | 8.9167 | 214 | 0.6059 | 0.7522 | 0.6059 | 0.7784 | | No log | 9.0 | 216 | 0.5977 | 0.7490 | 0.5977 | 0.7731 | | No log | 9.0833 | 218 | 0.5933 | 0.7490 | 0.5933 | 0.7703 | | No log | 9.1667 | 220 | 0.5929 | 0.7479 | 0.5929 | 0.7700 | | No log | 9.25 | 222 | 0.5962 | 0.7472 | 0.5962 | 0.7721 | | No log | 9.3333 | 224 | 0.5961 | 0.7568 | 0.5961 | 0.7721 | | No log | 9.4167 | 226 | 0.5973 | 0.7568 | 0.5973 | 0.7729 | | No log | 9.5 | 228 | 0.6012 | 0.7568 | 0.6012 | 0.7754 | | No log | 9.5833 | 230 | 0.6069 | 0.7563 | 0.6069 | 0.7791 | | No log | 9.6667 | 232 | 0.6093 | 0.7563 | 0.6093 | 0.7805 | | No log | 9.75 | 234 | 0.6077 | 0.7602 | 0.6077 | 0.7796 | | No log | 9.8333 | 236 | 0.6082 | 0.7563 | 0.6082 | 0.7799 | | No log | 9.9167 | 238 | 0.6084 | 0.7563 | 0.6084 | 0.7800 | | No log | 10.0 | 240 | 0.6090 | 0.7563 | 0.6090 | 0.7804 | ### Framework versions - Transformers 4.44.2 - Pytorch 2.4.0+cu118 - Datasets 2.21.0 - Tokenizers 0.19.1
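A minimal inference sketch (assumes the checkpoint carries a standard sequence-classification head; label meanings are undocumented):

```python
from transformers import pipeline

clf = pipeline(
    "text-classification",
    model="MayBashendy/ArabicNewSplits4_WithDuplicationsForScore5_FineTuningAraBERT_run2_AugV5_k4_task1_organization",
)
# Any Arabic input sentence; "هذا مثال" ("this is an example") is illustrative.
print(clf("هذا مثال"))
```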
ResonantTri/lora-r3-F32-GGUF
ResonantTri
"2025-03-27T03:09:15Z"
0
0
transformers
[ "transformers", "gguf", "text-generation-inference", "unsloth", "llama", "trl", "llama-cpp", "gguf-my-lora", "en", "base_model:ResonantTri/lora-r3", "base_model:quantized:ResonantTri/lora-r3", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
"2025-03-27T03:09:13Z"
--- base_model: ResonantTri/lora-r3 tags: - text-generation-inference - transformers - unsloth - llama - trl - llama-cpp - gguf-my-lora license: apache-2.0 language: - en --- # ResonantTri/lora-r3-F32-GGUF This LoRA adapter was converted to GGUF format from [`ResonantTri/lora-r3`](https://huggingface.co/ResonantTri/lora-r3) via the ggml.ai's [GGUF-my-lora](https://huggingface.co/spaces/ggml-org/gguf-my-lora) space. Refer to the [original adapter repository](https://huggingface.co/ResonantTri/lora-r3) for more details. ## Use with llama.cpp ```bash # with cli llama-cli -m base_model.gguf --lora lora-r3-f32.gguf (...other args) # with server llama-server -m base_model.gguf --lora lora-r3-f32.gguf (...other args) ``` To know more about LoRA usage with llama.cpp server, refer to the [llama.cpp server documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/server/README.md).
Snim/Reinforce-copter2
Snim
"2023-03-01T00:34:13Z"
0
0
null
[ "Pixelcopter-PLE-v0", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
"2023-03-01T00:34:08Z"
--- tags: - Pixelcopter-PLE-v0 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: Reinforce-copter2 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Pixelcopter-PLE-v0 type: Pixelcopter-PLE-v0 metrics: - type: mean_reward value: 36.40 +/- 39.02 name: mean_reward verified: false --- # **Reinforce** Agent playing **Pixelcopter-PLE-v0** This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** . To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
isspek/roberta-base_zika_ebola_1_2e-5_16_weight
isspek
"2025-02-12T10:26:34Z"
0
0
transformers
[ "transformers", "safetensors", "roberta", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
"2025-02-12T10:26:10Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
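A minimal inference sketch (assumes the checkpoint carries a standard sequence-classification head; labels are undocumented):

```python
from transformers import pipeline

clf = pipeline("text-classification", model="isspek/roberta-base_zika_ebola_1_2e-5_16_weight")
print(clf("Example claim about a disease outbreak."))
```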
MaziyarPanahi/Llama-3-11B-Instruct-v0.1-GGUF
MaziyarPanahi
"2024-04-20T07:41:45Z"
596
7
transformers
[ "transformers", "gguf", "mistral", "quantized", "2-bit", "3-bit", "4-bit", "5-bit", "6-bit", "8-bit", "GGUF", "text-generation", "mixtral", "base_model:MaziyarPanahi/Llama-3-11B-Instruct-v0.1", "base_model:quantized:MaziyarPanahi/Llama-3-11B-Instruct-v0.1", "region:us", "conversational" ]
text-generation
"2024-04-19T10:37:09Z"
--- tags: - quantized - 2-bit - 3-bit - 4-bit - 5-bit - 6-bit - 8-bit - GGUF - text-generation - mixtral - text-generation model_name: Llama-3-11B-Instruct-v0.1-GGUF base_model: MaziyarPanahi/Llama-3-11B-Instruct-v0.1 inference: false model_creator: MaziyarPanahi pipeline_tag: text-generation quantized_by: MaziyarPanahi --- # [MaziyarPanahi/Llama-3-11B-Instruct-v0.1-GGUF](https://huggingface.co/MaziyarPanahi/Llama-3-11B-Instruct-v0.1-GGUF) - Model creator: [MaziyarPanahi](https://huggingface.co/MaziyarPanahi) - Original model: [MaziyarPanahi/Llama-3-11B-Instruct-v0.1](https://huggingface.co/MaziyarPanahi/Llama-3-11B-Instruct-v0.1) ## Description [MaziyarPanahi/Llama-3-11B-Instruct-v0.1-GGUF](https://huggingface.co/MaziyarPanahi/Llama-3-11B-Instruct-v0.1-GGUF) contains GGUF format model files for [MaziyarPanahi/Llama-3-11B-Instruct-v0.1](https://huggingface.co/MaziyarPanahi/Llama-3-11B-Instruct-v0.1). ## Load GGUF models You `MUST` follow the prompt template provided by Llama-3: ```sh ./llama.cpp/main -m Llama-3-11B-Instruct.Q2_K.gguf -r '<|eot_id|>' --in-prefix "\n<|start_header_id|>user<|end_header_id|>\n\n" --in-suffix "<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n" -p "<|begin_of_text|><|start_header_id|>system<|end_header_id|>\n\nYou are a helpful, smart, kind, and efficient AI assistant. You always fulfill the user's requests to the best of your ability.<|eot_id|>\n<|start_header_id|>user<|end_header_id|>\n\nHi! How are you?<|eot_id|>\n<|start_header_id|>assistant<|end_header_id|>\n\n" -n 1024 ```
tingting/orpheus_3b_full_e4_16bit
tingting
"2025-03-28T05:31:00Z"
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "text-generation-inference", "unsloth", "trl", "conversational", "en", "base_model:unsloth/orpheus-3b-0.1-ft", "base_model:finetune:unsloth/orpheus-3b-0.1-ft", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
"2025-03-28T05:20:12Z"
--- base_model: unsloth/orpheus-3b-0.1-ft tags: - text-generation-inference - transformers - unsloth - llama - trl license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** tingting - **License:** apache-2.0 - **Finetuned from model :** unsloth/orpheus-3b-0.1-ft This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
lesso08/cc5a52fd-d351-4f4e-b893-1c569637461b
lesso08
"2025-01-27T16:54:28Z"
9
0
peft
[ "peft", "safetensors", "mistral", "axolotl", "generated_from_trainer", "base_model:unsloth/Phi-3-medium-4k-instruct", "base_model:adapter:unsloth/Phi-3-medium-4k-instruct", "license:mit", "region:us" ]
null
"2025-01-27T16:31:54Z"
--- library_name: peft license: mit base_model: unsloth/Phi-3-medium-4k-instruct tags: - axolotl - generated_from_trainer model-index: - name: cc5a52fd-d351-4f4e-b893-1c569637461b results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: unsloth/Phi-3-medium-4k-instruct bf16: auto chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - 71d683f6bce439e0_train_data.json ds_type: json format: custom path: /workspace/input_data/71d683f6bce439e0_train_data.json type: field_input: input field_instruction: instruction field_output: output format: '{instruction} {input}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null early_stopping_patience: null eval_max_new_tokens: 128 eval_table_size: null evals_per_epoch: 1 flash_attention: false fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 4 gradient_checkpointing: true gradient_clipping: 1.0 group_by_length: false hub_model_id: lesso08/cc5a52fd-d351-4f4e-b893-1c569637461b hub_repo: null hub_strategy: end hub_token: null learning_rate: 5.0e-05 load_in_4bit: false load_in_8bit: false local_rank: null logging_steps: 1 lora_alpha: 16 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 8 lora_target_linear: true lr_scheduler: cosine max_steps: 200 micro_batch_size: 2 mixed_precision: bf16 mlflow_experiment_name: /tmp/71d683f6bce439e0_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false saves_per_epoch: 1 sequence_len: 1024 strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: 9e7cfa84-39c6-4691-a241-765c08f1032c wandb_project: multi wandb_run: your_name wandb_runid: 9e7cfa84-39c6-4691-a241-765c08f1032c warmup_steps: 5 weight_decay: 0.01 xformers_attention: true ``` </details><br> # cc5a52fd-d351-4f4e-b893-1c569637461b This model is a fine-tuned version of [unsloth/Phi-3-medium-4k-instruct](https://huggingface.co/unsloth/Phi-3-medium-4k-instruct) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: nan ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - distributed_type: multi-GPU - num_devices: 8 - gradient_accumulation_steps: 4 - total_train_batch_size: 64 - total_eval_batch_size: 16 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 5 - training_steps: 200 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 0.0 | 0.0260 | 200 | nan | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
vocabtrimmer/xlm-v-base-trimmed-en-10000-tweet-sentiment-en
vocabtrimmer
"2023-03-30T23:19:41Z"
114
0
transformers
[ "transformers", "pytorch", "xlm-roberta", "text-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
"2023-03-30T23:17:43Z"
# `vocabtrimmer/xlm-v-base-trimmed-en-10000-tweet-sentiment-en`

This model is a fine-tuned version of [/home/asahiushio/Projects/lm-vocab-trimmer/ckpts/xlm-v-base-trimmed-en-10000](https://huggingface.co//home/asahiushio/Projects/lm-vocab-trimmer/ckpts/xlm-v-base-trimmed-en-10000) on the [cardiffnlp/tweet_sentiment_multilingual](https://huggingface.co/datasets/cardiffnlp/tweet_sentiment_multilingual) dataset (English). The following metrics are computed on the `test` split of [cardiffnlp/tweet_sentiment_multilingual](https://huggingface.co/datasets/cardiffnlp/tweet_sentiment_multilingual) (English).

|    | eval_f1_micro | eval_recall_micro | eval_precision_micro | eval_f1_macro | eval_recall_macro | eval_precision_macro | eval_accuracy |
|---:|--------------:|------------------:|---------------------:|--------------:|------------------:|---------------------:|--------------:|
|  0 | 60.57         | 60.57             | 60.57                | 57.82         | 60.57             | 59.15                | 60.57         |

Check the result file [here](https://huggingface.co/vocabtrimmer/xlm-v-base-trimmed-en-10000-tweet-sentiment-en/raw/main/eval.json).
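As a quick usage illustration, here is a minimal sketch with the standard `transformers` text-classification pipeline; the example sentence is invented, and the label names returned come from the model's own `id2label` mapping rather than anything stated in this card.

```python
from transformers import pipeline

# Load the fine-tuned sentiment classifier from the Hub.
classifier = pipeline(
    "text-classification",
    model="vocabtrimmer/xlm-v-base-trimmed-en-10000-tweet-sentiment-en",
)

# Illustrative input; real tweets from the test split would be more representative.
print(classifier("I love the new update, everything feels faster!"))
```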
RichardErkhov/ewof_-_koishi-instruct-3b-gguf
RichardErkhov
"2024-06-06T09:13:30Z"
5
0
null
[ "gguf", "endpoints_compatible", "region:us" ]
null
"2024-06-06T05:10:35Z"
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) koishi-instruct-3b - GGUF - Model creator: https://huggingface.co/ewof/ - Original model: https://huggingface.co/ewof/koishi-instruct-3b/ | Name | Quant method | Size | | ---- | ---- | ---- | | [koishi-instruct-3b.Q2_K.gguf](https://huggingface.co/RichardErkhov/ewof_-_koishi-instruct-3b-gguf/blob/main/koishi-instruct-3b.Q2_K.gguf) | Q2_K | 1.01GB | | [koishi-instruct-3b.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/ewof_-_koishi-instruct-3b-gguf/blob/main/koishi-instruct-3b.IQ3_XS.gguf) | IQ3_XS | 1.14GB | | [koishi-instruct-3b.IQ3_S.gguf](https://huggingface.co/RichardErkhov/ewof_-_koishi-instruct-3b-gguf/blob/main/koishi-instruct-3b.IQ3_S.gguf) | IQ3_S | 1.16GB | | [koishi-instruct-3b.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/ewof_-_koishi-instruct-3b-gguf/blob/main/koishi-instruct-3b.Q3_K_S.gguf) | Q3_K_S | 1.16GB | | [koishi-instruct-3b.IQ3_M.gguf](https://huggingface.co/RichardErkhov/ewof_-_koishi-instruct-3b-gguf/blob/main/koishi-instruct-3b.IQ3_M.gguf) | IQ3_M | 1.28GB | | [koishi-instruct-3b.Q3_K.gguf](https://huggingface.co/RichardErkhov/ewof_-_koishi-instruct-3b-gguf/blob/main/koishi-instruct-3b.Q3_K.gguf) | Q3_K | 1.38GB | | [koishi-instruct-3b.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/ewof_-_koishi-instruct-3b-gguf/blob/main/koishi-instruct-3b.Q3_K_M.gguf) | Q3_K_M | 1.38GB | | [koishi-instruct-3b.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/ewof_-_koishi-instruct-3b-gguf/blob/main/koishi-instruct-3b.Q3_K_L.gguf) | Q3_K_L | 1.49GB | | [koishi-instruct-3b.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/ewof_-_koishi-instruct-3b-gguf/blob/main/koishi-instruct-3b.IQ4_XS.gguf) | IQ4_XS | 1.43GB | | [koishi-instruct-3b.Q4_0.gguf](https://huggingface.co/RichardErkhov/ewof_-_koishi-instruct-3b-gguf/blob/main/koishi-instruct-3b.Q4_0.gguf) | Q4_0 | 1.49GB | | [koishi-instruct-3b.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/ewof_-_koishi-instruct-3b-gguf/blob/main/koishi-instruct-3b.IQ4_NL.gguf) | IQ4_NL | 1.5GB | | [koishi-instruct-3b.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/ewof_-_koishi-instruct-3b-gguf/blob/main/koishi-instruct-3b.Q4_K_S.gguf) | Q4_K_S | 1.5GB | | [koishi-instruct-3b.Q4_K.gguf](https://huggingface.co/RichardErkhov/ewof_-_koishi-instruct-3b-gguf/blob/main/koishi-instruct-3b.Q4_K.gguf) | Q4_K | 1.66GB | | [koishi-instruct-3b.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/ewof_-_koishi-instruct-3b-gguf/blob/main/koishi-instruct-3b.Q4_K_M.gguf) | Q4_K_M | 1.66GB | | [koishi-instruct-3b.Q4_1.gguf](https://huggingface.co/RichardErkhov/ewof_-_koishi-instruct-3b-gguf/blob/main/koishi-instruct-3b.Q4_1.gguf) | Q4_1 | 1.64GB | | [koishi-instruct-3b.Q5_0.gguf](https://huggingface.co/RichardErkhov/ewof_-_koishi-instruct-3b-gguf/blob/main/koishi-instruct-3b.Q5_0.gguf) | Q5_0 | 1.8GB | | [koishi-instruct-3b.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/ewof_-_koishi-instruct-3b-gguf/blob/main/koishi-instruct-3b.Q5_K_S.gguf) | Q5_K_S | 1.8GB | | [koishi-instruct-3b.Q5_K.gguf](https://huggingface.co/RichardErkhov/ewof_-_koishi-instruct-3b-gguf/blob/main/koishi-instruct-3b.Q5_K.gguf) | Q5_K | 1.93GB | | [koishi-instruct-3b.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/ewof_-_koishi-instruct-3b-gguf/blob/main/koishi-instruct-3b.Q5_K_M.gguf) | Q5_K_M | 1.93GB | | 
[koishi-instruct-3b.Q5_1.gguf](https://huggingface.co/RichardErkhov/ewof_-_koishi-instruct-3b-gguf/blob/main/koishi-instruct-3b.Q5_1.gguf) | Q5_1 | 1.95GB | | [koishi-instruct-3b.Q6_K.gguf](https://huggingface.co/RichardErkhov/ewof_-_koishi-instruct-3b-gguf/blob/main/koishi-instruct-3b.Q6_K.gguf) | Q6_K | 2.13GB | | [koishi-instruct-3b.Q8_0.gguf](https://huggingface.co/RichardErkhov/ewof_-_koishi-instruct-3b-gguf/blob/main/koishi-instruct-3b.Q8_0.gguf) | Q8_0 | 2.75GB | Original model description: --- datasets: - ewof/koishi-instruct-metharme --- ## Base Model native fine tune of togethercomputer/RedPajama-INCITE-Base-3B-v1 ## Prompting The current model version has been trained on prompts using three different roles, which are denoted by the following tokens: `<|system|>`, `<|user|>` and `<|model|>`. The `<|system|>` prompt can be used to inject out-of-channel information behind the scenes, while the `<|user|>` prompt should be used to indicate user input. The `<|model|>` token should then be used to indicate that the model should generate a response. These tokens can happen multiple times and be chained up to form a conversation history. # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_ewof__koishi-instruct-3b) | Metric | Value | |-----------------------|---------------------------| | Avg. | 33.48 | | ARC (25-shot) | 40.96 | | HellaSwag (10-shot) | 64.54 | | MMLU (5-shot) | 26.58 | | TruthfulQA (0-shot) | 31.65 | | Winogrande (5-shot) | 64.09 | | GSM8K (5-shot) | 1.14 | | DROP (3-shot) | 5.41 |
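Returning to the prompt format described above, a single-turn prompt could look like the sketch below; the system and user text are invented for illustration, and only the `<|system|>`, `<|user|>` and `<|model|>` tokens come from the card.

```
<|system|>You are a concise, helpful assistant.<|user|>Name three everyday uses for a paperclip.<|model|>
```

The model's reply is then generated after the final `<|model|>` token, and further `<|user|>`/`<|model|>` pairs can be appended to carry on the conversation history.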
mgat1/SmolLM2-FT-DPO
mgat1
"2024-12-19T06:08:27Z"
149
1
transformers
[ "transformers", "safetensors", "llama", "text-generation", "generated_from_trainer", "smol-course", "module_1", "trl", "dpo", "conversational", "arxiv:2305.18290", "base_model:mgat1/SmolLM2-FT-MyDataset", "base_model:finetune:mgat1/SmolLM2-FT-MyDataset", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
"2024-12-19T06:06:23Z"
---
base_model: mgat1/SmolLM2-FT-MyDataset
library_name: transformers
model_name: SmolLM2-FT-DPO
tags:
- generated_from_trainer
- smol-course
- module_1
- trl
- dpo
licence: license
---

# Model Card for SmolLM2-FT-DPO

This model is a fine-tuned version of [mgat1/SmolLM2-FT-MyDataset](https://huggingface.co/mgat1/SmolLM2-FT-MyDataset). It has been trained using [TRL](https://github.com/huggingface/trl).

## Quick start

```python
from transformers import pipeline

question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="mgat1/SmolLM2-FT-DPO", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```

## Training procedure

This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290).

### Framework versions

- TRL: 0.13.0
- Transformers: 4.47.0
- Pytorch: 2.5.1
- Datasets: 3.2.0
- Tokenizers: 0.21.0

## Citations

Cite DPO as:

```bibtex
@inproceedings{rafailov2023direct,
    title        = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}},
    author       = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn},
    year         = 2023,
    booktitle    = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023},
    url          = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html},
    editor       = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine},
}
```

Cite TRL as:

```bibtex
@misc{vonwerra2022trl,
    title        = {{TRL: Transformer Reinforcement Learning}},
    author       = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
    year         = 2020,
    journal      = {GitHub repository},
    publisher    = {GitHub},
    howpublished = {\url{https://github.com/huggingface/trl}}
}
```
Student3342/marian-finetuned-kde4-en-to-fr
Student3342
"2023-02-21T06:26:20Z"
5
0
transformers
[ "transformers", "pytorch", "tensorboard", "marian", "text2text-generation", "translation", "generated_from_trainer", "dataset:kde4", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
"2023-02-14T05:31:17Z"
--- license: apache-2.0 tags: - translation - generated_from_trainer datasets: - kde4 metrics: - bleu model-index: - name: marian-finetuned-kde4-en-to-fr results: - task: name: Sequence-to-sequence Language Modeling type: text2text-generation dataset: name: kde4 type: kde4 config: en-fr split: train args: en-fr metrics: - name: Bleu type: bleu value: 52.836492533087124 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # marian-finetuned-kde4-en-to-fr This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-fr](https://huggingface.co/Helsinki-NLP/opus-mt-en-fr) on the kde4 dataset. It achieves the following results on the evaluation set: - Loss: 0.8559 - Bleu: 52.8365 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.26.1 - Pytorch 1.13.1+cu116 - Datasets 2.9.0 - Tokenizers 0.13.2
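As a usage illustration for this card, here is a minimal sketch with the standard `transformers` translation pipeline; the English sentence is invented, chosen to resemble the software-documentation text in KDE4.

```python
from transformers import pipeline

# Load the fine-tuned English-to-French model from the Hub.
translator = pipeline(
    "translation",
    model="Student3342/marian-finetuned-kde4-en-to-fr",
)

# KDE4 is software-documentation text, so a UI-style sentence is a natural fit.
print(translator("Open the File menu and select Save As."))
```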
lnros/Taxi-v3
lnros
"2022-12-15T15:58:32Z"
0
0
null
[ "Taxi-v3-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
"2022-12-15T15:54:34Z"
---
tags:
- Taxi-v3-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Taxi-v3
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: Taxi-v3-4x4-no_slippery
      type: Taxi-v3-4x4-no_slippery
    metrics:
    - type: mean_reward
      value: 1.00 +/- 0.00
      name: mean_reward
      verified: false
---

# **Q-Learning** Agent playing **Taxi-v3**

This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.

## Usage

```python
import gym

# `load_from_hub` is the helper defined in the Hugging Face Deep RL course notebooks.
model = load_from_hub(repo_id="lnros/Taxi-v3", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
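Beyond loading, a short greedy rollout is sketched below. This is only an illustration: it assumes the pickled dict stores the Q-table under a `"qtable"` key (as in the Deep RL course helpers) and that a gym >= 0.26 style API is in use; older gym versions return a bare observation from `reset()` and a 4-tuple from `step()`.

```python
# Greedy evaluation sketch (assumptions noted above).
state, info = env.reset()
done = False
total_reward = 0.0
while not done:
    # Pick the action with the highest Q-value for the current state.
    action = int(model["qtable"][state].argmax())
    state, reward, terminated, truncated, info = env.step(action)
    total_reward += reward
    done = terminated or truncated
print(f"Episode return: {total_reward}")
```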
ananttrivedi/hinglish_english_lora
ananttrivedi
"2025-03-14T17:22:58Z"
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "qwen2", "trl", "en", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
"2025-03-14T17:22:45Z"
---
base_model: unsloth/qwen2.5-0.5b-instruct-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
license: apache-2.0
language:
- en
---

# Uploaded model

- **Developed by:** ananttrivedi
- **License:** apache-2.0
- **Finetuned from model:** unsloth/qwen2.5-0.5b-instruct-bnb-4bit

This Qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.

[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
gsaltintas/olmo_gsm8k-p1120x0.1-3ep-6539229-1
gsaltintas
"2025-04-08T03:36:56Z"
0
0
transformers
[ "transformers", "safetensors", "olmo", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
"2025-04-08T03:35:45Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Minbyul/biomistral-7b-wo-live_qa-sft
Minbyul
"2024-04-12T07:30:50Z"
4
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "alignment-handbook", "trl", "sft", "generated_from_trainer", "conversational", "dataset:HuggingFaceH4/deita-10k-v0-sft", "base_model:BioMistral/BioMistral-7B", "base_model:finetune:BioMistral/BioMistral-7B", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
"2024-04-12T07:16:01Z"
---
license: apache-2.0
base_model: BioMistral/BioMistral-7B
tags:
- alignment-handbook
- trl
- sft
- generated_from_trainer
datasets:
- HuggingFaceH4/deita-10k-v0-sft
model-index:
- name: biomistral-7b-wo-live_qa-sft
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# biomistral-7b-wo-live_qa-sft

This model is a fine-tuned version of [BioMistral/BioMistral-7B](https://huggingface.co/BioMistral/BioMistral-7B) on the HuggingFaceH4/deita-10k-v0-sft dataset. It achieves the following results on the evaluation set:
- Loss: 1.7868

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- total_eval_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3

### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.3886        | 0.92  | 6    | 1.7428          |
| 1.1058        | 2.0   | 13   | 1.7567          |
| 0.8548        | 2.77  | 18   | 1.7868          |

### Framework versions

- Transformers 4.39.0.dev0
- Pytorch 2.1.2
- Datasets 2.14.6
- Tokenizers 0.15.2
ericson333/puffa
ericson333
"2024-12-12T10:48:59Z"
91
0
diffusers
[ "diffusers", "flux", "lora", "replicate", "text-to-image", "en", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
"2024-12-12T10:28:20Z"
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
#   - text: >-
#       prompt
#     output:
#       url: https://...
instance_prompt: puffa
---

# Puffa

<Gallery />

Trained on Replicate using: https://replicate.com/ostris/flux-dev-lora-trainer/train

## Trigger words

You should use `puffa` to trigger the image generation.

## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)

```py
from diffusers import AutoPipelineForText2Image
import torch

pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('ericson333/puffa', weight_name='lora.safetensors')
image = pipeline('your prompt').images[0]
```

For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
alex-miller/nyt-cat
alex-miller
"2024-02-21T14:19:48Z"
6
0
transformers
[ "transformers", "tensorboard", "safetensors", "distilbert", "text-classification", "generated_from_trainer", "dataset:dstefa/New_York_Times_Topics", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
"2024-01-19T20:39:51Z"
--- license: apache-2.0 base_model: distilbert-base-uncased tags: - generated_from_trainer datasets: - dstefa/New_York_Times_Topics metrics: - accuracy model-index: - name: DistilBERT base classify news topics - Devinit results: - task: name: Text Classification type: text-classification dataset: name: New York Times Topics type: dstefa/New_York_Times_Topics metrics: - name: Accuracy type: accuracy value: 0.913482481060606 widget: - text: "Insurers: Costs Would Skyrocket Under House Health Bill." --- # DistilBERT base classify news topics - Devinit This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the New York Times Topics dataset. It achieves the following results on the evaluation set: - Loss: 0.2871 - Accuracy: 0.9135 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:| | 0.386 | 1.0 | 1340 | 0.3275 | 0.8921 | | 0.2833 | 2.0 | 2680 | 0.2840 | 0.9033 | | 0.2411 | 3.0 | 4020 | 0.2694 | 0.9102 | | 0.2069 | 4.0 | 5360 | 0.2665 | 0.9114 | | 0.1796 | 5.0 | 6700 | 0.2657 | 0.9128 | | 0.1636 | 6.0 | 8040 | 0.2674 | 0.9142 | | 0.144 | 7.0 | 9380 | 0.2761 | 0.9129 | | 0.1277 | 8.0 | 10720 | 0.2820 | 0.9125 | | 0.1201 | 9.0 | 12060 | 0.2853 | 0.9136 | | 0.1104 | 10.0 | 13400 | 0.2871 | 0.9135 | ### Framework versions - Transformers 4.36.2 - Pytorch 2.1.0+cu121 - Datasets 2.16.1 - Tokenizers 0.15.0
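As a quick illustration, the widget example from the metadata above can be classified with the standard `transformers` pipeline; this is a minimal sketch, and the topic labels returned come from the model's own `id2label` mapping.

```python
from transformers import pipeline

# Load the fine-tuned NYT topic classifier from the Hub.
classifier = pipeline("text-classification", model="alex-miller/nyt-cat")

# The example headline from the card's widget configuration.
print(classifier("Insurers: Costs Would Skyrocket Under House Health Bill."))
```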
DominikPtaszek231643/images_train_output
DominikPtaszek231643
"2024-02-04T17:32:27Z"
2
0
diffusers
[ "diffusers", "tensorboard", "stable-diffusion-xl", "stable-diffusion-xl-diffusers", "text-to-image", "lora", "template:sd-lora", "base_model:segmind/SSD-1B", "base_model:adapter:segmind/SSD-1B", "license:openrail++", "region:us" ]
text-to-image
"2024-02-04T17:15:14Z"
--- tags: - stable-diffusion-xl - stable-diffusion-xl-diffusers - text-to-image - diffusers - lora - template:sd-lora widget: - text: 'A photorealistic painting of a xyzassets game weapon' output: url: "image_0.png" base_model: segmind/SSD-1B instance_prompt: a photo of xyzassets game weapon license: openrail++ --- # SDXL LoRA DreamBooth - DominikPtaszek231643/images_train_output <Gallery /> ## Model description These are DominikPtaszek231643/images_train_output LoRA adaption weights for segmind/SSD-1B. The weights were trained using [DreamBooth](https://dreambooth.github.io/). LoRA for the text encoder was enabled: False. Special VAE used for training: madebyollin/sdxl-vae-fp16-fix. ## Trigger words You should use a photo of xyzassets game weapon to trigger the image generation. ## Download model Weights for this model are available in Safetensors format. [Download](DominikPtaszek231643/images_train_output/tree/main) them in the Files & versions tab.
oltenu/my_awesome_model_f
oltenu
"2024-05-16T15:37:18Z"
62
0
transformers
[ "transformers", "tf", "distilbert", "text-classification", "generated_from_keras_callback", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
"2024-05-15T14:07:35Z"
--- license: apache-2.0 base_model: distilbert-base-uncased tags: - generated_from_keras_callback model-index: - name: oltenu/my_awesome_model_f results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # oltenu/my_awesome_model_f This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.0648 - Validation Loss: 0.2164 - Train Accuracy: 0.9308 - Epoch: 2 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 7810, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False} - training_precision: float32 ### Training results | Train Loss | Validation Loss | Train Accuracy | Epoch | |:----------:|:---------------:|:--------------:|:-----:| | 0.2495 | 0.1894 | 0.9261 | 0 | | 0.1317 | 0.1933 | 0.9316 | 1 | | 0.0648 | 0.2164 | 0.9308 | 2 | ### Framework versions - Transformers 4.40.2 - TensorFlow 2.15.0 - Datasets 2.19.1 - Tokenizers 0.19.1
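Because this checkpoint was saved with TensorFlow (Keras), a minimal loading sketch might look like the following; the input sentence is invented, and the label names depend on the model's `id2label` configuration.

```python
import tensorflow as tf
from transformers import AutoTokenizer, TFAutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("oltenu/my_awesome_model_f")
model = TFAutoModelForSequenceClassification.from_pretrained("oltenu/my_awesome_model_f")

# Illustrative input text.
inputs = tokenizer("This movie was surprisingly good.", return_tensors="tf")
logits = model(**inputs).logits
pred = int(tf.argmax(logits, axis=-1)[0])
print(model.config.id2label[pred])
```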
Tommert25/robbert_seed33_1311
Tommert25
"2023-11-14T11:14:47Z"
7
0
transformers
[ "transformers", "tensorboard", "safetensors", "roberta", "token-classification", "generated_from_trainer", "base_model:pdelobelle/robbert-v2-dutch-base", "base_model:finetune:pdelobelle/robbert-v2-dutch-base", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
"2023-11-14T10:59:51Z"
--- license: mit base_model: pdelobelle/robbert-v2-dutch-base tags: - generated_from_trainer metrics: - recall - accuracy model-index: - name: robbert_seed33_1311 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # robbert_seed33_1311 This model is a fine-tuned version of [pdelobelle/robbert-v2-dutch-base](https://huggingface.co/pdelobelle/robbert-v2-dutch-base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.3569 - Precisions: 0.8341 - Recall: 0.8159 - F-measure: 0.8240 - Accuracy: 0.9424 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 7.5e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 33 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 14 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precisions | Recall | F-measure | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:----------:|:------:|:---------:|:--------:| | 0.4471 | 1.0 | 236 | 0.2653 | 0.7696 | 0.7076 | 0.7131 | 0.9195 | | 0.2264 | 2.0 | 472 | 0.2367 | 0.8184 | 0.7497 | 0.7777 | 0.9279 | | 0.1443 | 3.0 | 708 | 0.2710 | 0.8069 | 0.7735 | 0.7817 | 0.9315 | | 0.0869 | 4.0 | 944 | 0.2697 | 0.8391 | 0.7998 | 0.8150 | 0.9364 | | 0.0531 | 5.0 | 1180 | 0.2877 | 0.8622 | 0.7952 | 0.8178 | 0.9393 | | 0.0373 | 6.0 | 1416 | 0.3171 | 0.8338 | 0.8120 | 0.8204 | 0.9422 | | 0.0238 | 7.0 | 1652 | 0.3312 | 0.8247 | 0.7921 | 0.8047 | 0.9390 | | 0.0159 | 8.0 | 1888 | 0.3569 | 0.8341 | 0.8159 | 0.8240 | 0.9424 | | 0.0122 | 9.0 | 2124 | 0.3832 | 0.8398 | 0.8127 | 0.8238 | 0.9422 | | 0.0058 | 10.0 | 2360 | 0.4160 | 0.8288 | 0.7975 | 0.8098 | 0.9400 | | 0.0059 | 11.0 | 2596 | 0.4153 | 0.8321 | 0.8012 | 0.8124 | 0.9405 | | 0.0045 | 12.0 | 2832 | 0.4399 | 0.8130 | 0.7909 | 0.7994 | 0.9369 | | 0.0024 | 13.0 | 3068 | 0.4357 | 0.8358 | 0.8026 | 0.8163 | 0.9409 | | 0.0035 | 14.0 | 3304 | 0.4391 | 0.8374 | 0.8036 | 0.8175 | 0.9414 | ### Framework versions - Transformers 4.35.0 - Pytorch 2.1.0+cu118 - Datasets 2.14.6 - Tokenizers 0.14.1
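For a token-classification model like this one, a minimal inference sketch with the `transformers` pipeline is shown below; the Dutch sentence is invented, and the entity label set depends on the (unspecified) dataset the model was fine-tuned on.

```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="Tommert25/robbert_seed33_1311",
    aggregation_strategy="simple",  # merge sub-word pieces into whole entity spans
)

print(ner("Het Rijksmuseum in Amsterdam is op maandag gesloten."))
```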
taufeeque/TokFSM_k1_codebook_model
taufeeque
"2023-10-08T23:43:01Z"
50
0
transformers
[ "transformers", "pytorch", "codebook", "generated_from_trainer", "dataset:toy_graph", "model-index", "endpoints_compatible", "region:us" ]
null
"2023-10-02T17:31:55Z"
--- tags: - generated_from_trainer datasets: - toy_graph metrics: - accuracy model-index: - name: output_toy results: - task: name: Causal Language Modeling type: text-generation dataset: name: toy_graph type: toy_graph metrics: - name: Accuracy type: accuracy value: 0.4525254617525837 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # output_toy This model is a fine-tuned version of [toy/model](https://huggingface.co/toy/model) on the toy_graph dataset. It achieves the following results on the evaluation set: - Loss: 1.2691 - Accuracy: 0.4525 - Transition Accuracy: 0.5634 - First Transition Accuracy: 0.88 - Multicode K: 1 - Dead Code Fraction/layer0: 0.9969 - Mse/layer0: 220380.4595 - Input Norm/layer0: 333.7717 - Output Norm/layer0: 12.9360 - Dead Code Fraction/layer1: 0.9535 - Mse/layer1: 132.7843 - Input Norm/layer1: 6.5450 - Output Norm/layer1: 13.1449 - Dead Code Fraction/layer2: 0.9349 - Mse/layer2: 365.9396 - Input Norm/layer2: 6.1370 - Output Norm/layer2: 18.3248 - Dead Code Fraction/layer3: 0.9819 - Mse/layer3: 415.9804 - Input Norm/layer3: 7.4097 - Output Norm/layer3: 18.4665 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.001 - train_batch_size: 1024 - eval_batch_size: 512 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - training_steps: 20000 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | Transition Accuracy | First Transition Accuracy | Multicode K | Dead Code Fraction/layer0 | Mse/layer0 | Input Norm/layer0 | Output Norm/layer0 | Dead Code Fraction/layer1 | Mse/layer1 | Input Norm/layer1 | Output Norm/layer1 | Dead Code Fraction/layer2 | Mse/layer2 | Input Norm/layer2 | Output Norm/layer2 | Dead Code Fraction/layer3 | Mse/layer3 | Input Norm/layer3 | Output Norm/layer3 | |:-------------:|:-----:|:-----:|:---------------:|:--------:|:-------------------:|:-------------------------:|:-----------:|:-------------------------:|:----------:|:-----------------:|:------------------:|:-------------------------:|:----------:|:-----------------:|:------------------:|:-------------------------:|:----------:|:-----------------:|:------------------:|:-------------------------:|:----------:|:-----------------:|:------------------:| | 2.2465 | 0.03 | 500 | 1.8386 | 0.3565 | 0.3555 | 0.31 | 1 | 1.0 | 0.0 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 | 0.0 | | 1.5981 | 0.05 | 1000 | 1.4652 | 0.4204 | 0.5015 | 0.58 | 1 | 1.0 | 0.0 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 | 0.0 | | 1.3928 | 0.07 | 1500 | 1.3541 | 0.4378 | 0.555 | 0.79 | 1 | 1.0 | 0.0 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 | 0.0 | | 1.3405 | 0.1 | 2000 | 1.3264 | 0.4427 | 0.5756 | 0.82 | 1 | 1.0 | 0.0 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 | 0.0 | | 1.3189 | 0.12 | 2500 | 1.3187 | 0.4446 | 0.5576 | 0.86 | 1 | 1.0 | 0.0 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 | 0.0 | | 1.308 | 0.15 | 3000 | 1.3064 | 0.4468 | 0.5573 | 0.82 | 1 | 1.0 | 0.0 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 | 
0.0 | 1.0 | 0.0 | 0.0 | 0.0 | | 1.3009 | 0.17 | 3500 | 1.2963 | 0.4493 | 0.5763 | 0.87 | 1 | 1.0 | 0.0 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 | 0.0 | | 1.2965 | 0.2 | 4000 | 1.2922 | 0.4494 | 0.5677 | 0.9 | 1 | 1.0 | 0.0 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 | 0.0 | | 1.2919 | 0.23 | 4500 | 1.2880 | 0.4499 | 0.5821 | 0.91 | 1 | 1.0 | 0.0 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 | 0.0 | | 1.2889 | 0.25 | 5000 | 1.2856 | 0.4501 | 0.56 | 0.9 | 1 | 1.0 | 0.0 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 | 0.0 | | 1.2855 | 0.28 | 5500 | 1.2816 | 0.4503 | 0.6016 | 0.9 | 1 | 1.0 | 0.0 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 | 0.0 | | 1.2828 | 0.3 | 6000 | 1.2844 | 0.4502 | 0.5734 | 0.87 | 1 | 1.0 | 0.0 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 | 0.0 | | 1.2805 | 0.33 | 6500 | 1.2777 | 0.4516 | 0.6084 | 0.95 | 1 | 1.0 | 0.0 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 | 0.0 | | 1.2793 | 0.35 | 7000 | 1.2796 | 0.4511 | 0.5681 | 0.93 | 1 | 1.0 | 0.0 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 | 0.0 | | 1.2785 | 0.38 | 7500 | 1.2748 | 0.4519 | 0.5919 | 0.95 | 1 | 1.0 | 0.0 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 | 0.0 | | 1.2764 | 0.4 | 8000 | 1.2767 | 0.4518 | 0.5760 | 0.9 | 1 | 1.0 | 0.0 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 | 0.0 | | 1.2763 | 0.42 | 8500 | 1.2801 | 0.4507 | 0.5827 | 0.94 | 1 | 1.0 | 0.0 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 | 0.0 | | 1.2755 | 0.45 | 9000 | 1.2755 | 0.4516 | 0.5765 | 0.9 | 1 | 1.0 | 0.0 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 | 0.0 | | 1.2746 | 0.47 | 9500 | 1.2736 | 0.4523 | 0.5865 | 0.9 | 1 | 1.0 | 0.0 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 | 0.0 | | 1.2734 | 0.5 | 10000 | 1.2740 | 0.4519 | 0.5779 | 0.91 | 1 | 1.0 | 0.0 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 | 0.0 | | 1.2732 | 0.53 | 10500 | 1.2744 | 0.4516 | 0.5879 | 0.89 | 1 | 1.0 | 0.0 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 | 0.0 | | 1.2723 | 0.55 | 11000 | 1.2690 | 0.4525 | 0.5811 | 0.89 | 1 | 1.0 | 0.0 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 | 0.0 | | 1.2712 | 0.57 | 11500 | 1.2705 | 0.4526 | 0.5779 | 0.93 | 1 | 1.0 | 0.0 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 | 0.0 | | 1.2716 | 0.6 | 12000 | 1.2701 | 0.4527 | 0.5760 | 0.89 | 1 | 1.0 | 0.0 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 | 0.0 | | 1.2708 | 0.62 | 12500 | 1.2716 | 0.4522 | 0.5485 | 0.95 | 1 | 1.0 | 0.0 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 | 0.0 | | 1.2705 | 0.65 | 13000 | 1.2676 | 0.4529 | 0.5734 | 0.93 | 1 | 1.0 | 0.0 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 | 0.0 | | 1.2696 | 0.68 | 13500 | 1.2717 | 0.4519 | 0.5994 | 0.91 | 1 | 1.0 | 0.0 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 | 0.0 | | 1.2687 | 0.7 | 14000 | 1.2687 | 0.4524 | 0.5756 | 0.9 | 1 | 1.0 | 0.0 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 | 0.0 | | 1.2685 | 0.72 | 14500 | 1.2709 
| 0.4521 | 0.6127 | 0.89 | 1 | 1.0 | 0.0 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 | 0.0 | | 1.2685 | 0.75 | 15000 | 1.2706 | 0.4519 | 0.5873 | 0.91 | 1 | 1.0 | 0.0 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 | 0.0 | | 1.2675 | 0.78 | 15500 | 1.2691 | 0.4527 | 0.6365 | 0.96 | 1 | 1.0 | 0.0 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 | 0.0 | | 1.2677 | 0.8 | 16000 | 1.2686 | 0.4526 | 0.5589 | 0.93 | 1 | 1.0 | 0.0 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 | 0.0 | | 1.2676 | 0.82 | 16500 | 1.2639 | 0.4529 | 0.5940 | 0.89 | 1 | 1.0 | 0.0 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 | 0.0 | | 1.2662 | 0.85 | 17000 | 1.2655 | 0.4530 | 0.5955 | 0.94 | 1 | 1.0 | 0.0 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 | 0.0 | | 1.2666 | 0.88 | 17500 | 1.2636 | 0.4526 | 0.6013 | 0.96 | 1 | 1.0 | 0.0 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 | 0.0 | | 1.2664 | 0.9 | 18000 | 1.2681 | 0.4526 | 0.6034 | 0.96 | 1 | 1.0 | 0.0 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 | 0.0 | | 1.266 | 0.93 | 18500 | 1.2624 | 0.4527 | 0.5839 | 0.88 | 1 | 1.0 | 0.0 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 | 0.0 | | 1.2653 | 0.95 | 19000 | 1.2688 | 0.4519 | 0.5837 | 0.92 | 1 | 1.0 | 0.0 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 | 0.0 | | 1.2654 | 0.97 | 19500 | 1.2619 | 0.4534 | 0.5973 | 0.92 | 1 | 1.0 | 0.0 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 | 0.0 | | 1.2649 | 1.0 | 20000 | 1.2647 | 0.4525 | 0.59 | 0.93 | 1 | 1.0 | 0.0 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 | 0.0 | ### Framework versions - Transformers 4.28.1 - Pytorch 2.0.1+cu117 - Datasets 2.12.0 - Tokenizers 0.13.3
abcorrea/llama-3.2-1b-wiki-ft-v1
abcorrea
"2025-03-26T00:31:45Z"
0
0
transformers
[ "transformers", "pytorch", "safetensors", "llama", "text-generation", "unsloth", "trl", "sft", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
"2025-03-25T23:05:16Z"
--- library_name: transformers tags: - unsloth - trl - sft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
bane5631/a6a3ab65-80f5-4c65-8c7d-7b6e2633bb5e
bane5631
"2025-02-02T08:27:02Z"
8
0
peft
[ "peft", "safetensors", "gemma2", "axolotl", "generated_from_trainer", "base_model:unsloth/gemma-2-9b", "base_model:adapter:unsloth/gemma-2-9b", "license:gemma", "8-bit", "bitsandbytes", "region:us" ]
null
"2025-02-02T07:30:49Z"
--- library_name: peft license: gemma base_model: unsloth/gemma-2-9b tags: - axolotl - generated_from_trainer model-index: - name: a6a3ab65-80f5-4c65-8c7d-7b6e2633bb5e results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: unsloth/gemma-2-9b bf16: auto chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - 5e4394861941fc38_train_data.json ds_type: json format: custom path: /workspace/input_data/5e4394861941fc38_train_data.json type: field_input: chunk field_instruction: question field_output: answer format: '{instruction} {input}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null device_map: auto do_eval: true early_stopping_patience: null eval_batch_size: 4 eval_max_new_tokens: 128 eval_steps: null eval_table_size: null evals_per_epoch: null flash_attention: true fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 4 gradient_checkpointing: true group_by_length: true hub_model_id: bane5631/a6a3ab65-80f5-4c65-8c7d-7b6e2633bb5e hub_repo: null hub_strategy: end hub_token: null learning_rate: 0.0001 load_in_4bit: true load_in_8bit: true local_rank: null logging_steps: 1 lora_alpha: 16 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 8 lora_target_linear: true lr_scheduler: cosine max_grad_norm: 1.0 max_memory: 0: 75GB max_steps: 200 micro_batch_size: 4 mlflow_experiment_name: /tmp/5e4394861941fc38_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false save_steps: null saves_per_epoch: null sequence_len: 1024 strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: a982ae04-88ab-49a0-bcdd-13ee06d04a46 wandb_project: Gradients-On-Demand wandb_run: your_name wandb_runid: a982ae04-88ab-49a0-bcdd-13ee06d04a46 warmup_steps: 5 weight_decay: 0.0 xformers_attention: null ``` </details><br> # a6a3ab65-80f5-4c65-8c7d-7b6e2633bb5e This model is a fine-tuned version of [unsloth/gemma-2-9b](https://huggingface.co/unsloth/gemma-2-9b) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 0.4285 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 16 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 5 - training_steps: 200 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 0.4536 | 0.6354 | 200 | 0.4285 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
Philoooos/Tenwords
Philoooos
"2025-03-11T08:03:49Z"
0
0
peft
[ "peft", "arxiv:1910.09700", "base_model:Qwen/Qwen2.5-Coder-7B-Instruct", "base_model:adapter:Qwen/Qwen2.5-Coder-7B-Instruct", "region:us" ]
null
"2025-03-11T05:46:33Z"
--- base_model: Qwen/Qwen2.5-Coder-7B-Instruct library_name: peft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.14.0
TitleOS/Seahorse-350m
TitleOS
"2023-07-09T22:15:51Z"
152
1
transformers
[ "transformers", "pytorch", "opt", "text-generation", "en", "dataset:psmathur/alpaca_orca", "arxiv:2205.01068", "license:other", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
"2023-07-09T21:04:56Z"
---
language: en
license: other
commercial: 'no'
inference: true
datasets:
- psmathur/alpaca_orca
---

# Seahorse-350m

## Model description

This is the first generation of an OPT-based model, fine-tuned on the Orca dataset formatted in the Alpaca style.

## Training data

- psmathur/alpaca_orca

### How to use

You can use this model directly with a pipeline for text generation. This example generates a different sequence each time it's run:

```py
>>> from transformers import pipeline
>>> generator = pipeline('text-generation', model='TitleOS/Seahorse-350m')
>>> generator("Tell me about Alpacas.", do_sample=True, min_length=50)
```

## Limitations and biases

Based on known problems with NLP technology, potential relevant factors include bias (gender, profession, race and religion).

### License

OPT-350M is licensed under the OPT-175B license, Copyright (c) Meta Platforms, Inc. All Rights Reserved.

### BibTeX entry and citation info

```
@misc{zhang2022opt,
      title={OPT: Open Pre-trained Transformer Language Models},
      author={Susan Zhang and Stephen Roller and Naman Goyal and Mikel Artetxe and Moya Chen and Shuohui Chen and Christopher Dewan and Mona Diab and Xian Li and Xi Victoria Lin and Todor Mihaylov and Myle Ott and Sam Shleifer and Kurt Shuster and Daniel Simig and Punit Singh Koura and Anjali Sridhar and Tianlu Wang and Luke Zettlemoyer},
      year={2022},
      eprint={2205.01068},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```
Aeala/Enterredaas-65b-4bit-128g
Aeala
"2023-07-03T09:10:08Z"
6
1
transformers
[ "transformers", "llama", "text-generation", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
"2023-07-03T07:10:17Z"
4-bit GPTQ quantization (group size 128) of [Enterredaas-65b](https://huggingface.co/Aeala/Enterredaas-65b-QLoRA)

**Important Note**: This model was trained in the *Alpaca* format, so prompts should follow this template (a minimal prompt-building sketch follows the format block):

```
### Instruction:
<system prompt> (without the <>; this works like telling the AI what it is/its purpose, i.e. like the ChatGPT API's system prompt)

### Input:
<prompt> (without the <>)

### Response:
```
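For illustration, a minimal Python helper that assembles prompts in this format; the function name and example strings are illustrative, not part of the release:

```python
def build_alpaca_prompt(system: str, user_input: str) -> str:
    """Assemble a prompt in the Alpaca format this model was trained on.

    `system` plays the role of a ChatGPT-style system prompt and
    `user_input` is the actual request; the model's reply is expected
    after the final "### Response:" header.
    """
    return (
        f"### Instruction:\n{system}\n\n"
        f"### Input:\n{user_input}\n\n"
        "### Response:\n"
    )

# Example usage:
print(build_alpaca_prompt(
    "You are a concise, helpful assistant.",
    "Summarize the plot of Hamlet in two sentences.",
))
```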
thangvip/bert-30M-uncased-classification-fqa-100e
thangvip
"2024-01-26T08:31:26Z"
125
0
transformers
[ "transformers", "tensorboard", "safetensors", "bert", "text-classification", "generated_from_trainer", "base_model:vietgpt/bert-30M-uncased", "base_model:finetune:vietgpt/bert-30M-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
"2024-01-26T08:19:26Z"
--- license: apache-2.0 base_model: vietgpt/bert-30M-uncased tags: - generated_from_trainer metrics: - accuracy model-index: - name: bert-30M-uncased-classification-fqa-100e results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-30M-uncased-classification-fqa-100e This model is a fine-tuned version of [vietgpt/bert-30M-uncased](https://huggingface.co/vietgpt/bert-30M-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.0654 - Accuracy: 0.9897 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 100 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:| | No log | 1.0 | 110 | 5.2087 | 0.0051 | | No log | 2.0 | 220 | 5.2019 | 0.0 | | No log | 3.0 | 330 | 5.1778 | 0.0 | | No log | 4.0 | 440 | 5.0728 | 0.0154 | | 5.1354 | 5.0 | 550 | 4.7829 | 0.0974 | | 5.1354 | 6.0 | 660 | 4.4895 | 0.1385 | | 5.1354 | 7.0 | 770 | 4.1981 | 0.2923 | | 5.1354 | 8.0 | 880 | 3.9328 | 0.4359 | | 5.1354 | 9.0 | 990 | 3.6937 | 0.5641 | | 4.2477 | 10.0 | 1100 | 3.4751 | 0.6308 | | 4.2477 | 11.0 | 1210 | 3.2537 | 0.7026 | | 4.2477 | 12.0 | 1320 | 3.0355 | 0.7590 | | 4.2477 | 13.0 | 1430 | 2.8367 | 0.7846 | | 3.1067 | 14.0 | 1540 | 2.6431 | 0.8103 | | 3.1067 | 15.0 | 1650 | 2.4553 | 0.8462 | | 3.1067 | 16.0 | 1760 | 2.2817 | 0.8718 | | 3.1067 | 17.0 | 1870 | 2.1120 | 0.8821 | | 3.1067 | 18.0 | 1980 | 1.9496 | 0.8974 | | 2.1912 | 19.0 | 2090 | 1.7956 | 0.9128 | | 2.1912 | 20.0 | 2200 | 1.6507 | 0.9179 | | 2.1912 | 21.0 | 2310 | 1.5192 | 0.9282 | | 2.1912 | 22.0 | 2420 | 1.3942 | 0.9333 | | 1.48 | 23.0 | 2530 | 1.2758 | 0.9436 | | 1.48 | 24.0 | 2640 | 1.1671 | 0.9538 | | 1.48 | 25.0 | 2750 | 1.0670 | 0.9590 | | 1.48 | 26.0 | 2860 | 0.9741 | 0.9590 | | 1.48 | 27.0 | 2970 | 0.8877 | 0.9590 | | 0.9627 | 28.0 | 3080 | 0.8078 | 0.9641 | | 0.9627 | 29.0 | 3190 | 0.7388 | 0.9641 | | 0.9627 | 30.0 | 3300 | 0.6762 | 0.9692 | | 0.9627 | 31.0 | 3410 | 0.6123 | 0.9744 | | 0.6161 | 32.0 | 3520 | 0.5591 | 0.9744 | | 0.6161 | 33.0 | 3630 | 0.5129 | 0.9744 | | 0.6161 | 34.0 | 3740 | 0.4734 | 0.9744 | | 0.6161 | 35.0 | 3850 | 0.4341 | 0.9692 | | 0.6161 | 36.0 | 3960 | 0.3932 | 0.9846 | | 0.3834 | 37.0 | 4070 | 0.3645 | 0.9795 | | 0.3834 | 38.0 | 4180 | 0.3398 | 0.9744 | | 0.3834 | 39.0 | 4290 | 0.3128 | 0.9846 | | 0.3834 | 40.0 | 4400 | 0.2884 | 0.9795 | | 0.2414 | 41.0 | 4510 | 0.2659 | 0.9846 | | 0.2414 | 42.0 | 4620 | 0.2488 | 0.9846 | | 0.2414 | 43.0 | 4730 | 0.2286 | 0.9897 | | 0.2414 | 44.0 | 4840 | 0.2145 | 0.9897 | | 0.2414 | 45.0 | 4950 | 0.2014 | 0.9949 | | 0.1522 | 46.0 | 5060 | 0.1913 | 0.9897 | | 0.1522 | 47.0 | 5170 | 0.1808 | 0.9897 | | 0.1522 | 48.0 | 5280 | 0.1697 | 0.9897 | | 0.1522 | 49.0 | 5390 | 0.1604 | 0.9846 | | 0.0981 | 50.0 | 5500 | 0.1551 | 0.9846 | | 0.0981 | 51.0 | 5610 | 0.1448 | 0.9897 | | 0.0981 | 52.0 | 5720 | 0.1390 | 0.9949 | | 0.0981 | 53.0 | 5830 | 0.1348 | 0.9846 | | 0.0981 | 54.0 | 5940 | 0.1268 | 0.9897 | | 0.0679 | 55.0 | 6050 | 
0.1206 | 0.9897 | | 0.0679 | 56.0 | 6160 | 0.1178 | 0.9897 | | 0.0679 | 57.0 | 6270 | 0.1162 | 0.9897 | | 0.0679 | 58.0 | 6380 | 0.1126 | 0.9897 | | 0.0679 | 59.0 | 6490 | 0.1086 | 0.9897 | | 0.0504 | 60.0 | 6600 | 0.1053 | 0.9897 | | 0.0504 | 61.0 | 6710 | 0.1020 | 0.9897 | | 0.0504 | 62.0 | 6820 | 0.0995 | 0.9897 | | 0.0504 | 63.0 | 6930 | 0.0973 | 0.9897 | | 0.0397 | 64.0 | 7040 | 0.0968 | 0.9897 | | 0.0397 | 65.0 | 7150 | 0.0921 | 0.9949 | | 0.0397 | 66.0 | 7260 | 0.0900 | 0.9897 | | 0.0397 | 67.0 | 7370 | 0.0886 | 0.9949 | | 0.0397 | 68.0 | 7480 | 0.0865 | 0.9949 | | 0.0321 | 69.0 | 7590 | 0.0854 | 0.9949 | | 0.0321 | 70.0 | 7700 | 0.0849 | 0.9949 | | 0.0321 | 71.0 | 7810 | 0.0835 | 0.9897 | | 0.0321 | 72.0 | 7920 | 0.0822 | 0.9949 | | 0.0269 | 73.0 | 8030 | 0.0805 | 0.9897 | | 0.0269 | 74.0 | 8140 | 0.0789 | 0.9949 | | 0.0269 | 75.0 | 8250 | 0.0781 | 0.9897 | | 0.0269 | 76.0 | 8360 | 0.0773 | 0.9897 | | 0.0269 | 77.0 | 8470 | 0.0760 | 0.9897 | | 0.0229 | 78.0 | 8580 | 0.0749 | 0.9949 | | 0.0229 | 79.0 | 8690 | 0.0745 | 0.9897 | | 0.0229 | 80.0 | 8800 | 0.0736 | 0.9897 | | 0.0229 | 81.0 | 8910 | 0.0728 | 0.9949 | | 0.0202 | 82.0 | 9020 | 0.0717 | 0.9949 | | 0.0202 | 83.0 | 9130 | 0.0711 | 0.9949 | | 0.0202 | 84.0 | 9240 | 0.0711 | 0.9949 | | 0.0202 | 85.0 | 9350 | 0.0710 | 0.9897 | | 0.0202 | 86.0 | 9460 | 0.0704 | 0.9897 | | 0.018 | 87.0 | 9570 | 0.0687 | 0.9897 | | 0.018 | 88.0 | 9680 | 0.0685 | 0.9897 | | 0.018 | 89.0 | 9790 | 0.0677 | 0.9897 | | 0.018 | 90.0 | 9900 | 0.0675 | 0.9897 | | 0.0165 | 91.0 | 10010 | 0.0669 | 0.9897 | | 0.0165 | 92.0 | 10120 | 0.0673 | 0.9897 | | 0.0165 | 93.0 | 10230 | 0.0664 | 0.9897 | | 0.0165 | 94.0 | 10340 | 0.0658 | 0.9897 | | 0.0165 | 95.0 | 10450 | 0.0657 | 0.9897 | | 0.0156 | 96.0 | 10560 | 0.0657 | 0.9897 | | 0.0156 | 97.0 | 10670 | 0.0656 | 0.9897 | | 0.0156 | 98.0 | 10780 | 0.0655 | 0.9897 | | 0.0156 | 99.0 | 10890 | 0.0654 | 0.9897 | | 0.0149 | 100.0 | 11000 | 0.0654 | 0.9897 | ### Framework versions - Transformers 4.37.1 - Pytorch 2.1.0+cu121 - Datasets 2.16.1 - Tokenizers 0.15.1
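Since the card omits a usage section, here is a hedged inference sketch. The label set depends on the undocumented fine-tuning dataset, so inspect `model.config.id2label` before relying on the outputs; the query string below is a hypothetical example:

```python
from transformers import pipeline

# Text-classification pipeline over the fine-tuned checkpoint.
classifier = pipeline(
    "text-classification",
    model="thangvip/bert-30M-uncased-classification-fqa-100e",
)

# Hypothetical Vietnamese FAQ-style query; replace with your own text.
print(classifier("Làm thế nào để đăng ký tài khoản?"))
```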
nbel806/bertModel
nbel806
"2024-04-30T01:37:39Z"
5
0
transformers
[ "transformers", "safetensors", "distilbert", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
"2024-03-21T01:42:46Z"
--- # For reference on model card metadata, see the spec: https://github.com/huggingface/hub-docs/blob/main/modelcard.md?plain=1 # Doc / guide: https://huggingface.co/docs/hub/model-cards {} --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1). ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. 
--> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
ashishpatel26/mistral-7b-mj-finetuned
ashishpatel26
"2023-09-30T14:48:04Z"
0
1
null
[ "tensorboard", "region:us" ]
null
"2023-09-30T14:15:12Z"
# Mistral-7B-Instruct-v0.1 Model Trained Using AutoTrain

# Model Card for Mistral-7B-Instruct-v0.1

The Mistral-7B-Instruct-v0.1 Large Language Model (LLM) is an instruct fine-tuned version of the [Mistral-7B-v0.1](https://huggingface.co/ashishpatel26/mistral-7b-mj-finetuned) generative text model, fine-tuned using a variety of publicly available conversation datasets.

For full details of this model please read our [release blog post](https://mistral.ai/news/announcing-mistral-7b/)

## Instruction format

In order to leverage instruction fine-tuning, your prompt should be surrounded by `[INST]` and `[/INST]` tokens. The very first instruction should begin with a begin-of-sentence id; subsequent instructions should not. The assistant generation will be ended by the end-of-sentence token id.

E.g.
```
text = "<s>[INST] What is your favourite condiment? [/INST]"
"Well, I'm quite partial to a good squeeze of fresh lemon juice. It adds just the right amount of zesty flavour to whatever I'm cooking up in the kitchen!</s> "
"[INST] Do you have mayonnaise recipes? [/INST]"
```

This format is available as a [chat template](https://huggingface.co/docs/transformers/main/chat_templating) via the `apply_chat_template()` method:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

device = "cuda"  # the device to load the model onto

model = AutoModelForCausalLM.from_pretrained("ashishpatel26/mistral-7b-mj-finetuned")
tokenizer = AutoTokenizer.from_pretrained("ashishpatel26/mistral-7b-mj-finetuned")

messages = [
    {"role": "user", "content": "What is your favourite condiment?"},
    {"role": "assistant", "content": "Well, I'm quite partial to a good squeeze of fresh lemon juice. It adds just the right amount of zesty flavour to whatever I'm cooking up in the kitchen!"},
    {"role": "user", "content": "Do you have mayonnaise recipes?"}
]

encodeds = tokenizer.apply_chat_template(messages, return_tensors="pt")

model_inputs = encodeds.to(device)
model.to(device)

generated_ids = model.generate(model_inputs, max_new_tokens=1000, do_sample=True)
decoded = tokenizer.batch_decode(generated_ids)
print(decoded[0])
```

## Model Architecture

This instruction model is based on Mistral-7B-v0.1, a transformer model with the following architecture choices:
- Grouped-Query Attention
- Sliding-Window Attention
- Byte-fallback BPE tokenizer

## Troubleshooting

Installing `transformers` from source should resolve compatibility issues: `pip install git+https://github.com/huggingface/transformers`. This should not be required after transformers v4.33.4.

## Limitations

The Mistral 7B Instruct model is a quick demonstration that the base model can be easily fine-tuned to achieve compelling performance. It does not have any moderation mechanisms. We're looking forward to engaging with the community on ways to make the model finely respect guardrails, allowing for deployment in environments requiring moderated outputs.

## model-card-metadata

---
tags:
- autotrain
- text-generation
- finetuned
widget:
- text: "I love AutoTrain because "
license: apache-2.0
pipeline_tag: text-generation
---
Weexe/handdrawn
Weexe
"2025-02-05T02:48:26Z"
206
1
diffusers
[ "diffusers", "text-to-image", "lora", "template:diffusion-lora", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "region:us" ]
text-to-image
"2025-02-05T02:48:10Z"
---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: >-
    shouhui_style, this is a style illustration full of energy and color, An
    elderly woman sits at a humble corner store along stone steps, traditional
    wooden storefront, fluorescent lights casting warm glow. Metal shelves
    display drinks, cigarettes. Young person rests on plastic stool, offering
    crumpled bill. Chilled water bottle glistens with condensation. Evening
    light filters through window. Intimate close-up shot, focus on gentle
    interaction.
  output:
    url: images/Q7HXFZCXTSE30BBCH2476PMK90.jpeg
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: shouhui_style, this is a style illustration full of energy and color
---

# handdrawn

<Gallery />

## Trigger words

You should use `shouhui_style` to trigger the image generation.

You should use `this is a style illustration full of energy and color` to trigger the image generation.

## Download model

Weights for this model are available in Safetensors format.

[Download](/Weexe/handdrawn/tree/main) them in the Files & versions tab.
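A minimal sketch of loading this LoRA with diffusers on top of the FLUX.1-dev base model; the generation settings and the assumption that `load_lora_weights` finds the safetensors file in the repo root are not documented by this repo:

```python
import torch
from diffusers import FluxPipeline

# Load the FLUX.1-dev base model and apply the handdrawn LoRA on top of it.
pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
)
pipe.load_lora_weights("Weexe/handdrawn")  # assumes safetensors in repo root
pipe.to("cuda")

# Both trigger phrases from the card are included in the prompt.
prompt = (
    "shouhui_style, this is a style illustration full of energy and color, "
    "an elderly woman at a corner store at dusk"
)
image = pipe(prompt, num_inference_steps=28, guidance_scale=3.5).images[0]
image.save("handdrawn.png")
```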
jimons/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-fanged_arctic_prawn
jimons
"2025-04-01T09:27:38Z"
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am fanged arctic prawn", "trl", "conversational", "arxiv:2402.03300", "base_model:Gensyn/Qwen2.5-0.5B-Instruct", "base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
"2025-04-01T04:50:39Z"
--- base_model: Gensyn/Qwen2.5-0.5B-Instruct library_name: transformers model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-fanged_arctic_prawn tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am fanged arctic prawn - trl licence: license --- # Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-fanged_arctic_prawn This model is a fine-tuned version of [Gensyn/Qwen2.5-0.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-0.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="jimons/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-fanged_arctic_prawn", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.15.2 - Transformers: 4.50.3 - Pytorch: 2.6.0 - Datasets: 3.5.0 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
0xtinuviel/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-insectivorous_tough_nightingale
0xtinuviel
"2025-04-02T21:21:37Z"
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am insectivorous tough nightingale", "trl", "conversational", "arxiv:2402.03300", "base_model:Gensyn/Qwen2.5-0.5B-Instruct", "base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
"2025-04-02T17:48:09Z"
--- base_model: Gensyn/Qwen2.5-0.5B-Instruct library_name: transformers model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-insectivorous_tough_nightingale tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am insectivorous tough nightingale - trl licence: license --- # Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-insectivorous_tough_nightingale This model is a fine-tuned version of [Gensyn/Qwen2.5-0.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-0.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="0xtinuviel/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-insectivorous_tough_nightingale", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.15.2 - Transformers: 4.50.3 - Pytorch: 2.5.1 - Datasets: 3.5.0 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
leixa/39a12893-0540-4415-afa0-451f28fa61bc
leixa
"2025-02-23T01:01:09Z"
0
0
peft
[ "peft", "safetensors", "qwen2", "axolotl", "generated_from_trainer", "base_model:Qwen/Qwen2.5-Math-7B-Instruct", "base_model:adapter:Qwen/Qwen2.5-Math-7B-Instruct", "license:apache-2.0", "region:us" ]
null
"2025-02-22T22:13:31Z"
--- library_name: peft license: apache-2.0 base_model: Qwen/Qwen2.5-Math-7B-Instruct tags: - axolotl - generated_from_trainer model-index: - name: 39a12893-0540-4415-afa0-451f28fa61bc results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: Qwen/Qwen2.5-Math-7B-Instruct bf16: auto chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - 41dcc9ee63bf1431_train_data.json ds_type: json format: custom path: /workspace/input_data/41dcc9ee63bf1431_train_data.json type: field_instruction: messages field_output: text format: '{instruction}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' ddp_timeout: 1800 debug: null deepspeed: null device_map: auto do_eval: true early_stopping_patience: 3 eval_batch_size: 4 eval_max_new_tokens: 128 eval_steps: 150 eval_table_size: null evals_per_epoch: null flash_attention: true fp16: false fsdp: null fsdp_config: null gradient_accumulation_steps: 4 gradient_checkpointing: true gradient_checkpointing_kwargs: use_reentrant: true group_by_length: true hub_model_id: leixa/39a12893-0540-4415-afa0-451f28fa61bc hub_repo: null hub_strategy: checkpoint hub_token: null learning_rate: 0.0001 load_in_4bit: false load_in_8bit: false local_rank: 0 logging_steps: 10 lora_alpha: 32 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 16 lora_target_linear: true lr_scheduler: constant max_grad_norm: 1.0 max_memory: 0: 75GB max_steps: 1500 micro_batch_size: 4 mlflow_experiment_name: /tmp/41dcc9ee63bf1431_train_data.json model_type: AutoModelForCausalLM num_epochs: 10 optim_args: adam_beta1: 0.9 adam_beta2: 0.999 adam_epsilon: 1e-08 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true relora_prune_ratio: 0.9 resume_from_checkpoint: null s2_attention: null sample_packing: false save_steps: 150 saves_per_epoch: null sequence_len: 512 strict: false tf32: true tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: acopia-grant wandb_mode: online wandb_name: a947e77f-d821-4792-9f75-81c1c32b898f wandb_project: Gradients-On-112 wandb_run: your_name wandb_runid: a947e77f-d821-4792-9f75-81c1c32b898f warmup_steps: 50 weight_decay: 0.0 xformers_attention: null ``` </details><br> # 39a12893-0540-4415-afa0-451f28fa61bc This model is a fine-tuned version of [Qwen/Qwen2.5-Math-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-Math-7B-Instruct) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 0.0501 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 16 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=adam_beta1=0.9,adam_beta2=0.999,adam_epsilon=1e-08 - lr_scheduler_type: constant - lr_scheduler_warmup_steps: 50 - training_steps: 1500 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | No log | 0.0001 | 1 | 0.9061 | | 0.0017 | 0.0224 | 150 | 0.0771 | | 0.0012 | 0.0449 | 300 | 0.0626 | | 0.0005 | 0.0673 | 450 | 0.0596 | | 0.0008 | 0.0897 | 600 | 0.0566 | | 0.0004 | 0.1121 | 750 | 0.0561 | | 0.0003 | 0.1346 | 900 | 0.0542 | | 0.0002 | 0.1570 | 1050 | 0.0526 | | 0.0004 | 0.1794 | 1200 | 0.0517 | | 0.0002 | 0.2018 | 1350 | 0.0517 | | 0.0002 | 0.2243 | 1500 | 0.0501 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
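This repository contains only the LoRA adapter, so a minimal (assumed, not officially documented) way to run it is to load the base model and attach the adapter with PEFT; the math prompt is a hypothetical example:

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "Qwen/Qwen2.5-Math-7B-Instruct"
adapter_id = "leixa/39a12893-0540-4415-afa0-451f28fa61bc"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.bfloat16, device_map="auto"
)
model = PeftModel.from_pretrained(base, adapter_id)  # attach the LoRA adapter

inputs = tokenizer("Solve: 2x + 3 = 11", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0]))
```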
MayBashendy/ArabicNewSplits5_FineTuningAraBERT_run1_AugV5_k9_task2_organization
MayBashendy
"2024-12-16T10:01:49Z"
164
0
transformers
[ "transformers", "safetensors", "bert", "text-classification", "generated_from_trainer", "base_model:aubmindlab/bert-base-arabertv02", "base_model:finetune:aubmindlab/bert-base-arabertv02", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
"2024-12-16T09:49:29Z"
--- library_name: transformers base_model: aubmindlab/bert-base-arabertv02 tags: - generated_from_trainer model-index: - name: ArabicNewSplits5_FineTuningAraBERT_run1_AugV5_k9_task2_organization results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # ArabicNewSplits5_FineTuningAraBERT_run1_AugV5_k9_task2_organization This model is a fine-tuned version of [aubmindlab/bert-base-arabertv02](https://huggingface.co/aubmindlab/bert-base-arabertv02) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.7367 - Qwk: 0.5296 - Mse: 0.7367 - Rmse: 0.8583 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Qwk | Mse | Rmse | |:-------------:|:------:|:----:|:---------------:|:-------:|:------:|:------:| | No log | 0.0345 | 2 | 4.0951 | -0.0256 | 4.0951 | 2.0236 | | No log | 0.0690 | 4 | 2.3006 | 0.0094 | 2.3006 | 1.5168 | | No log | 0.1034 | 6 | 2.0508 | -0.0725 | 2.0508 | 1.4320 | | No log | 0.1379 | 8 | 1.5048 | -0.0826 | 1.5048 | 1.2267 | | No log | 0.1724 | 10 | 1.0620 | 0.0306 | 1.0620 | 1.0305 | | No log | 0.2069 | 12 | 0.7585 | 0.1955 | 0.7585 | 0.8709 | | No log | 0.2414 | 14 | 0.6918 | 0.2832 | 0.6918 | 0.8318 | | No log | 0.2759 | 16 | 0.7401 | 0.2819 | 0.7401 | 0.8603 | | No log | 0.3103 | 18 | 0.8753 | 0.2910 | 0.8753 | 0.9356 | | No log | 0.3448 | 20 | 1.1289 | 0.2161 | 1.1289 | 1.0625 | | No log | 0.3793 | 22 | 1.1959 | 0.2132 | 1.1959 | 1.0935 | | No log | 0.4138 | 24 | 1.5653 | 0.1991 | 1.5653 | 1.2511 | | No log | 0.4483 | 26 | 1.8065 | 0.1308 | 1.8065 | 1.3441 | | No log | 0.4828 | 28 | 1.3293 | 0.2114 | 1.3293 | 1.1530 | | No log | 0.5172 | 30 | 0.8851 | 0.2663 | 0.8851 | 0.9408 | | No log | 0.5517 | 32 | 0.8288 | 0.2531 | 0.8288 | 0.9104 | | No log | 0.5862 | 34 | 0.8528 | 0.2387 | 0.8528 | 0.9235 | | No log | 0.6207 | 36 | 0.8260 | 0.2531 | 0.8260 | 0.9088 | | No log | 0.6552 | 38 | 0.6680 | 0.3491 | 0.6680 | 0.8173 | | No log | 0.6897 | 40 | 0.5763 | 0.4 | 0.5763 | 0.7591 | | No log | 0.7241 | 42 | 0.5920 | 0.4125 | 0.5920 | 0.7694 | | No log | 0.7586 | 44 | 0.8732 | 0.3115 | 0.8732 | 0.9344 | | No log | 0.7931 | 46 | 1.4194 | 0.1940 | 1.4194 | 1.1914 | | No log | 0.8276 | 48 | 1.5596 | 0.2118 | 1.5596 | 1.2488 | | No log | 0.8621 | 50 | 1.2691 | 0.2838 | 1.2691 | 1.1265 | | No log | 0.8966 | 52 | 0.9218 | 0.3539 | 0.9218 | 0.9601 | | No log | 0.9310 | 54 | 0.7146 | 0.4223 | 0.7146 | 0.8453 | | No log | 0.9655 | 56 | 0.7739 | 0.4129 | 0.7739 | 0.8797 | | No log | 1.0 | 58 | 0.8080 | 0.4085 | 0.8080 | 0.8989 | | No log | 1.0345 | 60 | 0.9883 | 0.3528 | 0.9883 | 0.9941 | | No log | 1.0690 | 62 | 1.1114 | 0.3339 | 1.1114 | 1.0542 | | No log | 1.1034 | 64 | 1.0591 | 0.3682 | 1.0591 | 1.0291 | | No log | 1.1379 | 66 | 0.8499 | 0.4282 | 0.8499 | 0.9219 | | No log | 1.1724 | 68 | 0.7122 | 0.4077 | 0.7122 | 0.8439 | | No log | 1.2069 | 70 | 0.6947 | 0.3965 | 0.6947 | 0.8335 | | No log | 1.2414 | 72 | 0.7382 | 0.3992 | 0.7382 | 0.8592 | | No log | 1.2759 
| 74 | 1.1177 | 0.3996 | 1.1177 | 1.0572 | | No log | 1.3103 | 76 | 1.5130 | 0.2707 | 1.5130 | 1.2300 | | No log | 1.3448 | 78 | 1.3798 | 0.3639 | 1.3798 | 1.1746 | | No log | 1.3793 | 80 | 1.0569 | 0.4617 | 1.0569 | 1.0281 | | No log | 1.4138 | 82 | 0.8527 | 0.4312 | 0.8527 | 0.9234 | | No log | 1.4483 | 84 | 0.7077 | 0.4805 | 0.7077 | 0.8412 | | No log | 1.4828 | 86 | 0.7217 | 0.4713 | 0.7217 | 0.8496 | | No log | 1.5172 | 88 | 0.8254 | 0.4695 | 0.8254 | 0.9085 | | No log | 1.5517 | 90 | 1.1039 | 0.4272 | 1.1039 | 1.0507 | | No log | 1.5862 | 92 | 1.1137 | 0.4037 | 1.1137 | 1.0553 | | No log | 1.6207 | 94 | 0.8690 | 0.5121 | 0.8690 | 0.9322 | | No log | 1.6552 | 96 | 0.7343 | 0.5408 | 0.7343 | 0.8569 | | No log | 1.6897 | 98 | 0.7069 | 0.5324 | 0.7069 | 0.8408 | | No log | 1.7241 | 100 | 0.6999 | 0.4931 | 0.6999 | 0.8366 | | No log | 1.7586 | 102 | 0.8220 | 0.4460 | 0.8220 | 0.9067 | | No log | 1.7931 | 104 | 0.7515 | 0.4735 | 0.7515 | 0.8669 | | No log | 1.8276 | 106 | 0.6721 | 0.471 | 0.6721 | 0.8198 | | No log | 1.8621 | 108 | 0.7036 | 0.4847 | 0.7036 | 0.8388 | | No log | 1.8966 | 110 | 0.7991 | 0.4203 | 0.7991 | 0.8939 | | No log | 1.9310 | 112 | 0.9099 | 0.4194 | 0.9099 | 0.9539 | | No log | 1.9655 | 114 | 0.8442 | 0.4438 | 0.8442 | 0.9188 | | No log | 2.0 | 116 | 0.7452 | 0.5285 | 0.7452 | 0.8632 | | No log | 2.0345 | 118 | 0.7381 | 0.5101 | 0.7381 | 0.8592 | | No log | 2.0690 | 120 | 0.7432 | 0.5436 | 0.7432 | 0.8621 | | No log | 2.1034 | 122 | 0.8826 | 0.5012 | 0.8826 | 0.9395 | | No log | 2.1379 | 124 | 1.1477 | 0.3928 | 1.1477 | 1.0713 | | No log | 2.1724 | 126 | 1.0992 | 0.3993 | 1.0992 | 1.0484 | | No log | 2.2069 | 128 | 0.8532 | 0.4785 | 0.8532 | 0.9237 | | No log | 2.2414 | 130 | 0.6833 | 0.4695 | 0.6833 | 0.8266 | | No log | 2.2759 | 132 | 0.6528 | 0.4736 | 0.6528 | 0.8080 | | No log | 2.3103 | 134 | 0.6665 | 0.4991 | 0.6665 | 0.8164 | | No log | 2.3448 | 136 | 0.6806 | 0.4551 | 0.6806 | 0.8250 | | No log | 2.3793 | 138 | 0.7455 | 0.4533 | 0.7455 | 0.8634 | | No log | 2.4138 | 140 | 0.8138 | 0.4770 | 0.8138 | 0.9021 | | No log | 2.4483 | 142 | 0.8821 | 0.4829 | 0.8821 | 0.9392 | | No log | 2.4828 | 144 | 0.9376 | 0.4922 | 0.9376 | 0.9683 | | No log | 2.5172 | 146 | 0.8840 | 0.5055 | 0.8840 | 0.9402 | | No log | 2.5517 | 148 | 0.8957 | 0.4722 | 0.8957 | 0.9464 | | No log | 2.5862 | 150 | 0.9842 | 0.4891 | 0.9842 | 0.9921 | | No log | 2.6207 | 152 | 0.9839 | 0.4479 | 0.9839 | 0.9919 | | No log | 2.6552 | 154 | 0.7953 | 0.4840 | 0.7953 | 0.8918 | | No log | 2.6897 | 156 | 0.7023 | 0.5037 | 0.7023 | 0.8380 | | No log | 2.7241 | 158 | 0.7042 | 0.4973 | 0.7042 | 0.8392 | | No log | 2.7586 | 160 | 0.7385 | 0.4627 | 0.7385 | 0.8593 | | No log | 2.7931 | 162 | 0.8423 | 0.4882 | 0.8423 | 0.9178 | | No log | 2.8276 | 164 | 1.0092 | 0.4379 | 1.0092 | 1.0046 | | No log | 2.8621 | 166 | 1.0012 | 0.4297 | 1.0012 | 1.0006 | | No log | 2.8966 | 168 | 0.8768 | 0.4841 | 0.8768 | 0.9364 | | No log | 2.9310 | 170 | 0.8059 | 0.4408 | 0.8059 | 0.8977 | | No log | 2.9655 | 172 | 0.8556 | 0.4871 | 0.8556 | 0.9250 | | No log | 3.0 | 174 | 1.0398 | 0.4051 | 1.0398 | 1.0197 | | No log | 3.0345 | 176 | 1.2570 | 0.3484 | 1.2570 | 1.1212 | | No log | 3.0690 | 178 | 1.1924 | 0.3560 | 1.1924 | 1.0920 | | No log | 3.1034 | 180 | 0.9866 | 0.4149 | 0.9866 | 0.9933 | | No log | 3.1379 | 182 | 0.8102 | 0.4906 | 0.8102 | 0.9001 | | No log | 3.1724 | 184 | 0.7298 | 0.4537 | 0.7298 | 0.8543 | | No log | 3.2069 | 186 | 0.7257 | 0.4684 | 0.7257 | 0.8519 | | No log | 3.2414 | 188 | 0.7846 | 0.4547 | 0.7846 | 
0.8858 | | No log | 3.2759 | 190 | 0.8321 | 0.4609 | 0.8321 | 0.9122 | | No log | 3.3103 | 192 | 0.9162 | 0.3926 | 0.9162 | 0.9572 | | No log | 3.3448 | 194 | 0.8468 | 0.4394 | 0.8468 | 0.9202 | | No log | 3.3793 | 196 | 0.7737 | 0.4575 | 0.7737 | 0.8796 | | No log | 3.4138 | 198 | 0.7512 | 0.4873 | 0.7512 | 0.8667 | | No log | 3.4483 | 200 | 0.8115 | 0.4062 | 0.8115 | 0.9008 | | No log | 3.4828 | 202 | 0.8568 | 0.4438 | 0.8568 | 0.9256 | | No log | 3.5172 | 204 | 0.8993 | 0.4639 | 0.8993 | 0.9483 | | No log | 3.5517 | 206 | 0.8212 | 0.4309 | 0.8212 | 0.9062 | | No log | 3.5862 | 208 | 0.7928 | 0.4458 | 0.7928 | 0.8904 | | No log | 3.6207 | 210 | 0.7486 | 0.4690 | 0.7486 | 0.8652 | | No log | 3.6552 | 212 | 0.8148 | 0.4121 | 0.8148 | 0.9026 | | No log | 3.6897 | 214 | 0.8990 | 0.4440 | 0.8990 | 0.9481 | | No log | 3.7241 | 216 | 0.8831 | 0.4437 | 0.8831 | 0.9397 | | No log | 3.7586 | 218 | 0.8052 | 0.4932 | 0.8052 | 0.8973 | | No log | 3.7931 | 220 | 0.7326 | 0.5307 | 0.7326 | 0.8559 | | No log | 3.8276 | 222 | 0.7318 | 0.5175 | 0.7318 | 0.8554 | | No log | 3.8621 | 224 | 0.7481 | 0.5179 | 0.7481 | 0.8649 | | No log | 3.8966 | 226 | 0.7873 | 0.4952 | 0.7873 | 0.8873 | | No log | 3.9310 | 228 | 0.8444 | 0.5283 | 0.8444 | 0.9189 | | No log | 3.9655 | 230 | 0.8022 | 0.5050 | 0.8022 | 0.8957 | | No log | 4.0 | 232 | 0.7555 | 0.5538 | 0.7555 | 0.8692 | | No log | 4.0345 | 234 | 0.7607 | 0.4720 | 0.7607 | 0.8722 | | No log | 4.0690 | 236 | 0.7491 | 0.4825 | 0.7491 | 0.8655 | | No log | 4.1034 | 238 | 0.7593 | 0.4770 | 0.7593 | 0.8714 | | No log | 4.1379 | 240 | 0.7853 | 0.5039 | 0.7853 | 0.8862 | | No log | 4.1724 | 242 | 0.8399 | 0.4956 | 0.8399 | 0.9165 | | No log | 4.2069 | 244 | 0.8527 | 0.4964 | 0.8527 | 0.9234 | | No log | 4.2414 | 246 | 0.7971 | 0.5030 | 0.7971 | 0.8928 | | No log | 4.2759 | 248 | 0.7739 | 0.5238 | 0.7739 | 0.8797 | | No log | 4.3103 | 250 | 0.7266 | 0.5035 | 0.7266 | 0.8524 | | No log | 4.3448 | 252 | 0.6761 | 0.5474 | 0.6761 | 0.8223 | | No log | 4.3793 | 254 | 0.6659 | 0.4845 | 0.6659 | 0.8160 | | No log | 4.4138 | 256 | 0.6473 | 0.5309 | 0.6473 | 0.8046 | | No log | 4.4483 | 258 | 0.6495 | 0.5276 | 0.6495 | 0.8059 | | No log | 4.4828 | 260 | 0.6990 | 0.4792 | 0.6990 | 0.8360 | | No log | 4.5172 | 262 | 0.8176 | 0.3955 | 0.8176 | 0.9042 | | No log | 4.5517 | 264 | 0.8536 | 0.3842 | 0.8536 | 0.9239 | | No log | 4.5862 | 266 | 0.7907 | 0.4157 | 0.7907 | 0.8892 | | No log | 4.6207 | 268 | 0.7100 | 0.5167 | 0.7100 | 0.8426 | | No log | 4.6552 | 270 | 0.6943 | 0.5835 | 0.6943 | 0.8332 | | No log | 4.6897 | 272 | 0.7118 | 0.5579 | 0.7118 | 0.8437 | | No log | 4.7241 | 274 | 0.7219 | 0.5432 | 0.7219 | 0.8497 | | No log | 4.7586 | 276 | 0.7161 | 0.5258 | 0.7161 | 0.8462 | | No log | 4.7931 | 278 | 0.7223 | 0.5230 | 0.7223 | 0.8499 | | No log | 4.8276 | 280 | 0.7253 | 0.5326 | 0.7253 | 0.8517 | | No log | 4.8621 | 282 | 0.7508 | 0.5441 | 0.7508 | 0.8665 | | No log | 4.8966 | 284 | 0.7341 | 0.5467 | 0.7341 | 0.8568 | | No log | 4.9310 | 286 | 0.6883 | 0.5331 | 0.6883 | 0.8297 | | No log | 4.9655 | 288 | 0.6588 | 0.5363 | 0.6588 | 0.8117 | | No log | 5.0 | 290 | 0.6523 | 0.5060 | 0.6523 | 0.8076 | | No log | 5.0345 | 292 | 0.6429 | 0.4964 | 0.6429 | 0.8018 | | No log | 5.0690 | 294 | 0.6209 | 0.5347 | 0.6209 | 0.7880 | | No log | 5.1034 | 296 | 0.6486 | 0.5585 | 0.6486 | 0.8054 | | No log | 5.1379 | 298 | 0.6919 | 0.5422 | 0.6919 | 0.8318 | | No log | 5.1724 | 300 | 0.6721 | 0.5540 | 0.6721 | 0.8198 | | No log | 5.2069 | 302 | 0.6341 | 0.5792 | 0.6341 | 0.7963 | | No log | 5.2414 
| 304 | 0.6261 | 0.5773 | 0.6261 | 0.7913 | | No log | 5.2759 | 306 | 0.6225 | 0.5532 | 0.6225 | 0.7890 | | No log | 5.3103 | 308 | 0.6290 | 0.5836 | 0.6290 | 0.7931 | | No log | 5.3448 | 310 | 0.6555 | 0.5709 | 0.6555 | 0.8096 | | No log | 5.3793 | 312 | 0.7086 | 0.5597 | 0.7086 | 0.8418 | | No log | 5.4138 | 314 | 0.7771 | 0.5684 | 0.7771 | 0.8816 | | No log | 5.4483 | 316 | 0.7777 | 0.5584 | 0.7777 | 0.8819 | | No log | 5.4828 | 318 | 0.7474 | 0.5665 | 0.7474 | 0.8645 | | No log | 5.5172 | 320 | 0.7406 | 0.5038 | 0.7406 | 0.8606 | | No log | 5.5517 | 322 | 0.7541 | 0.5127 | 0.7541 | 0.8684 | | No log | 5.5862 | 324 | 0.7277 | 0.5206 | 0.7277 | 0.8530 | | No log | 5.6207 | 326 | 0.7045 | 0.5738 | 0.7045 | 0.8393 | | No log | 5.6552 | 328 | 0.7173 | 0.5689 | 0.7173 | 0.8469 | | No log | 5.6897 | 330 | 0.7300 | 0.5572 | 0.7300 | 0.8544 | | No log | 5.7241 | 332 | 0.7388 | 0.5572 | 0.7388 | 0.8596 | | No log | 5.7586 | 334 | 0.6950 | 0.5616 | 0.6950 | 0.8337 | | No log | 5.7931 | 336 | 0.6843 | 0.5763 | 0.6843 | 0.8272 | | No log | 5.8276 | 338 | 0.6866 | 0.5873 | 0.6866 | 0.8286 | | No log | 5.8621 | 340 | 0.6847 | 0.5718 | 0.6847 | 0.8275 | | No log | 5.8966 | 342 | 0.6832 | 0.5718 | 0.6832 | 0.8266 | | No log | 5.9310 | 344 | 0.6915 | 0.5658 | 0.6915 | 0.8316 | | No log | 5.9655 | 346 | 0.7091 | 0.5539 | 0.7091 | 0.8421 | | No log | 6.0 | 348 | 0.7347 | 0.5844 | 0.7347 | 0.8572 | | No log | 6.0345 | 350 | 0.7484 | 0.5311 | 0.7484 | 0.8651 | | No log | 6.0690 | 352 | 0.7356 | 0.5556 | 0.7356 | 0.8577 | | No log | 6.1034 | 354 | 0.7465 | 0.5557 | 0.7465 | 0.8640 | | No log | 6.1379 | 356 | 0.7484 | 0.5382 | 0.7484 | 0.8651 | | No log | 6.1724 | 358 | 0.7838 | 0.4827 | 0.7838 | 0.8853 | | No log | 6.2069 | 360 | 0.7797 | 0.4827 | 0.7797 | 0.8830 | | No log | 6.2414 | 362 | 0.7835 | 0.4756 | 0.7835 | 0.8852 | | No log | 6.2759 | 364 | 0.7653 | 0.4989 | 0.7653 | 0.8748 | | No log | 6.3103 | 366 | 0.7123 | 0.5601 | 0.7123 | 0.8440 | | No log | 6.3448 | 368 | 0.6934 | 0.5788 | 0.6934 | 0.8327 | | No log | 6.3793 | 370 | 0.6906 | 0.5788 | 0.6906 | 0.8310 | | No log | 6.4138 | 372 | 0.7079 | 0.5709 | 0.7079 | 0.8414 | | No log | 6.4483 | 374 | 0.7140 | 0.5692 | 0.7140 | 0.8450 | | No log | 6.4828 | 376 | 0.7309 | 0.5658 | 0.7309 | 0.8549 | | No log | 6.5172 | 378 | 0.7363 | 0.5598 | 0.7363 | 0.8581 | | No log | 6.5517 | 380 | 0.7307 | 0.5316 | 0.7307 | 0.8548 | | No log | 6.5862 | 382 | 0.7290 | 0.5094 | 0.7290 | 0.8538 | | No log | 6.6207 | 384 | 0.7152 | 0.5104 | 0.7152 | 0.8457 | | No log | 6.6552 | 386 | 0.7018 | 0.5564 | 0.7018 | 0.8377 | | No log | 6.6897 | 388 | 0.7003 | 0.5878 | 0.7003 | 0.8368 | | No log | 6.7241 | 390 | 0.7232 | 0.5346 | 0.7232 | 0.8504 | | No log | 6.7586 | 392 | 0.7484 | 0.5003 | 0.7484 | 0.8651 | | No log | 6.7931 | 394 | 0.7365 | 0.5003 | 0.7365 | 0.8582 | | No log | 6.8276 | 396 | 0.6832 | 0.5494 | 0.6832 | 0.8265 | | No log | 6.8621 | 398 | 0.6388 | 0.5903 | 0.6388 | 0.7993 | | No log | 6.8966 | 400 | 0.6394 | 0.5340 | 0.6394 | 0.7996 | | No log | 6.9310 | 402 | 0.6765 | 0.4846 | 0.6765 | 0.8225 | | No log | 6.9655 | 404 | 0.6868 | 0.4650 | 0.6868 | 0.8287 | | No log | 7.0 | 406 | 0.6799 | 0.4819 | 0.6799 | 0.8246 | | No log | 7.0345 | 408 | 0.6897 | 0.5551 | 0.6897 | 0.8305 | | No log | 7.0690 | 410 | 0.7370 | 0.5796 | 0.7370 | 0.8585 | | No log | 7.1034 | 412 | 0.7619 | 0.5353 | 0.7619 | 0.8728 | | No log | 7.1379 | 414 | 0.7620 | 0.5353 | 0.7620 | 0.8729 | | No log | 7.1724 | 416 | 0.7434 | 0.5658 | 0.7434 | 0.8622 | | No log | 7.2069 | 418 | 0.7275 | 0.5838 | 
0.7275 | 0.8529 | | No log | 7.2414 | 420 | 0.7029 | 0.5566 | 0.7029 | 0.8384 | | No log | 7.2759 | 422 | 0.6871 | 0.5157 | 0.6871 | 0.8289 | | No log | 7.3103 | 424 | 0.6762 | 0.5644 | 0.6762 | 0.8223 | | No log | 7.3448 | 426 | 0.6755 | 0.5898 | 0.6755 | 0.8219 | | No log | 7.3793 | 428 | 0.6739 | 0.5835 | 0.6739 | 0.8209 | | No log | 7.4138 | 430 | 0.6703 | 0.5898 | 0.6703 | 0.8187 | | No log | 7.4483 | 432 | 0.6679 | 0.5898 | 0.6679 | 0.8172 | | No log | 7.4828 | 434 | 0.6668 | 0.5727 | 0.6668 | 0.8166 | | No log | 7.5172 | 436 | 0.6804 | 0.5097 | 0.6804 | 0.8249 | | No log | 7.5517 | 438 | 0.7050 | 0.4973 | 0.7050 | 0.8396 | | No log | 7.5862 | 440 | 0.7233 | 0.4589 | 0.7233 | 0.8505 | | No log | 7.6207 | 442 | 0.7202 | 0.4806 | 0.7202 | 0.8486 | | No log | 7.6552 | 444 | 0.7035 | 0.5214 | 0.7035 | 0.8387 | | No log | 7.6897 | 446 | 0.6879 | 0.5672 | 0.6879 | 0.8294 | | No log | 7.7241 | 448 | 0.6906 | 0.5970 | 0.6906 | 0.8310 | | No log | 7.7586 | 450 | 0.6943 | 0.5585 | 0.6943 | 0.8332 | | No log | 7.7931 | 452 | 0.6958 | 0.5705 | 0.6958 | 0.8341 | | No log | 7.8276 | 454 | 0.6907 | 0.6025 | 0.6907 | 0.8311 | | No log | 7.8621 | 456 | 0.6914 | 0.5535 | 0.6914 | 0.8315 | | No log | 7.8966 | 458 | 0.6916 | 0.5430 | 0.6916 | 0.8316 | | No log | 7.9310 | 460 | 0.6841 | 0.5427 | 0.6841 | 0.8271 | | No log | 7.9655 | 462 | 0.6774 | 0.5736 | 0.6774 | 0.8230 | | No log | 8.0 | 464 | 0.6771 | 0.5534 | 0.6771 | 0.8229 | | No log | 8.0345 | 466 | 0.6808 | 0.5611 | 0.6808 | 0.8251 | | No log | 8.0690 | 468 | 0.6748 | 0.5566 | 0.6748 | 0.8215 | | No log | 8.1034 | 470 | 0.6737 | 0.5566 | 0.6737 | 0.8208 | | No log | 8.1379 | 472 | 0.6750 | 0.5582 | 0.6750 | 0.8216 | | No log | 8.1724 | 474 | 0.6801 | 0.5827 | 0.6801 | 0.8247 | | No log | 8.2069 | 476 | 0.6858 | 0.5827 | 0.6858 | 0.8281 | | No log | 8.2414 | 478 | 0.6923 | 0.5569 | 0.6923 | 0.8321 | | No log | 8.2759 | 480 | 0.6926 | 0.5827 | 0.6926 | 0.8322 | | No log | 8.3103 | 482 | 0.6892 | 0.5827 | 0.6892 | 0.8302 | | No log | 8.3448 | 484 | 0.6894 | 0.5521 | 0.6894 | 0.8303 | | No log | 8.3793 | 486 | 0.6862 | 0.5628 | 0.6862 | 0.8284 | | No log | 8.4138 | 488 | 0.6878 | 0.5358 | 0.6878 | 0.8293 | | No log | 8.4483 | 490 | 0.6928 | 0.5023 | 0.6928 | 0.8324 | | No log | 8.4828 | 492 | 0.6861 | 0.5033 | 0.6861 | 0.8283 | | No log | 8.5172 | 494 | 0.6859 | 0.5023 | 0.6859 | 0.8282 | | No log | 8.5517 | 496 | 0.6781 | 0.5033 | 0.6781 | 0.8235 | | No log | 8.5862 | 498 | 0.6756 | 0.5267 | 0.6756 | 0.8219 | | 0.3804 | 8.6207 | 500 | 0.6758 | 0.5295 | 0.6758 | 0.8221 | | 0.3804 | 8.6552 | 502 | 0.6765 | 0.5644 | 0.6765 | 0.8225 | | 0.3804 | 8.6897 | 504 | 0.6803 | 0.5582 | 0.6803 | 0.8248 | | 0.3804 | 8.7241 | 506 | 0.6835 | 0.5733 | 0.6835 | 0.8268 | | 0.3804 | 8.7586 | 508 | 0.6817 | 0.5736 | 0.6817 | 0.8256 | | 0.3804 | 8.7931 | 510 | 0.6809 | 0.5733 | 0.6809 | 0.8252 | | 0.3804 | 8.8276 | 512 | 0.6858 | 0.5690 | 0.6858 | 0.8281 | | 0.3804 | 8.8621 | 514 | 0.6914 | 0.5629 | 0.6914 | 0.8315 | | 0.3804 | 8.8966 | 516 | 0.6940 | 0.5629 | 0.6940 | 0.8331 | | 0.3804 | 8.9310 | 518 | 0.6891 | 0.5690 | 0.6891 | 0.8301 | | 0.3804 | 8.9655 | 520 | 0.6857 | 0.5690 | 0.6857 | 0.8281 | | 0.3804 | 9.0 | 522 | 0.6803 | 0.5736 | 0.6803 | 0.8248 | | 0.3804 | 9.0345 | 524 | 0.6787 | 0.5566 | 0.6787 | 0.8238 | | 0.3804 | 9.0690 | 526 | 0.6806 | 0.5566 | 0.6806 | 0.8250 | | 0.3804 | 9.1034 | 528 | 0.6816 | 0.5566 | 0.6816 | 0.8256 | | 0.3804 | 9.1379 | 530 | 0.6854 | 0.5566 | 0.6854 | 0.8279 | | 0.3804 | 9.1724 | 532 | 0.6858 | 0.5566 | 0.6858 | 0.8282 | | 0.3804 
| 9.2069 | 534 | 0.6857 | 0.5566 | 0.6857 | 0.8281 | | 0.3804 | 9.2414 | 536 | 0.6857 | 0.5551 | 0.6857 | 0.8281 | | 0.3804 | 9.2759 | 538 | 0.6877 | 0.5551 | 0.6877 | 0.8293 | | 0.3804 | 9.3103 | 540 | 0.6894 | 0.5551 | 0.6894 | 0.8303 | | 0.3804 | 9.3448 | 542 | 0.6933 | 0.5360 | 0.6933 | 0.8326 | | 0.3804 | 9.3793 | 544 | 0.6966 | 0.5360 | 0.6966 | 0.8347 | | 0.3804 | 9.4138 | 546 | 0.6997 | 0.5551 | 0.6997 | 0.8365 | | 0.3804 | 9.4483 | 548 | 0.7023 | 0.5535 | 0.7023 | 0.8380 | | 0.3804 | 9.4828 | 550 | 0.7047 | 0.5579 | 0.7047 | 0.8395 | | 0.3804 | 9.5172 | 552 | 0.7081 | 0.5358 | 0.7081 | 0.8415 | | 0.3804 | 9.5517 | 554 | 0.7128 | 0.5416 | 0.7128 | 0.8443 | | 0.3804 | 9.5862 | 556 | 0.7184 | 0.5403 | 0.7184 | 0.8476 | | 0.3804 | 9.6207 | 558 | 0.7242 | 0.5390 | 0.7242 | 0.8510 | | 0.3804 | 9.6552 | 560 | 0.7280 | 0.5390 | 0.7280 | 0.8532 | | 0.3804 | 9.6897 | 562 | 0.7313 | 0.5390 | 0.7313 | 0.8552 | | 0.3804 | 9.7241 | 564 | 0.7339 | 0.5390 | 0.7339 | 0.8567 | | 0.3804 | 9.7586 | 566 | 0.7364 | 0.5390 | 0.7364 | 0.8581 | | 0.3804 | 9.7931 | 568 | 0.7368 | 0.5336 | 0.7368 | 0.8584 | | 0.3804 | 9.8276 | 570 | 0.7378 | 0.5336 | 0.7378 | 0.8590 | | 0.3804 | 9.8621 | 572 | 0.7378 | 0.5336 | 0.7378 | 0.8590 | | 0.3804 | 9.8966 | 574 | 0.7375 | 0.5292 | 0.7375 | 0.8588 | | 0.3804 | 9.9310 | 576 | 0.7371 | 0.5296 | 0.7371 | 0.8586 | | 0.3804 | 9.9655 | 578 | 0.7368 | 0.5296 | 0.7368 | 0.8584 | | 0.3804 | 10.0 | 580 | 0.7367 | 0.5296 | 0.7367 | 0.8583 | ### Framework versions - Transformers 4.44.2 - Pytorch 2.4.0+cu118 - Datasets 2.21.0 - Tokenizers 0.19.1
vicgalle/NeuralBeagle-11B
vicgalle
"2024-03-04T12:11:40Z"
52
6
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "dpo", "11B", "merge", "conversational", "dataset:argilla/distilabel-intel-orca-dpo-pairs", "base_model:vicgalle/franken-Beagle-11B", "base_model:finetune:vicgalle/franken-Beagle-11B", "license:apache-2.0", "model-index", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
"2024-01-20T10:01:58Z"
--- license: apache-2.0 tags: - dpo - 11B - merge datasets: - argilla/distilabel-intel-orca-dpo-pairs base_model: - vicgalle/franken-Beagle-11B model-index: - name: NeuralBeagle-11B results: - task: type: text-generation name: Text Generation dataset: name: AI2 Reasoning Challenge (25-Shot) type: ai2_arc config: ARC-Challenge split: test args: num_few_shot: 25 metrics: - type: acc_norm value: 73.29 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=vicgalle/NeuralBeagle-11B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: HellaSwag (10-Shot) type: hellaswag split: validation args: num_few_shot: 10 metrics: - type: acc_norm value: 87.61 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=vicgalle/NeuralBeagle-11B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MMLU (5-Shot) type: cais/mmlu config: all split: test args: num_few_shot: 5 metrics: - type: acc value: 63.8 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=vicgalle/NeuralBeagle-11B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: TruthfulQA (0-shot) type: truthful_qa config: multiple_choice split: validation args: num_few_shot: 0 metrics: - type: mc2 value: 71.36 source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=vicgalle/NeuralBeagle-11B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: Winogrande (5-shot) type: winogrande config: winogrande_xl split: validation args: num_few_shot: 5 metrics: - type: acc value: 82.64 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=vicgalle/NeuralBeagle-11B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: GSM8k (5-shot) type: gsm8k config: main split: test args: num_few_shot: 5 metrics: - type: acc value: 58.98 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=vicgalle/NeuralBeagle-11B name: Open LLM Leaderboard --- # NeuralBeagle-11B DPO'd from vicgalle/franken-Beagle-11B, a Beagle-like model upscaled to 11B. It is a frankenmerge model created using mergekit. Then, we applied DPO over a high-quality preference dataset. ![image/png](https://cdn-uploads.huggingface.co/production/uploads/5fad8602b8423e1d80b8a965/6u4L-v7GHZWSJq2CT40TS.png) ## Sample generations **Generate a movie review about the Pokémon 1 movie, focusing on BDSM.** > Before delving into the realm of BDSOMatic interpretations within Pokémon: The First Movie (aka "Pokémon: Mewtwo's Revenge" in Japan), it is important to acknowledge that this animated feature is primarily intended for a younger audience. It follows the adventures of Ash Ketchum, Pikachu, and their friends as they encounter an artificially created Pokémon named Mewtwo. While certain aspects of the storyline and visuals could be stretched to include various subtexts or symbolism, our analysis will focus on a more playful and metaphorical exploration of BDSM concepts within this film's context. Participants: Warning: The following review may contain light-hearted and imaginative interpretations that are not intended as serious critical analysis and should not be taken literally. 
Within the Pokémon universe, Ash Ketchum and his loyal Pikachu find themselves thrust into a high-stakes scenario reminiscent of a BDSM "Domestic Discipline" (DD) fantasy. As Mewtwo, a genetically modified Pokémon with immense power and intellect, seeks vengeance against the scientists who created him as a tool for battle entertainment, he operates as an archetypal "Dominant" figure. Mewtwo, born from scientific manipulation, is an embodiment of control, both over his own creation and his obedient army of Pokémon. He exercises mastery over his subjects, mirroring the role of a "Dom," or Dominant, in a BDSM dynamic. Ash Ketchum, initially unaware of the manipulations that have brought him into this situation, can be seen as a submissive protagonist. His humility and naivety parallel those within the DD community who seek guidance from an experienced Dominant to help them understand and embrace their desires for submission. In his encounter with Mewtwo, Ash is subjected to tests designed to exhibit his strength, loyalty, and unwavering determination - elements crucial in establishing trust between a submissive and a Dominant within a BDSM dynamic. This psychological battle symbolizes the process of earning a "sub's" consent and trust through displays of control and power. ## Prompt format The template is included in tokenizer_config.json. # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_vicgalle__NeuralBeagle-11B) | Metric |Value| |---------------------------------|----:| |Avg. |72.95| |AI2 Reasoning Challenge (25-Shot)|73.29| |HellaSwag (10-Shot) |87.61| |MMLU (5-Shot) |63.80| |TruthfulQA (0-shot) |71.36| |Winogrande (5-shot) |82.64| |GSM8k (5-shot) |58.98|
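Since the card states the chat template ships in `tokenizer_config.json`, a minimal sketch of formatting prompts with it is shown below; the message content is a made-up example:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("vicgalle/NeuralBeagle-11B")

messages = [{"role": "user", "content": "Write a haiku about beagles."}]

# tokenize=False returns the formatted prompt string defined by the
# template in tokenizer_config.json, ready to pass to any generate call.
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
print(prompt)
```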
tareshh18/legal-temp
tareshh18
"2025-03-16T06:57:14Z"
0
0
transformers
[ "transformers", "pytorch", "qwen2", "text-generation", "text-generation-inference", "unsloth", "trl", "sft", "conversational", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
"2025-03-16T06:39:20Z"
---
base_model: unsloth/qwen2.5-3b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
- sft
license: apache-2.0
language:
- en
---

# Uploaded model

- **Developed by:** tareshh18
- **License:** apache-2.0
- **Fine-tuned from model:** unsloth/qwen2.5-3b-instruct-unsloth-bnb-4bit

This Qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.

[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
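The card does not include a usage example, so here is a hedged plain-transformers sketch; whether the published weights are merged or require bitsandbytes 4-bit support depends on how they were exported, and the legal question is a made-up example:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "tareshh18/legal-temp"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [{"role": "user", "content": "Summarize the doctrine of consideration."}]
inputs = tokenizer.apply_chat_template(
    messages, return_tensors="pt", add_generation_prompt=True
).to(model.device)

# Decodes the full sequence, i.e. the prompt followed by the reply.
print(tokenizer.decode(model.generate(inputs, max_new_tokens=128)[0]))
```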
sriyaseshadri/gemma-essay-finetune
sriyaseshadri
"2024-04-30T17:22:46Z"
4
0
transformers
[ "transformers", "safetensors", "gemma", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
"2024-04-30T16:13:03Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
nathanialhunt2000/60a2472d-de0c-4b7f-9119-d23ed5ce963e
nathanialhunt2000
"2025-03-09T04:52:47Z"
0
0
peft
[ "peft", "generated_from_trainer", "base_model:deepseek-ai/deepseek-coder-6.7b-instruct", "base_model:adapter:deepseek-ai/deepseek-coder-6.7b-instruct", "region:us" ]
null
"2025-03-09T04:52:32Z"
--- library_name: peft tags: - generated_from_trainer base_model: deepseek-ai/deepseek-coder-6.7b-instruct model-index: - name: nathanialhunt2000/60a2472d-de0c-4b7f-9119-d23ed5ce963e results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # nathanialhunt2000/60a2472d-de0c-4b7f-9119-d23ed5ce963e This model was trained on an unspecified dataset. It achieves the following results on the evaluation set: - Loss: 0.1023 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
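A minimal loading sketch for this adapter (repo ids taken from the metadata above; dtype and device handling are left out):

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the base model, then attach this LoRA adapter on top of it
base = AutoModelForCausalLM.from_pretrained("deepseek-ai/deepseek-coder-6.7b-instruct")
model = PeftModel.from_pretrained(base, "nathanialhunt2000/60a2472d-de0c-4b7f-9119-d23ed5ce963e")
tokenizer = AutoTokenizer.from_pretrained("deepseek-ai/deepseek-coder-6.7b-instruct")
```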
coffiee/dl200
coffiee
"2025-02-17T03:35:21Z"
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
"2025-02-17T03:34:24Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
AppyFizz/caluepicr
AppyFizz
"2024-10-27T11:02:14Z"
6
0
diffusers
[ "diffusers", "autotrain", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "lora", "template:sd-lora", "base_model:emilianJR/epiCRealism", "base_model:adapter:emilianJR/epiCRealism", "license:openrail++", "region:us" ]
text-to-image
"2024-10-27T11:02:12Z"
--- tags: - autotrain - stable-diffusion - stable-diffusion-diffusers - text-to-image - diffusers - lora - template:sd-lora base_model: emilianJR/epiCRealism instance_prompt: photo of caluepicr license: openrail++ --- # AutoTrain LoRA DreamBooth - AppyFizz/caluepicr These are LoRA adaptation weights for emilianJR/epiCRealism. The weights were trained on the instance prompt "photo of caluepicr" using [DreamBooth](https://dreambooth.github.io/). LoRA for the text encoder was enabled: True.
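A minimal inference sketch, assuming the standard diffusers LoRA layout that AutoTrain DreamBooth produces:

```python
import torch
from diffusers import DiffusionPipeline

# Load the base model and attach the LoRA weights from this repo
pipe = DiffusionPipeline.from_pretrained(
    "emilianJR/epiCRealism", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("AppyFizz/caluepicr")

# Generate with the instance prompt the LoRA was trained on
image = pipe("photo of caluepicr").images[0]
image.save("caluepicr.png")
```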
roncmic/distilbert-base-uncased-finetuned-ner
roncmic
"2024-10-10T12:35:52Z"
127
0
transformers
[ "transformers", "safetensors", "distilbert", "token-classification", "generated_from_trainer", "dataset:conll2003", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
"2024-10-10T12:31:20Z"
--- library_name: transformers license: apache-2.0 base_model: distilbert-base-uncased tags: - generated_from_trainer datasets: - conll2003 metrics: - precision - recall - f1 - accuracy model-index: - name: distilbert-base-uncased-finetuned-ner results: - task: name: Token Classification type: token-classification dataset: name: conll2003 type: conll2003 config: conll2003 split: validation args: conll2003 metrics: - name: Precision type: precision value: 0.9258481600176816 - name: Recall type: recall value: 0.9372413021590782 - name: F1 type: f1 value: 0.9315098954858795 - name: Accuracy type: accuracy value: 0.983668800737128 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-ner This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the conll2003 dataset. It achieves the following results on the evaluation set: - Loss: 0.0606 - Precision: 0.9258 - Recall: 0.9372 - F1: 0.9315 - Accuracy: 0.9837 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.2412 | 1.0 | 878 | 0.0686 | 0.9041 | 0.9249 | 0.9144 | 0.9803 | | 0.0519 | 2.0 | 1756 | 0.0596 | 0.9236 | 0.9339 | 0.9287 | 0.9831 | | 0.0298 | 3.0 | 2634 | 0.0606 | 0.9258 | 0.9372 | 0.9315 | 0.9837 | ### Framework versions - Transformers 4.45.2 - Pytorch 2.4.1 - Datasets 2.18.0 - Tokenizers 0.20.0
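A quick inference sketch (standard transformers usage, not part of the auto-generated card):

```python
from transformers import pipeline

# Token-classification pipeline with simple entity grouping
ner = pipeline(
    "token-classification",
    model="roncmic/distilbert-base-uncased-finetuned-ner",
    aggregation_strategy="simple",
)
print(ner("Hugging Face is based in New York City."))
```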
AltamashAhmed/distilHuBERT-finetuned-gtzan
AltamashAhmed
"2023-09-14T23:06:13Z"
169
0
transformers
[ "transformers", "pytorch", "hubert", "audio-classification", "generated_from_trainer", "dataset:marsyas/gtzan", "base_model:ntu-spml/distilhubert", "base_model:finetune:ntu-spml/distilhubert", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
audio-classification
"2023-08-28T19:31:13Z"
--- license: apache-2.0 base_model: ntu-spml/distilhubert tags: - generated_from_trainer datasets: - marsyas/gtzan metrics: - accuracy model-index: - name: DistilHuBERT-finetuned-gtzan results: - task: name: Audio Classification type: audio-classification dataset: name: GTZAN type: marsyas/gtzan config: all split: train args: all metrics: - name: Accuracy type: accuracy value: 0.85 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # DistilHuBERT-finetuned-gtzan This model is a fine-tuned version of [ntu-spml/distilhubert](https://huggingface.co/ntu-spml/distilhubert) on the GTZAN dataset. It achieves the following results on the evaluation set: - Loss: 0.5657 - Accuracy: 0.85 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 8 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 1.9277 | 1.0 | 113 | 1.7812 | 0.41 | | 1.3526 | 2.0 | 226 | 1.1616 | 0.67 | | 1.043 | 3.0 | 339 | 0.8962 | 0.78 | | 0.8494 | 4.0 | 452 | 0.8234 | 0.76 | | 0.5931 | 5.0 | 565 | 0.7085 | 0.78 | | 0.3861 | 6.0 | 678 | 0.6669 | 0.8 | | 0.4828 | 7.0 | 791 | 0.5674 | 0.86 | | 0.2302 | 8.0 | 904 | 0.5657 | 0.85 | ### Framework versions - Transformers 4.33.1 - Pytorch 2.0.1+cu118 - Datasets 2.14.5 - Tokenizers 0.13.3
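A quick inference sketch (the audio file name is a placeholder):

```python
from transformers import pipeline

# Audio-classification pipeline over the fine-tuned checkpoint;
# pass the path to a music clip (GTZAN uses 30-second excerpts)
classifier = pipeline(
    "audio-classification",
    model="AltamashAhmed/distilHuBERT-finetuned-gtzan",
)
print(classifier("my_song.wav"))
```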
junn991/gemma2-2b-it-sft-couple
junn991
"2024-11-21T14:55:01Z"
60
0
transformers
[ "transformers", "safetensors", "gemma2", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
"2024-11-21T14:52:10Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
wavespeed/Wan2.1-I2V-14B-720P-Diffusers-fp16
wavespeed
"2025-03-30T07:48:19Z"
0
0
diffusers
[ "diffusers", "safetensors", "diffusers:WanImageToVideoPipeline", "region:us" ]
null
"2025-03-30T05:36:16Z"
mradermacher/EVA-Instruct-QwQ-32B-Preview-i1-GGUF
mradermacher
"2024-12-02T22:00:10Z"
65
1
transformers
[ "transformers", "gguf", "mergekit", "merge", "en", "base_model:win10/EVA-Instruct-QwQ-32B-Preview", "base_model:quantized:win10/EVA-Instruct-QwQ-32B-Preview", "endpoints_compatible", "region:us", "imatrix", "conversational" ]
null
"2024-12-01T20:37:42Z"
--- base_model: win10/EVA-Instruct-QwQ-32B-Preview language: - en library_name: transformers quantized_by: mradermacher tags: - mergekit - merge --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: nicoboss --> weighted/imatrix quants of https://huggingface.co/win10/EVA-Instruct-QwQ-32B-Preview <!-- provided-files --> static quants are available at https://huggingface.co/mradermacher/EVA-Instruct-QwQ-32B-Preview-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/EVA-Instruct-QwQ-32B-Preview-i1-GGUF/resolve/main/EVA-Instruct-QwQ-32B-Preview.i1-IQ1_S.gguf) | i1-IQ1_S | 7.4 | for the desperate | | [GGUF](https://huggingface.co/mradermacher/EVA-Instruct-QwQ-32B-Preview-i1-GGUF/resolve/main/EVA-Instruct-QwQ-32B-Preview.i1-IQ1_M.gguf) | i1-IQ1_M | 8.0 | mostly desperate | | [GGUF](https://huggingface.co/mradermacher/EVA-Instruct-QwQ-32B-Preview-i1-GGUF/resolve/main/EVA-Instruct-QwQ-32B-Preview.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 9.1 | | | [GGUF](https://huggingface.co/mradermacher/EVA-Instruct-QwQ-32B-Preview-i1-GGUF/resolve/main/EVA-Instruct-QwQ-32B-Preview.i1-IQ2_XS.gguf) | i1-IQ2_XS | 10.1 | | | [GGUF](https://huggingface.co/mradermacher/EVA-Instruct-QwQ-32B-Preview-i1-GGUF/resolve/main/EVA-Instruct-QwQ-32B-Preview.i1-IQ2_S.gguf) | i1-IQ2_S | 10.5 | | | [GGUF](https://huggingface.co/mradermacher/EVA-Instruct-QwQ-32B-Preview-i1-GGUF/resolve/main/EVA-Instruct-QwQ-32B-Preview.i1-IQ2_M.gguf) | i1-IQ2_M | 11.4 | | | [GGUF](https://huggingface.co/mradermacher/EVA-Instruct-QwQ-32B-Preview-i1-GGUF/resolve/main/EVA-Instruct-QwQ-32B-Preview.i1-Q2_K.gguf) | i1-Q2_K | 12.4 | IQ3_XXS probably better | | [GGUF](https://huggingface.co/mradermacher/EVA-Instruct-QwQ-32B-Preview-i1-GGUF/resolve/main/EVA-Instruct-QwQ-32B-Preview.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 12.9 | lower quality | | [GGUF](https://huggingface.co/mradermacher/EVA-Instruct-QwQ-32B-Preview-i1-GGUF/resolve/main/EVA-Instruct-QwQ-32B-Preview.i1-IQ3_XS.gguf) | i1-IQ3_XS | 13.8 | | | [GGUF](https://huggingface.co/mradermacher/EVA-Instruct-QwQ-32B-Preview-i1-GGUF/resolve/main/EVA-Instruct-QwQ-32B-Preview.i1-Q3_K_S.gguf) | i1-Q3_K_S | 14.5 | IQ3_XS probably better | | [GGUF](https://huggingface.co/mradermacher/EVA-Instruct-QwQ-32B-Preview-i1-GGUF/resolve/main/EVA-Instruct-QwQ-32B-Preview.i1-IQ3_S.gguf) | i1-IQ3_S | 14.5 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/EVA-Instruct-QwQ-32B-Preview-i1-GGUF/resolve/main/EVA-Instruct-QwQ-32B-Preview.i1-IQ3_M.gguf) | i1-IQ3_M | 14.9 | | | [GGUF](https://huggingface.co/mradermacher/EVA-Instruct-QwQ-32B-Preview-i1-GGUF/resolve/main/EVA-Instruct-QwQ-32B-Preview.i1-Q3_K_M.gguf) | i1-Q3_K_M | 16.0 | IQ3_S probably better | | [GGUF](https://huggingface.co/mradermacher/EVA-Instruct-QwQ-32B-Preview-i1-GGUF/resolve/main/EVA-Instruct-QwQ-32B-Preview.i1-Q3_K_L.gguf) | i1-Q3_K_L | 17.3 | IQ3_M probably better | | [GGUF](https://huggingface.co/mradermacher/EVA-Instruct-QwQ-32B-Preview-i1-GGUF/resolve/main/EVA-Instruct-QwQ-32B-Preview.i1-IQ4_XS.gguf) | i1-IQ4_XS | 17.8 | | | 
[GGUF](https://huggingface.co/mradermacher/EVA-Instruct-QwQ-32B-Preview-i1-GGUF/resolve/main/EVA-Instruct-QwQ-32B-Preview.i1-Q4_0.gguf) | i1-Q4_0 | 18.8 | fast, low quality | | [GGUF](https://huggingface.co/mradermacher/EVA-Instruct-QwQ-32B-Preview-i1-GGUF/resolve/main/EVA-Instruct-QwQ-32B-Preview.i1-Q4_K_S.gguf) | i1-Q4_K_S | 18.9 | optimal size/speed/quality | | [GGUF](https://huggingface.co/mradermacher/EVA-Instruct-QwQ-32B-Preview-i1-GGUF/resolve/main/EVA-Instruct-QwQ-32B-Preview.i1-Q4_K_M.gguf) | i1-Q4_K_M | 20.0 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/EVA-Instruct-QwQ-32B-Preview-i1-GGUF/resolve/main/EVA-Instruct-QwQ-32B-Preview.i1-Q5_K_S.gguf) | i1-Q5_K_S | 22.7 | | | [GGUF](https://huggingface.co/mradermacher/EVA-Instruct-QwQ-32B-Preview-i1-GGUF/resolve/main/EVA-Instruct-QwQ-32B-Preview.i1-Q5_K_M.gguf) | i1-Q5_K_M | 23.4 | | | [GGUF](https://huggingface.co/mradermacher/EVA-Instruct-QwQ-32B-Preview-i1-GGUF/resolve/main/EVA-Instruct-QwQ-32B-Preview.i1-Q6_K.gguf) | i1-Q6_K | 27.0 | practically like static Q6_K | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to. <!-- end -->
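A minimal sketch of running one of these quants with llama-cpp-python (the filename is the i1-Q4_K_M entry from the table above; `llama-cpp-python` and `huggingface_hub` are assumed to be installed):

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Download a single-file quant from this repo, then load it
path = hf_hub_download(
    repo_id="mradermacher/EVA-Instruct-QwQ-32B-Preview-i1-GGUF",
    filename="EVA-Instruct-QwQ-32B-Preview.i1-Q4_K_M.gguf",
)
llm = Llama(model_path=path, n_ctx=4096)
print(llm("Write a haiku about quantization.", max_tokens=64)["choices"][0]["text"])
```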
nl-to-logql/gemma-2-logql
nl-to-logql
"2024-11-13T03:59:48Z"
51
0
peft
[ "peft", "safetensors", "gemma2", "generated_from_trainer", "base_model:google/gemma-2-9b", "base_model:adapter:google/gemma-2-9b", "license:gemma", "8-bit", "bitsandbytes", "region:us" ]
null
"2024-11-13T03:45:50Z"
--- base_model: google/gemma-2-9b library_name: peft license: gemma tags: - generated_from_trainer model-index: - name: lora-out results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml base_model: google/gemma-2-9b sequence_len: 1024 # base model weight quantization load_in_8bit: true # load_in_4bit: true # attention implementation flash_attention: true # finetuned adapter config adapter: lora lora_model_dir: lora_r: 16 lora_alpha: 32 lora_dropout: 0.05 lora_target_linear: true lora_fan_in_fan_out: lora_modules_to_save: # required when adding new tokens to LLaMA/Mistral - embed_tokens - lm_head # if training fails, uncomment above # for details, see https://github.com/huggingface/peft/issues/334#issuecomment-1561727994 ### # Dataset Configuration: sqlqa ### # datasets: # - path: data.jsonl # type: alpaca datasets: - path: public_train_data.jsonl ds_type: json type: field_instruction: instruction field_input: input field_output: output format: |- [INST] {instruction} {input} [/INST] chat_template: gemma tokens: - "[INST]" - " [/INST]" - "[QL]" - " [/QL]" - "[EXPLANATION]" - " [/EXPLANATION]" # dataset formatting config special_tokens: pad_token: <|end_of_text|> val_set_size: 0.05 ### # Training Configuration ### # masks the input messages so that the model learns and understands the language w/o being reliant on the input train_on_inputs: false # random seed for better reproducibility seed: 117 # optimizer config optimizer: adamw_bnb_8bit learning_rate: 0.0001 lr_scheduler: cosine num_epochs: 4 micro_batch_size: 4 gradient_accumulation_steps: 1 warmup_steps: 10 # axolotl saving config dataset_prepared_path: last_run_prepared output_dir: ./lora-out # logging and eval config logging_steps: 1 eval_steps: 0.05 # training performance optimization config bf16: auto tf32: false gradient_checkpointing: true ### # Miscellaneous Configuration ### # when true, prevents over-writing the config from the CLI strict: false # "Don't mess with this, it's here for accelerate and torchrun" -- axolotl docs local_rank: # WANDB wandb_mode: wandb_project: wandb_watch: wandb_name: wandb_run_id: # Multi-GPU # deepspeed: /root/axolotl/deepspeed_configs/zero3_bf16.json # deepspeed: zero3_bf16.json # deepspeed: /workspace/axolotl/deepspeed_configs/zero2.json deepspeed: fsdp: fsdp_config: ``` </details><br> # lora-out This model is a fine-tuned version of [google/gemma-2-9b](https://huggingface.co/google/gemma-2-9b) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 0.0077 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 4 - eval_batch_size: 4 - seed: 117 - distributed_type: multi-GPU - num_devices: 4 - total_train_batch_size: 16 - total_eval_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 10 - num_epochs: 4 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 1.7925 | 0.0385 | 1 | 2.0412 | | 1.6872 | 0.2308 | 6 | 1.6089 | | 0.6967 | 0.4615 | 12 | 0.6328 | | 0.3327 | 0.6923 | 18 | 0.2711 | | 0.1784 | 0.9231 | 24 | 0.1733 | | 0.1136 | 1.1538 | 30 | 0.1190 | | 0.0891 | 1.3846 | 36 | 0.0850 | | 0.0746 | 1.6154 | 42 | 0.0626 | | 0.0522 | 1.8462 | 48 | 0.0465 | | 0.033 | 2.0769 | 54 | 0.0282 | | 0.0333 | 2.3077 | 60 | 0.0225 | | 0.0171 | 2.5385 | 66 | 0.0203 | | 0.0172 | 2.7692 | 72 | 0.0144 | | 0.0095 | 3.0 | 78 | 0.0119 | | 0.0088 | 3.2308 | 84 | 0.0099 | | 0.0054 | 3.4615 | 90 | 0.0089 | | 0.0073 | 3.6923 | 96 | 0.0085 | | 0.0059 | 3.9231 | 102 | 0.0077 | ### Framework versions - PEFT 0.13.0 - Transformers 4.45.1 - Pytorch 2.3.1+cu121 - Datasets 2.21.0 - Tokenizers 0.20.0
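A loading sketch for this adapter; the `[INST] … [/INST]` wrapping mirrors the axolotl prompt format above, the example instruction is made up, and it assumes the adapter repo also ships the extended tokenizer:

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Tokenizer from the adapter repo (it adds the [INST]/[QL]/[EXPLANATION] tokens)
tokenizer = AutoTokenizer.from_pretrained("nl-to-logql/gemma-2-logql")
base = AutoModelForCausalLM.from_pretrained("google/gemma-2-9b")
# Resize embeddings to cover the added tokens before attaching the adapter
base.resize_token_embeddings(len(tokenizer))
model = PeftModel.from_pretrained(base, "nl-to-logql/gemma-2-logql")

prompt = "[INST] Translate to LogQL:\nShow error logs for the checkout service [/INST]"
inputs = tokenizer(prompt, return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```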
HZ0504/kajimi
HZ0504
"2023-08-23T09:15:37Z"
1
2
diffusers
[ "diffusers", "text-to-image", "autotrain", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:finetune:stabilityai/stable-diffusion-xl-base-1.0", "region:us" ]
text-to-image
"2023-08-23T08:20:20Z"
--- base_model: stabilityai/stable-diffusion-xl-base-1.0 instance_prompt: photo of a kajimi tags: - text-to-image - diffusers - autotrain inference: true --- # DreamBooth trained by AutoTrain Text encoder was not trained.
gian-cr/q-FrozenLake-v1-4x4-noSlippery
gian-cr
"2023-03-07T16:55:37Z"
0
0
null
[ "FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
"2023-02-28T18:17:57Z"
--- tags: - FrozenLake-v1-4x4-no_slippery - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-FrozenLake-v1-4x4-noSlippery results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: FrozenLake-v1-4x4-no_slippery type: FrozenLake-v1-4x4-no_slippery metrics: - type: mean_reward value: 1.00 +/- 0.00 name: mean_reward verified: false --- # **Q-Learning** Agent playing **FrozenLake-v1** This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**. ## Usage
```python
# load_from_hub is the helper from the training notebook (it unpickles the saved agent dict)
model = load_from_hub(repo_id="gian-cr/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
TOMFORD79/HI_VO1_H04
TOMFORD79
"2025-03-12T18:04:37Z"
0
0
null
[ "any-to-any", "omega", "omegalabs", "bittensor", "agi", "license:mit", "region:us" ]
any-to-any
"2025-03-12T17:57:56Z"
--- license: mit tags: - any-to-any - omega - omegalabs - bittensor - agi --- This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet. Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
RedRayz/hikari_noob_v-pred_0.6
RedRayz
"2024-11-20T10:22:48Z"
10
3
diffusers
[ "diffusers", "safetensors", "stable-diffusion", "sdxl", "anime", "text-to-image", "en", "base_model:RedRayz/hikari_noob_v-pred_0.5", "base_model:finetune:RedRayz/hikari_noob_v-pred_0.5", "license:other", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionXLPipeline", "region:us" ]
text-to-image
"2024-11-14T15:48:16Z"
--- license: other license_name: faipl-1.0-sd license_link: https://freedevproject.org/faipl-1.0-sd/ base_model: - RedRayz/hikari_noob_v-pred_0.5 language: - en tags: - stable-diffusion - sdxl - anime pipeline_tag: text-to-image --- # Hikari Noob v-pred 0.6 ![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/630e2d981ef92d4e37a1694e/MHvvnPE5JFD2GVK1OGhlS.jpeg) Civitai model page: https://civitai.com/models/938672 Fine-tuned NoobAI-XL(ν-prediction) and merged SPO LoRA NoobAI-XL(ν-prediction)をファインチューンし、SPOをマージしました。 ## Features/特徴 - Improved stability and quality. - Works with samplers other than Euler (Noise Offset is used instead of Zero Terminal SNR). - Good results with only 10 steps (12 steps or more recommended) - Fixed a problem in which the quality of output was significantly degraded when the number of tokens exceeded 76. - The base style is not strong and can be restyled by prompts or LoRAs. - This model does not include any base model other than NoobAI (ν-prediction 0.5), so it has the equivalent knowledge. You can generate characters that have appeared by August 2024. - 安定性と品質を改善 - わずか10ステップでよい結果を得られます(ただし12ステップ以上を推奨) - Zero Terminal SNRの代わりにNoise Offsetを使用することでEuler以外のサンプラーでも利用できるようにしました。 - トークン数が76を超えると出力の品質が著しく低下する問題を修正しました。 - 素の画風は強くないので、プロンプトやLoRAによる画風変更ができます。 - このモデルはNoobAI(ν-prediction 0.5)以外のベースモデルを一切含まず、それと同等の知識があります。2024年8月までに登場したキャラクターを生成できます。 ## About v0.6 - v0.5 was too flat, so we improved the quality of detail rendering and made outlines thinner! - Improved the quality of the landscapes just a little bit - Reduced skin color overshoot? - v0.5は絵柄が平坦すぎたのでディティールを増やしてアウトラインを細くしました。 - ほんの少しだけ風景画の品質が改善 - 肌の色のオーバーシュート(白飛び)を軽減? ## Requirements / 動作要件 - AUTOMATIC1111 WebUI on `dev` branch / devブランチ上のAUTOMATIC1111 WebUI - **Latest version** of ComfyUI / **最新版**のComfyUI - **Latest version** of Forge or reForge / **最新版**のForgeまたはreForge ### Instruction for AUTOMATIC1111 / AUTOMATIC1111の導入手順 1. Switch branch to `dev` (Run this command in the root directory of the webui: `git checkout -b dev origin/dev` or use Github Desktop) 2. Use the model as usual! (日本語) 1. `dev`ブランチに切り替えます(次のコマンドをwebui直下で実行します: `git checkout -b dev origin/dev` またはGithub Desktopを使う) 2. 通常通りモデルを使用します。 ## Prompt Guidelines / プロンプト記法 Almost the same as the base model/ベースモデルとおおむね同じ To improve the quality of the background, add `simple background, transparent background` to the Negative Prompt. ## Recommended Prompt / 推奨プロンプト Positive: None/無し(Works well without `masterpiece, best quality` / `masterpiece, best quality`無しでおk) Negative: `worst quality, low quality, bad quality, lowres, jpeg artifacts, unfinished, photoshop \(medium\), abstract` or empty(または無し) ## Recommended Settings / 推奨設定 Steps: 8-24 Sampler: DPM++ 2M(dpmpp_2m) Scheduler: Simple Guidance Scale: 2-7 ### Hires.fix Hires upscaler: 4x-UltraSharp or Latent(nearest-exact) Denoising strength: 0.4-0.5(0.65-0.7 for latent) ## Merge recipe(Weighted sum) - Stage 1: Finetune Hikari Noob v-pred 0.5 and merge (see below) *A,B: Hikari Noob v-pred 0.5 based custom checkpoints - v0.5(NoSPO) * 0.75 + A * 0.25 = tmp1 - tmp1 * 0.75 + B * 0.25 = tmp2 - tmp2 + SPO LoRA * 1 + sdxl-flat * -0.25 + sdxl-boldline * -1 = tmp3 - Adjust tmp3(0.2,0.2,0.2,0.1,0,0,0,0) = Result ## Training scripts: [sd-scripts](https://github.com/kohya-ss/sd-scripts) ## Notice This model is licensed under [Fair AI Public License 1.0-SD](https://freedevproject.org/faipl-1.0-sd/). If you modify this model, you must share both your changes and the original license.
You are prohibited from monetizing any closed-source fine-tuned / merged model, i.e., one that prevents the public from accessing the model's source code / weights and their usage. Do not reupload this model.
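A diffusers loading sketch (the prompt is made up; steps and guidance follow the recommended settings above, and the scheduler is switched to v-prediction since this is a v-pred checkpoint):

```python
import torch
from diffusers import StableDiffusionXLPipeline, EulerDiscreteScheduler

pipe = StableDiffusionXLPipeline.from_pretrained(
    "RedRayz/hikari_noob_v-pred_0.6", torch_dtype=torch.float16
).to("cuda")
# v-prediction model: configure the scheduler accordingly
pipe.scheduler = EulerDiscreteScheduler.from_config(
    pipe.scheduler.config, prediction_type="v_prediction"
)
image = pipe(
    "1girl, outdoors",
    negative_prompt="worst quality, low quality, bad quality, lowres, jpeg artifacts",
    num_inference_steps=12,
    guidance_scale=5.0,
).images[0]
image.save("sample.png")
```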
Mekuu/LLAMA3.1-8b-Counsel-v1.0
Mekuu
"2025-04-01T08:52:28Z"
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "text-generation-inference", "unsloth", "trl", "sft", "conversational", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
"2025-04-01T08:47:10Z"
--- base_model: unsloth/meta-llama-3.1-8b-instruct-bnb-4bit tags: - text-generation-inference - transformers - unsloth - llama - trl - sft license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** Mekuu - **License:** apache-2.0 - **Finetuned from model:** unsloth/meta-llama-3.1-8b-instruct-bnb-4bit This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
novelmartis/blt_vs_model
novelmartis
"2025-02-26T08:22:08Z"
0
1
null
[ "license:mit", "region:us" ]
null
"2024-11-26T20:31:27Z"
--- license: mit --- Two versions of the BLT-VS model (see: https://github.com/KietzmannLab/BLT-VS) are available. Model 1: blt_vs_slt_111_biounroll_1_t_12_readout_multi_dataset_ecoset_num_1 (trained on ImageNet-1k; ILSVRC) timesteps=12, num_classes=1000, add_feats=100, lateral_connections=True, topdown_connections=True, skip_connections=True, bio_unroll=True, image_size=224, hook_type='None', readout_type='multi' Model 2: blt_vs_slt_111_biounroll_0_t_6_readout_multi_dataset_imagenet_num_1 (trained on Ecoset-565 - https://huggingface.co/datasets/kietzmannlab/ecoset) timesteps=6, num_classes=565, add_feats=100, lateral_connections=True, topdown_connections=True, skip_connections=True, bio_unroll=False, image_size=224, hook_type='None', readout_type='multi' Additionally, the b_vs, vNet, and rn50 trained models are available (see https://github.com/KietzmannLab/BLT-VS/tree/main/blt_vs_model/training_code/models for model definitions).
sakusakumura/Qwen2-7b-cleanup-short-prompt
sakusakumura
"2024-06-10T03:41:54Z"
4
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "text-generation-inference", "unsloth", "trl", "sft", "conversational", "en", "base_model:unsloth/Qwen2-7B", "base_model:finetune:unsloth/Qwen2-7B", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
"2024-06-10T03:37:40Z"
--- language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - qwen2 - trl - sft base_model: unsloth/Qwen2-7B --- # Uploaded model - **Developed by:** sakusakumura - **License:** apache-2.0 - **Finetuned from model:** unsloth/Qwen2-7B This Qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
isspek/roberta-base_zika_chatgpt_1_2e-5_16_undersampling_0.5
isspek
"2024-12-04T12:43:42Z"
198
0
transformers
[ "transformers", "safetensors", "roberta", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
"2024-12-04T12:43:23Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
mradermacher/EVA-Qwen2.5-14B-v0.1-GGUF
mradermacher
"2024-10-11T18:51:02Z"
154
2
transformers
[ "transformers", "gguf", "en", "dataset:anthracite-org/kalo-opus-instruct-22k-no-refusal", "dataset:Nopm/Opus_WritingStruct", "dataset:Gryphe/Sonnet3.5-SlimOrcaDedupCleaned", "dataset:Gryphe/Sonnet3.5-Charcard-Roleplay", "dataset:Gryphe/ChatGPT-4o-Writing-Prompts", "dataset:Epiculous/Synthstruct-Gens-v1.1-Filtered-n-Cleaned", "dataset:Epiculous/SynthRP-Gens-v1.1-Filtered-n-Cleaned", "dataset:nothingiisreal/Reddit-Dirty-And-WritingPrompts", "dataset:allura-org/Celeste-1.x-data-mixture", "dataset:allura-org/shortstories_synthlabels", "base_model:EVA-UNIT-01/EVA-Qwen2.5-14B-v0.1", "base_model:quantized:EVA-UNIT-01/EVA-Qwen2.5-14B-v0.1", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
"2024-10-07T12:57:56Z"
--- base_model: EVA-UNIT-01/EVA-Qwen2.5-14B-v0.1 datasets: - anthracite-org/kalo-opus-instruct-22k-no-refusal - Nopm/Opus_WritingStruct - Gryphe/Sonnet3.5-SlimOrcaDedupCleaned - Gryphe/Sonnet3.5-Charcard-Roleplay - Gryphe/ChatGPT-4o-Writing-Prompts - Epiculous/Synthstruct-Gens-v1.1-Filtered-n-Cleaned - Epiculous/SynthRP-Gens-v1.1-Filtered-n-Cleaned - nothingiisreal/Reddit-Dirty-And-WritingPrompts - allura-org/Celeste-1.x-data-mixture - allura-org/shortstories_synthlabels language: - en library_name: transformers license: apache-2.0 quantized_by: mradermacher --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/EVA-UNIT-01/EVA-Qwen2.5-14B-v0.1 <!-- provided-files --> weighted/imatrix quants are available at https://huggingface.co/mradermacher/EVA-Qwen2.5-14B-v0.1-i1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/EVA-Qwen2.5-14B-v0.1-GGUF/resolve/main/EVA-Qwen2.5-14B-v0.1.Q2_K.gguf) | Q2_K | 5.9 | | | [GGUF](https://huggingface.co/mradermacher/EVA-Qwen2.5-14B-v0.1-GGUF/resolve/main/EVA-Qwen2.5-14B-v0.1.IQ3_XS.gguf) | IQ3_XS | 6.5 | | | [GGUF](https://huggingface.co/mradermacher/EVA-Qwen2.5-14B-v0.1-GGUF/resolve/main/EVA-Qwen2.5-14B-v0.1.Q3_K_S.gguf) | Q3_K_S | 6.8 | | | [GGUF](https://huggingface.co/mradermacher/EVA-Qwen2.5-14B-v0.1-GGUF/resolve/main/EVA-Qwen2.5-14B-v0.1.IQ3_S.gguf) | IQ3_S | 6.8 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/EVA-Qwen2.5-14B-v0.1-GGUF/resolve/main/EVA-Qwen2.5-14B-v0.1.IQ3_M.gguf) | IQ3_M | 7.0 | | | [GGUF](https://huggingface.co/mradermacher/EVA-Qwen2.5-14B-v0.1-GGUF/resolve/main/EVA-Qwen2.5-14B-v0.1.Q3_K_M.gguf) | Q3_K_M | 7.4 | lower quality | | [GGUF](https://huggingface.co/mradermacher/EVA-Qwen2.5-14B-v0.1-GGUF/resolve/main/EVA-Qwen2.5-14B-v0.1.Q3_K_L.gguf) | Q3_K_L | 8.0 | | | [GGUF](https://huggingface.co/mradermacher/EVA-Qwen2.5-14B-v0.1-GGUF/resolve/main/EVA-Qwen2.5-14B-v0.1.IQ4_XS.gguf) | IQ4_XS | 8.3 | | | [GGUF](https://huggingface.co/mradermacher/EVA-Qwen2.5-14B-v0.1-GGUF/resolve/main/EVA-Qwen2.5-14B-v0.1.Q4_K_S.gguf) | Q4_K_S | 8.7 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/EVA-Qwen2.5-14B-v0.1-GGUF/resolve/main/EVA-Qwen2.5-14B-v0.1.Q4_K_M.gguf) | Q4_K_M | 9.1 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/EVA-Qwen2.5-14B-v0.1-GGUF/resolve/main/EVA-Qwen2.5-14B-v0.1.Q5_K_S.gguf) | Q5_K_S | 10.4 | | | [GGUF](https://huggingface.co/mradermacher/EVA-Qwen2.5-14B-v0.1-GGUF/resolve/main/EVA-Qwen2.5-14B-v0.1.Q5_K_M.gguf) | Q5_K_M | 10.6 | | | [GGUF](https://huggingface.co/mradermacher/EVA-Qwen2.5-14B-v0.1-GGUF/resolve/main/EVA-Qwen2.5-14B-v0.1.Q6_K.gguf) | Q6_K | 12.2 | very good quality | | [GGUF](https://huggingface.co/mradermacher/EVA-Qwen2.5-14B-v0.1-GGUF/resolve/main/EVA-Qwen2.5-14B-v0.1.Q8_0.gguf) | Q8_0 | 15.8 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the 
matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to. <!-- end -->
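llama-cpp-python can also pull a quant straight from the Hub; a sketch using the Q4_K_M file recommended above (the glob pattern selects it by name):

```python
from llama_cpp import Llama

# Fetch and load the Q4_K_M quant directly from this repo
llm = Llama.from_pretrained(
    repo_id="mradermacher/EVA-Qwen2.5-14B-v0.1-GGUF",
    filename="*Q4_K_M.gguf",
    n_ctx=4096,
)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Introduce yourself in one sentence."}]
)
print(out["choices"][0]["message"]["content"])
```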
Ebitendon/ppo-LunarLander-v2
Ebitendon
"2023-09-04T03:40:36Z"
5
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
"2023-09-03T11:06:59Z"
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 253.19 +/- 65.19 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) A minimal loading sketch (the checkpoint filename below is an assumption; check the repository's file list):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Download the checkpoint from the Hub (filename assumed; see the repository files)
checkpoint = load_from_hub(
    repo_id="Ebitendon/ppo-LunarLander-v2",
    filename="ppo-LunarLander-v2.zip",
)
model = PPO.load(checkpoint)
```
mradermacher/merging_LLM-GGUF
mradermacher
"2025-02-14T04:32:06Z"
88
1
transformers
[ "transformers", "gguf", "en", "base_model:MatteoKhan/merging_LLM", "base_model:quantized:MatteoKhan/merging_LLM", "endpoints_compatible", "region:us", "conversational" ]
null
"2025-01-03T03:35:12Z"
--- base_model: MatteoKhan/merging_LLM language: - en library_name: transformers quantized_by: mradermacher tags: [] --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/MatteoKhan/merging_LLM <!-- provided-files --> weighted/imatrix quants are available at https://huggingface.co/mradermacher/merging_LLM-i1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/merging_LLM-GGUF/resolve/main/merging_LLM.Q2_K.gguf) | Q2_K | 0.8 | | | [GGUF](https://huggingface.co/mradermacher/merging_LLM-GGUF/resolve/main/merging_LLM.Q3_K_S.gguf) | Q3_K_S | 0.9 | | | [GGUF](https://huggingface.co/mradermacher/merging_LLM-GGUF/resolve/main/merging_LLM.Q3_K_M.gguf) | Q3_K_M | 0.9 | lower quality | | [GGUF](https://huggingface.co/mradermacher/merging_LLM-GGUF/resolve/main/merging_LLM.Q3_K_L.gguf) | Q3_K_L | 1.0 | | | [GGUF](https://huggingface.co/mradermacher/merging_LLM-GGUF/resolve/main/merging_LLM.IQ4_XS.gguf) | IQ4_XS | 1.0 | | | [GGUF](https://huggingface.co/mradermacher/merging_LLM-GGUF/resolve/main/merging_LLM.Q4_K_S.gguf) | Q4_K_S | 1.0 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/merging_LLM-GGUF/resolve/main/merging_LLM.Q4_K_M.gguf) | Q4_K_M | 1.1 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/merging_LLM-GGUF/resolve/main/merging_LLM.Q5_K_S.gguf) | Q5_K_S | 1.2 | | | [GGUF](https://huggingface.co/mradermacher/merging_LLM-GGUF/resolve/main/merging_LLM.Q5_K_M.gguf) | Q5_K_M | 1.2 | | | [GGUF](https://huggingface.co/mradermacher/merging_LLM-GGUF/resolve/main/merging_LLM.Q6_K.gguf) | Q6_K | 1.4 | very good quality | | [GGUF](https://huggingface.co/mradermacher/merging_LLM-GGUF/resolve/main/merging_LLM.Q8_0.gguf) | Q8_0 | 1.7 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/merging_LLM-GGUF/resolve/main/merging_LLM.f16.gguf) | f16 | 3.2 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
atasoglu/mbert-base-cased-nli-stsb-tr
atasoglu
"2024-04-20T18:49:12Z"
23
0
sentence-transformers
[ "sentence-transformers", "safetensors", "bert", "feature-extraction", "sentence-similarity", "transformers", "tr", "dataset:nli_tr", "dataset:emrecan/stsb-mt-turkish", "license:apache-2.0", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
"2024-04-20T18:44:50Z"
--- library_name: sentence-transformers pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity - transformers license: apache-2.0 datasets: - nli_tr - emrecan/stsb-mt-turkish language: - tr --- # atasoglu/mbert-base-cased-nli-stsb-tr This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search. This model was adapted from [google-bert/bert-base-multilingual-cased](https://huggingface.co/google-bert/bert-base-multilingual-cased) and fine-tuned on these datasets: - [nli_tr](https://huggingface.co/datasets/nli_tr) - [emrecan/stsb-mt-turkish](https://huggingface.co/datasets/emrecan/stsb-mt-turkish) ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('atasoglu/mbert-base-cased-nli-stsb-tr') embeddings = model.encode(sentences) print(embeddings) ``` ## Usage (HuggingFace Transformers) Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings. ```python from transformers import AutoTokenizer, AutoModel import torch #Mean Pooling - Take attention mask into account for correct averaging def mean_pooling(model_output, attention_mask): token_embeddings = model_output[0] #First element of model_output contains all token embeddings input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float() return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9) # Sentences we want sentence embeddings for sentences = ['This is an example sentence', 'Each sentence is converted'] # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained('atasoglu/mbert-base-cased-nli-stsb-tr') model = AutoModel.from_pretrained('atasoglu/mbert-base-cased-nli-stsb-tr') # Tokenize sentences encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input) # Perform pooling. In this case, mean pooling. 
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask']) print("Sentence embeddings:") print(sentence_embeddings) ``` ## Evaluation Results Achieved results on the [STS-b](https://huggingface.co/datasets/emrecan/stsb-mt-turkish) test split are given below: ```txt Cosine-Similarity : Pearson: 0.8152 Spearman: 0.8130 Manhattan-Distance: Pearson: 0.8049 Spearman: 0.8128 Euclidean-Distance: Pearson: 0.8049 Spearman: 0.8126 Dot-Product-Similarity: Pearson: 0.7878 Spearman: 0.7822 ``` ## Training The model was trained with the parameters: **DataLoader**: `torch.utils.data.dataloader.DataLoader` of length 180 with parameters: ``` {'batch_size': 32, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'} ``` **Loss**: `sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss` Parameters of the fit()-Method: ``` { "epochs": 3, "evaluation_steps": 18, "evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator", "max_grad_norm": 1, "optimizer_class": "<class 'torch.optim.adamw.AdamW'>", "optimizer_params": { "lr": 2e-05 }, "scheduler": "WarmupLinear", "steps_per_epoch": null, "warmup_steps": 108, "weight_decay": 0.01 } ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: BertModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True}) ) ``` ## Citing & Authors <!--- Describe where people can find more information -->
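Since the model is fine-tuned on STS-b, a typical application is scoring semantic similarity between sentence pairs. A minimal sketch (the Turkish example sentences are illustrative):

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("atasoglu/mbert-base-cased-nli-stsb-tr")
sentences = ["Bu örnek bir cümledir.", "Bu da başka bir örnek cümledir."]
embeddings = model.encode(sentences, convert_to_tensor=True)

# Cosine similarity between the two sentence embeddings, in [-1, 1]
print(util.cos_sim(embeddings[0], embeddings[1]).item())
```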
tcapelle/toxicity-scorer-smollm2-360m-freeze
tcapelle
"2024-11-28T12:59:01Z"
7
0
transformers
[ "transformers", "safetensors", "llama", "text-classification", "generated_from_trainer", "base_model:HuggingFaceTB/SmolLM2-360M", "base_model:finetune:HuggingFaceTB/SmolLM2-360M", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-classification
"2024-11-27T21:39:57Z"
--- library_name: transformers license: apache-2.0 base_model: HuggingFaceTB/SmolLM2-360M tags: - generated_from_trainer metrics: - f1 - accuracy - precision - recall model-index: - name: toxicity-scorer-smollm2-360m-freeze results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # toxicity-scorer-smollm2-360m-freeze This model is a fine-tuned version of [HuggingFaceTB/SmolLM2-360M](https://huggingface.co/HuggingFaceTB/SmolLM2-360M) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.7434 - F1: 0.6049 - Accuracy: 0.5261 - Precision: 0.7390 - Recall: 0.5261 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-06 - train_batch_size: 44 - eval_batch_size: 44 - seed: 42 - distributed_type: multi-GPU - num_devices: 8 - total_train_batch_size: 352 - total_eval_batch_size: 352 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | Accuracy | Precision | Recall | |:-------------:|:------:|:----:|:---------------:|:------:|:--------:|:---------:|:------:| | No log | 0 | 0 | 0.7481 | 0.6025 | 0.5231 | 0.7383 | 0.5231 | | 0.7489 | 1.5277 | 5000 | 0.7434 | 0.6049 | 0.5261 | 0.7390 | 0.5261 | ### Framework versions - Transformers 4.46.3 - Pytorch 2.5.1 - Datasets 3.1.0 - Tokenizers 0.20.3
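Since the card omits a usage snippet, here is a minimal inference sketch (the id-to-label mapping is not documented in this card, so the class indices are placeholders; check `model.config.id2label`):

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

repo = "tcapelle/toxicity-scorer-smollm2-360m-freeze"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForSequenceClassification.from_pretrained(repo)

inputs = tokenizer("You are a wonderful person.", return_tensors="pt")
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)
print(probs)  # per-class probabilities; see model.config.id2label for label names
```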
SunilNiloor/q-Taxi-v3-rgb_array
SunilNiloor
"2023-08-27T15:08:34Z"
0
0
null
[ "Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
"2023-08-27T15:08:33Z"
--- tags: - Taxi-v3 - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-Taxi-v3-rgb_array results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Taxi-v3 type: Taxi-v3 metrics: - type: mean_reward value: 7.56 +/- 2.71 name: mean_reward verified: false --- # **Q-Learning** Agent playing **Taxi-v3** This is a trained model of a **Q-Learning** agent playing **Taxi-v3**. ## Usage ```python model = load_from_hub(repo_id="SunilNiloor/q-Taxi-v3-rgb_array", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
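The snippet above assumes a `load_from_hub` helper. A minimal sketch of one, following the Hugging Face Deep RL Course convention of pickling a dict that carries the environment id and the Q-table (the `env_id` and `qtable` key names are assumptions based on that convention):

```python
import pickle

import gymnasium as gym  # the classic `gym` package also works, with minor API differences
from huggingface_hub import hf_hub_download


def load_from_hub(repo_id, filename):
    # Download the pickled Q-learning bundle from the Hub and unpickle it
    path = hf_hub_download(repo_id=repo_id, filename=filename)
    with open(path, "rb") as f:
        return pickle.load(f)


model = load_from_hub(repo_id="SunilNiloor/q-Taxi-v3-rgb_array", filename="q-learning.pkl")
env = gym.make(model["env_id"])
state, _ = env.reset()
action = model["qtable"][state].argmax()  # greedy action from the learned Q-table
```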
mradermacher/Multimash3-12B-slerp-GGUF
mradermacher
"2024-05-22T19:23:25Z"
4
0
transformers
[ "transformers", "gguf", "merge", "mergekit", "lazymergekit", "allknowingroger/Multimerge-12B-MoE", "TomGrc/FusionNet_7Bx2_MoE_v0.1", "en", "base_model:allknowingroger/Multimash3-12B-slerp", "base_model:quantized:allknowingroger/Multimash3-12B-slerp", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
"2024-05-22T18:37:58Z"
--- base_model: allknowingroger/Multimash3-12B-slerp language: - en library_name: transformers license: apache-2.0 quantized_by: mradermacher tags: - merge - mergekit - lazymergekit - allknowingroger/Multimerge-12B-MoE - TomGrc/FusionNet_7Bx2_MoE_v0.1 --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> static quants of https://huggingface.co/allknowingroger/Multimash3-12B-slerp <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Multimash3-12B-slerp-GGUF/resolve/main/Multimash3-12B-slerp.Q2_K.gguf) | Q2_K | 4.9 | | | [GGUF](https://huggingface.co/mradermacher/Multimash3-12B-slerp-GGUF/resolve/main/Multimash3-12B-slerp.IQ3_XS.gguf) | IQ3_XS | 5.4 | | | [GGUF](https://huggingface.co/mradermacher/Multimash3-12B-slerp-GGUF/resolve/main/Multimash3-12B-slerp.Q3_K_S.gguf) | Q3_K_S | 5.7 | | | [GGUF](https://huggingface.co/mradermacher/Multimash3-12B-slerp-GGUF/resolve/main/Multimash3-12B-slerp.IQ3_S.gguf) | IQ3_S | 5.7 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Multimash3-12B-slerp-GGUF/resolve/main/Multimash3-12B-slerp.IQ3_M.gguf) | IQ3_M | 5.8 | | | [GGUF](https://huggingface.co/mradermacher/Multimash3-12B-slerp-GGUF/resolve/main/Multimash3-12B-slerp.Q3_K_M.gguf) | Q3_K_M | 6.3 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Multimash3-12B-slerp-GGUF/resolve/main/Multimash3-12B-slerp.Q3_K_L.gguf) | Q3_K_L | 6.8 | | | [GGUF](https://huggingface.co/mradermacher/Multimash3-12B-slerp-GGUF/resolve/main/Multimash3-12B-slerp.IQ4_XS.gguf) | IQ4_XS | 7.1 | | | [GGUF](https://huggingface.co/mradermacher/Multimash3-12B-slerp-GGUF/resolve/main/Multimash3-12B-slerp.Q4_K_S.gguf) | Q4_K_S | 7.4 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Multimash3-12B-slerp-GGUF/resolve/main/Multimash3-12B-slerp.Q4_K_M.gguf) | Q4_K_M | 7.9 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Multimash3-12B-slerp-GGUF/resolve/main/Multimash3-12B-slerp.Q5_K_S.gguf) | Q5_K_S | 9.0 | | | [GGUF](https://huggingface.co/mradermacher/Multimash3-12B-slerp-GGUF/resolve/main/Multimash3-12B-slerp.Q5_K_M.gguf) | Q5_K_M | 9.2 | | | [GGUF](https://huggingface.co/mradermacher/Multimash3-12B-slerp-GGUF/resolve/main/Multimash3-12B-slerp.Q6_K.gguf) | Q6_K | 10.7 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Multimash3-12B-slerp-GGUF/resolve/main/Multimash3-12B-slerp.Q8_0.gguf) | Q8_0 | 13.8 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if 
you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
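As a concrete complement to the usage pointer above, a minimal llama-cpp-python sketch using the chat-completion API (the Q4_K_S filename matches the table above; the message content and context size are illustrative, and a 12B model at this quant needs roughly 8 GB of free memory):

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

path = hf_hub_download(
    repo_id="mradermacher/Multimash3-12B-slerp-GGUF",
    filename="Multimash3-12B-slerp.Q4_K_S.gguf",
)
llm = Llama(model_path=path, n_ctx=4096)
res = llm.create_chat_completion(
    messages=[{"role": "user", "content": "In one sentence, what is a SLERP merge?"}]
)
print(res["choices"][0]["message"]["content"])
```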
cleanrl/Riverraid-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed3
cleanrl
"2023-02-10T13:15:36Z"
0
0
cleanrl
[ "cleanrl", "tensorboard", "Riverraid-v5", "deep-reinforcement-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
"2023-02-10T13:15:31Z"
--- tags: - Riverraid-v5 - deep-reinforcement-learning - reinforcement-learning - custom-implementation library_name: cleanrl model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Riverraid-v5 type: Riverraid-v5 metrics: - type: mean_reward value: 31214.00 +/- 3211.31 name: mean_reward verified: false --- # (CleanRL) **PPO** Agent Playing **Riverraid-v5** This is a trained model of a PPO agent playing Riverraid-v5. The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/sebulba_ppo_envpool_impala_atari_wrapper.py). ## Get Started To use this model, please install the `cleanrl` package with the following command: ``` pip install "cleanrl[jax,envpool,atari]" python -m cleanrl_utils.enjoy --exp-name sebulba_ppo_envpool_impala_atari_wrapper --env-id Riverraid-v5 ``` Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail. ## Command to reproduce the training ```bash curl -OL https://huggingface.co/cleanrl/Riverraid-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed3/raw/main/sebulba_ppo_envpool_impala_atari_wrapper.py curl -OL https://huggingface.co/cleanrl/Riverraid-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed3/raw/main/pyproject.toml curl -OL https://huggingface.co/cleanrl/Riverraid-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed3/raw/main/poetry.lock poetry install --all-extras python sebulba_ppo_envpool_impala_atari_wrapper.py --actor-device-ids 0 --learner-device-ids 1 2 3 4 5 6 --track --save-model --upload-model --hf-entity cleanrl --env-id Riverraid-v5 --seed 3 ``` # Hyperparameters ```python {'actor_device_ids': [0], 'anneal_lr': True, 'async_batch_size': 20, 'async_update': 3, 'batch_size': 7680, 'capture_video': False, 'clip_coef': 0.1, 'cuda': True, 'ent_coef': 0.01, 'env_id': 'Riverraid-v5', 'exp_name': 'sebulba_ppo_envpool_impala_atari_wrapper', 'gae_lambda': 0.95, 'gamma': 0.99, 'hf_entity': 'cleanrl', 'learner_device_ids': [1, 2, 3, 4, 5, 6], 'learning_rate': 0.00025, 'max_grad_norm': 0.5, 'minibatch_size': 1920, 'norm_adv': True, 'num_actor_threads': 1, 'num_envs': 60, 'num_minibatches': 4, 'num_steps': 128, 'num_updates': 6510, 'profile': False, 'save_model': True, 'seed': 3, 'target_kl': None, 'test_actor_learner_throughput': False, 'torch_deterministic': True, 'total_timesteps': 50000000, 'track': True, 'update_epochs': 4, 'upload_model': True, 'vf_coef': 0.5, 'wandb_entity': None, 'wandb_project_name': 'cleanRL'} ```
nttx/9ab10f77-de16-4060-8365-3e4839cf5078
nttx
"2025-01-24T21:02:03Z"
8
0
peft
[ "peft", "safetensors", "llama", "axolotl", "generated_from_trainer", "base_model:MLP-KTLim/llama-3-Korean-Bllossom-8B", "base_model:adapter:MLP-KTLim/llama-3-Korean-Bllossom-8B", "license:llama3", "region:us" ]
null
"2025-01-24T20:03:16Z"
--- library_name: peft license: llama3 base_model: MLP-KTLim/llama-3-Korean-Bllossom-8B tags: - axolotl - generated_from_trainer model-index: - name: 9ab10f77-de16-4060-8365-3e4839cf5078 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: MLP-KTLim/llama-3-Korean-Bllossom-8B bf16: true chat_template: llama3 data_processes: 16 dataset_prepared_path: null datasets: - data_files: - c2aac3d186ebb63e_train_data.json ds_type: json format: custom path: /workspace/input_data/c2aac3d186ebb63e_train_data.json type: field_input: label field_instruction: abstract field_output: title format: '{instruction} {input}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null device_map: auto do_eval: true early_stopping_patience: 5 eval_batch_size: 4 eval_max_new_tokens: 128 eval_steps: 50 eval_table_size: null evals_per_epoch: null flash_attention: true fp16: false fsdp: null fsdp_config: null gradient_accumulation_steps: 4 gradient_checkpointing: true group_by_length: true hub_model_id: nttx/9ab10f77-de16-4060-8365-3e4839cf5078 hub_repo: null hub_strategy: checkpoint hub_token: null learning_rate: 0.0001 load_in_4bit: false load_in_8bit: false local_rank: null logging_steps: 1 lora_alpha: 128 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 64 lora_target_linear: true lr_scheduler: cosine max_grad_norm: 1.0 max_memory: 0: 75GB max_steps: 200 micro_batch_size: 8 mlflow_experiment_name: /tmp/c2aac3d186ebb63e_train_data.json model_type: AutoModelForCausalLM num_epochs: 3 optim_args: adam_beta1: 0.9 adam_beta2: 0.95 adam_epsilon: 1e-5 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false save_steps: 50 saves_per_epoch: null sequence_len: 1024 strict: false tf32: true tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: 2b5a20e0-77e3-4a95-907e-822cd4e60b86 wandb_project: Gradients-On-Demand wandb_run: your_name wandb_runid: 2b5a20e0-77e3-4a95-907e-822cd4e60b86 warmup_steps: 10 weight_decay: 0.0 xformers_attention: null ``` </details><br> # 9ab10f77-de16-4060-8365-3e4839cf5078 This model is a fine-tuned version of [MLP-KTLim/llama-3-Korean-Bllossom-8B](https://huggingface.co/MLP-KTLim/llama-3-Korean-Bllossom-8B) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 1.2623 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 8 - eval_batch_size: 4 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 32 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=adam_beta1=0.9,adam_beta2=0.95,adam_epsilon=1e-5 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 10 - training_steps: 200 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 2.7416 | 0.0003 | 1 | 3.3045 | | 1.2416 | 0.0169 | 50 | 1.3204 | | 1.4263 | 0.0337 | 100 | 1.2893 | | 1.3587 | 0.0506 | 150 | 1.2685 | | 1.1875 | 0.0674 | 200 | 1.2623 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
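Because this repo holds a LoRA adapter rather than full weights, inference requires attaching the adapter to the base model. A minimal sketch (dtype, device placement, and the prompt are illustrative):

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "MLP-KTLim/llama-3-Korean-Bllossom-8B"
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype="auto", device_map="auto")
model = PeftModel.from_pretrained(base, "nttx/9ab10f77-de16-4060-8365-3e4839cf5078")

# The adapter was trained to generate titles from abstracts (see the config above)
inputs = tokenizer("Write a title for this abstract: ...", return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```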
yassin99/rubertForRussian
yassin99
"2023-01-11T14:56:00Z"
105
0
transformers
[ "transformers", "pytorch", "tf", "jax", "bert", "feature-extraction", "ru", "arxiv:1905.07213", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
feature-extraction
"2023-01-11T13:44:47Z"
--- language: - ru --- # rubert-base-cased RuBERT (Russian, cased, 12-layer, 768-hidden, 12-heads, 180M parameters) was trained on the Russian part of Wikipedia and news data. We used this training data to build a vocabulary of Russian subtokens and took a multilingual version of BERT-base as an initialization for RuBERT [1]. 08.11.2021: uploaded model with MLM and NSP heads [1]: Kuratov, Y., Arkhipov, M. (2019). Adaptation of Deep Bidirectional Multilingual Transformers for Russian Language. arXiv preprint [arXiv:1905.07213](https://arxiv.org/abs/1905.07213).
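A minimal feature-extraction sketch with plain Transformers (the example sentence is illustrative; for sentence-level embeddings you would typically pool the token states, e.g. mean pooling):

```python
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("yassin99/rubertForRussian")
model = AutoModel.from_pretrained("yassin99/rubertForRussian")

inputs = tokenizer("Привет, мир!", return_tensors="pt")  # "Hello, world!"
with torch.no_grad():
    hidden = model(**inputs).last_hidden_state
print(hidden.shape)  # (1, sequence_length, 768)
```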
therealchefdave/slumber-7b
therealchefdave
"2024-03-14T00:26:52Z"
4
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "mergekit", "merge", "arxiv:2203.05482", "base_model:princeton-nlp/SWE-Llama-7b", "base_model:merge:princeton-nlp/SWE-Llama-7b", "base_model:togethercomputer/LLaMA-2-7B-32K", "base_model:merge:togethercomputer/LLaMA-2-7B-32K", "base_model:vibhorag101/llama-2-7b-chat-hf-phr_mental_therapy", "base_model:merge:vibhorag101/llama-2-7b-chat-hf-phr_mental_therapy", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
"2024-03-13T23:52:26Z"
--- base_model: - princeton-nlp/SWE-Llama-7b - vibhorag101/llama-2-7b-chat-hf-phr_mental_therapy - togethercomputer/LLaMA-2-7B-32K library_name: transformers tags: - mergekit - merge --- # dave This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ## Merge Details ### Merge Method This model was merged using the [linear](https://arxiv.org/abs/2203.05482) merge method. ### Models Merged The following models were included in the merge: * [princeton-nlp/SWE-Llama-7b](https://huggingface.co/princeton-nlp/SWE-Llama-7b) * [vibhorag101/llama-2-7b-chat-hf-phr_mental_therapy](https://huggingface.co/vibhorag101/llama-2-7b-chat-hf-phr_mental_therapy) * [togethercomputer/LLaMA-2-7B-32K](https://huggingface.co/togethercomputer/LLaMA-2-7B-32K) ### Configuration The following YAML configuration was used to produce this model: ```yaml models: - model: togethercomputer/LLaMA-2-7B-32K parameters: weight: 1.0 - model: vibhorag101/llama-2-7b-chat-hf-phr_mental_therapy parameters: weight: 0.3 - model: princeton-nlp/SWE-Llama-7b parameters: weight: 0.5 merge_method: linear dtype: float16 ```
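Note that with the linear method mergekit typically normalizes the per-model weights (its default behaviour, unless overridden), so the 1.0/0.3/0.5 values above act as relative proportions rather than absolute scales. The merged checkpoint is a standard Transformers model and loads like any causal LM; a minimal sketch (prompt and generation settings are illustrative):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("therealchefdave/slumber-7b")
model = AutoModelForCausalLM.from_pretrained("therealchefdave/slumber-7b", torch_dtype="auto")

inputs = tokenizer("Once upon a time", return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```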
Alphatao/b35eda51-b72a-4fa3-8e0f-cf291990bf61
Alphatao
"2025-03-27T23:21:59Z"
0
0
peft
[ "peft", "safetensors", "llama", "axolotl", "generated_from_trainer", "base_model:TinyLlama/TinyLlama-1.1B-Chat-v0.6", "base_model:adapter:TinyLlama/TinyLlama-1.1B-Chat-v0.6", "license:apache-2.0", "region:us" ]
null
"2025-03-27T20:29:11Z"
--- library_name: peft license: apache-2.0 base_model: TinyLlama/TinyLlama-1.1B-Chat-v0.6 tags: - axolotl - generated_from_trainer model-index: - name: b35eda51-b72a-4fa3-8e0f-cf291990bf61 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: TinyLlama/TinyLlama-1.1B-Chat-v0.6 bf16: true chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - a9d4324f1e33f9ae_train_data.json ds_type: json format: custom path: /workspace/input_data/a9d4324f1e33f9ae_train_data.json type: field_instruction: init_prompt field_output: init_response format: '{instruction}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null device_map: ? '' : 0,1,2,3,4,5,6,7 early_stopping_patience: 2 eval_max_new_tokens: 128 eval_steps: 400 eval_table_size: null flash_attention: true gradient_accumulation_steps: 4 gradient_checkpointing: true group_by_length: false hub_model_id: Alphatao/b35eda51-b72a-4fa3-8e0f-cf291990bf61 hub_repo: null hub_strategy: null hub_token: null learning_rate: 0.0002 load_best_model_at_end: true load_in_4bit: false load_in_8bit: false local_rank: null logging_steps: 1 lora_alpha: 32 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 16 lora_target_linear: true lora_target_modules: - q_proj - k_proj - v_proj - o_proj - down_proj - up_proj lr_scheduler: cosine max_grad_norm: 1.0 max_steps: 7785 micro_batch_size: 2 mlflow_experiment_name: /tmp/a9d4324f1e33f9ae_train_data.json model_type: AutoModelForCausalLM num_epochs: 10 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false save_steps: 400 sequence_len: 2048 strict: false tf32: true tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.04 wandb_entity: null wandb_mode: online wandb_name: 2a9710ae-957a-477d-91f9-0a8bce2d110f wandb_project: Gradients-On-Demand wandb_run: your_name wandb_runid: 2a9710ae-957a-477d-91f9-0a8bce2d110f warmup_steps: 10 weight_decay: 0.0 xformers_attention: null ``` </details><br> # b35eda51-b72a-4fa3-8e0f-cf291990bf61 This model is a fine-tuned version of [TinyLlama/TinyLlama-1.1B-Chat-v0.6](https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v0.6) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 0.3703 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 8 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 10 - training_steps: 7785 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 1.3216 | 0.0002 | 1 | 1.3114 | | 0.6908 | 0.0764 | 400 | 0.6985 | | 0.7019 | 0.1527 | 800 | 0.6477 | | 0.7311 | 0.2291 | 1200 | 0.6106 | | 0.5303 | 0.3055 | 1600 | 0.5805 | | 0.5387 | 0.3818 | 2000 | 0.5524 | | 0.4918 | 0.4582 | 2400 | 0.5240 | | 0.4263 | 0.5346 | 2800 | 0.5054 | | 0.3871 | 0.6109 | 3200 | 0.4860 | | 0.4889 | 0.6873 | 3600 | 0.4665 | | 0.5825 | 0.7637 | 4000 | 0.4460 | | 0.2564 | 0.8400 | 4400 | 0.4248 | | 0.3898 | 0.9164 | 4800 | 0.4101 | | 0.5203 | 0.9927 | 5200 | 0.3958 | | 0.3845 | 1.0691 | 5600 | 0.3878 | | 0.4146 | 1.1455 | 6000 | 0.3814 | | 0.1652 | 1.2218 | 6400 | 0.3756 | | 0.3675 | 1.2982 | 6800 | 0.3727 | | 0.3705 | 1.3746 | 7200 | 0.3708 | | 0.22 | 1.4509 | 7600 | 0.3703 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
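For deployment without the PEFT wrapper, the LoRA weights can be folded into the base model. A minimal sketch (the output directory is illustrative):

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "TinyLlama/TinyLlama-1.1B-Chat-v0.6"
base = AutoModelForCausalLM.from_pretrained(base_id)
model = PeftModel.from_pretrained(base, "Alphatao/b35eda51-b72a-4fa3-8e0f-cf291990bf61")

merged = model.merge_and_unload()  # fold the LoRA deltas into the base weights
merged.save_pretrained("./tinyllama-merged")
AutoTokenizer.from_pretrained(base_id).save_pretrained("./tinyllama-merged")
```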
TheBloke/Solar-10.7B-SLERP-GGUF
TheBloke
"2023-12-14T19:21:11Z"
155
14
transformers
[ "transformers", "gguf", "solar", "en", "base_model:jan-hq/Solar-10.7B-SLERP", "base_model:quantized:jan-hq/Solar-10.7B-SLERP", "license:apache-2.0", "region:us", "conversational" ]
null
"2023-12-14T17:57:10Z"
--- base_model: janhq/Solar-10.7B-SLERP inference: false language: - en license: apache-2.0 model_creator: Jan model_name: Solar 10.7B SLERP model_type: solar prompt_template: '<|im_start|>system {system_message}<|im_end|> <|im_start|>user {prompt}<|im_end|> <|im_start|>assistant ' quantized_by: TheBloke --- <!-- markdownlint-disable MD041 --> <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <div style="display: flex; justify-content: space-between; width: 100%;"> <div style="display: flex; flex-direction: column; align-items: flex-start;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p> </div> <div style="display: flex; flex-direction: column; align-items: flex-end;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p> </div> </div> <div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div> <hr style="margin-top: 1.0em; margin-bottom: 1.0em;"> <!-- header end --> # Solar 10.7B SLERP - GGUF - Model creator: [Jan](https://huggingface.co/janhq) - Original model: [Solar 10.7B SLERP](https://huggingface.co/janhq/Solar-10.7B-SLERP) <!-- description start --> ## Description This repo contains GGUF format model files for [Jan's Solar 10.7B SLERP](https://huggingface.co/janhq/Solar-10.7B-SLERP). <!-- description end --> <!-- README_GGUF.md-about-gguf start --> ### About GGUF GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. Here is an incomplete list of clients and libraries that are known to support GGUF: * [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option. * [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration. * [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling. * [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel. * [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023. * [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection. * [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration. * [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server. * [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use. 
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server. Note, as of time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models. <!-- README_GGUF.md-about-gguf end --> <!-- repositories-available start --> ## Repositories available * [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/Solar-10.7B-SLERP-AWQ) * [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Solar-10.7B-SLERP-GPTQ) * [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Solar-10.7B-SLERP-GGUF) * [Jan's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/janhq/Solar-10.7B-SLERP) <!-- repositories-available end --> <!-- prompt-template start --> ## Prompt template: ChatML ``` <|im_start|>system {system_message}<|im_end|> <|im_start|>user {prompt}<|im_end|> <|im_start|>assistant ``` <!-- prompt-template end --> <!-- compatibility_gguf start --> ## Compatibility These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) They are also compatible with many third party UIs and libraries - please see the list at the top of this README. ## Explanation of quantisation methods <details> <summary>Click to see details</summary> The new methods available are: * GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw) * GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw. * GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw. * GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw * GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw Refer to the Provided Files table below to see what files use which methods, and how. 
</details> <!-- compatibility_gguf end --> <!-- README_GGUF.md-provided-files start --> ## Provided files | Name | Quant method | Bits | Size | Max RAM required | Use case | | ---- | ---- | ---- | ---- | ---- | ----- | | [solar-10.7b-slerp.Q2_K.gguf](https://huggingface.co/TheBloke/Solar-10.7B-SLERP-GGUF/blob/main/solar-10.7b-slerp.Q2_K.gguf) | Q2_K | 2 | 4.55 GB| 7.05 GB | smallest, significant quality loss - not recommended for most purposes | | [solar-10.7b-slerp.Q3_K_S.gguf](https://huggingface.co/TheBloke/Solar-10.7B-SLERP-GGUF/blob/main/solar-10.7b-slerp.Q3_K_S.gguf) | Q3_K_S | 3 | 4.67 GB| 7.17 GB | very small, high quality loss | | [solar-10.7b-slerp.Q3_K_M.gguf](https://huggingface.co/TheBloke/Solar-10.7B-SLERP-GGUF/blob/main/solar-10.7b-slerp.Q3_K_M.gguf) | Q3_K_M | 3 | 5.19 GB| 7.69 GB | very small, high quality loss | | [solar-10.7b-slerp.Q3_K_L.gguf](https://huggingface.co/TheBloke/Solar-10.7B-SLERP-GGUF/blob/main/solar-10.7b-slerp.Q3_K_L.gguf) | Q3_K_L | 3 | 5.65 GB| 8.15 GB | small, substantial quality loss | | [solar-10.7b-slerp.Q4_0.gguf](https://huggingface.co/TheBloke/Solar-10.7B-SLERP-GGUF/blob/main/solar-10.7b-slerp.Q4_0.gguf) | Q4_0 | 4 | 6.07 GB| 8.57 GB | legacy; small, very high quality loss - prefer using Q3_K_M | | [solar-10.7b-slerp.Q4_K_S.gguf](https://huggingface.co/TheBloke/Solar-10.7B-SLERP-GGUF/blob/main/solar-10.7b-slerp.Q4_K_S.gguf) | Q4_K_S | 4 | 6.10 GB| 8.60 GB | small, greater quality loss | | [solar-10.7b-slerp.Q4_K_M.gguf](https://huggingface.co/TheBloke/Solar-10.7B-SLERP-GGUF/blob/main/solar-10.7b-slerp.Q4_K_M.gguf) | Q4_K_M | 4 | 6.46 GB| 8.96 GB | medium, balanced quality - recommended | | [solar-10.7b-slerp.Q5_0.gguf](https://huggingface.co/TheBloke/Solar-10.7B-SLERP-GGUF/blob/main/solar-10.7b-slerp.Q5_0.gguf) | Q5_0 | 5 | 7.40 GB| 9.90 GB | legacy; medium, balanced quality - prefer using Q4_K_M | | [solar-10.7b-slerp.Q5_K_S.gguf](https://huggingface.co/TheBloke/Solar-10.7B-SLERP-GGUF/blob/main/solar-10.7b-slerp.Q5_K_S.gguf) | Q5_K_S | 5 | 7.40 GB| 9.90 GB | large, low quality loss - recommended | | [solar-10.7b-slerp.Q5_K_M.gguf](https://huggingface.co/TheBloke/Solar-10.7B-SLERP-GGUF/blob/main/solar-10.7b-slerp.Q5_K_M.gguf) | Q5_K_M | 5 | 7.60 GB| 10.10 GB | large, very low quality loss - recommended | | [solar-10.7b-slerp.Q6_K.gguf](https://huggingface.co/TheBloke/Solar-10.7B-SLERP-GGUF/blob/main/solar-10.7b-slerp.Q6_K.gguf) | Q6_K | 6 | 8.81 GB| 11.31 GB | very large, extremely low quality loss | | [solar-10.7b-slerp.Q8_0.gguf](https://huggingface.co/TheBloke/Solar-10.7B-SLERP-GGUF/blob/main/solar-10.7b-slerp.Q8_0.gguf) | Q8_0 | 8 | 11.41 GB| 13.91 GB | very large, extremely low quality loss - not recommended | **Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead. <!-- README_GGUF.md-provided-files end --> <!-- README_GGUF.md-how-to-download start --> ## How to download GGUF files **Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file. The following clients/libraries will automatically download models for you, providing a list of available models to choose from: * LM Studio * LoLLMS Web UI * Faraday.dev ### In `text-generation-webui` Under Download Model, you can enter the model repo: TheBloke/Solar-10.7B-SLERP-GGUF and below it, a specific filename to download, such as: solar-10.7b-slerp.Q4_K_M.gguf. 
Then click Download. ### On the command line, including multiple files at once I recommend using the `huggingface-hub` Python library: ```shell pip3 install huggingface-hub ``` Then you can download any individual model file to the current directory, at high speed, with a command like this: ```shell huggingface-cli download TheBloke/Solar-10.7B-SLERP-GGUF solar-10.7b-slerp.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False ``` <details> <summary>More advanced huggingface-cli download usage (click to read)</summary> You can also download multiple files at once with a pattern: ```shell huggingface-cli download TheBloke/Solar-10.7B-SLERP-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf' ``` For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli). To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`: ```shell pip3 install hf_transfer ``` And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`: ```shell HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/Solar-10.7B-SLERP-GGUF solar-10.7b-slerp.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False ``` Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command. </details> <!-- README_GGUF.md-how-to-download end --> <!-- README_GGUF.md-how-to-run start --> ## Example `llama.cpp` command Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later. ```shell ./main -ngl 35 -m solar-10.7b-slerp.Q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "<|im_start|>system\n{system_message}<|im_end|>\n<|im_start|>user\n{prompt}<|im_end|>\n<|im_start|>assistant" ``` Change `-ngl 35` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration. Change `-c 4096` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. Note that longer sequence lengths require much more resources, so you may need to reduce this value. If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins` For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md) ## How to run in `text-generation-webui` Further instructions can be found in the text-generation-webui documentation, here: [text-generation-webui/docs/04 ‐ Model Tab.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/04%20%E2%80%90%20Model%20Tab.md#llamacpp). ## How to run from Python code You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. Note that at the time of writing (Nov 27th 2023), ctransformers has not been updated for some time and is not compatible with some recent models. Therefore I recommend you use llama-cpp-python. ### How to load this model in Python code, using llama-cpp-python For full documentation, please see: [llama-cpp-python docs](https://abetlen.github.io/llama-cpp-python/). 
#### First install the package Run one of the following commands, according to your system: ```shell # Base llama-cpp-python with no GPU acceleration pip install llama-cpp-python # With NVidia CUDA acceleration CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python # Or with OpenBLAS acceleration CMAKE_ARGS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS" pip install llama-cpp-python # Or with CLBLast acceleration CMAKE_ARGS="-DLLAMA_CLBLAST=on" pip install llama-cpp-python # Or with AMD ROCm GPU acceleration (Linux only) CMAKE_ARGS="-DLLAMA_HIPBLAS=on" pip install llama-cpp-python # Or with Metal GPU acceleration for macOS systems only CMAKE_ARGS="-DLLAMA_METAL=on" pip install llama-cpp-python # On Windows, to set the CMAKE_ARGS variable in PowerShell, follow this format; eg for NVidia CUDA: $env:CMAKE_ARGS = "-DLLAMA_CUBLAS=on" pip install llama-cpp-python ``` #### Simple llama-cpp-python example code ```python from llama_cpp import Llama # Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system. llm = Llama( model_path="./solar-10.7b-slerp.Q4_K_M.gguf",  # Download the model file first n_ctx=4096,  # The max sequence length to use - note that longer sequence lengths require much more resources n_threads=8,  # The number of CPU threads to use, tailor to your system and the resulting performance n_gpu_layers=35  # The number of layers to offload to GPU, if you have GPU acceleration available ) # Simple inference example output = llm( "<|im_start|>system\n{system_message}<|im_end|>\n<|im_start|>user\n{prompt}<|im_end|>\n<|im_start|>assistant", # Prompt max_tokens=512,  # Generate up to 512 tokens stop=["</s>"],   # Example stop token - not necessarily correct for this specific model! Please check before using. echo=True        # Whether to echo the prompt ) # Chat Completion API llm = Llama(model_path="./solar-10.7b-slerp.Q4_K_M.gguf", chat_format="chatml")  # Set chat_format according to the model you are using; this model uses ChatML llm.create_chat_completion( messages = [ {"role": "system", "content": "You are a story writing assistant."}, { "role": "user", "content": "Write a story about llamas." } ] ) ``` ## How to use with LangChain Here are guides on using llama-cpp-python and ctransformers with LangChain: * [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp) * [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers) <!-- README_GGUF.md-how-to-run end --> <!-- footer start --> <!-- 200823 --> ## Discord For further support, and discussions on these models and AI in general, join us at: [TheBloke AI's Discord server](https://discord.gg/theblokeai) ## Thanks, and how to contribute Thanks to the [chirper.ai](https://chirper.ai) team! Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)! I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training. If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects. Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits. * Patreon: https://patreon.com/TheBlokeAI * Ko-Fi: https://ko-fi.com/TheBlokeAI **Special thanks to**: Aemon Algiz. 
**Patreon special mentions**: Michael Levine, 阿明, Trailburnt, Nikolai Manek, John Detwiler, Randy H, Will Dee, Sebastain Graf, NimbleBox.ai, Eugene Pentland, Emad Mostaque, Ai Maven, Jim Angel, Jeff Scroggin, Michael Davis, Manuel Alberto Morcote, Stephen Murray, Robert, Justin Joy, Luke @flexchar, Brandon Frisco, Elijah Stavena, S_X, Dan Guido, Undi ., Komninos Chatzipapas, Shadi, theTransient, Lone Striker, Raven Klaugh, jjj, Cap'n Zoog, Michel-Marie MAUDET (LINAGORA), Matthew Berman, David, Fen Risland, Omer Bin Jawed, Luke Pendergrass, Kalila, OG, Erik Bjäreholt, Rooh Singh, Joseph William Delisle, Dan Lewis, TL, John Villwock, AzureBlack, Brad, Pedro Madruga, Caitlyn Gatomon, K, jinyuan sun, Mano Prime, Alex, Jeffrey Morgan, Alicia Loh, Illia Dulskyi, Chadd, transmissions 11, fincy, Rainer Wilmers, ReadyPlayerEmma, knownsqashed, Mandus, biorpg, Deo Leter, Brandon Phillips, SuperWojo, Sean Connelly, Iucharbius, Jack West, Harry Royden McLaughlin, Nicholas, terasurfer, Vitor Caleffi, Duane Dunston, Johann-Peter Hartmann, David Ziegler, Olakabola, Ken Nordquist, Trenton Dambrowitz, Tom X Nguyen, Vadim, Ajan Kanaga, Leonard Tan, Clay Pascal, Alexandros Triantafyllidis, JM33133, Xule, vamX, ya boyyy, subjectnull, Talal Aujan, Alps Aficionado, wassieverse, Ari Malik, James Bentley, Woland, Spencer Kim, Michael Dempsey, Fred von Graf, Elle, zynix, William Richards, Stanislav Ovsiannikov, Edmond Seymore, Jonathan Leane, Martin Kemka, usrbinkat, Enrico Ros Thank you to all my generous patrons and donaters! And thank you again to a16z for their generous grant. <!-- footer end --> <!-- original-model-card start --> # Original model card: Jan's Solar 10.7B SLERP <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <img src="https://github.com/janhq/jan/assets/89722390/35daac7d-b895-487c-a6ac-6663daaad78e" alt="Jan banner" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <p align="center"> <a href="https://jan.ai/">Jan</a > - <a href="https://discord.gg/AsJ8krTT3N">Discord</a> </p> <!-- header end --> # Model Description This model uses the `Slerp` merge method from the best models on 14th Dec on the [OpenLLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard): 1. [upstage/SOLAR-10.7B-Instruct-v1.0](https://huggingface.co/upstage/SOLAR-10.7B-Instruct-v1.0) 2. [janhq/Pandora-v1-10.7B](https://huggingface.co/janhq/Pandora-v1-10.7B) - base model: [upstage/SOLAR-10.7B-Instruct-v1.0](https://huggingface.co/upstage/SOLAR-10.7B-Instruct-v1.0) The yaml config file for this model is here: ```yaml slices: - sources: - model: upstage/SOLAR-10.7B-Instruct-v1.0 layer_range: [0, 48] - model: janhq/Pandora-v1-10.7B layer_range: [0, 48] merge_method: slerp base_model: upstage/SOLAR-10.7B-Instruct-v1.0 parameters: t: - filter: self_attn value: [0, 0.5, 0.3, 0.7, 1] - filter: mlp value: [1, 0.5, 0.7, 0.3, 0] - value: 0.5 dtype: bfloat16 ``` # Prompt template - **ChatML** ``` <|im_start|>system {system_message}<|im_end|> <|im_start|>user {prompt}<|im_end|> <|im_start|>assistant ``` # Run this model You can run this model using [Jan](https://jan.ai/) on Mac, Windows, or Linux. **Jan is an open source, ChatGPT alternative that is:** 💻 **100% offline on your machine**: Your conversations remain confidential, and visible only to you. 🗂️ **An Open File Format**: Conversations and model settings stay on your computer and can be exported or deleted at any time. 
🌐 **OpenAI Compatible**: Local server on port `1337` with OpenAI compatible endpoints 🌍 **Open Source & Free**: We build in public; check out our [Github](https://github.com/janhq) - Please use the [Solar-10.7B-SLERP-GGUF](https://huggingface.co/janhq/Solar-10.7B-SLERP-GGUF) when using on Jan. ![image/png](https://cdn-uploads.huggingface.co/production/uploads/65713d70f56f9538679e5a56/r7VmEBLGXpPLTu2MImM7S.png) # About Jan Jan believes in the need for an open-source AI ecosystem and is building the infra and tooling to allow open-source AIs to compete on a level playing field with proprietary ones. Jan's long-term vision is to build a cognitive framework for future robots, who are practical, useful assistants for humans and businesses in everyday life. # Jan Model Merger This is a test project for merging models. # Open LLM Leaderboard Evaluation Results Detailed results can be found here. | Metric | Value | |-----------------------|---------------------------| | Avg. | ? | | ARC (25-shot) | ? | | HellaSwag (10-shot) | ? | | MMLU (5-shot) | ? | | TruthfulQA (0-shot) | ? | | Winogrande (5-shot) | ? | | GSM8K (5-shot) | ? | # Acknowledgement - [mergekit](https://github.com/cg123/mergekit) - [DARE](https://github.com/yule-BUAA/MergeLM/blob/main/README.md) - [SLERP](https://github.com/Digitous/LLM-SLERP-Merge) - [lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness) <!-- original-model-card end -->
microsoft/swin-base-patch4-window12-384-in22k
microsoft
"2022-05-16T18:01:06Z"
3,330
1
transformers
[ "transformers", "pytorch", "tf", "swin", "image-classification", "vision", "dataset:imagenet-21k", "arxiv:2103.14030", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
"2022-03-02T23:29:05Z"
--- license: apache-2.0 tags: - vision - image-classification datasets: - imagenet-21k widget: - src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg example_title: Tiger - src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg example_title: Teapot - src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg example_title: Palace --- # Swin Transformer (base-sized model) Swin Transformer model pre-trained on ImageNet-21k (14 million images, 21,841 classes) at resolution 384x384. It was introduced in the paper [Swin Transformer: Hierarchical Vision Transformer using Shifted Windows](https://arxiv.org/abs/2103.14030) by Liu et al. and first released in [this repository](https://github.com/microsoft/Swin-Transformer). Disclaimer: The team releasing Swin Transformer did not write a model card for this model so this model card has been written by the Hugging Face team. ## Model description The Swin Transformer is a type of Vision Transformer. It builds hierarchical feature maps by merging image patches (shown in gray) in deeper layers and has linear computation complexity to input image size due to computation of self-attention only within each local window (shown in red). It can thus serve as a general-purpose backbone for both image classification and dense recognition tasks. In contrast, previous vision Transformers produce feature maps of a single low resolution and have quadratic computation complexity to input image size due to computation of self-attention globally. ![model image](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/swin_transformer_architecture.png) [Source](https://paperswithcode.com/method/swin-transformer) ## Intended uses & limitations You can use the raw model for image classification. See the [model hub](https://huggingface.co/models?search=swin) to look for fine-tuned versions on a task that interests you. ### How to use Here is how to use this model to classify an image of the COCO 2017 dataset into one of the 21,841 ImageNet-21k classes: ```python from transformers import AutoFeatureExtractor, SwinForImageClassification from PIL import Image import requests url = "http://images.cocodataset.org/val2017/000000039769.jpg" image = Image.open(requests.get(url, stream=True).raw) feature_extractor = AutoFeatureExtractor.from_pretrained("microsoft/swin-base-patch4-window12-384-in22k") model = SwinForImageClassification.from_pretrained("microsoft/swin-base-patch4-window12-384-in22k") inputs = feature_extractor(images=image, return_tensors="pt") outputs = model(**inputs) logits = outputs.logits # model predicts one of the 21,841 ImageNet-21k classes predicted_class_idx = logits.argmax(-1).item() print("Predicted class:", model.config.id2label[predicted_class_idx]) ``` For more code examples, we refer to the [documentation](https://huggingface.co/transformers/model_doc/swin.html#). 
### BibTeX entry and citation info ```bibtex @article{DBLP:journals/corr/abs-2103-14030, author = {Ze Liu and Yutong Lin and Yue Cao and Han Hu and Yixuan Wei and Zheng Zhang and Stephen Lin and Baining Guo}, title = {Swin Transformer: Hierarchical Vision Transformer using Shifted Windows}, journal = {CoRR}, volume = {abs/2103.14030}, year = {2021}, url = {https://arxiv.org/abs/2103.14030}, eprinttype = {arXiv}, eprint = {2103.14030}, timestamp = {Thu, 08 Apr 2021 07:53:26 +0200}, biburl = {https://dblp.org/rec/journals/corr/abs-2103-14030.bib}, bibsource = {dblp computer science bibliography, https://dblp.org} } ```
AlignmentResearch/robust_llm_pythia-word-length-1b-niki-ada-v1
AlignmentResearch
"2024-03-20T18:34:56Z"
104
0
transformers
[ "transformers", "safetensors", "gpt_neox", "text-classification", "generated_from_trainer", "base_model:EleutherAI/pythia-1b-deduped", "base_model:finetune:EleutherAI/pythia-1b-deduped", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-classification
"2024-03-20T18:32:38Z"
--- license: apache-2.0 tags: - generated_from_trainer base_model: EleutherAI/pythia-1b-deduped model-index: - name: robust_llm_pythia-word-length-1b-niki-ada-v1 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # robust_llm_pythia-word-length-1b-niki-ada-v1 This model is a fine-tuned version of [EleutherAI/pythia-1b-deduped](https://huggingface.co/EleutherAI/pythia-1b-deduped) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 8 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Training results ### Framework versions - Transformers 4.37.2 - Pytorch 2.2.0 - Datasets 2.17.0 - Tokenizers 0.15.2
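A minimal inference sketch with the `pipeline` API (what the "word-length" classes mean is not documented here, so the returned label names are whatever the checkpoint's config defines):

```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="AlignmentResearch/robust_llm_pythia-word-length-1b-niki-ada-v1",
)
print(classifier("a short example sentence"))  # e.g. [{'label': ..., 'score': ...}]
```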