modelId: string
author: string
last_modified: timestamp[us, tz=UTC]
downloads: int64
likes: int64
library_name: string
tags: sequence
pipeline_tag: string
createdAt: timestamp[us, tz=UTC]
card: string
Triangle104/Dolphin-R1-Cydonia-v0.3-Q8_0-GGUF
Triangle104
2025-04-25T22:42:53Z
0
0
transformers
[ "transformers", "gguf", "mergekit", "merge", "llama-cpp", "gguf-my-repo", "base_model:harkov000/Dolphin-R1-Cydonia-v0.3", "base_model:quantized:harkov000/Dolphin-R1-Cydonia-v0.3", "endpoints_compatible", "region:us" ]
null
2025-04-25T22:41:01Z
--- base_model: harkov000/Dolphin-R1-Cydonia-v0.3 library_name: transformers tags: - mergekit - merge - llama-cpp - gguf-my-repo --- # Triangle104/Dolphin-R1-Cydonia-v0.3-Q8_0-GGUF This model was converted to GGUF format from [`harkov000/Dolphin-R1-Cydonia-v0.3`](https://huggingface.co/harkov000/Dolphin-R1-Cydonia-v0.3) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/harkov000/Dolphin-R1-Cydonia-v0.3) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew (works on macOS and Linux): ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo Triangle104/Dolphin-R1-Cydonia-v0.3-Q8_0-GGUF --hf-file dolphin-r1-cydonia-v0.3-q8_0.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo Triangle104/Dolphin-R1-Cydonia-v0.3-Q8_0-GGUF --hf-file dolphin-r1-cydonia-v0.3-q8_0.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (e.g., `LLAMA_CUDA=1` for NVIDIA GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo Triangle104/Dolphin-R1-Cydonia-v0.3-Q8_0-GGUF --hf-file dolphin-r1-cydonia-v0.3-q8_0.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo Triangle104/Dolphin-R1-Cydonia-v0.3-Q8_0-GGUF --hf-file dolphin-r1-cydonia-v0.3-q8_0.gguf -c 2048 ```
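For Python callers, the same GGUF can be pulled and run through the llama-cpp-python bindings; a minimal sketch, assuming `llama-cpp-python` and `huggingface-hub` are installed (this uses llama-cpp-python's `Llama.from_pretrained` helper):

```python
from llama_cpp import Llama  # pip install llama-cpp-python huggingface-hub

# Fetch the GGUF file from the Hub and load it; n_ctx mirrors the
# llama-server example above.
llm = Llama.from_pretrained(
    repo_id="Triangle104/Dolphin-R1-Cydonia-v0.3-Q8_0-GGUF",
    filename="dolphin-r1-cydonia-v0.3-q8_0.gguf",
    n_ctx=2048,
)

out = llm("The meaning to life and the universe is", max_tokens=64)
print(out["choices"][0]["text"])
```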
SergioRayon/whisper-small-es
SergioRayon
2025-04-25T22:22:19Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "whisper", "automatic-speech-recognition", "generated_from_trainer", "hi", "dataset:mozilla-foundation/common_voice_11_0", "base_model:openai/whisper-small", "base_model:finetune:openai/whisper-small", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2025-04-25T21:16:41Z
--- library_name: transformers language: - hi license: apache-2.0 base_model: openai/whisper-small tags: - generated_from_trainer datasets: - mozilla-foundation/common_voice_11_0 metrics: - wer model-index: - name: Whisper Small Hi - Sanchit Gandhi results: - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: Common Voice 11.0 type: mozilla-foundation/common_voice_11_0 config: es split: None args: 'config: hi, split: test' metrics: - name: Wer type: wer value: 18.18842837851875 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Whisper Small Hi - Sanchit Gandhi This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Common Voice 11.0 dataset. It achieves the following results on the evaluation set: - Loss: 0.3430 - Wer: 18.1884 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 50 - training_steps: 400 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:-------:| | 0.2266 | 0.8 | 100 | 0.3256 | 16.8277 | | 0.0985 | 1.6 | 200 | 0.3276 | 16.6199 | | 0.0404 | 2.4 | 300 | 0.3396 | 16.6926 | | 0.0215 | 3.2 | 400 | 0.3430 | 18.1884 | ### Framework versions - Transformers 4.51.3 - Pytorch 2.6.0+cu124 - Datasets 3.5.0 - Tokenizers 0.21.1
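A short inference sketch with the 🤗 Transformers pipeline API; the audio filename is a placeholder, and ffmpeg is assumed to be available for decoding:

```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="SergioRayon/whisper-small-es",
)

# "sample.wav" is a placeholder path; any audio file ffmpeg can decode works.
print(asr("sample.wav")["text"])
```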
aryolotfi/SFT_gsm8k_rho-math-1b-v0.1_epoch_2_global_step_58
aryolotfi
2025-04-25T22:07:15Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-04-25T22:06:44Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
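Pending the details above, a generic loading sketch for this checkpoint with 🤗 Transformers; the dtype, device map (which needs `accelerate`), and GSM8K-style prompt are assumptions based on the repository name and tags:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "aryolotfi/SFT_gsm8k_rho-math-1b-v0.1_epoch_2_global_step_58"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(
    repo, torch_dtype=torch.bfloat16, device_map="auto"  # device_map requires accelerate
)

# GSM8K-style prompt, inferred from the repository name; purely illustrative.
prompt = "Question: A baker sells 12 loaves a day for 5 days. How many loaves in total? Answer:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```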
mradermacher/Safeword-Abomination-of-Omega-Darker-Gaslight_The-Final-Forgotten-Transgression-24B-i1-GGUF
mradermacher
2025-04-25T21:00:11Z
0
0
transformers
[ "transformers", "gguf", "mergekit", "merge", "en", "base_model:ReadyArt/Safeword-Abomination-of-Omega-Darker-Gaslight_The-Final-Forgotten-Transgression-24B", "base_model:quantized:ReadyArt/Safeword-Abomination-of-Omega-Darker-Gaslight_The-Final-Forgotten-Transgression-24B", "endpoints_compatible", "region:us", "imatrix", "conversational" ]
null
2025-04-25T15:48:53Z
--- base_model: ReadyArt/Safeword-Abomination-of-Omega-Darker-Gaslight_The-Final-Forgotten-Transgression-24B language: - en library_name: transformers quantized_by: mradermacher tags: - mergekit - merge --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: nicoboss --> weighted/imatrix quants of https://huggingface.co/ReadyArt/Safeword-Abomination-of-Omega-Darker-Gaslight_The-Final-Forgotten-Transgression-24B <!-- provided-files --> static quants are available at https://huggingface.co/mradermacher/Safeword-Abomination-of-Omega-Darker-Gaslight_The-Final-Forgotten-Transgression-24B-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Safeword-Abomination-of-Omega-Darker-Gaslight_The-Final-Forgotten-Transgression-24B-i1-GGUF/resolve/main/Safeword-Abomination-of-Omega-Darker-Gaslight_The-Final-Forgotten-Transgression-24B.i1-IQ1_S.gguf) | i1-IQ1_S | 5.4 | for the desperate | | [GGUF](https://huggingface.co/mradermacher/Safeword-Abomination-of-Omega-Darker-Gaslight_The-Final-Forgotten-Transgression-24B-i1-GGUF/resolve/main/Safeword-Abomination-of-Omega-Darker-Gaslight_The-Final-Forgotten-Transgression-24B.i1-IQ1_M.gguf) | i1-IQ1_M | 5.9 | mostly desperate | | [GGUF](https://huggingface.co/mradermacher/Safeword-Abomination-of-Omega-Darker-Gaslight_The-Final-Forgotten-Transgression-24B-i1-GGUF/resolve/main/Safeword-Abomination-of-Omega-Darker-Gaslight_The-Final-Forgotten-Transgression-24B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 6.6 | | | [GGUF](https://huggingface.co/mradermacher/Safeword-Abomination-of-Omega-Darker-Gaslight_The-Final-Forgotten-Transgression-24B-i1-GGUF/resolve/main/Safeword-Abomination-of-Omega-Darker-Gaslight_The-Final-Forgotten-Transgression-24B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 7.3 | | | [GGUF](https://huggingface.co/mradermacher/Safeword-Abomination-of-Omega-Darker-Gaslight_The-Final-Forgotten-Transgression-24B-i1-GGUF/resolve/main/Safeword-Abomination-of-Omega-Darker-Gaslight_The-Final-Forgotten-Transgression-24B.i1-IQ2_S.gguf) | i1-IQ2_S | 7.6 | | | [GGUF](https://huggingface.co/mradermacher/Safeword-Abomination-of-Omega-Darker-Gaslight_The-Final-Forgotten-Transgression-24B-i1-GGUF/resolve/main/Safeword-Abomination-of-Omega-Darker-Gaslight_The-Final-Forgotten-Transgression-24B.i1-IQ2_M.gguf) | i1-IQ2_M | 8.2 | | | [GGUF](https://huggingface.co/mradermacher/Safeword-Abomination-of-Omega-Darker-Gaslight_The-Final-Forgotten-Transgression-24B-i1-GGUF/resolve/main/Safeword-Abomination-of-Omega-Darker-Gaslight_The-Final-Forgotten-Transgression-24B.i1-Q2_K_S.gguf) | i1-Q2_K_S | 8.4 | very low quality | | [GGUF](https://huggingface.co/mradermacher/Safeword-Abomination-of-Omega-Darker-Gaslight_The-Final-Forgotten-Transgression-24B-i1-GGUF/resolve/main/Safeword-Abomination-of-Omega-Darker-Gaslight_The-Final-Forgotten-Transgression-24B.i1-Q2_K.gguf) | i1-Q2_K | 9.0 | IQ3_XXS probably better | | 
[GGUF](https://huggingface.co/mradermacher/Safeword-Abomination-of-Omega-Darker-Gaslight_The-Final-Forgotten-Transgression-24B-i1-GGUF/resolve/main/Safeword-Abomination-of-Omega-Darker-Gaslight_The-Final-Forgotten-Transgression-24B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 9.4 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Safeword-Abomination-of-Omega-Darker-Gaslight_The-Final-Forgotten-Transgression-24B-i1-GGUF/resolve/main/Safeword-Abomination-of-Omega-Darker-Gaslight_The-Final-Forgotten-Transgression-24B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 10.0 | | | [GGUF](https://huggingface.co/mradermacher/Safeword-Abomination-of-Omega-Darker-Gaslight_The-Final-Forgotten-Transgression-24B-i1-GGUF/resolve/main/Safeword-Abomination-of-Omega-Darker-Gaslight_The-Final-Forgotten-Transgression-24B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 10.5 | IQ3_XS probably better | | [GGUF](https://huggingface.co/mradermacher/Safeword-Abomination-of-Omega-Darker-Gaslight_The-Final-Forgotten-Transgression-24B-i1-GGUF/resolve/main/Safeword-Abomination-of-Omega-Darker-Gaslight_The-Final-Forgotten-Transgression-24B.i1-IQ3_S.gguf) | i1-IQ3_S | 10.5 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Safeword-Abomination-of-Omega-Darker-Gaslight_The-Final-Forgotten-Transgression-24B-i1-GGUF/resolve/main/Safeword-Abomination-of-Omega-Darker-Gaslight_The-Final-Forgotten-Transgression-24B.i1-IQ3_M.gguf) | i1-IQ3_M | 10.8 | | | [GGUF](https://huggingface.co/mradermacher/Safeword-Abomination-of-Omega-Darker-Gaslight_The-Final-Forgotten-Transgression-24B-i1-GGUF/resolve/main/Safeword-Abomination-of-Omega-Darker-Gaslight_The-Final-Forgotten-Transgression-24B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 11.6 | IQ3_S probably better | | [GGUF](https://huggingface.co/mradermacher/Safeword-Abomination-of-Omega-Darker-Gaslight_The-Final-Forgotten-Transgression-24B-i1-GGUF/resolve/main/Safeword-Abomination-of-Omega-Darker-Gaslight_The-Final-Forgotten-Transgression-24B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 12.5 | IQ3_M probably better | | [GGUF](https://huggingface.co/mradermacher/Safeword-Abomination-of-Omega-Darker-Gaslight_The-Final-Forgotten-Transgression-24B-i1-GGUF/resolve/main/Safeword-Abomination-of-Omega-Darker-Gaslight_The-Final-Forgotten-Transgression-24B.i1-IQ4_XS.gguf) | i1-IQ4_XS | 12.9 | | | [GGUF](https://huggingface.co/mradermacher/Safeword-Abomination-of-Omega-Darker-Gaslight_The-Final-Forgotten-Transgression-24B-i1-GGUF/resolve/main/Safeword-Abomination-of-Omega-Darker-Gaslight_The-Final-Forgotten-Transgression-24B.i1-Q4_0.gguf) | i1-Q4_0 | 13.6 | fast, low quality | | [GGUF](https://huggingface.co/mradermacher/Safeword-Abomination-of-Omega-Darker-Gaslight_The-Final-Forgotten-Transgression-24B-i1-GGUF/resolve/main/Safeword-Abomination-of-Omega-Darker-Gaslight_The-Final-Forgotten-Transgression-24B.i1-Q4_K_S.gguf) | i1-Q4_K_S | 13.6 | optimal size/speed/quality | | [GGUF](https://huggingface.co/mradermacher/Safeword-Abomination-of-Omega-Darker-Gaslight_The-Final-Forgotten-Transgression-24B-i1-GGUF/resolve/main/Safeword-Abomination-of-Omega-Darker-Gaslight_The-Final-Forgotten-Transgression-24B.i1-Q4_K_M.gguf) | i1-Q4_K_M | 14.4 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Safeword-Abomination-of-Omega-Darker-Gaslight_The-Final-Forgotten-Transgression-24B-i1-GGUF/resolve/main/Safeword-Abomination-of-Omega-Darker-Gaslight_The-Final-Forgotten-Transgression-24B.i1-Q4_1.gguf) | i1-Q4_1 | 15.0 | | | 
[GGUF](https://huggingface.co/mradermacher/Safeword-Abomination-of-Omega-Darker-Gaslight_The-Final-Forgotten-Transgression-24B-i1-GGUF/resolve/main/Safeword-Abomination-of-Omega-Darker-Gaslight_The-Final-Forgotten-Transgression-24B.i1-Q5_K_S.gguf) | i1-Q5_K_S | 16.4 | | | [GGUF](https://huggingface.co/mradermacher/Safeword-Abomination-of-Omega-Darker-Gaslight_The-Final-Forgotten-Transgression-24B-i1-GGUF/resolve/main/Safeword-Abomination-of-Omega-Darker-Gaslight_The-Final-Forgotten-Transgression-24B.i1-Q5_K_M.gguf) | i1-Q5_K_M | 16.9 | | | [GGUF](https://huggingface.co/mradermacher/Safeword-Abomination-of-Omega-Darker-Gaslight_The-Final-Forgotten-Transgression-24B-i1-GGUF/resolve/main/Safeword-Abomination-of-Omega-Darker-Gaslight_The-Final-Forgotten-Transgression-24B.i1-Q6_K.gguf) | i1-Q6_K | 19.4 | practically like static Q6_K | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to. <!-- end -->
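For scripted downloads, a sketch that fetches the recommended Q4_K_M file with `huggingface_hub` and hands it to a local llama.cpp build; the `llama-cli` invocation and prompt are illustrative:

```python
import subprocess
from huggingface_hub import hf_hub_download

repo_id = "mradermacher/Safeword-Abomination-of-Omega-Darker-Gaslight_The-Final-Forgotten-Transgression-24B-i1-GGUF"
gguf_path = hf_hub_download(
    repo_id=repo_id,
    filename="Safeword-Abomination-of-Omega-Darker-Gaslight_The-Final-Forgotten-Transgression-24B.i1-Q4_K_M.gguf",
)

# Assumes a local llama.cpp build with llama-cli on PATH.
subprocess.run(["llama-cli", "-m", gguf_path, "-p", "Hello,", "-n", "64"], check=True)
```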
philipfourie/bi-morse-code-Q4_0-GGUF
philipfourie
2025-04-25T20:42:11Z
0
0
transformers
[ "transformers", "gguf", "text-generation-inference", "unsloth", "gemma3_text", "llama-cpp", "gguf-my-repo", "en", "base_model:philipfourie/bi-morse-code", "base_model:quantized:philipfourie/bi-morse-code", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
2025-04-25T20:42:02Z
--- base_model: philipfourie/bi-morse-code language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - gemma3_text - llama-cpp - gguf-my-repo --- # philipfourie/bi-morse-code-Q4_0-GGUF This model was converted to GGUF format from [`philipfourie/bi-morse-code`](https://huggingface.co/philipfourie/bi-morse-code) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/philipfourie/bi-morse-code) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew (works on macOS and Linux): ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo philipfourie/bi-morse-code-Q4_0-GGUF --hf-file bi-morse-code-q4_0.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo philipfourie/bi-morse-code-Q4_0-GGUF --hf-file bi-morse-code-q4_0.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (e.g., `LLAMA_CUDA=1` for NVIDIA GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo philipfourie/bi-morse-code-Q4_0-GGUF --hf-file bi-morse-code-q4_0.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo philipfourie/bi-morse-code-Q4_0-GGUF --hf-file bi-morse-code-q4_0.gguf -c 2048 ```
SmallDoge/Qwen2.5-14b-math-short25k
SmallDoge
2025-04-25T20:31:08Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-04-25T09:06:36Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
hasdal/259eb8c1-9868-4247-9c33-3fea1d69539e
hasdal
2025-04-25T17:38:25Z
0
0
peft
[ "peft", "safetensors", "gemma2", "axolotl", "generated_from_trainer", "base_model:unsloth/gemma-2-2b-it", "base_model:adapter:unsloth/gemma-2-2b-it", "license:gemma", "region:us" ]
null
2025-04-25T17:34:10Z
--- library_name: peft license: gemma base_model: unsloth/gemma-2-2b-it tags: - axolotl - generated_from_trainer model-index: - name: 259eb8c1-9868-4247-9c33-3fea1d69539e results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: unsloth/gemma-2-2b-it bf16: auto chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - 107ce7a6d249e695_train_data.json ds_type: json format: custom path: /workspace/input_data/107ce7a6d249e695_train_data.json type: field_input: rejected field_instruction: prompt field_output: chosen format: '{instruction} {input}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null early_stopping_patience: null eval_max_new_tokens: 128 eval_table_size: null evals_per_epoch: 4 flash_attention: false fp16: false fsdp: null fsdp_config: null gradient_accumulation_steps: 4 gradient_checkpointing: false group_by_length: false hub_model_id: hasdal/259eb8c1-9868-4247-9c33-3fea1d69539e hub_repo: null hub_strategy: checkpoint hub_token: null learning_rate: 0.000208 load_in_4bit: false load_in_8bit: false local_rank: null logging_steps: 1 lora_alpha: 64 lora_bias: none lora_dropout: 0.1 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 128 lora_target_linear: true lora_target_modules: - q_proj - k_proj - v_proj - o_proj - gate_proj - up_proj - down_proj lr_scheduler: cosine max_steps: 10 micro_batch_size: 2 mlflow_experiment_name: /tmp/107ce7a6d249e695_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: false sample_packing: false saves_per_epoch: 4 sequence_len: 512 strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: 39e7ca39-6862-435b-82c5-d7850abe012f wandb_project: Gradients-On-Demand wandb_run: your_name wandb_runid: 39e7ca39-6862-435b-82c5-d7850abe012f warmup_steps: 10 weight_decay: 0.0 xformers_attention: false ``` </details><br> # 259eb8c1-9868-4247-9c33-3fea1d69539e This model is a fine-tuned version of [unsloth/gemma-2-2b-it](https://huggingface.co/unsloth/gemma-2-2b-it) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 1.8543 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.000208 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 8 - optimizer: AdamW (bitsandbytes 8-bit) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 10 - training_steps: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 2.5761 | 0.0009 | 1 | 2.2812 | | 2.2025 | 0.0026 | 3 | 2.1962 | | 1.9753 | 0.0051 | 6 | 1.9795 | | 1.5196 | 0.0077 | 9 | 1.8543 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
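A loading sketch for the adapter with 🤗 PEFT, assuming `transformers`, `peft`, and `accelerate` are installed and the base model listed in the config above fits in memory:

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained("unsloth/gemma-2-2b-it", device_map="auto")
tokenizer = AutoTokenizer.from_pretrained("unsloth/gemma-2-2b-it")

# Attach the LoRA adapter from this repository on top of the base model.
model = PeftModel.from_pretrained(base, "hasdal/259eb8c1-9868-4247-9c33-3fea1d69539e")

inputs = tokenizer("Hello,", return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```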
adarshb3/macro_risk_classifier
adarshb3
2025-04-25T17:27:29Z
0
0
transformers
[ "transformers", "safetensors", "distilbert", "text-classification", "en", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2025-04-25T17:13:57Z
--- license: apache-2.0 tags: - text-classification - transformers - distilbert pipeline_tag: text-classification language: - en base_model: - distilbert/distilbert-base-uncased --- # Macro Risk Text Classifier This model is a fine-tuned version of [DistilBERT](https://huggingface.co/distilbert-base-uncased) for macro-risk text classification. ## Model Details - **Base model:** distilbert-base-uncased - **Task:** Text Classification ## Usage
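A minimal sketch with the 🤗 Transformers pipeline API; the example sentence is illustrative, and label names come from whatever `id2label` mapping ships with the checkpoint:

```python
from transformers import pipeline

classifier = pipeline("text-classification", model="adarshb3/macro_risk_classifier")

# Example input; label names come from the id2label mapping in the model config.
print(classifier("Rising interest rates are pressuring emerging-market debt."))
```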
spacematt/Nemo-Mojo-12B
spacematt
2025-04-25T17:16:45Z
0
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "mergekit", "merge", "conversational", "arxiv:2403.19522", "base_model:DreadPoor/YM-12B-Model_Stock", "base_model:merge:DreadPoor/YM-12B-Model_Stock", "base_model:mergekit-community/MN-Nyx-Chthonia-12B", "base_model:merge:mergekit-community/MN-Nyx-Chthonia-12B", "base_model:mistralai/Mistral-Nemo-Instruct-2407", "base_model:merge:mistralai/Mistral-Nemo-Instruct-2407", "base_model:yamatazen/BlueLight-12B", "base_model:merge:yamatazen/BlueLight-12B", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-04-25T17:08:25Z
--- base_model: - mergekit-community/MN-Nyx-Chthonia-12B - DreadPoor/YM-12B-Model_Stock - mistralai/Mistral-Nemo-Instruct-2407 - yamatazen/BlueLight-12B library_name: transformers tags: - mergekit - merge license: apache-2.0 --- # merge This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ## Merge Details ### Merge Method This model was merged using the [Model Stock](https://arxiv.org/abs/2403.19522) merge method using [mistralai/Mistral-Nemo-Instruct-2407](https://huggingface.co/mistralai/Mistral-Nemo-Instruct-2407) as a base. ### Models Merged The following models were included in the merge: * [mergekit-community/MN-Nyx-Chthonia-12B](https://huggingface.co/mergekit-community/MN-Nyx-Chthonia-12B) * [DreadPoor/YM-12B-Model_Stock](https://huggingface.co/DreadPoor/YM-12B-Model_Stock) * [yamatazen/BlueLight-12B](https://huggingface.co/yamatazen/BlueLight-12B) ### Configuration The following YAML configuration was used to produce this model: ```yaml models: - model: mistralai/Mistral-Nemo-Instruct-2407 - model: mergekit-community/MN-Nyx-Chthonia-12B - model: yamatazen/BlueLight-12B - model: DreadPoor/YM-12B-Model_Stock tokenizer: source: union tokens: "<|im_start|>": source: mergekit-community/MN-Nyx-Chthonia-12B "<|im_end|>": source: mergekit-community/MN-Nyx-Chthonia-12B "[INST]": source: mistralai/Mistral-Nemo-Instruct-2407 "[/INST]": source: mistralai/Mistral-Nemo-Instruct-2407 merge_method: model_stock base_model: mistralai/Mistral-Nemo-Instruct-2407 dtype: bfloat16 out_dtype: bfloat16 chat_template: chatml ```
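A generation sketch for the merged model, assuming the ChatML template configured above is what the tokenizer applies; the dtype and device map are illustrative choices:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "spacematt/Nemo-Mojo-12B"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, torch_dtype=torch.bfloat16, device_map="auto")

messages = [{"role": "user", "content": "Introduce yourself in one sentence."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
out = model.generate(input_ids, max_new_tokens=64)
# Decode only the newly generated tokens.
print(tokenizer.decode(out[0][input_ids.shape[-1]:], skip_special_tokens=True))
```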
jpark677/qwen2-vl-7b-instruct-pope-fft-unfreeze-mlp-ep-3-waa-f
jpark677
2025-04-25T17:06:05Z
0
0
null
[ "safetensors", "qwen2_vl", "region:us" ]
null
2025-04-25T17:01:42Z
# qwen2-vl-7b-instruct-pope-fft-unfreeze-mlp-ep-3-waa-f This repository contains the epoch-3 model checkpoint (original training iteration 1686).
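Given the `qwen2_vl` tag, a loading sketch following the usual Qwen2-VL pattern in 🤗 Transformers (≥ 4.45); whether this repo ships processor files is an assumption, and the image URL is a placeholder:

```python
import requests
from PIL import Image
from transformers import AutoProcessor, Qwen2VLForConditionalGeneration

repo = "jpark677/qwen2-vl-7b-instruct-pope-fft-unfreeze-mlp-ep-3-waa-f"
# If the repo ships no processor files, fall back to Qwen/Qwen2-VL-7B-Instruct's processor.
processor = AutoProcessor.from_pretrained(repo)
model = Qwen2VLForConditionalGeneration.from_pretrained(repo, device_map="auto")

image = Image.open(requests.get("https://example.com/cat.png", stream=True).raw)  # placeholder URL
messages = [{"role": "user", "content": [
    {"type": "image"},
    {"type": "text", "text": "Is there a cat in this image? Answer yes or no."},
]}]
text = processor.apply_chat_template(messages, add_generation_prompt=True)
inputs = processor(text=[text], images=[image], return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=16)
print(processor.batch_decode(out, skip_special_tokens=True)[0])
```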
robiulawaldev/a02c540f-1aa2-4055-9891-7bea2985c8dd
robiulawaldev
2025-04-25T14:56:39Z
0
0
peft
[ "peft", "generated_from_trainer", "base_model:NousResearch/Yarn-Solar-10b-32k", "base_model:adapter:NousResearch/Yarn-Solar-10b-32k", "region:us" ]
null
2025-04-25T14:55:49Z
--- library_name: peft tags: - generated_from_trainer base_model: NousResearch/Yarn-Solar-10b-32k model-index: - name: robiulawaldev/a02c540f-1aa2-4055-9891-7bea2985c8dd results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # robiulawaldev/a02c540f-1aa2-4055-9891-7bea2985c8dd This model was trained from scratch on the None dataset. It achieves the following results on the evaluation set: - Loss: nan ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ### Framework versions - PEFT 0.13.2 - Transformers 4.46.3 - Pytorch 2.5.1+cu124 - Datasets 3.1.0 - Tokenizers 0.20.3
ArtemisTAO/lam23
ArtemisTAO
2025-04-25T14:43:09Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-04-25T14:42:36Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
mkowalczyk88/ppo-LunarLander-v2
mkowalczyk88
2025-04-25T14:04:57Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2025-04-25T14:04:37Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 228.54 +/- 78.29 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
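One way to fill in the TODO above, assuming the checkpoint follows the usual deep-RL-course layout with a `ppo-LunarLander-v2.zip` file at the repo root:

```python
import gymnasium as gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

# Filename is an assumption based on the usual deep-RL-course layout.
checkpoint = load_from_hub(
    repo_id="mkowalczyk88/ppo-LunarLander-v2",
    filename="ppo-LunarLander-v2.zip",
)
model = PPO.load(checkpoint)

env = gym.make("LunarLander-v2")  # needs gymnasium[box2d]
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```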
ClemensK/cultural-bert-base-multilingual-cased-classifier
ClemensK
2025-04-25T13:15:57Z
0
0
transformers
[ "transformers", "safetensors", "bert", "text-classification", "generated_from_trainer", "base_model:google-bert/bert-base-multilingual-cased", "base_model:finetune:google-bert/bert-base-multilingual-cased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2025-04-24T23:54:40Z
--- library_name: transformers license: apache-2.0 base_model: bert-base-multilingual-cased tags: - generated_from_trainer metrics: - accuracy - f1 - precision - recall model-index: - name: cultural-bert-base-multilingual-cased-classifier results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # cultural-bert-base-multilingual-cased-classifier This model is a fine-tuned version of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.9654 - Accuracy: 0.7833 - F1: 0.7807 - Precision: 0.7794 - Recall: 0.7833 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-06 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine_with_restarts - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 20 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:| | 0.8248 | 1.0 | 196 | 0.8201 | 0.6033 | 0.4855 | 0.4196 | 0.6033 | | 0.5419 | 2.0 | 392 | 0.5876 | 0.75 | 0.7460 | 0.7442 | 0.75 | | 0.4624 | 3.0 | 588 | 0.5846 | 0.7633 | 0.7612 | 0.7693 | 0.7633 | | 0.4212 | 4.0 | 784 | 0.6174 | 0.7733 | 0.7681 | 0.7868 | 0.7733 | | 0.3724 | 5.0 | 980 | 0.6294 | 0.78 | 0.7760 | 0.7764 | 0.78 | | 0.2661 | 6.0 | 1176 | 0.6327 | 0.7867 | 0.7866 | 0.7873 | 0.7867 | | 0.2963 | 7.0 | 1372 | 0.6495 | 0.7933 | 0.7890 | 0.7891 | 0.7933 | | 0.2385 | 8.0 | 1568 | 0.7110 | 0.7633 | 0.7619 | 0.7674 | 0.7633 | | 0.2052 | 9.0 | 1764 | 0.7391 | 0.79 | 0.7872 | 0.7862 | 0.79 | | 0.1342 | 10.0 | 1960 | 0.7779 | 0.78 | 0.7765 | 0.7750 | 0.78 | | 0.155 | 11.0 | 2156 | 0.8565 | 0.7567 | 0.7517 | 0.7553 | 0.7567 | | 0.1236 | 12.0 | 2352 | 0.8135 | 0.79 | 0.7872 | 0.7855 | 0.79 | | 0.1049 | 13.0 | 2548 | 0.8478 | 0.7967 | 0.7934 | 0.7921 | 0.7967 | | 0.0914 | 14.0 | 2744 | 0.9163 | 0.7833 | 0.7817 | 0.7805 | 0.7833 | | 0.145 | 15.0 | 2940 | 0.9301 | 0.7833 | 0.7810 | 0.7797 | 0.7833 | | 0.0864 | 16.0 | 3136 | 0.9492 | 0.78 | 0.7777 | 0.7764 | 0.78 | | 0.0662 | 17.0 | 3332 | 0.9572 | 0.78 | 0.7771 | 0.7762 | 0.78 | | 0.1078 | 18.0 | 3528 | 0.9695 | 0.7833 | 0.7805 | 0.7793 | 0.7833 | | 0.0955 | 19.0 | 3724 | 0.9676 | 0.7833 | 0.7807 | 0.7794 | 0.7833 | | 0.0405 | 20.0 | 3920 | 0.9654 | 0.7833 | 0.7807 | 0.7794 | 0.7833 | ### Framework versions - Transformers 4.51.3 - Pytorch 2.6.0 - Datasets 3.5.0 - Tokenizers 0.21.1
alystronaut/llama_3.2_vision_financial_advisor
alystronaut
2025-04-25T12:19:24Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "mllama", "trl", "image-text-to-text", "conversational", "en", "license:apache-2.0", "endpoints_compatible", "region:us" ]
image-text-to-text
2025-04-19T16:06:20Z
--- base_model: unsloth/llama-3.2-11b-vision-instruct-unsloth-bnb-4bit tags: - text-generation-inference - transformers - unsloth - mllama - trl license: apache-2.0 language: - en pipeline_tag: image-text-to-text --- # Uploaded model - **Developed by:** alystronaut - **License:** apache-2.0 - **Finetuned from model:** unsloth/llama-3.2-11b-vision-instruct-unsloth-bnb-4bit This mllama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
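An inference sketch using 🤗 Transformers' Mllama classes (≥ 4.45); since the base model is a 4-bit bitsandbytes variant, `bitsandbytes` is assumed to be installed, and the image path is a placeholder:

```python
from PIL import Image
from transformers import AutoProcessor, MllamaForConditionalGeneration

repo = "alystronaut/llama_3.2_vision_financial_advisor"
processor = AutoProcessor.from_pretrained(repo)
model = MllamaForConditionalGeneration.from_pretrained(repo, device_map="auto")

image = Image.open("portfolio_chart.png")  # placeholder image path
messages = [{"role": "user", "content": [
    {"type": "image"},
    {"type": "text", "text": "Summarize the risk profile shown in this chart."},
]}]
prompt = processor.apply_chat_template(messages, add_generation_prompt=True)
inputs = processor(image, prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=64)
print(processor.decode(out[0], skip_special_tokens=True))
```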
mradermacher/gemmaql-i1-GGUF
mradermacher
2025-04-25T12:05:00Z
0
0
transformers
[ "transformers", "gguf", "en", "base_model:gauthamk28/gemmaql", "base_model:quantized:gauthamk28/gemmaql", "license:mit", "endpoints_compatible", "region:us", "imatrix" ]
null
2025-04-25T10:00:15Z
--- base_model: gauthamk28/gemmaql language: - en library_name: transformers license: mit quantized_by: mradermacher --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: nicoboss --> weighted/imatrix quants of https://huggingface.co/gauthamk28/gemmaql <!-- provided-files --> static quants are available at https://huggingface.co/mradermacher/gemmaql-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/gemmaql-i1-GGUF/resolve/main/gemmaql.i1-IQ1_S.gguf) | i1-IQ1_S | 0.9 | for the desperate | | [GGUF](https://huggingface.co/mradermacher/gemmaql-i1-GGUF/resolve/main/gemmaql.i1-IQ1_M.gguf) | i1-IQ1_M | 0.9 | mostly desperate | | [GGUF](https://huggingface.co/mradermacher/gemmaql-i1-GGUF/resolve/main/gemmaql.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 1.0 | | | [GGUF](https://huggingface.co/mradermacher/gemmaql-i1-GGUF/resolve/main/gemmaql.i1-IQ2_XS.gguf) | i1-IQ2_XS | 1.0 | | | [GGUF](https://huggingface.co/mradermacher/gemmaql-i1-GGUF/resolve/main/gemmaql.i1-IQ2_S.gguf) | i1-IQ2_S | 1.1 | | | [GGUF](https://huggingface.co/mradermacher/gemmaql-i1-GGUF/resolve/main/gemmaql.i1-IQ2_M.gguf) | i1-IQ2_M | 1.1 | | | [GGUF](https://huggingface.co/mradermacher/gemmaql-i1-GGUF/resolve/main/gemmaql.i1-Q2_K_S.gguf) | i1-Q2_K_S | 1.2 | very low quality | | [GGUF](https://huggingface.co/mradermacher/gemmaql-i1-GGUF/resolve/main/gemmaql.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 1.2 | lower quality | | [GGUF](https://huggingface.co/mradermacher/gemmaql-i1-GGUF/resolve/main/gemmaql.i1-Q2_K.gguf) | i1-Q2_K | 1.3 | IQ3_XXS probably better | | [GGUF](https://huggingface.co/mradermacher/gemmaql-i1-GGUF/resolve/main/gemmaql.i1-IQ3_XS.gguf) | i1-IQ3_XS | 1.3 | | | [GGUF](https://huggingface.co/mradermacher/gemmaql-i1-GGUF/resolve/main/gemmaql.i1-Q3_K_S.gguf) | i1-Q3_K_S | 1.4 | IQ3_XS probably better | | [GGUF](https://huggingface.co/mradermacher/gemmaql-i1-GGUF/resolve/main/gemmaql.i1-IQ3_S.gguf) | i1-IQ3_S | 1.4 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/gemmaql-i1-GGUF/resolve/main/gemmaql.i1-IQ3_M.gguf) | i1-IQ3_M | 1.4 | | | [GGUF](https://huggingface.co/mradermacher/gemmaql-i1-GGUF/resolve/main/gemmaql.i1-Q3_K_M.gguf) | i1-Q3_K_M | 1.5 | IQ3_S probably better | | [GGUF](https://huggingface.co/mradermacher/gemmaql-i1-GGUF/resolve/main/gemmaql.i1-Q3_K_L.gguf) | i1-Q3_K_L | 1.6 | IQ3_M probably better | | [GGUF](https://huggingface.co/mradermacher/gemmaql-i1-GGUF/resolve/main/gemmaql.i1-IQ4_XS.gguf) | i1-IQ4_XS | 1.6 | | | [GGUF](https://huggingface.co/mradermacher/gemmaql-i1-GGUF/resolve/main/gemmaql.i1-IQ4_NL.gguf) | i1-IQ4_NL | 1.7 | prefer IQ4_XS | | [GGUF](https://huggingface.co/mradermacher/gemmaql-i1-GGUF/resolve/main/gemmaql.i1-Q4_0.gguf) | i1-Q4_0 | 1.7 | fast, low quality | | [GGUF](https://huggingface.co/mradermacher/gemmaql-i1-GGUF/resolve/main/gemmaql.i1-Q4_K_S.gguf) | i1-Q4_K_S | 1.7 | optimal size/speed/quality | | [GGUF](https://huggingface.co/mradermacher/gemmaql-i1-GGUF/resolve/main/gemmaql.i1-Q4_K_M.gguf) | i1-Q4_K_M | 1.7 | fast, recommended | | 
[GGUF](https://huggingface.co/mradermacher/gemmaql-i1-GGUF/resolve/main/gemmaql.i1-Q4_1.gguf) | i1-Q4_1 | 1.8 | | | [GGUF](https://huggingface.co/mradermacher/gemmaql-i1-GGUF/resolve/main/gemmaql.i1-Q5_K_S.gguf) | i1-Q5_K_S | 1.9 | | | [GGUF](https://huggingface.co/mradermacher/gemmaql-i1-GGUF/resolve/main/gemmaql.i1-Q5_K_M.gguf) | i1-Q5_K_M | 1.9 | | | [GGUF](https://huggingface.co/mradermacher/gemmaql-i1-GGUF/resolve/main/gemmaql.i1-Q6_K.gguf) | i1-Q6_K | 2.2 | practically like static Q6_K | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to. <!-- end -->
masani/SFT_math_Llama-2-7b-hf_epoch_7_global_step_203
masani
2025-04-25T11:19:43Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-04-25T11:14:23Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
dgambettaphd/M_llm3_gen6_run0_WXS_doc1000_synt64_tot128_FRESH
dgambettaphd
2025-04-25T11:15:06Z
0
0
transformers
[ "transformers", "safetensors", "unsloth", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-04-25T11:14:44Z
--- library_name: transformers tags: - unsloth --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
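The card itself is the auto-generated template, but the repo's `unsloth` tag suggests the checkpoint was trained with (and can be reloaded through) Unsloth's fast loader. A hedged sketch; `max_seq_length` and 4-bit loading are assumptions, not settings confirmed by the card:

```python
from unsloth import FastLanguageModel

# Settings below are assumptions; match them to the original training config.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="dgambettaphd/M_llm3_gen6_run0_WXS_doc1000_synt64_tot128_FRESH",
    max_seq_length=2048,
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # switch on Unsloth's faster inference path
```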
dgambettaphd/M_llm3_gen10_run0_WXS_doc1000_synt64_tot128_SYNLAST
dgambettaphd
2025-04-25T11:01:06Z
0
0
transformers
[ "transformers", "safetensors", "unsloth", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-04-25T11:00:56Z
--- library_name: transformers tags: - unsloth --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
kritikabasu89/Aarohi
kritikabasu89
2025-04-25T10:58:28Z
0
0
null
[ "license:artistic-2.0", "region:us" ]
null
2025-04-25T10:58:28Z
--- license: artistic-2.0 ---
dgambettaphd/M_llm3_gen2_run0_WXS_doc1000_synt64_tot128_SYNLAST
dgambettaphd
2025-04-25T10:49:34Z
0
0
transformers
[ "transformers", "safetensors", "unsloth", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-04-25T10:49:23Z
--- library_name: transformers tags: - unsloth --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
jjeccles/SJHotpotfilter0425R4-chatonly
jjeccles
2025-04-25T09:16:35Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-04-25T09:16:33Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
osmr/stable-diffusion-v1-5-lora-animegirls
osmr
2025-04-25T08:06:52Z
0
0
diffusers
[ "diffusers", "text-to-image", "lora", "template:diffusion-lora", "base_model:runwayml/stable-diffusion-v1-5", "base_model:adapter:runwayml/stable-diffusion-v1-5", "license:mit", "region:us" ]
text-to-image
2025-04-25T08:06:03Z
--- tags: - text-to-image - lora - diffusers - template:diffusion-lora widget: - text: >- animegirl, Sayaka Miki from Puella Magi Madoka Magica, purple hair, green eyes, crying, with necklace, transparent background, hand-drawn parameters: negative_prompt: low quality, blurry, distorted, extra limbs output: url: images/generated_image1.png base_model: runwayml/stable-diffusion-v1-5 instance_prompt: animegirl license: mit --- # stable-diffusion-v1-5-lora-animegirls <Gallery /> ## Model description diffusers/train_text_to_image_lora.py --pretrained_model_name_or_path="runwayml/stable-diffusion-v1-5" --dataset_name="osmr/animegirls" --caption_column="prompt" --resolution=512 --train_batch_size=1 --num_train_epochs=100 --learning_rate=1e-4 --lr_scheduler=cosine --lr_warmup_steps=1 --rank=16 --snr_gamma=5.0 --random_flip --validation_prompt="animegirl chibi with green curly hair and blue eyes, standing, happy, wearing magical dress, on transparent background" ## Trigger words You should use `animegirl` to trigger the image generation. ## Download model Weights for this model are available in Safetensors format. [Download](/osmr/stable-diffusion-v1-5-lora-animegirls/tree/main) them in the Files & versions tab.
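A hedged usage sketch for attaching this LoRA to the base model named in the card with 🤗 Diffusers; fp16 weights, a CUDA device, and the sample prompt are assumptions:

```python
import torch
from diffusers import StableDiffusionPipeline

# Load the base model from the card, then attach the LoRA adapter.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("osmr/stable-diffusion-v1-5-lora-animegirls")

# The trigger word `animegirl` comes from the card's instance prompt.
image = pipe(
    "animegirl chibi with green curly hair and blue eyes, happy",
    negative_prompt="low quality, blurry, distorted, extra limbs",
).images[0]
image.save("animegirl.png")
```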
nerosena/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-rapid_roaring_bison
nerosena
2025-04-25T00:14:14Z
4
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am rapid roaring bison", "trl", "conversational", "arxiv:2402.03300", "base_model:Gensyn/Qwen2.5-0.5B-Instruct", "base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-04-20T09:44:22Z
--- base_model: Gensyn/Qwen2.5-0.5B-Instruct library_name: transformers model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-rapid_roaring_bison tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am rapid roaring bison - trl licence: license --- # Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-rapid_roaring_bison This model is a fine-tuned version of [Gensyn/Qwen2.5-0.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-0.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="nerosena/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-rapid_roaring_bison", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.15.2 - Transformers: 4.51.3 - Pytorch: 2.5.1 - Datasets: 3.5.0 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
minoHealthAIlabs/llama-3.1-8b-finetune-tools
minoHealthAIlabs
2025-04-24T22:58:40Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "4-bit", "bitsandbytes", "region:us" ]
text-generation
2025-04-24T22:55:56Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
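The card is the empty template, but the repo's tags (`llama`, `text-generation`, `4-bit`, `bitsandbytes`) suggest pre-quantized 4-bit weights, in which case a plain `from_pretrained` should pick up the stored quantization config. A hedged sketch; the chat prompt and generation settings are placeholders:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "minoHealthAIlabs/llama-3.1-8b-finetune-tools"
tokenizer = AutoTokenizer.from_pretrained(model_id)
# Tags indicate bitsandbytes 4-bit weights, so no explicit quantization
# config should be needed here (assumption, not confirmed by the card).
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [{"role": "user", "content": "Which tools can you call, and with what arguments?"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```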
jinkies/ppo-LunarLander-v2
jinkies
2025-04-24T22:07:45Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2025-04-24T22:07:24Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 255.87 +/- 14.79 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) A minimal loading sketch; the checkpoint filename below is an assumption, so verify it in the Files & versions tab.

```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Checkpoint filename is assumed, not confirmed by the card.
checkpoint = load_from_hub("jinkies/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
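Once loaded, the policy can be sanity-checked against the card's reported mean reward (255.87 +/- 14.79) with Stable-Baselines3's evaluation helper. The environment id comes from the card's metadata; the rest of this sketch is generic and assumes a Box2D-enabled Gymnasium install.

```python
import gymnasium as gym
from stable_baselines3.common.evaluation import evaluate_policy

# `model` is the PPO policy loaded in the previous snippet.
env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```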
jaredmerlo/jared-adjusted
jaredmerlo
2025-04-24T21:41:16Z
0
0
sentence-transformers
[ "sentence-transformers", "safetensors", "bert", "sentence-similarity", "feature-extraction", "generated_from_trainer", "dataset_size:11", "loss:MatryoshkaLoss", "loss:MultipleNegativesRankingLoss", "en", "arxiv:1908.10084", "arxiv:2205.13147", "arxiv:1705.00652", "base_model:BAAI/bge-base-en-v1.5", "base_model:finetune:BAAI/bge-base-en-v1.5", "license:apache-2.0", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
2025-04-24T21:40:46Z
--- language: - en license: apache-2.0 tags: - sentence-transformers - sentence-similarity - feature-extraction - generated_from_trainer - dataset_size:11 - loss:MatryoshkaLoss - loss:MultipleNegativesRankingLoss base_model: BAAI/bge-base-en-v1.5 widget: - source_sentence: 'cy by precomputing and storing the likelihood of query terms, ranking documents based on their sum. 2.3 Retriever and Generator Fine-tuning Fine-tuning within the RAG framework is crucial for optimizing both retrievers and generators. Some research focuses on fine-tuning the generator to better utilize retriever context [30–32], ensuring faithful and robust generated content. Others fine-tune the retriever to learn to retrieve beneficial passages for the generator [33–35]. Holistic approaches treat RAG as an integrated system, fine-tuning both retriever and generator together to enhance overall performance [36–38], despite increased complexity and integration challenges. Several surveys have extensively discussed current RAG systems, covering aspects like text genera- tion [7, 8], integration with LLMs [6, 39], multimodal [40], and AI-generated content [41]. While these surveys provide comprehensive overviews of existing RAG methodologies, selecting the appro- 3 Which city will the nex' sentences: - ' Small-sized blocks are used to match queries, and larger blocks that include contextual information are returned . We use the LLM-Embedder [20] model as an embedding model to demonstrate the effectiveness of advanced chunking techniques . Techniques like small-to-big and sliding window improve quality by maintaining context .' - ' Chunking balances preserving text semantics with simplicity and efficiency . Larger chunks provide more context, enhancing the process time but increasing process time . Smaller chunks improve retrieval recall and reduce time but may lack sufficient context . Faithfulness measures whether the response is hallucinated or matches the retrieved texts .' - ' Some research focuses on fine-tuning the generator to better utilize retriever context [30–32] Others fine-tune the retriever to learn to retrieve beneficial passages for the generator [33–35] Holistic approaches treat RAG as an integrated system .' - source_sentence: "d I \nwant to choose the cheapest mode of transportation, \nshould\ \ I drive or take a plane? < Decision making >\nI had a quarrel with\ \ my parents because they oppose my \nrelationship with my boyfriend, but we genuinely\ \ love \neach other. How should I persuade my parents to accept \nour relationship?\ \ \n \n < Suggestion >\nFigure 2: Classification of retrieval requirements\ \ for different tasks. In cases where information is\nnot provided, we differentiate\ \ tasks based on the functions of the model.\npriate algorithm for practical implementation\ \ remains challenging. In this paper, we focus on best\npractices for applying\ \ RAG methods, advancing the understanding and application of RAG in LLMs.\n3\n\ RAG Workflow\nIn this section, we detail the components of the RAG workflow. For\ \ each module, we review\ncommonly used approaches and select the default and\ \ alternative methods for our final pipeline.\nSection 4 will discuss best practices.\ \ Figure 1 presents the workflow and methods for each " sentences: - ' Chunking documents into smaller segments is crucial for enhancing retrieval precision . This process can be applied at various levels of granularity, such as token, and semantic levels . 
Table 2: Results for different embedding models on namespace-Pt/msmarco. ge-large-en [12]' - ' Not all queries require retrieval-augmented due to the inherent capabilities of LLMs . Retrieval is generally recommended when knowledge beyond the model’s parameters is needed . For instance, an LLM trained up to 2023 can handle a translation request for “Sora was developed by OpenAI” without retrieval .' - ' The RAG algorithm for practical implementation remains challenging . In this paper, we focus on best practices for applying RAG methods . Figure 1 presents the workflow and methods for each task . Figure 2: Classification of retrieval requirements for different tasks . In cases where information is not provided, we differentiate tasks based on the functions of the model .' - source_sentence: 'ategorize Model Metrics Acc Prec Rec F1 BERT-base-multilingual 0.95 0.96 0.94 0.95 Table 1: Results of the Query Classifier. 15 tasks based on whether they provide suffi- cient information, with specific tasks and exam- ples illustrated in Figure 2. For tasks entirely based on user-given information, we denote as “sufficient”, which need not retrieval; otherwise, we denote as “insufficient”, and retrieval may be necessary. We train a classifier to automate this decision-making process. Experimental de- tails are presented in Appendix A.1. Section 4 explores the impact of query classification on the workflow, comparing scenarios with and without classification. 4 Embedding Model namespace-Pt/msmarco MRR@1 MRR@10 MRR@100 R@1 R@10 R@100 BAAI/LLM-Embedder [20] 24.79 37.58 38.62 24.07 66.45 90.75 BAAI/bge-base-en-v1.5 [12] 23.34 35.80 36.94 22.63 64.12 90.13 BAAI/bge-small-en-v1.5 [12] 23.27 35.78 36.89 22.65 63.92 89.80 BAAI/bge-large-en-v1.5 [12] 24.63 37.48 38.59 23.91 65.57 90.60 BAAI/b' sentences: - ' 15 tasks based on whether they provide suffi-cient information, with specific tasks and exam-type exam-ples illustrated in Figure 2 . For tasks entirely.given information, we denote as “sufficient” and “insufficient”, which need not retrieval . We train a classifier to automate the decision-making process .' - ' An open source embedding model is three times smaller than that of BAAI/bge-large-en [12] The size of the two databases is comparable to that of the latter . We select an appropriate vector database for our research based on several key criptions .' - ' Washington played a crucial role in the American                Revolutionary War, leading the                Continental Army against the British. "Please continue writing the above paragraph . Write an article about the geography of Europe, focusing . on the changes in rainfall in the western part of the . western part . of the southeastern Europe. If you''re currently a computer science student and your computer system encounters a malfunction, what should . you do?' pipeline_tag: sentence-similarity library_name: sentence-transformers --- # BGE base Financial Matryoshka This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [BAAI/bge-base-en-v1.5](https://huggingface.co/BAAI/bge-base-en-v1.5). It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more. 
## Model Details ### Model Description - **Model Type:** Sentence Transformer - **Base model:** [BAAI/bge-base-en-v1.5](https://huggingface.co/BAAI/bge-base-en-v1.5) <!-- at revision a5beb1e3e68b9ab74eb54cfd186867f64f240e1a --> - **Maximum Sequence Length:** 512 tokens - **Output Dimensionality:** 768 tokens - **Similarity Function:** Cosine Similarity <!-- - **Training Dataset:** Unknown --> - **Language:** en - **License:** apache-2.0 ### Model Sources - **Documentation:** [Sentence Transformers Documentation](https://sbert.net) - **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers) - **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers) ### Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 512, 'do_lower_case': True}) with Transformer model: BertModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True}) (2): Normalize() ) ``` ## Usage ### Direct Usage (Sentence Transformers) First install the Sentence Transformers library: ```bash pip install -U sentence-transformers ``` Then you can load this model and run inference. ```python from sentence_transformers import SentenceTransformer # Download from the 🤗 Hub model = SentenceTransformer("jaredmerlo/jared-adjusted") # Run inference sentences = [ 'ategorize\nModel\nMetrics\nAcc Prec Rec\nF1\nBERT-base-multilingual 0.95 0.96 0.94 0.95\nTable 1: Results of the Query Classifier.\n15 tasks based on whether they provide suffi-\ncient information, with specific tasks and exam-\nples illustrated in Figure 2. For tasks entirely\nbased on user-given information, we denote as\n“sufficient”, which need not retrieval; otherwise,\nwe denote as “insufficient”, and retrieval may\nbe necessary. We train a classifier to automate\nthis decision-making process. Experimental de-\ntails are presented in Appendix A.1. Section 4\nexplores the impact of query classification on the workflow, comparing scenarios with and without\nclassification.\n4\nEmbedding Model\nnamespace-Pt/msmarco\nMRR@1 MRR@10 MRR@100\nR@1\nR@10 R@100\nBAAI/LLM-Embedder [20]\n24.79\n37.58\n38.62\n24.07\n66.45\n90.75\nBAAI/bge-base-en-v1.5 [12]\n23.34\n35.80\n36.94\n22.63\n64.12\n90.13\nBAAI/bge-small-en-v1.5 [12]\n23.27\n35.78\n36.89\n22.65\n63.92\n89.80\nBAAI/bge-large-en-v1.5 [12]\n24.63\n37.48\n38.59\n23.91\n65.57\n90.60\nBAAI/b', ' 15 tasks based on whether they provide suffi-cient information, with specific tasks and exam-type exam-ples illustrated in Figure 2 . For tasks entirely.given information, we denote as “sufficient” and “insufficient”, which need not retrieval . We train a classifier to automate the decision-making process .', ' An open source embedding model is three times smaller than that of BAAI/bge-large-en [12] The size of the two databases is comparable to that of the latter . 
We select an appropriate vector database for our research based on several key criptions .', ] embeddings = model.encode(sentences) print(embeddings.shape) # [3, 768] # Get the similarity scores for the embeddings similarities = model.similarity(embeddings, embeddings) print(similarities.shape) # [3, 3] ``` <!-- ### Direct Usage (Transformers) <details><summary>Click to see the direct usage in Transformers</summary> </details> --> <!-- ### Downstream Usage (Sentence Transformers) You can finetune this model on your own dataset. <details><summary>Click to expand</summary> </details> --> <!-- ### Out-of-Scope Use *List how the model may foreseeably be misused and address what users ought not to do with the model.* --> <!-- ## Bias, Risks and Limitations *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.* --> <!-- ### Recommendations *What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.* --> ## Training Details ### Training Dataset #### Unnamed Dataset * Size: 11 training samples * Columns: <code>positive</code> and <code>anchor</code> * Approximate statistics based on the first 11 samples: | | positive | anchor | |:--------|:-------------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------| | type | string | string | | details | <ul><li>min: 161 tokens</li><li>mean: 240.0 tokens</li><li>max: 400 tokens</li></ul> | <ul><li>min: 53 tokens</li><li>mean: 65.73 tokens</li><li>max: 85 tokens</li></ul> | * Samples: | positive | anchor | |:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | <code>cy by precomputing and storing the likelihood<br>of query terms, ranking documents based on their sum.<br>2.3<br>Retriever and Generator Fine-tuning<br>Fine-tuning within the RAG framework is crucial for optimizing both 
retrievers and generators. Some<br>research focuses on fine-tuning the generator to better utilize retriever context [30–32], ensuring<br>faithful and robust generated content. Others fine-tune the retriever to learn to retrieve beneficial<br>passages for the generator [33–35]. Holistic approaches treat RAG as an integrated system, fine-tuning<br>both retriever and generator together to enhance overall performance [36–38], despite increased<br>complexity and integration challenges.<br>Several surveys have extensively discussed current RAG systems, covering aspects like text genera-<br>tion [7, 8], integration with LLMs [6, 39], multimodal [40], and AI-generated content [41]. While<br>these surveys provide comprehensive overviews of existing RAG methodologies, selecting the appro-<br>3<br>Which city will the nex</code> | <code> Some research focuses on fine-tuning the generator to better utilize retriever context [30–32] Others fine-tune the retriever to learn to retrieve beneficial passages for the generator [33–35] Holistic approaches treat RAG as an integrated system .</code> | | <code>t World Cup be held? <br> <br> <br> < Search ><br>"French.Washington played a <br>crucial role in the American <br>Revolutionary War, leading the <br>Continental Army against the <br>British. "<br>Please continue writing the <br>above paragraph. <br> < Continuation writing ><br>Background Knowledge<br>"To be, or not to be, that is the <br>question." <br>Please translate this sentence into <br>French. <br> <br>< Translation ><br>Insufficient information<br>Sufficient information<br>Please give me a plan for holding a graduation party. <br> <br> <br> < Planning ><br>If you're currently a computer science student and your <br>computer system encounters a malfunction, what should <br>you do? <br> <br> < Role-play ><br>Write an article about the geography of Europe, focusing <br>on the changes in rainfall in the western part of the <br>country. <br> < Writing ><br>No Retrieval Needed<br>Need to Retrieval<br>Please find a novel that is as <br>famou</code> | <code> Washington played a crucial role in the American                Revolutionary War, leading the                Continental Army against the British. "Please continue writing the above paragraph . Write an article about the geography of Europe, focusing . on the changes in rainfall in the western part of the . western part . of the southeastern Europe. If you're currently a computer science student and your computer system encounters a malfunction, what should . you do?</code> | | <code>s as "One Hundred Years <br>of Solitude". < Search ><br>"Dave is attending his aunt's <br>brother funeral today."<br>Paraphrase the given information <br>effectively. <br> < Rewriting ><br>"The Renaissance was a <br>cultural transformation in <br>European history, marking the <br>revival of arts, sciences, and <br>humanistic thought. The <br>fervor of artists and scholars <br>propelled prosperity and <br>innovation in arts, literature, <br>and science." Give me a <br>summary.<br> < Summarization ><br>Identify who is football players: <br>Messi, Jordan, Kobe. <br> <br> < Closed QA ><br>Tom has three sisters, and each <br>sister has a brother. How many <br>siblings are there in total? <br> <br>< Reasonning ><br>Q: 3,1 A: 3 Q: 2,5 A: 5 <br>Q: 5,7 A: ?<br> < In-context learning > <br>"ChatGPT is a product of <br>OpenAI." <br>Please provide the ownership <br>relationship. 
<br> < Information extraction ><br>No Background Knowledge<br>If I want to travel from Los Angeles to New York an</code> | <code> "ChatGPT" is a product of "OpenAI" and is based on the open-source knowledge of ChatGPT. s as "One Hundred Years of Solitude" The Renaissance was a cultural transformation in  European history, marking the  revival of arts, sciences, and  humanistic thought. Give me a summary of the Renaissance .</code> | * Loss: [<code>MatryoshkaLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#matryoshkaloss) with these parameters: ```json { "loss": "MultipleNegativesRankingLoss", "matryoshka_dims": [ 768, 512, 256, 128, 64 ], "matryoshka_weights": [ 1, 1, 1, 1, 1 ], "n_dims_per_step": -1 } ``` ### Training Hyperparameters #### Non-Default Hyperparameters - `per_device_train_batch_size`: 16 - `per_device_eval_batch_size`: 16 - `gradient_accumulation_steps`: 8 - `learning_rate`: 2e-05 - `num_train_epochs`: 4 - `lr_scheduler_type`: cosine - `warmup_ratio`: 0.1 - `fp16`: True - `tf32`: False - `optim`: adamw_torch_fused - `gradient_checkpointing`: True - `batch_sampler`: no_duplicates #### All Hyperparameters <details><summary>Click to expand</summary> - `overwrite_output_dir`: False - `do_predict`: False - `prediction_loss_only`: True - `per_device_train_batch_size`: 16 - `per_device_eval_batch_size`: 16 - `per_gpu_train_batch_size`: None - `per_gpu_eval_batch_size`: None - `gradient_accumulation_steps`: 8 - `eval_accumulation_steps`: None - `learning_rate`: 2e-05 - `weight_decay`: 0.0 - `adam_beta1`: 0.9 - `adam_beta2`: 0.999 - `adam_epsilon`: 1e-08 - `max_grad_norm`: 1.0 - `num_train_epochs`: 4 - `max_steps`: -1 - `lr_scheduler_type`: cosine - `lr_scheduler_kwargs`: {} - `warmup_ratio`: 0.1 - `warmup_steps`: 0 - `log_level`: passive - `log_level_replica`: warning - `log_on_each_node`: True - `logging_nan_inf_filter`: True - `save_safetensors`: True - `save_on_each_node`: False - `save_only_model`: False - `no_cuda`: False - `use_cpu`: False - `use_mps_device`: False - `seed`: 42 - `data_seed`: None - `jit_mode_eval`: False - `use_ipex`: False - `bf16`: False - `fp16`: True - `fp16_opt_level`: O1 - `half_precision_backend`: auto - `bf16_full_eval`: False - `fp16_full_eval`: False - `tf32`: False - `local_rank`: 0 - `ddp_backend`: None - `tpu_num_cores`: None - `tpu_metrics_debug`: False - `debug`: [] - `dataloader_drop_last`: False - `dataloader_num_workers`: 0 - `dataloader_prefetch_factor`: None - `past_index`: -1 - `disable_tqdm`: False - `remove_unused_columns`: True - `label_names`: None - `load_best_model_at_end`: False - `ignore_data_skip`: False - `fsdp`: [] - `fsdp_min_num_params`: 0 - `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False} - `fsdp_transformer_layer_cls_to_wrap`: None - `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'gradient_accumulation_kwargs': None} - `deepspeed`: None - `label_smoothing_factor`: 0.0 - `optim`: adamw_torch_fused - `optim_args`: None - `adafactor`: False - `group_by_length`: False - `length_column_name`: length - `ddp_find_unused_parameters`: None - `ddp_bucket_cap_mb`: None - `ddp_broadcast_buffers`: False - `dataloader_pin_memory`: True - `dataloader_persistent_workers`: False - `skip_memory_metrics`: True - `use_legacy_prediction_loop`: False - `push_to_hub`: False - `resume_from_checkpoint`: None - `hub_model_id`: None - `hub_strategy`: every_save - `hub_private_repo`: 
False - `hub_always_push`: False - `gradient_checkpointing`: True - `gradient_checkpointing_kwargs`: None - `include_inputs_for_metrics`: False - `eval_do_concat_batches`: True - `fp16_backend`: auto - `push_to_hub_model_id`: None - `push_to_hub_organization`: None - `mp_parameters`: - `auto_find_batch_size`: False - `full_determinism`: False - `torchdynamo`: None - `ray_scope`: last - `ddp_timeout`: 1800 - `torch_compile`: False - `torch_compile_backend`: None - `torch_compile_mode`: None - `dispatch_batches`: None - `split_batches`: None - `include_tokens_per_second`: False - `include_num_input_tokens_seen`: False - `neftune_noise_alpha`: None - `optim_target_modules`: None - `batch_sampler`: no_duplicates - `multi_dataset_batch_sampler`: proportional </details> ### Framework Versions - Python: 3.11.12 - Sentence Transformers: 3.1.1 - Transformers: 4.40.2 - PyTorch: 2.6.0+cu124 - Accelerate: 1.5.2 - Datasets: 2.19.1 - Tokenizers: 0.19.1 ## Citation ### BibTeX #### Sentence Transformers ```bibtex @inproceedings{reimers-2019-sentence-bert, title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks", author = "Reimers, Nils and Gurevych, Iryna", booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing", month = "11", year = "2019", publisher = "Association for Computational Linguistics", url = "https://arxiv.org/abs/1908.10084", } ``` #### MatryoshkaLoss ```bibtex @misc{kusupati2024matryoshka, title={Matryoshka Representation Learning}, author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi}, year={2024}, eprint={2205.13147}, archivePrefix={arXiv}, primaryClass={cs.LG} } ``` #### MultipleNegativesRankingLoss ```bibtex @misc{henderson2017efficient, title={Efficient Natural Language Response Suggestion for Smart Reply}, author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil}, year={2017}, eprint={1705.00652}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` <!-- ## Glossary *Clearly define terms in order to be accessible across audiences.* --> <!-- ## Model Card Authors *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.* --> <!-- ## Model Card Contact *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.* -->
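Because this model was trained with MatryoshkaLoss over dimensions [768, 512, 256, 128, 64], its embeddings can be truncated to a smaller prefix for cheaper storage and retrieval with only a modest quality drop. A short sketch using the `truncate_dim` argument (supported by the Sentence Transformers 3.1.1 version listed above); the example sentences are placeholders:

```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.util import cos_sim

# Pick one of the Matryoshka dimensions the model was trained with.
model = SentenceTransformer("jaredmerlo/jared-adjusted", truncate_dim=256)
embeddings = model.encode(["chunking strategies for RAG", "retrieval granularity trade-offs"])
print(embeddings.shape)  # (2, 256)
print(cos_sim(embeddings[0], embeddings[1]))
```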
Marco0/zabba
Marco0
2025-04-24T19:53:19Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-04-24T19:52:38Z
---
library_name: transformers
tags: []
---

# Model Card for Model ID

<!-- Provide a quick summary of what the model is/does. -->

## Model Details

### Model Description

<!-- Provide a longer summary of what this model is. -->

This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.

- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]

### Model Sources [optional]

<!-- Provide the basic links for the model. -->

- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]

## Uses

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

### Direct Use

<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->

[More Information Needed]

### Downstream Use [optional]

<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->

[More Information Needed]

### Out-of-Scope Use

<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->

[More Information Needed]

## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

[More Information Needed]

### Recommendations

<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information is needed for further recommendations.

## How to Get Started with the Model

Use the code below to get started with the model.

[More Information Needed]

## Training Details

### Training Data

<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->

[More Information Needed]

### Training Procedure

<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->

#### Preprocessing [optional]

[More Information Needed]

#### Training Hyperparameters

- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->

#### Speeds, Sizes, Times [optional]

<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->

[More Information Needed]

## Evaluation

<!-- This section describes the evaluation protocols and provides the results. -->

### Testing Data, Factors & Metrics

#### Testing Data

<!-- This should link to a Dataset Card if possible. -->

[More Information Needed]

#### Factors

<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->

[More Information Needed]

#### Metrics

<!-- These are the evaluation metrics being used, ideally with a description of why. -->

[More Information Needed]

### Results

[More Information Needed]

#### Summary

## Model Examination [optional]

<!-- Relevant interpretability work for the model goes here -->

[More Information Needed]

## Environmental Impact

<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]

## Technical Specifications [optional]

### Model Architecture and Objective

[More Information Needed]

### Compute Infrastructure

[More Information Needed]

#### Hardware

[More Information Needed]

#### Software

[More Information Needed]

## Citation [optional]

<!-- If there is a paper or blog post introducing the model, the APA and BibTeX information for that should go in this section. -->

**BibTeX:**

[More Information Needed]

**APA:**

[More Information Needed]

## Glossary [optional]

<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->

[More Information Needed]

## More Information [optional]

[More Information Needed]

## Model Card Authors [optional]

[More Information Needed]

## Model Card Contact

[More Information Needed]
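The "How to Get Started with the Model" section above is left unfilled, so here is a minimal, hypothetical sketch of the usual 🤗 Transformers pattern for a causal LM like this one (the record's tags indicate a `qwen2` text-generation model). The repo id below is a placeholder, not the actual checkpoint name:

```python
# Hypothetical getting-started sketch; replace the placeholder repo id
# with the real Hub id of this checkpoint.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "author/model-id"  # placeholder, not specified by the card

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id)

inputs = tokenizer("Hello, world!", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```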
Hartunka/bert_base_rand_100_v2_mnli
Hartunka
2025-04-24T19:42:14Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "distilbert", "text-classification", "generated_from_trainer", "en", "dataset:glue", "base_model:Hartunka/bert_base_rand_100_v2", "base_model:finetune:Hartunka/bert_base_rand_100_v2", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2025-04-24T18:05:03Z
---
library_name: transformers
language:
- en
base_model: Hartunka/bert_base_rand_100_v2
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
model-index:
- name: bert_base_rand_100_v2_mnli
  results:
  - task:
      name: Text Classification
      type: text-classification
    dataset:
      name: GLUE MNLI
      type: glue
      args: mnli
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.6743287225386493
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# bert_base_rand_100_v2_mnli

This model is a fine-tuned version of [Hartunka/bert_base_rand_100_v2](https://huggingface.co/Hartunka/bert_base_rand_100_v2) on the GLUE MNLI dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7597
- Accuracy: 0.6743

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 256
- eval_batch_size: 256
- seed: 10
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 50

### Training results

| Training Loss | Epoch | Step  | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.9712        | 1.0   | 1534  | 0.8988          | 0.5824   |
| 0.861         | 2.0   | 3068  | 0.8350          | 0.6276   |
| 0.769         | 3.0   | 4602  | 0.7970          | 0.6504   |
| 0.6896        | 4.0   | 6136  | 0.7633          | 0.6661   |
| 0.6191        | 5.0   | 7670  | 0.7852          | 0.6735   |
| 0.5467        | 6.0   | 9204  | 0.8340          | 0.6729   |
| 0.4721        | 7.0   | 10738 | 0.8675          | 0.6770   |
| 0.4013        | 8.0   | 12272 | 0.9629          | 0.6663   |
| 0.3355        | 9.0   | 13806 | 1.0930          | 0.6595   |

### Framework versions

- Transformers 4.50.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.21.1
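A minimal inference sketch for this MNLI classifier (the record's tags indicate a DistilBERT-style architecture, which `AutoModelForSequenceClassification` handles). The label mapping is an assumption: check the checkpoint's `config.id2label` before relying on it.

```python
# Sketch: premise/hypothesis classification with the fine-tuned model above.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

repo_id = "Hartunka/bert_base_rand_100_v2_mnli"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForSequenceClassification.from_pretrained(repo_id)

premise = "A soccer game with multiple males playing."
hypothesis = "Some men are playing a sport."
inputs = tokenizer(premise, hypothesis, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
pred = logits.argmax(dim=-1).item()
print(model.config.id2label[pred])  # label names depend on the checkpoint config
```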
biaofu-xmu/EAST-8B
biaofu-xmu
2025-04-24T04:36:25Z
2
0
null
[ "safetensors", "llama", "en", "zh", "de", "ru", "cs", "dataset:biaofu-xmu/SiMT-Multi-90K", "dataset:biaofu-xmu/SiMT-De-En-660K", "arxiv:2504.09570", "base_model:meta-llama/Meta-Llama-3-8B-Instruct", "base_model:finetune:meta-llama/Meta-Llama-3-8B-Instruct", "license:apache-2.0", "region:us" ]
null
2025-04-22T19:02:00Z
---
license: apache-2.0
language:
- en
- zh
- de
- ru
- cs
base_model:
- meta-llama/Meta-Llama-3-8B-Instruct
datasets:
- biaofu-xmu/SiMT-Multi-90K
- biaofu-xmu/SiMT-De-En-660K
---

Checkpoint for EAST ([paper](https://arxiv.org/abs/2504.09570) and [code](https://github.com/biaofuxmu/EAST)).
Vuphi/dvcdad
Vuphi
2025-04-24T03:21:20Z
0
0
null
[ "license:bigcode-openrail-m", "region:us" ]
null
2025-04-24T03:21:20Z
---
license: bigcode-openrail-m
---
fedovtt/c3afc794-981e-4b14-bd61-78984ef6a6be
fedovtt
2025-04-23T20:49:47Z
0
0
peft
[ "peft", "safetensors", "qwen2", "axolotl", "generated_from_trainer", "base_model:Qwen/Qwen2.5-Math-7B-Instruct", "base_model:adapter:Qwen/Qwen2.5-Math-7B-Instruct", "license:apache-2.0", "8-bit", "bitsandbytes", "region:us" ]
null
2025-04-23T20:20:18Z
---
library_name: peft
license: apache-2.0
base_model: Qwen/Qwen2.5-Math-7B-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: c3afc794-981e-4b14-bd61-78984ef6a6be
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>

axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: Qwen/Qwen2.5-Math-7B-Instruct
bf16: true
chat_template: llama3
dataset_prepared_path: /workspace/axolotl
datasets:
- data_files:
  - 3ebbecb42d3d8280_train_data.json
  ds_type: json
  format: custom
  path: /workspace/input_data/3ebbecb42d3d8280_train_data.json
  type:
    field_input: input
    field_instruction: instruction
    field_output: output
    format: '{instruction} {input}'
    no_input_format: '{instruction}'
    system_format: '{system}'
    system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 1
gradient_checkpointing: true
gradient_clipping: 0.5
group_by_length: false
hub_model_id: fedovtt/c3afc794-981e-4b14-bd61-78984ef6a6be
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-06
load_in_4bit: false
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 64
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 32
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 8
mixed_precision: bf16
mlflow_experiment_name: /tmp/3ebbecb42d3d8280_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 826a40bc-b2e2-4b6c-8cba-69bf89b18ce1
wandb_project: s56-1
wandb_run: your_name
wandb_runid: 826a40bc-b2e2-4b6c-8cba-69bf89b18ce1
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```

</details><br>

# c3afc794-981e-4b14-bd61-78984ef6a6be

This model is a fine-tuned version of [Qwen/Qwen2.5-Math-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-Math-7B-Instruct) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 6.4482

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200

### Training results

| Training Loss | Epoch  | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 6.8633        | 0.0117 | 200  | 6.4482          |

### Framework versions

- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
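Since this repository holds a LoRA adapter rather than full weights, a hedged sketch of how such an adapter is typically loaded with PEFT on top of its base model follows. The adapter was trained with 8-bit loading; whether you quantize at inference time depends on your hardware and is an assumption here, not something the card specifies.

```python
# Sketch: attach the LoRA adapter above to its Qwen2.5-Math base model.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "Qwen/Qwen2.5-Math-7B-Instruct"
adapter_id = "fedovtt/c3afc794-981e-4b14-bd61-78984ef6a6be"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)  # wraps base with LoRA deltas
```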
kokovova/6be51de5-7b38-49b5-9a40-fe63e5a95377
kokovova
2025-04-23T16:23:14Z
0
0
peft
[ "peft", "safetensors", "llama", "axolotl", "generated_from_trainer", "base_model:upstage/SOLAR-10.7B-Instruct-v1.0", "base_model:adapter:upstage/SOLAR-10.7B-Instruct-v1.0", "license:cc-by-nc-4.0", "4-bit", "bitsandbytes", "region:us" ]
null
2025-04-23T16:09:20Z
---
library_name: peft
license: cc-by-nc-4.0
base_model: upstage/SOLAR-10.7B-Instruct-v1.0
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 6be51de5-7b38-49b5-9a40-fe63e5a95377
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>

axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: upstage/SOLAR-10.7B-Instruct-v1.0
bf16: true
chat_template: llama3
dataset_prepared_path: /workspace/axolotl
datasets:
- data_files:
  - 89aa575b7449869b_train_data.json
  ds_type: json
  format: custom
  path: /workspace/input_data/89aa575b7449869b_train_data.json
  type:
    field_instruction: ja
    field_output: en
    format: '{instruction}'
    no_input_format: '{instruction}'
    system_format: '{system}'
    system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 1
gradient_checkpointing: true
gradient_clipping: 0.5
group_by_length: false
hub_model_id: kokovova/6be51de5-7b38-49b5-9a40-fe63e5a95377
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-06
load_in_4bit: true
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 64
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 32
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 8
mixed_precision: bf16
mlflow_experiment_name: /tmp/89aa575b7449869b_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 19b390e1-4090-4adb-a946-7e35167b74db
wandb_project: s56-4
wandb_run: your_name
wandb_runid: 19b390e1-4090-4adb-a946-7e35167b74db
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```

</details><br>

# 6be51de5-7b38-49b5-9a40-fe63e5a95377

This model is a fine-tuned version of [upstage/SOLAR-10.7B-Instruct-v1.0](https://huggingface.co/upstage/SOLAR-10.7B-Instruct-v1.0) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7870

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200

### Training results

| Training Loss | Epoch  | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.5401        | 0.0115 | 200  | 0.7870          |

### Framework versions

- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
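A complementary sketch for this adapter: merging the LoRA weights into the SOLAR base so the result can be served without a `peft` dependency. This is a standard PEFT pattern, not something the card prescribes, and it assumes enough memory to hold the full-precision base model.

```python
# Sketch: fold the LoRA deltas into the base weights and save a standalone model.
from peft import PeftModel
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained("upstage/SOLAR-10.7B-Instruct-v1.0")
model = PeftModel.from_pretrained(base, "kokovova/6be51de5-7b38-49b5-9a40-fe63e5a95377")
merged = model.merge_and_unload()  # returns the base model with merged weights
merged.save_pretrained("solar-10.7b-merged")  # output path is illustrative
```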
mradermacher/Yukselis-GGUF
mradermacher
2025-04-23T16:13:21Z
0
0
transformers
[ "transformers", "gguf", "matrixportal", "tr", "en", "base_model:matrixportal/Yukselis", "base_model:quantized:matrixportal/Yukselis", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
2025-04-23T15:54:44Z
---
base_model: matrixportal/Yukselis
language:
- tr
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- matrixportal
---

## About

<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
Static quants of https://huggingface.co/matrixportal/Yukselis

<!-- provided-files -->
Weighted/imatrix quants are not currently available from me. If they do not show up within a week or so after the static ones, I have probably not planned them. Feel free to request them by opening a Community Discussion.

## Usage

If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files.

## Provided Quants

(sorted by size, not necessarily quality; IQ-quants are often preferable over similar-sized non-IQ quants)

| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Yukselis-GGUF/resolve/main/Yukselis.Q2_K.gguf) | Q2_K | 3.3 |  |
| [GGUF](https://huggingface.co/mradermacher/Yukselis-GGUF/resolve/main/Yukselis.Q3_K_S.gguf) | Q3_K_S | 3.8 |  |
| [GGUF](https://huggingface.co/mradermacher/Yukselis-GGUF/resolve/main/Yukselis.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Yukselis-GGUF/resolve/main/Yukselis.Q3_K_L.gguf) | Q3_K_L | 4.4 |  |
| [GGUF](https://huggingface.co/mradermacher/Yukselis-GGUF/resolve/main/Yukselis.IQ4_XS.gguf) | IQ4_XS | 4.6 |  |
| [GGUF](https://huggingface.co/mradermacher/Yukselis-GGUF/resolve/main/Yukselis.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Yukselis-GGUF/resolve/main/Yukselis.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Yukselis-GGUF/resolve/main/Yukselis.Q5_K_S.gguf) | Q5_K_S | 5.7 |  |
| [GGUF](https://huggingface.co/mradermacher/Yukselis-GGUF/resolve/main/Yukselis.Q5_K_M.gguf) | Q5_K_M | 5.8 |  |
| [GGUF](https://huggingface.co/mradermacher/Yukselis-GGUF/resolve/main/Yukselis.Q6_K.gguf) | Q6_K | 6.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Yukselis-GGUF/resolve/main/Yukselis.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Yukselis-GGUF/resolve/main/Yukselis.f16.gguf) | f16 | 16.2 | 16 bpw, overkill |

Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better):

![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png)

And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9

## FAQ / Model Request

See https://huggingface.co/mradermacher/model_requests for answers to questions you might have and/or if you want some other model quantized.

## Thanks

I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and for providing upgrades to my workstation, which enable this work in my free time.

<!-- end -->
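Beyond the llama.cpp CLI, one way to consume these quants from Python is llama-cpp-python's Hub integration, sketched below. The Q4_K_M pick simply follows the table's "fast, recommended" note; the prompt and generation settings are assumptions, and `Llama.from_pretrained` requires `huggingface_hub` to be installed alongside `llama-cpp-python`.

```python
# Sketch: download one quant from this repo and run a short completion.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="mradermacher/Yukselis-GGUF",
    filename="Yukselis.Q4_K_M.gguf",  # "fast, recommended" per the table above
    n_ctx=2048,
)
out = llm("Merhaba! ", max_tokens=64)  # the base model targets Turkish/English
print(out["choices"][0]["text"])
```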
Hartunka/tiny_bert_km_20_v2_cola
Hartunka
2025-04-21T23:31:18Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "distilbert", "text-classification", "generated_from_trainer", "en", "dataset:glue", "base_model:Hartunka/tiny_bert_km_20_v2", "base_model:finetune:Hartunka/tiny_bert_km_20_v2", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2025-04-21T23:30:12Z
---
library_name: transformers
language:
- en
base_model: Hartunka/tiny_bert_km_20_v2
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
- accuracy
model-index:
- name: tiny_bert_km_20_v2_cola
  results:
  - task:
      name: Text Classification
      type: text-classification
    dataset:
      name: GLUE COLA
      type: glue
      args: cola
    metrics:
    - name: Matthews Correlation
      type: matthews_correlation
      value: 0.0
    - name: Accuracy
      type: accuracy
      value: 0.6912751793861389
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# tiny_bert_km_20_v2_cola

This model is a fine-tuned version of [Hartunka/tiny_bert_km_20_v2](https://huggingface.co/Hartunka/tiny_bert_km_20_v2) on the GLUE COLA dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6190
- Matthews Correlation: 0.0
- Accuracy: 0.6913

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 256
- eval_batch_size: 256
- seed: 10
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 50

### Training results

| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|:--------:|
| 0.6209        | 1.0   | 34   | 0.6208          | 0.0                  | 0.6913   |
| 0.6054        | 2.0   | 68   | 0.6204          | 0.0                  | 0.6913   |
| 0.5937        | 3.0   | 102  | 0.6190          | 0.0                  | 0.6913   |
| 0.5732        | 4.0   | 136  | 0.6477          | 0.0284               | 0.6424   |
| 0.5393        | 5.0   | 170  | 0.6415          | 0.0604               | 0.6731   |
| 0.4907        | 6.0   | 204  | 0.6778          | 0.0740               | 0.6644   |
| 0.4491        | 7.0   | 238  | 0.7494          | 0.0665               | 0.6491   |
| 0.4125        | 8.0   | 272  | 0.7906          | 0.0991               | 0.6088   |

### Framework versions

- Transformers 4.50.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.21.1
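A note on the reported metrics: an accuracy of about 0.6913 together with a Matthews correlation of exactly 0.0 is what a constant majority-class predictor scores on CoLA's validation split, which is roughly 69% "acceptable" sentences. The sketch below illustrates the effect with a stylized 69/31 label balance (an assumption for the example, not data from the card):

```python
# Why high accuracy can coexist with MCC = 0: a constant predictor matches
# the majority-class rate but shows zero correlation with the labels.
from sklearn.metrics import accuracy_score, matthews_corrcoef

y_true = [1] * 69 + [0] * 31   # stylized 69/31 label balance
y_pred = [1] * 100             # constant majority-class predictor
print(accuracy_score(y_true, y_pred))     # 0.69
print(matthews_corrcoef(y_true, y_pred))  # 0.0
```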
falavi/SpeechLMM_v1.1_L_ASR
falavi
2025-04-19T17:17:11Z
0
0
null
[ "safetensors", "speechlmm", "license:other", "region:us" ]
null
2025-04-19T15:22:15Z
---
license: other
license_name: license
license_link: https://huggingface.co/meetween/Llama-speechlmm-1.0-l/blob/main/LICENSE
---