Dataset schema (column types and observed ranges):

| Column | Type | Range / values |
|---|---|---|
| modelId | string | length 5 to 139 |
| author | string | length 2 to 42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 to 2025-06-01 18:27:11 |
| downloads | int64 | 0 to 223M |
| likes | int64 | 0 to 11.7k |
| library_name | string | 461 distinct values |
| tags | sequence | length 1 to 4.05k |
| pipeline_tag | string | 54 distinct values |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 to 2025-06-01 18:25:15 |
| card | string | length 11 to 1.01M |
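As a sketch of how rows following this schema could be loaded and filtered (the dataset repo id below is a hypothetical placeholder, not the actual name on the Hub):

```python
from datasets import load_dataset

# Hypothetical repo id -- substitute the actual dataset name on the Hub.
ds = load_dataset("your-namespace/model-cards-dump", split="train")

# Each record follows the schema above: metadata columns plus the raw `card` markdown.
popular = ds.filter(lambda row: row["downloads"] > 1000)
print(popular[0]["modelId"], popular[0]["pipeline_tag"])
```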
brittlewis12/DeepSeek-R1-0528-Qwen3-8B-GGUF
brittlewis12
2025-05-30T00:31:10Z
0
0
null
[ "gguf", "reasoning", "deepseek", "qwen3", "text-generation", "en", "base_model:deepseek-ai/DeepSeek-R1-0528-Qwen3-8B", "base_model:quantized:deepseek-ai/DeepSeek-R1-0528-Qwen3-8B", "license:mit", "endpoints_compatible", "region:us", "conversational" ]
text-generation
2025-05-29T14:30:35Z
--- base_model: deepseek-ai/DeepSeek-R1-0528-Qwen3-8B pipeline_tag: text-generation inference: true language: - en license: mit model_creator: deepseek-ai model_name: DeepSeek-R1-0528-Qwen3-8B model_type: qwen3 quantized_by: brittlewis12 tags: - reasoning - deepseek - qwen3 --- # DeepSeek R1 0528 Qwen3 8B GGUF **Original model**: [DeepSeek-R1-0528-Qwen3-8B](https://huggingface.co/deepseek-ai/DeepSeek-R1-0528-Qwen3-8B) **Model creator**: [DeepSeek AI](https://huggingface.co/deepseek-ai) > We distilled the chain-of-thought from DeepSeek-R1-0528 to post-train Qwen3 8B Base, obtaining DeepSeek-R1-0528-Qwen3-8B. This model achieves state-of-the-art (SOTA) performance among open-source models on the AIME 2024, surpassing Qwen3 8B by +10.0% and matching the performance of Qwen3-235B-thinking. We believe that the chain-of-thought from DeepSeek-R1-0528 will hold significant importance for both academic research on reasoning models and industrial development focused on small-scale models. This repo contains GGUF format model files for DeepSeek AI's _DeepSeek R1 0528 Qwen3 8B_. ### What is GGUF? GGUF is a file format for representing AI models. It is the third version of the format, introduced by the llama.cpp team on August 21st 2023. Converted with llama.cpp build b5536 (revision [2b13162](https://github.com/ggml-org/llama.cpp/commits/2b131621e60d8ec2cc961201beb6773ab37b6b69)), using [autogguf-rs](https://github.com/brittlewis12/autogguf-rs). ### Prompt template: [DeepSeek R1](https://huggingface.co/deepseek-ai/DeepSeek-R1-0528-Qwen3-8B/blob/main/tokenizer_config.json#L34) ``` {{system_message}} <|User|>{{prompt}}<|Assistant|> ``` ### Notes from DeepSeek on Running Locally > Compared to previous versions of DeepSeek-R1, the usage recommendations for DeepSeek-R1-0528 have the following changes: > > - System prompt is supported now. > - It is not required to add `<think>\n` at the beginning of the output to force the model into thinking pattern. > > The model architecture of DeepSeek-R1-0528-Qwen3-8B is identical to that of Qwen3-8B, but it shares the same tokenizer configuration as DeepSeek-R1-0528. --- ## Download & run with [cnvrs](https://twitter.com/cnvrsai) on iPhone, iPad, and Mac! ![cnvrs.ai](https://pbs.twimg.com/profile_images/1744049151241797632/0mIP-P9e_400x400.jpg) [cnvrs](https://testflight.apple.com/join/sFWReS7K) is the best app for private, local AI on your device: - create & save **Characters** with custom system prompts & temperature settings - download and experiment with any **GGUF model** you can [find on HuggingFace](https://huggingface.co/models?library=gguf)! * or, use an API key with the chat completions-compatible model provider of your choice -- ChatGPT, Claude, Gemini, DeepSeek, & more! - make it your own with custom **Theme colors** - powered by Metal ⚡️ & [Llama.cpp](https://github.com/ggml-org/llama.cpp), with **haptics** during response streaming! - **try it out** yourself today, on [Testflight](https://testflight.apple.com/join/sFWReS7K)! * if you **already have the app**, download DeepSeek R1 0528 Qwen3 8B now! * <cnvrsai:///models/search/hf?id=brittlewis12/DeepSeek-R1-0528-Qwen3-8B-GGUF> - follow [cnvrs on twitter](https://twitter.com/cnvrsai) to stay up to date --- ## Original Model Evaluation > We distilled the chain-of-thought from DeepSeek-R1-0528 to post-train Qwen3 8B Base, obtaining DeepSeek-R1-0528-Qwen3-8B. 
This model achieves state-of-the-art (SOTA) performance among open-source models on the AIME 2024, surpassing Qwen3 8B by +10.0% and matching the performance of Qwen3-235B-thinking. | | AIME 24 | AIME 25 | HMMT Feb 25 | GPQA Diamond | LiveCodeBench (2408-2505) | |--------------------------------|---------|---------|-------------|--------------|---------------------------| | Qwen3-235B-A22B | 85.7 | 81.5 | 62.5 | 71.1 | 66.5 | | Qwen3-32B | 81.4 | 72.9 | - | 68.4 | - | | Qwen3-8B | 76.0 | 67.3 | - | 62.0 | - | | Phi-4-Reasoning-Plus-14B | 81.3 | 78.0 | 53.6 | 69.3 | - | | Gemini-2.5-Flash-Thinking-0520 | 82.3 | 72.0 | 64.2 | 82.8 | 62.3 | | o3-mini (medium) | 79.6 | 76.7 | 53.3 | 76.8 | 65.9 | | **DeepSeek-R1-0528-Qwen3-8B** | **86.0** | **76.3** | **61.5** | **61.1** | **60.5** | --- ## DeepSeek R1 0528 Qwen3 8B in cnvrs on iOS ![deepseek-r1-qwen3-8b in cnvrs pt1](https://cdn-uploads.huggingface.co/production/uploads/63b64d7a889aa6707f155cdb/nsXnOaK6Sb-0PGvdY8ayy.png) ![deepseek-r1-qwen3-8b in cnvrs pt2](https://cdn-uploads.huggingface.co/production/uploads/63b64d7a889aa6707f155cdb/4AnhMFL41EuIwhKuVaCGi.png) ---
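For a quick local test outside of cnvrs, here is a minimal sketch using llama-cpp-python (not mentioned in the card) with the DeepSeek R1 prompt template shown above; the quant filename is a placeholder, so substitute one of the GGUF files actually published in this repo.

```python
from llama_cpp import Llama

# Placeholder quant filename -- pick an actual GGUF file from the repo's file list.
llm = Llama.from_pretrained(
    repo_id="brittlewis12/DeepSeek-R1-0528-Qwen3-8B-GGUF",
    filename="DeepSeek-R1-0528-Qwen3-8B.Q4_K_M.gguf",
    n_ctx=4096,
)

# DeepSeek R1 template: {{system_message}}<|User|>{{prompt}}<|Assistant|>
prompt = "You are a helpful assistant.<|User|>Why is the sky blue?<|Assistant|>"
out = llm(prompt, max_tokens=512)
print(out["choices"][0]["text"])
```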
reinattwijaya/MNLP_M2_quantized_model
reinattwijaya
2025-05-30T00:24:42Z
0
0
transformers
[ "transformers", "safetensors", "qwen3", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "8-bit", "gptq", "region:us" ]
text-generation
2025-05-29T16:53:18Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
ThomasTheMaker/chat
ThomasTheMaker
2025-05-30T00:22:23Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2025-05-30T00:19:49Z
--- license: apache-2.0 ---
aquiffoo/aquif-2.5-GGUF
aquiffoo
2025-05-30T00:21:21Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2025-05-30T00:21:21Z
--- license: apache-2.0 ---
Triangle104/Qwen3-30B-A3B-abliterated-Q4_K_M-GGUF
Triangle104
2025-05-30T00:20:02Z
0
0
transformers
[ "transformers", "gguf", "chat", "abliterated", "uncensored", "llama-cpp", "gguf-my-repo", "text-generation", "base_model:huihui-ai/Qwen3-30B-A3B-abliterated", "base_model:quantized:huihui-ai/Qwen3-30B-A3B-abliterated", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
text-generation
2025-05-30T00:18:41Z
--- library_name: transformers license: apache-2.0 license_link: https://huggingface.co/Qwen/Qwen3-30B-A3B/blob/main/LICENSE pipeline_tag: text-generation base_model: huihui-ai/Qwen3-30B-A3B-abliterated tags: - chat - abliterated - uncensored - llama-cpp - gguf-my-repo extra_gated_prompt: '**Usage Warnings** “**Risk of Sensitive or Controversial Outputs**“: This model’s safety filtering has been significantly reduced, potentially generating sensitive, controversial, or inappropriate content. Users should exercise caution and rigorously review generated outputs. “**Not Suitable for All Audiences**:“ Due to limited content filtering, the model’s outputs may be inappropriate for public settings, underage users, or applications requiring high security. “**Legal and Ethical Responsibilities**“: Users must ensure their usage complies with local laws and ethical standards. Generated content may carry legal or ethical risks, and users are solely responsible for any consequences. “**Research and Experimental Use**“: It is recommended to use this model for research, testing, or controlled environments, avoiding direct use in production or public-facing commercial applications. “**Monitoring and Review Recommendations**“: Users are strongly advised to monitor model outputs in real-time and conduct manual reviews when necessary to prevent the dissemination of inappropriate content. “**No Default Safety Guarantees**“: Unlike standard models, this model has not undergone rigorous safety optimization. huihui.ai bears no responsibility for any consequences arising from its use.' --- # Triangle104/Qwen3-30B-A3B-abliterated-Q4_K_M-GGUF This model was converted to GGUF format from [`huihui-ai/Qwen3-30B-A3B-abliterated`](https://huggingface.co/huihui-ai/Qwen3-30B-A3B-abliterated) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/huihui-ai/Qwen3-30B-A3B-abliterated) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux) ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo Triangle104/Qwen3-30B-A3B-abliterated-Q4_K_M-GGUF --hf-file qwen3-30b-a3b-abliterated-q4_k_m.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo Triangle104/Qwen3-30B-A3B-abliterated-Q4_K_M-GGUF --hf-file qwen3-30b-a3b-abliterated-q4_k_m.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo Triangle104/Qwen3-30B-A3B-abliterated-Q4_K_M-GGUF --hf-file qwen3-30b-a3b-abliterated-q4_k_m.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo Triangle104/Qwen3-30B-A3B-abliterated-Q4_K_M-GGUF --hf-file qwen3-30b-a3b-abliterated-q4_k_m.gguf -c 2048 ```
BootesVoid/cmb8gn0jk0mfblexp52lf6w5b_cmb9zi7g00ki61b1ylbeklbv2
BootesVoid
2025-05-30T00:17:57Z
0
0
diffusers
[ "diffusers", "flux", "lora", "replicate", "text-to-image", "en", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
2025-05-30T00:17:56Z
--- license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md language: - en tags: - flux - diffusers - lora - replicate base_model: "black-forest-labs/FLUX.1-dev" pipeline_tag: text-to-image # widget: # - text: >- # prompt # output: # url: https://... instance_prompt: laura --- # Cmb8Gn0Jk0Mfblexp52Lf6W5B_Cmb9Zi7G00Ki61B1Ylbeklbv2 <Gallery /> ## About this LoRA This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI. It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train ## Trigger words You should use `laura` to trigger the image generation. ## Run this LoRA with an API using Replicate ```py import replicate input = { "prompt": "laura", "lora_weights": "https://huggingface.co/BootesVoid/cmb8gn0jk0mfblexp52lf6w5b_cmb9zi7g00ki61b1ylbeklbv2/resolve/main/lora.safetensors" } output = replicate.run( "black-forest-labs/flux-dev-lora", input=input ) for index, item in enumerate(output): with open(f"output_{index}.webp", "wb") as file: file.write(item.read()) ``` ## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers) ```py from diffusers import AutoPipelineForText2Image import torch pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda') pipeline.load_lora_weights('BootesVoid/cmb8gn0jk0mfblexp52lf6w5b_cmb9zi7g00ki61b1ylbeklbv2', weight_name='lora.safetensors') image = pipeline('laura').images[0] ``` For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters) ## Training details - Steps: 2000 - Learning rate: 0.0004 - LoRA rank: 16 ## Contribute your own examples You can use the [community tab](https://huggingface.co/BootesVoid/cmb8gn0jk0mfblexp52lf6w5b_cmb9zi7g00ki61b1ylbeklbv2/discussions) to add images that show off what you’ve made with this LoRA.
exala/db_slr_7.1
exala
2025-05-30T00:16:56Z
0
0
transformers
[ "transformers", "safetensors", "distilbert", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2025-05-30T00:16:43Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
yashaorg/sigspace
yashaorg
2025-05-30T00:15:33Z
0
0
null
[ "tahoe-deepdive", "hackathon", "tahoe-100M", "dataset:tahoebio/Tahoe-100M", "license:mit", "region:us" ]
null
2025-05-11T20:36:52Z
--- license: mit datasets: - tahoebio/Tahoe-100M tags: - tahoe-deepdive - hackathon - tahoe-100M --- <div align="center"> <img src="img/SigSpace.png" alt="SigSpace Logo" width="400"/> </div> # SigSpace: An AI Agent for the Tahoe-100M dataset This is a submission for the **Tahoe-DeepDive Hackathon 2025**. # Team Name SigSpace ## Members - Ishita Mangla - [email protected] - Giovanni Palla - Chan Zuckerberg Initiative - [email protected] - Rohit Khurana - Stanford - [email protected] - Siddhant Sanghi - UC Davis - [email protected] - Kuan Pang - Stanford - [email protected] - Yanay Rosen - Stanford - [email protected] - Yasha Ektefaie - Harvard - [email protected] # Project ## SigSpace: An AI Agent for the Tahoe-100M dataset ## Overview We have developed an AI agent that accesses the Tahoe-100M dataset along with publicly available and novel datasets. This agent works to refine and expand the mechanisms of action (MOA) and drug signatures of the perturbations within the Tahoe-100M dataset. ## Motivation Drug discovery in the age of Large Language Models (LLMs) can be enhanced through agentic workflows that parse diverse sources of unstructured information to synthesize and connect hypotheses across different fields and modalities. However, these models are primarily trained on text data and lack the capacity to effectively interrogate rich biological databases with complex, biologically-motivated queries. In this work, we provide a proof of concept demonstrating how the Tahoe-100M dataset can be integrated with publicly available relevant datasets to expand the hypothesis space for mechanisms of action and drug responses in the perturbations tested in the Tahoe-100M dataset. ## Methods We have curated new datasets that enhance the description of drugs and cell-lines present in the Tahoe-100M dataset. Specifically: - TAHOE-100M: vision scores and metadata. - PRISM: We use PRISM drug sensitivity data, which reports the concentration of a compound needed to inhibit 50% of cancer cell viability. Measurements are based on pooled screening of barcoded cell lines and provide a high-throughput assessment of drug response across a large panel of cancer models. - NCI60: We use NCI-60 LC50 data, which reports the concentration of a drug that kills 50% of the cells present at the time of drug addition. It is measured across a panel of 60 human cancer cell lines using standardized multi-dose assays. - JUMP: We use the JUMP dataset, which captures morphological profiles of cells in response to chemical and genetic perturbations. High-content imaging and automated feature extraction are used to quantify cellular changes, enabling large-scale profiling of perturbation effects across diverse biological contexts. - UCE-CXG-EMBEDDING: natural perturbation search using AI virtual cell. ## Data The following datasets are used in our project: - **drug_metadata_inchikey.csv**: Drug metadata from Tahoe-100M including InChIKey identifiers for chemical structure representation. - **compound_genetic_perturbation_cosine_similarity_inchikey.csv**: Cosine similarity scores between compound and genetic perturbations in Jump dataset. - **Tahoe_PRISM_cell_by_drug_ic50_matrix_named.csv**: IC50 values showing drug sensitivity across cell lines. - **filtered_results.csv**: Filtered NCI60 LC50 data for drug response analysis. - **cell_line_metadata.csv**: Comprehensive metadata for cell lines in the Tahoe dataset. - **drug_metadata.csv**: Detailed information about drugs in the Tahoe dataset. 
- **tahoe_vision_scores.h5ad**: Vision scores in AnnData format capturing cellular morphological changes. - **Tahoe_PRISM_matched_cell_metadata_final.csv**: Cell metadata for PRISM-Tahoe matched cell lines. - **Tahoe_PRISM_matched_drug_metadata_final.csv**: Drug metadata for PRISM-Tahoe matched compounds. - **in_tahoe_search_result_df.csv**: Search results for perturbations within the Tahoe dataset embedded with UCE. - **cxg_search_result_df.csv**: Cross-dataset search results using CXG embeddings with UCE. ## Results We have developed a Gradio application that accesses these databases and performs complex queries, enhancing and grounding the reasoning in real biological measurements. ## Discussion We deployed SigSpace on a few queries and found it was able to integrate insights from across these datasets, generating novel hypotheses about the MOA of drugs of interest.
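As a small sketch of pulling one of the curated tables listed above from this repo (assuming the CSV sits at the repo root):

```python
import pandas as pd
from huggingface_hub import hf_hub_download

# Download the PRISM IC50 matrix listed in the Data section above.
path = hf_hub_download(
    repo_id="yashaorg/sigspace",
    filename="Tahoe_PRISM_cell_by_drug_ic50_matrix_named.csv",
)
ic50 = pd.read_csv(path, index_col=0)
print(ic50.shape)  # cell lines x drugs
```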
BootesVoid/cmba0dx760krz1b1y8k7j51an_cmba0gqfo0ksl1b1yqhubhv2g
BootesVoid
2025-05-30T00:09:44Z
0
0
diffusers
[ "diffusers", "flux", "lora", "replicate", "text-to-image", "en", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
2025-05-30T00:09:43Z
--- license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md language: - en tags: - flux - diffusers - lora - replicate base_model: "black-forest-labs/FLUX.1-dev" pipeline_tag: text-to-image # widget: # - text: >- # prompt # output: # url: https://... instance_prompt: chloe_cloud --- # Cmba0Dx760Krz1B1Y8K7J51An_Cmba0Gqfo0Ksl1B1Yqhubhv2G <Gallery /> ## About this LoRA This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI. It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train ## Trigger words You should use `chloe_cloud` to trigger the image generation. ## Run this LoRA with an API using Replicate ```py import replicate input = { "prompt": "chloe_cloud", "lora_weights": "https://huggingface.co/BootesVoid/cmba0dx760krz1b1y8k7j51an_cmba0gqfo0ksl1b1yqhubhv2g/resolve/main/lora.safetensors" } output = replicate.run( "black-forest-labs/flux-dev-lora", input=input ) for index, item in enumerate(output): with open(f"output_{index}.webp", "wb") as file: file.write(item.read()) ``` ## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers) ```py from diffusers import AutoPipelineForText2Image import torch pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda') pipeline.load_lora_weights('BootesVoid/cmba0dx760krz1b1y8k7j51an_cmba0gqfo0ksl1b1yqhubhv2g', weight_name='lora.safetensors') image = pipeline('chloe_cloud').images[0] ``` For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters) ## Training details - Steps: 2000 - Learning rate: 0.0004 - LoRA rank: 16 ## Contribute your own examples You can use the [community tab](https://huggingface.co/BootesVoid/cmba0dx760krz1b1y8k7j51an_cmba0gqfo0ksl1b1yqhubhv2g/discussions) to add images that show off what you’ve made with this LoRA.
ZiartisNikolas/NMT-cypriot-dialect-to-greek
ZiartisNikolas
2025-05-30T00:09:22Z
4
0
transformers
[ "transformers", "safetensors", "marian", "text2text-generation", "translation", "nmt", "cypriot-greek", "greek", "license:cc-by-4.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2025-05-26T18:24:08Z
--- tags: - translation - nmt - cypriot-greek - greek library_name: transformers language: - cy - el license: cc-by-4.0 --- ## Model Details - **Developed by**: Nikolas Ziartis - **Institute**: University of Cyprus - **Model type**: MarianMT (Transformer-based Seq2Seq) - **Source language**: Cypriot Greek (ISO 639-1: cy) - **Target language**: Modern Standard Greek (ISO 639-1: el) - **Fine-tuned from**: `Helsinki-NLP/opus-mt-en-grk` - **License**: CC BY 4.0 ## Model Description This model is a MarianMT transformer, fine-tuned via active learning to translate from the low-resource Cypriot Greek dialect into Modern Standard Greek. In nine iterative batches, we: 1. **Extracted high-dimensional embeddings** for every unlabeled Cypriot sentence using the Greek LLM `ilsp/Meltemi-7B-Instruct-v1.5`. 2. **Applied k-means clustering** to select the 50 “most informative” sentence pairs per batch. 3. **Had human annotators** translate those 50 sentences into Standard Greek. 4. **Fine-tuned** the MarianMT model on the accumulating parallel corpus, freezing and unfreezing layers to preserve learned representations. The result is a system that accurately captures colloquial Cypriot expressions while producing fluent Modern Greek. ## Usage ```python from transformers import MarianMTModel, MarianTokenizer model_name = "ZiartisNikolas/NMT-cypriot-dialect-to-greek" tokenizer = MarianTokenizer.from_pretrained(model_name) model = MarianMTModel.from_pretrained(model_name) src = ["Τζ̆αι φυσικά ήξερα ίνταμπου εγινίσκετουν."] # Cypriot Greek sentence batch = tokenizer(src, return_tensors="pt", padding=True) gen = model.generate(**batch) print(tokenizer.batch_decode(gen, skip_special_tokens=True)) ```
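For illustration, a minimal sketch of the k-means batch selection described in step 2, under the assumption that "most informative" means the sentence closest to each cluster centroid; the embeddings are whatever the Meltemi-7B embedding step produces.

```python
import numpy as np
from sklearn.cluster import KMeans

def select_informative(sentences, embeddings, k=50):
    """Pick one sentence per cluster: the one closest to its centroid."""
    km = KMeans(n_clusters=k, random_state=0).fit(embeddings)
    chosen = []
    for c in range(k):
        idx = np.where(km.labels_ == c)[0]
        dists = np.linalg.norm(embeddings[idx] - km.cluster_centers_[c], axis=1)
        chosen.append(sentences[idx[np.argmin(dists)]])
    return chosen
```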
Triangle104/Qwen3-30B-A3B-abliterated-Q4_K_S-GGUF
Triangle104
2025-05-30T00:06:14Z
0
0
transformers
[ "transformers", "gguf", "chat", "abliterated", "uncensored", "llama-cpp", "gguf-my-repo", "text-generation", "base_model:huihui-ai/Qwen3-30B-A3B-abliterated", "base_model:quantized:huihui-ai/Qwen3-30B-A3B-abliterated", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
text-generation
2025-05-30T00:04:54Z
--- library_name: transformers license: apache-2.0 license_link: https://huggingface.co/Qwen/Qwen3-30B-A3B/blob/main/LICENSE pipeline_tag: text-generation base_model: huihui-ai/Qwen3-30B-A3B-abliterated tags: - chat - abliterated - uncensored - llama-cpp - gguf-my-repo extra_gated_prompt: '**Usage Warnings** “**Risk of Sensitive or Controversial Outputs**“: This model’s safety filtering has been significantly reduced, potentially generating sensitive, controversial, or inappropriate content. Users should exercise caution and rigorously review generated outputs. “**Not Suitable for All Audiences**:“ Due to limited content filtering, the model’s outputs may be inappropriate for public settings, underage users, or applications requiring high security. “**Legal and Ethical Responsibilities**“: Users must ensure their usage complies with local laws and ethical standards. Generated content may carry legal or ethical risks, and users are solely responsible for any consequences. “**Research and Experimental Use**“: It is recommended to use this model for research, testing, or controlled environments, avoiding direct use in production or public-facing commercial applications. “**Monitoring and Review Recommendations**“: Users are strongly advised to monitor model outputs in real-time and conduct manual reviews when necessary to prevent the dissemination of inappropriate content. “**No Default Safety Guarantees**“: Unlike standard models, this model has not undergone rigorous safety optimization. huihui.ai bears no responsibility for any consequences arising from its use.' --- # Triangle104/Qwen3-30B-A3B-abliterated-Q4_K_S-GGUF This model was converted to GGUF format from [`huihui-ai/Qwen3-30B-A3B-abliterated`](https://huggingface.co/huihui-ai/Qwen3-30B-A3B-abliterated) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/huihui-ai/Qwen3-30B-A3B-abliterated) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux) ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo Triangle104/Qwen3-30B-A3B-abliterated-Q4_K_S-GGUF --hf-file qwen3-30b-a3b-abliterated-q4_k_s.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo Triangle104/Qwen3-30B-A3B-abliterated-Q4_K_S-GGUF --hf-file qwen3-30b-a3b-abliterated-q4_k_s.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo Triangle104/Qwen3-30B-A3B-abliterated-Q4_K_S-GGUF --hf-file qwen3-30b-a3b-abliterated-q4_k_s.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo Triangle104/Qwen3-30B-A3B-abliterated-Q4_K_S-GGUF --hf-file qwen3-30b-a3b-abliterated-q4_k_s.gguf -c 2048 ```
AmberYifan/Llama-2-13b-sft-all-pool
AmberYifan
2025-05-30T00:03:41Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "generated_from_trainer", "trl", "dpo", "conversational", "arxiv:2305.18290", "base_model:AmberYifan/Llama-2-13b-sft-ultrachat-safeRLHF", "base_model:finetune:AmberYifan/Llama-2-13b-sft-ultrachat-safeRLHF", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-05-29T23:32:35Z
--- base_model: AmberYifan/Llama-2-13b-sft-ultrachat-safeRLHF library_name: transformers model_name: Llama-2-13b-sft-all-pool tags: - generated_from_trainer - trl - dpo licence: license --- # Model Card for Llama-2-13b-sft-all-pool This model is a fine-tuned version of [AmberYifan/Llama-2-13b-sft-ultrachat-safeRLHF](https://huggingface.co/AmberYifan/Llama-2-13b-sft-ultrachat-safeRLHF). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="AmberYifan/Llama-2-13b-sft-all-pool", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/yifanwang/huggingface/runs/kxniq6kz) This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290). ### Framework versions - TRL: 0.12.2 - Transformers: 4.46.3 - Pytorch: 2.7.0 - Datasets: 3.6.0 - Tokenizers: 0.20.3 ## Citations Cite DPO as: ```bibtex @inproceedings{rafailov2023direct, title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}}, author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn}, year = 2023, booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023}, url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html}, editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
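A rough sketch of the DPO setup described above using TRL's `DPOTrainer`; the preference dataset name and the hyperparameters are placeholders, not the actual training recipe.

```python
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

base = "AmberYifan/Llama-2-13b-sft-ultrachat-safeRLHF"
model = AutoModelForCausalLM.from_pretrained(base)
tokenizer = AutoTokenizer.from_pretrained(base)

# Placeholder preference dataset with "prompt", "chosen", "rejected" columns.
train_ds = load_dataset("your-namespace/preference-pairs", split="train")

args = DPOConfig(output_dir="Llama-2-13b-sft-all-pool", beta=0.1, per_device_train_batch_size=1)
trainer = DPOTrainer(model=model, args=args, train_dataset=train_ds, processing_class=tokenizer)
trainer.train()
```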
mradermacher/Qwen2.5-14B-sft-SPIN-gpt4o-GGUF
mradermacher
2025-05-30T00:00:09Z
0
0
transformers
[ "transformers", "gguf", "generated_from_trainer", "trl", "dpo", "en", "base_model:AmberYifan/Qwen2.5-14B-sft-SPIN-gpt4o", "base_model:quantized:AmberYifan/Qwen2.5-14B-sft-SPIN-gpt4o", "endpoints_compatible", "region:us", "conversational" ]
null
2025-05-29T23:23:04Z
--- base_model: AmberYifan/Qwen2.5-14B-sft-SPIN-gpt4o language: - en library_name: transformers model_name: Qwen2.5-14B-sft-SPIN-gpt4o quantized_by: mradermacher tags: - generated_from_trainer - trl - dpo --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/AmberYifan/Qwen2.5-14B-sft-SPIN-gpt4o <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-14B-sft-SPIN-gpt4o-GGUF/resolve/main/Qwen2.5-14B-sft-SPIN-gpt4o.Q2_K.gguf) | Q2_K | 5.9 | | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-14B-sft-SPIN-gpt4o-GGUF/resolve/main/Qwen2.5-14B-sft-SPIN-gpt4o.Q3_K_S.gguf) | Q3_K_S | 6.8 | | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-14B-sft-SPIN-gpt4o-GGUF/resolve/main/Qwen2.5-14B-sft-SPIN-gpt4o.Q3_K_M.gguf) | Q3_K_M | 7.4 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-14B-sft-SPIN-gpt4o-GGUF/resolve/main/Qwen2.5-14B-sft-SPIN-gpt4o.Q3_K_L.gguf) | Q3_K_L | 8.0 | | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-14B-sft-SPIN-gpt4o-GGUF/resolve/main/Qwen2.5-14B-sft-SPIN-gpt4o.IQ4_XS.gguf) | IQ4_XS | 8.3 | | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-14B-sft-SPIN-gpt4o-GGUF/resolve/main/Qwen2.5-14B-sft-SPIN-gpt4o.Q4_K_S.gguf) | Q4_K_S | 8.7 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-14B-sft-SPIN-gpt4o-GGUF/resolve/main/Qwen2.5-14B-sft-SPIN-gpt4o.Q4_K_M.gguf) | Q4_K_M | 9.1 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-14B-sft-SPIN-gpt4o-GGUF/resolve/main/Qwen2.5-14B-sft-SPIN-gpt4o.Q5_K_S.gguf) | Q5_K_S | 10.4 | | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-14B-sft-SPIN-gpt4o-GGUF/resolve/main/Qwen2.5-14B-sft-SPIN-gpt4o.Q5_K_M.gguf) | Q5_K_M | 10.6 | | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-14B-sft-SPIN-gpt4o-GGUF/resolve/main/Qwen2.5-14B-sft-SPIN-gpt4o.Q6_K.gguf) | Q6_K | 12.2 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-14B-sft-SPIN-gpt4o-GGUF/resolve/main/Qwen2.5-14B-sft-SPIN-gpt4o.Q8_0.gguf) | Q8_0 | 15.8 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. 
Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to. <!-- end -->
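A small sketch of fetching one of the quants from the table above with `huggingface_hub` (the Q4_K_M file, as listed); loading it afterwards with llama.cpp or a binding of your choice is left out.

```python
from huggingface_hub import hf_hub_download

gguf_path = hf_hub_download(
    repo_id="mradermacher/Qwen2.5-14B-sft-SPIN-gpt4o-GGUF",
    filename="Qwen2.5-14B-sft-SPIN-gpt4o.Q4_K_M.gguf",
)
print(gguf_path)
```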
sanali209/reitBF
sanali209
2025-05-29T23:58:35Z
25
0
transformers
[ "transformers", "pytorch", "tensorboard", "safetensors", "vit", "image-classification", "huggingpics", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2023-10-03T10:25:11Z
--- tags: - image-classification - pytorch - huggingpics metrics: - accuracy model-index: - name: sanali209/reitBF results: - task: name: Image Classification type: image-classification metrics: - name: Accuracy type: accuracy value: 0.7810394763946533 --- # sanali209/reitBF Autogenerated by HuggingPics🤗🖼️ Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb). Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics). ## Example Images
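A minimal inference sketch, assuming the standard `transformers` image-classification pipeline; the image path is a placeholder.

```python
from transformers import pipeline

classifier = pipeline("image-classification", model="sanali209/reitBF")
preds = classifier("example.jpg")  # placeholder path to a local image
print(preds[0]["label"], preds[0]["score"])
```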
Hsianchengfun/merged_model_WOQ_epoch1361
Hsianchengfun
2025-05-29T23:58:00Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-05-29T23:55:06Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
gradientrouting-spar/base_2d_first_quadrant_red_no_preamble_20250529_234555
gradientrouting-spar
2025-05-29T23:56:50Z
0
0
transformers
[ "transformers", "safetensors", "gemma2", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-05-29T23:54:49Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Petricaa19/malimg-cnn-classifiers
Petricaa19
2025-05-29T23:55:39Z
0
0
keras
[ "keras", "region:us" ]
null
2025-05-29T23:55:35Z
--- library_name: keras --- This model has been uploaded using the Keras library and can be used with JAX, TensorFlow, and PyTorch backends. This model card has been generated automatically and should be completed by the model author. See [Model Cards documentation](https://huggingface.co/docs/hub/model-cards) for more information. For more details about the model architecture, check out [config.json](./config.json). A plot of the model can be found [here](./assets/summary_plot.png).
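A possible loading sketch, assuming Keras 3 with `huggingface_hub` installed (the `hf://` scheme loads the saved model directly from the Hub; check the repo's files if this does not match how the weights were exported).

```python
import keras

# Load the saved Keras model straight from the Hub (Keras 3 syntax).
model = keras.saving.load_model("hf://Petricaa19/malimg-cnn-classifiers")
model.summary()
```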
benetraco/brain_ddpm_256
benetraco
2025-05-29T23:47:00Z
26
0
diffusers
[ "diffusers", "safetensors", "pytorch", "unconditional-image-generation", "diffusion-models-class", "medical-imaging", "brain-mri", "multiple-sclerosis", "arxiv:2006.11239", "license:mit", "diffusers:DDPMPipeline", "region:us" ]
unconditional-image-generation
2025-05-08T12:48:42Z
--- license: mit tags: - pytorch - diffusers - unconditional-image-generation - diffusion-models-class - medical-imaging - brain-mri - multiple-sclerosis --- # Brain MRI Synthesis with DDPM This model is a diffusion-based model for unconditional image generation of **brain MRI FLAIR slices** of size **256x256 pixels**. The model was trained using the [DDPM](https://arxiv.org/abs/2006.11239) architecture, with attention mechanisms in the middle of the U-Net. It is trained from scratch on a dataset of brain MRI slices, specifically designed for generating synthetic brain images. ## Training Details - **Architecture:** DDPM (Denoising Diffusion Probabilistic Model) - **Resolution:** 256x256 pixels - **Dataset:** Lesion2D VH splitted (FLAIR MRI slices) (70% of the dataset) - **Channels:** 1 (grayscale, FLAIR modality) - **Epochs:** 50 - **Batch size:** 4 - **Optimizer:** AdamW with learning rate of `1.0e-4` - **Scheduler:** Cosine with 500 warm-up steps - **Gradient Accumulation:** 8 steps - **Mixed Precision:** No - **Hardware:** Trained on **one NVIDIA GeForce GTX 1080 Ti GPU of 12GB** - **Memory Consumption:** Around **11 GB** during training ## U-Net Architecture - **Down Blocks:** [DownBlock2D, DownBlock2D, DownBlock2D, DownBlock2D, AttnDownBlock2D, DownBlock2D] - **Up Blocks:** [UpBlock2D, AttnUpBlock2D, UpBlock2D, UpBlock2D, UpBlock2D, UpBlock2D] - **Layers per Block:** 2 - **Block Channels:** [128, 128, 256, 256, 512, 512] ## Usage You can use the model directly with the `diffusers` library: ```python from diffusers import DDPMPipeline import torch # Load the model pipeline = DDPMPipeline.from_pretrained("benetraco/brain_ddpm_256") pipeline.to("cuda") # or "cpu" # Generate an image image = pipeline(batch_size=1).images[0] # Display the image image.show()
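For reference, a sketch of instantiating the U-Net configuration listed above with `diffusers`; it mirrors the stated block types, layers per block, and channels, while any other constructor settings are left at their defaults and are assumptions about the original training script.

```python
from diffusers import UNet2DModel

unet = UNet2DModel(
    sample_size=256,                 # 256x256 FLAIR slices
    in_channels=1,                   # grayscale input
    out_channels=1,
    layers_per_block=2,
    block_out_channels=(128, 128, 256, 256, 512, 512),
    down_block_types=(
        "DownBlock2D", "DownBlock2D", "DownBlock2D",
        "DownBlock2D", "AttnDownBlock2D", "DownBlock2D",
    ),
    up_block_types=(
        "UpBlock2D", "AttnUpBlock2D", "UpBlock2D",
        "UpBlock2D", "UpBlock2D", "UpBlock2D",
    ),
)
print(sum(p.numel() for p in unet.parameters()))  # rough parameter count
```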
p0ntus/oCoder-T1-4B
p0ntus
2025-05-29T23:45:49Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-05-29T23:45:23Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
jinx2321/nllb-tagged-1e4-paper-2
jinx2321
2025-05-29T23:43:48Z
0
0
transformers
[ "transformers", "safetensors", "m2m_100", "text2text-generation", "generated_from_trainer", "base_model:jinx2321/nllb-tagged-1e4-paper", "base_model:finetune:jinx2321/nllb-tagged-1e4-paper", "license:cc-by-nc-4.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2025-05-29T21:48:58Z
--- library_name: transformers license: cc-by-nc-4.0 base_model: jinx2321/nllb-tagged-1e4-paper tags: - generated_from_trainer model-index: - name: nllb-tagged-1e4-paper-2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # nllb-tagged-1e4-paper-2 This model is a fine-tuned version of [jinx2321/nllb-tagged-1e4-paper](https://huggingface.co/jinx2321/nllb-tagged-1e4-paper) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 128 - eval_batch_size: 8 - seed: 42 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 3 ### Training results ### Framework versions - Transformers 4.52.0.dev0 - Pytorch 2.6.0+cu124 - Datasets 3.4.1 - Tokenizers 0.21.1
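A minimal inference sketch (assumed usage, not part of the auto-generated card): NLLB checkpoints are M2M100-style seq2seq translation models, so the standard `AutoModelForSeq2SeqLM` pairing applies. Source/target language handling for this tagged variant is not documented, so the prompt below is illustrative only.

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

# Hedged sketch: load the fine-tuned NLLB (m2m_100) checkpoint and translate a toy input.
tokenizer = AutoTokenizer.from_pretrained("jinx2321/nllb-tagged-1e4-paper-2")
model = AutoModelForSeq2SeqLM.from_pretrained("jinx2321/nllb-tagged-1e4-paper-2")

inputs = tokenizer("Hello, world!", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)  # target-language forcing depends on how the variant was trained
print(tokenizer.batch_decode(outputs, skip_special_tokens=True)[0])
```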
SirPeaves/X16
SirPeaves
2025-05-29T23:39:03Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2025-05-29T23:39:03Z
--- license: apache-2.0 ---
mlx-community/DeepSeek-R1-0528-3bit
mlx-community
2025-05-29T23:37:53Z
0
0
mlx
[ "mlx", "safetensors", "deepseek_v3", "text-generation", "conversational", "custom_code", "base_model:deepseek-ai/DeepSeek-R1-0528", "base_model:quantized:deepseek-ai/DeepSeek-R1-0528", "license:mit", "3-bit", "region:us" ]
text-generation
2025-05-29T22:25:50Z
--- license: mit library_name: mlx pipeline_tag: text-generation tags: - mlx base_model: deepseek-ai/DeepSeek-R1-0528 --- # mlx-community/DeepSeek-R1-0528-3bit This model [mlx-community/DeepSeek-R1-0528-3bit](https://huggingface.co/mlx-community/DeepSeek-R1-0528-3bit) was converted to MLX format from [deepseek-ai/DeepSeek-R1-0528](https://huggingface.co/deepseek-ai/DeepSeek-R1-0528) using mlx-lm version **0.24.1**. ## Use with mlx ```bash pip install mlx-lm ``` ```python from mlx_lm import load, generate model, tokenizer = load("mlx-community/DeepSeek-R1-0528-3bit") prompt = "hello" if tokenizer.chat_template is not None: messages = [{"role": "user", "content": prompt}] prompt = tokenizer.apply_chat_template( messages, add_generation_prompt=True ) response = generate(model, tokenizer, prompt=prompt, verbose=True) ```
AmberYifan/Llama-2-13b-sft-SPIN-Llama-2-70b-chat-hf
AmberYifan
2025-05-29T23:30:56Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "generated_from_trainer", "trl", "dpo", "conversational", "arxiv:2305.18290", "base_model:AmberYifan/Llama-2-13b-sft-ultrachat-safeRLHF", "base_model:finetune:AmberYifan/Llama-2-13b-sft-ultrachat-safeRLHF", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-05-29T23:01:42Z
--- base_model: AmberYifan/Llama-2-13b-sft-ultrachat-safeRLHF library_name: transformers model_name: Llama-2-13b-sft-SPIN-Llama-2-70b-chat-hf tags: - generated_from_trainer - trl - dpo licence: license --- # Model Card for Llama-2-13b-sft-SPIN-Llama-2-70b-chat-hf This model is a fine-tuned version of [AmberYifan/Llama-2-13b-sft-ultrachat-safeRLHF](https://huggingface.co/AmberYifan/Llama-2-13b-sft-ultrachat-safeRLHF). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="AmberYifan/Llama-2-13b-sft-SPIN-Llama-2-70b-chat-hf", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/yifanwang/huggingface/runs/1gdgu6m0) This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290). ### Framework versions - TRL: 0.12.2 - Transformers: 4.46.3 - Pytorch: 2.7.0 - Datasets: 3.6.0 - Tokenizers: 0.20.3 ## Citations Cite DPO as: ```bibtex @inproceedings{rafailov2023direct, title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}}, author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn}, year = 2023, booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023}, url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html}, editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
rtl-llm/qwen2.5coder-7b-origen-verilog-truncate
rtl-llm
2025-05-29T23:27:59Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-05-29T23:24:30Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
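Since the card's "How to Get Started with the Model" section above is still a placeholder, here is an assumed usage sketch based only on the repo tags (Qwen2-based, text-generation, conversational); the prompt and generation settings are illustrative, not the authors' documented workflow.

```python
from transformers import pipeline

# Hedged sketch: chat-style generation with the text-generation pipeline.
generator = pipeline(
    "text-generation",
    model="rtl-llm/qwen2.5coder-7b-origen-verilog-truncate",
    device_map="auto",
)
messages = [{"role": "user", "content": "Write a Verilog module for a 4-bit synchronous counter."}]
output = generator(messages, max_new_tokens=256, return_full_text=False)[0]
print(output["generated_text"])
```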
jinx2321/nllb-tagged-1e4-paper-distilled-1
jinx2321
2025-05-29T23:25:59Z
0
0
transformers
[ "transformers", "safetensors", "m2m_100", "text2text-generation", "generated_from_trainer", "base_model:jinx2321/nllb-tagged-1e4-paper", "base_model:finetune:jinx2321/nllb-tagged-1e4-paper", "license:cc-by-nc-4.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2025-05-29T21:59:22Z
--- library_name: transformers license: cc-by-nc-4.0 base_model: jinx2321/nllb-tagged-1e4-paper tags: - generated_from_trainer model-index: - name: nllb-tagged-1e4-paper-distilled-1 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # nllb-tagged-1e4-paper-distilled-1 This model is a fine-tuned version of [jinx2321/nllb-tagged-1e4-paper](https://huggingface.co/jinx2321/nllb-tagged-1e4-paper) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 128 - eval_batch_size: 8 - seed: 42 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 3 ### Training results ### Framework versions - Transformers 4.52.0.dev0 - Pytorch 2.6.0+cu124 - Datasets 3.4.1 - Tokenizers 0.21.1
shayanfirouzian/Llama-3.2-3B-DPO-SocialReasoning
shayanfirouzian
2025-05-29T23:23:33Z
0
0
transformers
[ "transformers", "pytorch", "llama", "text-generation", "text-generation-inference", "unsloth", "trl", "dpo", "en", "base_model:unsloth/Llama-3.2-3B", "base_model:finetune:unsloth/Llama-3.2-3B", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2025-05-29T23:19:37Z
--- base_model: unsloth/Llama-3.2-3B tags: - text-generation-inference - transformers - unsloth - llama - trl - dpo license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** shayanfirouzian - **License:** apache-2.0 - **Finetuned from model :** unsloth/Llama-3.2-3B This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
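A minimal inference sketch (assumed usage; the card documents training, not inference, and the base model is a non-instruct Llama-3.2-3B, so the prompt format below is an assumption):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hedged sketch: plain causal generation with the DPO-tuned checkpoint.
model_id = "shayanfirouzian/Llama-3.2-3B-DPO-SocialReasoning"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "Question: Why do people cooperate in groups?\nAnswer:"  # illustrative prompt format
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```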
Hsianchengfun/merged_model_WOQ_epoch1321
Hsianchengfun
2025-05-29T23:22:32Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-05-29T23:19:29Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
halimalm/vogue
halimalm
2025-05-29T23:20:37Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "vit", "image-classification", "generated_from_trainer", "dataset:imagefolder", "base_model:google/vit-base-patch16-224", "base_model:finetune:google/vit-base-patch16-224", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2025-05-29T20:29:04Z
--- library_name: transformers license: apache-2.0 base_model: google/vit-base-patch16-224 tags: - image-classification - generated_from_trainer datasets: - imagefolder metrics: - accuracy model-index: - name: vogue results: - task: type: image-classification name: Image Classification dataset: name: vogue-designer-looks type: imagefolder config: default split: train args: default metrics: - type: accuracy value: 0.8654970760233918 name: Accuracy --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vogue This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the vogue-designer-looks dataset. It achieves the following results on the evaluation set: - Loss: 0.7083 - Accuracy: 0.8655 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 86 | 1.2303 | 0.6 | | 1.491 | 2.0 | 172 | 0.9322 | 0.7706 | | 0.9886 | 3.0 | 258 | 0.7960 | 0.8235 | | 0.7985 | 4.0 | 344 | 0.7546 | 0.8471 | | 0.6993 | 5.0 | 430 | 0.7032 | 0.8765 | ### Framework versions - Transformers 4.50.0 - Pytorch 2.6.0 - Datasets 3.4.1 - Tokenizers 0.21.1
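A minimal inference sketch (assumed usage; the auto-generated card includes no example, and `look.jpg` is a placeholder path): the checkpoint is a fine-tuned `google/vit-base-patch16-224` classifier, so the standard image-classification pipeline applies.

```python
from transformers import pipeline

# Hedged sketch: classify a single image with the fine-tuned ViT checkpoint.
classifier = pipeline("image-classification", model="halimalm/vogue")
print(classifier("look.jpg"))  # local path or URL to an image of a designer look
```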
abhikapoor909/vitmanu1b4-16q
abhikapoor909
2025-05-29T23:09:05Z
0
0
transformers
[ "transformers", "gguf", "llama", "text-generation-inference", "unsloth", "en", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
2025-05-29T23:08:16Z
--- base_model: unsloth/llama-3.2-1b-instruct-unsloth-bnb-4bit tags: - text-generation-inference - transformers - unsloth - llama - gguf license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** abhikapoor909 - **License:** apache-2.0 - **Finetuned from model :** unsloth/llama-3.2-1b-instruct-unsloth-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
mradermacher/Qwen2.5-14B-sft-all-pool-GGUF
mradermacher
2025-05-29T23:09:04Z
0
0
transformers
[ "transformers", "gguf", "generated_from_trainer", "trl", "dpo", "en", "base_model:AmberYifan/Qwen2.5-14B-sft-all-pool", "base_model:quantized:AmberYifan/Qwen2.5-14B-sft-all-pool", "endpoints_compatible", "region:us", "conversational" ]
null
2025-05-29T22:46:54Z
--- base_model: AmberYifan/Qwen2.5-14B-sft-all-pool language: - en library_name: transformers model_name: Qwen2.5-14B-sft-all-pool quantized_by: mradermacher tags: - generated_from_trainer - trl - dpo --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/AmberYifan/Qwen2.5-14B-sft-all-pool <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-14B-sft-all-pool-GGUF/resolve/main/Qwen2.5-14B-sft-all-pool.Q2_K.gguf) | Q2_K | 5.9 | | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-14B-sft-all-pool-GGUF/resolve/main/Qwen2.5-14B-sft-all-pool.Q3_K_S.gguf) | Q3_K_S | 6.8 | | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-14B-sft-all-pool-GGUF/resolve/main/Qwen2.5-14B-sft-all-pool.Q3_K_M.gguf) | Q3_K_M | 7.4 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-14B-sft-all-pool-GGUF/resolve/main/Qwen2.5-14B-sft-all-pool.Q3_K_L.gguf) | Q3_K_L | 8.0 | | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-14B-sft-all-pool-GGUF/resolve/main/Qwen2.5-14B-sft-all-pool.IQ4_XS.gguf) | IQ4_XS | 8.3 | | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-14B-sft-all-pool-GGUF/resolve/main/Qwen2.5-14B-sft-all-pool.Q4_K_S.gguf) | Q4_K_S | 8.7 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-14B-sft-all-pool-GGUF/resolve/main/Qwen2.5-14B-sft-all-pool.Q4_K_M.gguf) | Q4_K_M | 9.1 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-14B-sft-all-pool-GGUF/resolve/main/Qwen2.5-14B-sft-all-pool.Q5_K_S.gguf) | Q5_K_S | 10.4 | | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-14B-sft-all-pool-GGUF/resolve/main/Qwen2.5-14B-sft-all-pool.Q5_K_M.gguf) | Q5_K_M | 10.6 | | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-14B-sft-all-pool-GGUF/resolve/main/Qwen2.5-14B-sft-all-pool.Q6_K.gguf) | Q6_K | 12.2 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-14B-sft-all-pool-GGUF/resolve/main/Qwen2.5-14B-sft-all-pool.Q8_0.gguf) | Q8_0 | 15.8 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. 
Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to. <!-- end -->
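As a sketch of the download step (an assumed workflow, not part of the original card), one of the quant files from the table above, e.g. the "fast, recommended" Q4_K_M, can be fetched with `huggingface_hub` for use with a GGUF runtime such as llama.cpp:

```python
from huggingface_hub import hf_hub_download

# Hedged sketch: the filename matches the Q4_K_M entry listed in the quant table above.
path = hf_hub_download(
    repo_id="mradermacher/Qwen2.5-14B-sft-all-pool-GGUF",
    filename="Qwen2.5-14B-sft-all-pool.Q2_K.gguf".replace("Q2_K", "Q4_K_M"),
)
print(path)
```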
PyThaGo/LeetSeek-R1ML32B
PyThaGo
2025-05-29T23:07:57Z
2
0
null
[ "gguf", "LLMLiT", "Romania", "LLM", "en", "ro", "dataset:LLMLit/LitSet", "base_model:PyThaGo/LLMLit", "base_model:quantized:PyThaGo/LLMLit", "license:mit", "endpoints_compatible", "region:us", "imatrix", "conversational" ]
null
2025-02-07T18:38:26Z
--- license: mit language: - en - ro base_model: - LLMLit/LLMLit tags: - LLMLiT - Romania - LLM datasets: - LLMLit/LitSet metrics: - accuracy - character - code_eval --- --- # **LeetSeek-R1ML32B – Model Card** --- 📌 *LLM multilingv de înaltă performanță pentru sarcini NLP în engleză și română* 🔗 [LLMLit pe Hugging Face](https://huggingface.co/LLMLit) 🔗 [LitSeekR1 pe Hugging Face](https://huggingface.co/LLMLit/LitSeekR1) --- ## **🔍 Rezumat rapid** --- **LLMLit** este un model de limbaj mare (LLM) performant, multilingv, optimizat din **Meta’s Llama 3.1 8B Instruct**. Este conceput pentru **task-uri NLP în limba engleză și română**, având capacități avansate de **urmărire a instrucțiunilor, înțelegere contextuală și generare de conținut precis**. De ce să alegi LeetSeek-R1ML32B ? **LeetSeek-R1ML32B** este o alegere excelentă pentru cei care doresc să ruleze modele AI puternice într-un mediu securizat 🔐 și privat. Având posibilitatea de a lucra complet offline 🌐❌, LLMLit îți oferă control total asupra datelor 🛡️, eliminând orice risc de scurgeri de informații sau dependență de conexiuni externe. Modelele sunt rulate local 🖥️, ceea ce asigură o performanță rapidă ⚡ și o protecție sporită a confidențialității 🔒, fiind ideal pentru aplicații sensibile și scenarii unde securitatea datelor este esențială. În plus, cu LLMLit, nu trebuie să te îngrijorezi de problemele de confidențialitate asociate serviciilor bazate pe cloud ☁️🚫. 🎉 Open-Source și Gratuit: LLMLit este un proiect open-source 💻, ceea ce înseamnă că poți personaliza și adapta modelele conform nevoilor tale. Nu există taxe ascunse și ai acces complet la codul sursă pentru a-l integra în aplicațiile tale 🛠️. ## **📌 Model Details** 🔹 **Descriere:** LLMLit poate fi utilizat pentru **generare de conținut, sumarizare, răspuns la întrebări și multe altele**. 🔹 **Fine-tuning:** Modelul a fost antrenat pentru **adherarea la instrucțiuni de înaltă calitate și o mai bună înțelegere a contextului**. 🔹 **Utilizatori țintă:** Dezvoltatori, cercetători și companii care au nevoie de **soluții NLP fiabile**. 
| Caracteristici | Detalii | |----------------|---------| | 🏢 **Dezvoltat de** | PyThaGo.AI Development Team | | 💰 **Finanțare** | Contribuții open-source & sponsori privați | | 🌍 **Limbaje** | Engleză (en), Română (ro) | | 🏷 **Licență** | MIT | | 🔗 **Model de bază** | `Qwen-32B-Instruct` | | 📂 **Resurse** | [GitHub Repository](#) / Paper: *To be published* | | 🚀 **Demo** | *Coming Soon* | ### **Sistem Recomandat pentru LeetSeek-R1ML32B – Performanță Echilibrată** 🔹 Estimare Performanță: ~10-20 tokens/sec + interacțiune 3D în timp real | Componentă | Model Recomandat | Specificații Cheie | Emojis | |------------------|-----------------------------------|--------------------------------------------|-------------| | **Procesor (CPU)** | AMD Ryzen 7 7800X3D / Intel i7-13700K | 8C/16T, 5.0 GHz boost, cache mare | ⚡🖥️ | | **Placă Video (GPU)** | NVIDIA RTX 4070 / AMD RX 7800 XT | 12GB GDDR6, AI cores, DLSS 3 | 🎮🚀 | | **Memorie RAM** | 32GB DDR5 5600MHz (Corsair / Kingston) | Dual-Channel, CL30, XMP 3.0 | 💾🔥 | | **Stocare (SSD)** | 1TB NVMe Gen4 (Samsung 980 Pro) | 7000 MB/s Read, 5000 MB/s Write | 💽⚡ | | **Placă de bază** | MSI B650 Tomahawk Wi-Fi | PCIe 4.0, Wi-Fi 6E, USB-C | 🔩📡 | | **Sistem de operare** | Windows 11 Pro / Ubuntu 22.04 | Optimizat pentru AI și productivitate | 🖥️🛠️ | 🔹 **Acest sistem este ideal pentru rularea LLMLit fără probleme, oferind un echilibru perfect între performanță și eficiență.** ## **🚀 Cum să începi cu LLMLit** 📌 [Ghid de Instalare: LM Studio + LLMLit pe Windows](https://huggingface.co/LLMLit/LLMLit/discussions/3#679bfd7646a549e46dd7f784) 📌 [Ghid de Instalare: Ollama și să rulezi LLMLit de pe Hugging Face](https://huggingface.co/LLMLit/LLMLit/discussions/3#679bfd7646a549e46dd7f784) 📌 [Ghid de Pași pentru implementarea RAG cu LLMLit pe Hugging Face 🚀 ](https://huggingface.co/LLMLit/LLMLit/discussions/2#679bfd1a9141a5524a4994b7) Pentru a utiliza LLMLit, instalează librăriile necesare și încarcă modelul: ```python from transformers import AutoModelForCausalLM, AutoTokenizer # Încarcă modelul și tokenizer-ul model = AutoModelForCausalLM.from_pretrained("llmlit/LitSeek-R1ML-32B") tokenizer = AutoTokenizer.from_pretrained("llmlit/LitSeek-R1ML-32B") # Generează text inputs = tokenizer("Your prompt here", return_tensors="pt") outputs = model.generate(**inputs, max_length=100) ``` --- Sper că acest ghid îți va fi de ajutor! 😊 --- ### **Coming Soon: Modele de Generare Imagine și Video 🎨🎬** --- **Modelele de generare imagine și video sunt pe cale să fie lansate!** Așteaptă-te la posibilități creative nelimitate. 
Iată modelele ce vor fi disponibile curând: | Modele | Descriere | Status | Data estimată | |---------------------|-------------------------------------------------------|----------------|---------------| | **LitImage** | Generare de imagini detaliate pe baza de text | **Coming Soon** | Martie 2025 | | **LitVideo** | Creare de clipuri video pe baza descrierilor textuale | **Coming Soon** | Aprilie 2025 | | **LitArt** | Transformă pozele în opere de artă (stil artistic) | **Coming Soon** | Mai 2025 | | **LitAgent** | Creare unui agent de browser folosind AI | **Coming Soon** | Iunie 2025 | | **LitWave** | Generare de video pe bază de muzică și text | **Coming Soon** | Iulie 2025 | | **Model de Sănătate** | Analizează date medicale și sugerează tratamente | 🔜 Coming Soon | 🏥💉 | | **Model de Marketing**| Crează campanii publicitare personalizate | 🔜 Coming Soon | 📈📢 | | **Model Legal** | Redactează documente legale și oferă consultanță juridică |🔜 Coming Soon | ⚖️📑 | | **Model Educațional** | Personalizează lecțiile și teste pentru studenți | 🔜 Coming Soon | 🎓📚 | | **Model HR** | Ajută la recrutare, evaluare și gestionare angajați | 🔜 Coming Soon | 👥💼 | | **Model de Cod** | Ajută dezvoltatorii să scrie și să debug-eze cod | 🔜 Coming Soon | 💻⚙️ | |---------------------- |---------------------------------------------------------|--------------|--------------- | | Modele Premium | Descriere | Status | Emojis | |---------------------- |---------------------------------------------------------|--------------|--------------- | | **Model Financiar** | Analizează piețele financiare și oferă sfaturi de investiții | 🔜 Coming Soon | 💰📊 | | **Model Blockchain** | Crează smart contracts și analizează piețele DeFi | 🔜 Coming Soon | ⛓️💹 | --- ### **🚀 Coming Soon!** **Noua funcționalitate** va fi disponibilă curând! Pregătește-te să explorezi opțiuni personalizate pentru aplicațiile tale. Iată câteva dintre caracteristicile ce urmează să fie integrate: --- #### **🌟 Funcționalități Planificate:** | **Parametru** | **Descriere** | **Status** | **Data estimată** | |-----------------------------|---------------------------------------------------------|-----------------|-------------------| | **🛠️ Low-Code Builder** | Crează aplicații fără a scrie mult cod | **Coming Soon** | Martie 2025 | | **🤖 AI Integration** | Integrare completă cu modelele AI | **Coming Soon** | Aprilie 2025 | | **🎙️ Voice Control** | Suport complet pentru comenzi vocale | **Coming Soon** | Mai 2025 | | **🔄 RAG Support** | Generare augmentată prin recuperare de informații | **Coming Soon** | Iunie 2025 | | **🎨 Teme și Agenti** | Theme și chatbots multi-AI pentru asistență personalizată | **Coming Soon** | Iunie 2025 | 🔧 **Rămâi conectat!** Detaliile suplimentare vor fi disponibile foarte curând! --- ### **🌐 Metavers AI Assistant with LLMLit** 🤖 Aplicația **"Metavers AI Assistant with LLMLit"** va integra tehnologia LLMLit în Metavers pentru a crea un asistent virtual interactiv și personalizat. În acest mediu 3D imersiv, accesibil prin WebXR, asistentul va interacționa cu utilizatorii în timp real, înțelegând întrebări complexe și oferind recomandări personalizate, într-o manieră naturală și fluidă. --- ### **🌍 IoT AI Assistant with LLMLit** 🧠 **Descriere:** **"IoT AI Assistant with LLMLit"** va combina puterea LLMLit cu Internet of Things (IoT) pentru a crea un asistent virtual avansat. Acesta va putea să înțeleagă întrebări complexe, să ofere recomandări personalizate și să controleze dispozitive IoT în timp real. 
Cu suport pentru interacțiune vocală și text, asistentul va îmbunătăți eficiența și automatizarea în medii smart home, industriale și de business. --- --- ### **Alătură-te Comunității PyThaGo.AI! 🚀** --- Suntem încântați să îți prezentăm **PyThaGo.AI**, o comunitate vibrantă dedicată inovației și colaborării în domeniul inteligenței artificiale! Dacă ești un dezvoltator pasionat de AI și vrei să contribui la proiecte open-source care vor transforma viitorul tehnologiei, te invităm să te alături echipei noastre. Proiectele noastre sunt deschise oricui dorește să contribuie, de la dezvoltatori experimentați până la începători care doresc să învețe și să crească împreună cu noi. Alătură-te astăzi și ajută-ne să construim următoarele inovații AI! Iată câteva dintre proiectele noastre la care poți contribui: ![Civis3.gif](https://cristiansas.com/storage/agentweb-1.gif) | **Proiect** | **Descriere** | **Link** | |----------------------|----------------------------------------------------|---------------------------------------------------| | **AgentWeb-ui** | Interacțiune directă cu browseru prin web simplu | [GitHub](https://github.com/PyThaGoAI/AgentWeb-ui) | | **ChatLit** | Chatbot multi-AI pentru suport și asistență | [GitHub](https://github.com/PyThaGoAI/ChatLit) | | **Morphic** | Platformă flexibilă pentru aplicații AI | [GitHub](https://github.com/PyThaGoAI/morphic) | | **Bolt.new** | Aplicație rapidă pentru integrarea agenților AI | [GitHub](https://github.com/PyThaGoAI/bolt.new) | | **LibreChat** | Chatbot multi-AI, perfect pentru integrare scalabilă| [GitHub](https://github.com/PyThaGoAI/LibreChat) | | **Langflow** | Platformă low-code pentru aplicații personalizate | [GitHub](https://github.com/PyThaGoAI/langflow) | | **NextChat** | Aplicație de conversație cross-platform | [GitHub](https://github.com/PyThaGoAI/NextChat) | | **VoiceLit** | Suport complet pentru interacțiune vocală | [GitHub](https://github.com/PyThaGoAI/VoiceLit) | | **Plandex** | Planificator AI pentru gestionarea sarcinilor | [GitHub](https://github.com/PyThaGoAI/plandex) | | **Web-llm-chat** | Run LLMLit direct în browser pentru performanță maximă | [GitHub](https://github.com/PyThaGoAI/web-llm-chat) | 🚀 **Fii parte din revoluția AI!** Începe să contribui astăzi la dezvoltarea unora dintre cele mai interesante proiecte open-source din domeniul AI și hai să construim împreună un viitor mai inteligent! 🎥 **Transformă-ți ideile în realitate!** Modelele noastre de generare îți vor permite să creezi imagini și video într-un mod rapid și inovator! ## **💡 Utilizări principale** ### ✅ **Utilizare directă** LLMLit poate fi aplicat la: ✔️ Generarea de răspunsuri asemănătoare celor umane ✔️ Traducere între **engleză și română** ✔️ Sumarizarea articolelor, rapoartelor și documentelor ✔️ Răspuns la întrebări complexe cu sensibilitate la context ### 🚀 **Utilizare avansată (fine-tuning & integrare)** LLMLit poate fi optimizat pentru: 🗨️ **Chatboți & asistenți virtuali** 📚 **Instrumente educaționale bilingve** ⚖️ **Analiza documentelor legale/medicale** 🛒 **Automatizare în e-commerce & suport clienți** ### ❌ **Utilizări nerecomandate** ⛔ Aplicații neetice (dezinformare, manipulare) ⛔ Luarea deciziilor critice fără supervizare umană ⛔ Task-uri care necesită **performanță în timp real** --- ## **⚠️ Bias, Riscuri și Limitări** 🔍 **Bias:** Modelul poate reflecta bias-urile existente în datele de antrenament. ⚠️ **Riscuri:** Poate genera informații inexacte sau neconforme. 
📌 **Limitări:** - Performanța depinde de **calitatea prompturilor**. - Înțelegere limitată a domeniilor **foarte tehnice sau de nișă**. 🔹 **Recomandări:** ✔️ Revizuirea output-ului pentru **aplicații sensibile**. ✔️ Fine-tuning pentru sarcini specifice pentru **minimizarea riscurilor**. --- Multe surprize în viitor! 🎁✨ Suntem super entuziasmați să vă anunțăm că, în curând, vom adăuga multe freebies și documentație detaliată pentru toți dezvoltatorii care vor să învețe și să colaboreze cu noi! 📚🎉 🔧 Ce vei găsi? Resurse gratuite pentru proiectele tale 💡 Ghiduri și tutoriale pas cu pas 📘 Exemple de cod și șabloane utile 📝 🌍 Rămâi conectat pentru a descoperi toate aceste resurse care te vor ajuta să îți duci proiectele la următorul nivel! Așteptăm cu nerăbdare să lucrăm împreună și să facem pași mari în dezvoltarea AI-ului! 🌍✨ ![Civis3.png](https://cdn-uploads.huggingface.co/production/uploads/6769b18893c0c9156b8265d5/pZch1_YVa6Ixc3d_eYxBR.png) ---
Triangle104/The-Omega-Directive-M-12B-v1.0-Q4_K_S-GGUF
Triangle104
2025-05-29T23:06:24Z
24
0
null
[ "gguf", "nsfw", "explicit", "roleplay", "unaligned", "dangerous", "ERP", "llama-cpp", "gguf-my-repo", "text-generation", "en", "base_model:ReadyArt/The-Omega-Directive-M-12B-v1.0", "base_model:finetune:ReadyArt/The-Omega-Directive-M-12B-v1.0", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
text-generation
2025-05-04T00:56:58Z
--- license: apache-2.0 language: - en base_model: ReadyArt/The-Omega-Directive-M-12B-v1.0 base_model_relation: finetune pipeline_tag: text-generation tags: - nsfw - explicit - roleplay - unaligned - dangerous - ERP - llama-cpp - gguf-my-repo --- # Triangle104/The-Omega-Directive-M-12B-v1.0-Q4_K_S-GGUF This model was converted to GGUF format from [`ReadyArt/The-Omega-Directive-M-12B-v1.0`](https://huggingface.co/ReadyArt/The-Omega-Directive-M-12B-v1.0) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/ReadyArt/The-Omega-Directive-M-12B-v1.0) for more details on the model. --- This evolution of Forgotten-Safeword delivers coherent depravity with unprecedented immersion: - 🧬 Expanded 22M Token Dataset - Incorporating 90 erotic novels and 6,496 kink scenarios - ⚡ Optimized Architecture - Smoother training curve yields more intelligent outputs - 💎 Balanced Depravity - Retains Forgotten-Safeword's edge while reducing jarring inconsistencies - 📜 Enhanced Character Piloting - Characters exhibit more nuanced personalities and motivations - 🌹 Unexpected Depth - Occasionally surprises with profound insights amidst the debauchery --- ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux) ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo Triangle104/The-Omega-Directive-M-12B-v1.0-Q4_K_S-GGUF --hf-file the-omega-directive-m-12b-v1.0-q4_k_s.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo Triangle104/The-Omega-Directive-M-12B-v1.0-Q4_K_S-GGUF --hf-file the-omega-directive-m-12b-v1.0-q4_k_s.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo Triangle104/The-Omega-Directive-M-12B-v1.0-Q4_K_S-GGUF --hf-file the-omega-directive-m-12b-v1.0-q4_k_s.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo Triangle104/The-Omega-Directive-M-12B-v1.0-Q4_K_S-GGUF --hf-file the-omega-directive-m-12b-v1.0-q4_k_s.gguf -c 2048 ```
Darkhn/L3.3-70B-Amalgamma-V10
Darkhn
2025-05-29T23:02:04Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "mergekit", "merge", "conversational", "arxiv:2406.11617", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-05-29T21:31:38Z
--- base_model: [] library_name: transformers tags: - mergekit - merge --- # merged_model_output This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ## Merge Details ### Merge Method This model was merged using the [DELLA](https://arxiv.org/abs/2406.11617) merge method using /media/administrator/oiseauxai1data/modelweights/Smart-base-V2 as a base. ### Models Merged The following models were included in the merge: * /media/administrator/oiseauxai1data1/Dark-Base-V3 * /media/administrator/oiseauxai1data/modelweights/Middle-Base-V3 * /media/administrator/oiseauxai1data/modelweights/Story-Base-V3 ### Configuration The following YAML configuration was used to produce this model: ```yaml # --- Mergekit Example: della_linear --- # Method: Implements the DELLA concept (Deep Ensembling with Layer-wise Linear Averaging). # This typically involves a sophisticated layer-wise linear combination of models. base_model: /media/administrator/oiseauxai1data/modelweights/Smart-base-V2 # The foundational model models: - model: /media/administrator/oiseauxai1data1/Dark-Base-V3 parameters: weight: [0.3, 0.2, 0.5] # Contribution of this model (e.g., 50%) (can also use a gradiant) [0.1, 0.1, 0.1, 0.2, 0.5] density: 0.60 # Sparsity/pruning factor for this model's contribution. epsilon: 0.15 # Single epsilon for the pruning - model: /media/administrator/oiseauxai1data/modelweights/Story-Base-V3 parameters: weight: [0.4, 0.3, 0.3] # Contribution of this model (e.g., 50%) (can also use a gradiant) [0.1, 0.1, 0.1, 0.2, 0.5] density: 0.50 # Sparsity/pruning factor for this model's contribution. epsilon: 0.15 # Single epsilon for the pruning - model: /media/administrator/oiseauxai1data/modelweights/Middle-Base-V3 parameters: weight: [0.3, 0.5, 0.2] # Contribution of this model (e.g., 50%) (can also use a gradiant) [0.1, 0.1, 0.1, 0.2, 0.5] density: 0.50 # Sparsity/pruning factor for this model's contribution. epsilon: 0.15 # Single epsilon for the pruning model_name: L3.3-70B-Amalgamma-V10 # Name of your merge dtype: float32 # Input size float32, float16, bfloat16 out_dtype: bfloat16 # output size float32, float16, bfloat16 merge_method: della parameters: normalize: false # If true (default), weights are normalized to sum to 1. # If false, absolute weights are used. lambda: 1.11 # Single lambda for scaling the final merged deltas tokenizer_source: union # Or 'base' if base_model is set, or 'union', careful with this one chat_template: llama3 # Template for chat (Chatml, llama3, etc...) license: apache-2.0 # License type ```
YShynkarov/ukr-roberta-cosmus-sentiment
YShynkarov
2025-05-29T23:00:25Z
0
0
null
[ "sentiment", "ukrainian", "socialmedia", "uk", "ru", "dataset:YShynkarov/COSMUS", "base_model:youscan/ukr-roberta-base", "base_model:finetune:youscan/ukr-roberta-base", "license:mit", "region:us" ]
null
2025-05-29T22:37:25Z
--- license: mit language: - uk - ru metrics: - accuracy - f1 base_model: - youscan/ukr-roberta-base tags: - sentiment - ukrainian - socialmedia datasets: - YShynkarov/COSMUS ---
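A minimal usage sketch (the card body is still empty, so the label set and any preprocessing are assumptions): the checkpoint is a `youscan/ukr-roberta-base` fine-tune for sentiment on the COSMUS dataset, so the standard text-classification pipeline should apply.

```python
from transformers import pipeline

# Hedged sketch: sentiment prediction for a Ukrainian social-media style sentence.
classifier = pipeline("text-classification", model="YShynkarov/ukr-roberta-cosmus-sentiment")
print(classifier("Це чудовий день!"))  # Ukrainian: "This is a wonderful day!"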
CelestialWandererOfTheVoid/SR-second-task
CelestialWandererOfTheVoid
2025-05-29T23:00:18Z
0
0
null
[ "region:us" ]
null
2025-05-29T22:57:59Z
# Container Template for SoundsRight Subnet Miners This repository contains a contanierized version of [SGMSE+](https://huggingface.co/sp-uhh/speech-enhancement-sgmse) and serves as a tutorial for miners to format their models on [Bittensor's](https://bittensor.com/) [SoundsRight Subnet](https://github.com/synapsec-ai/SoundsRightSubnet). The branches `DENOISING_16000HZ` and `DEREVERBERATION_16000HZ` contain SGMSE fitted with the approrpriate checkpoints for denoising and dereverberation tasks at 16kHz, respectively. This container has only been tested with **Ubuntu 24.04** and **CUDA 12.6**. It may run on other configurations, but it is not guaranteed. To run the container, first configure NVIDIA Container Toolkit and generate a CDI specification. Follow the instructions to download the [NVIDIA Container Toolkit](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/install-guide.html) with Apt. Next, follow the instructions for [generating a CDI specification](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/cdi-support.html). Verify that the CDI specification was done correctly with: ``` $ nvidia-ctk cdi list ``` You should see this in your output: ``` nvidia.com/gpu=all nvidia.com/gpu=0 ``` If you are running podman as root, run the following command to start the container: Run the container with: ``` podman build -t modelapi . && podman run -d --device nvidia.com/gpu=all --user root --name modelapi -p 6500:6500 modelapi ``` Access logs with: ``` podman logs -f modelapi ``` If you are running the container rootless, there are a few more changes to make: First, modify `/etc/nvidia-container-runtime/config.toml` and set the following parameters: ``` [nvidia-container-cli] no-cgroups = true [nvidia-container-runtime] debug = "/tmp/nvidia-container-runtime.log" ``` You can also run the following command to achieve the same result: ``` $ sudo nvidia-ctk config --set nvidia-container-cli.no-cgroups --in-place ``` Run the container with: ``` podman build -t modelapi . && podman run -d --device nvidia.com/gpu=all --volume /usr/local/cuda-12.6:/usr/local/cuda-12.6 --user 10002:10002 --name modelapi -p 6500:6500 modelapi ``` Access logs with: ``` podman logs -f modelapi ``` Running the container will spin up an API with the following endpoints: 1. `/status/` : Communicates API status 2. `/prepare/` : Download model checkpoint and initialize model 3. `/upload-audio/` : Upload audio files, save to noisy audio directory 4. `/enhance/` : Initialize model, enhance audio files, save to enhanced audio directory 5. `/download-enhanced/` : Download enhanced audio files By default the API will use host `0.0.0.0` and port `6500`. ### References 1. **Welker, Simon; Richter, Julius; Gerkmann, Timo** *Speech Enhancement with Score-Based Generative Models in the Complex STFT Domain*. Proceedings of *Interspeech 2022*, 2022, pp. 2928–2932. [DOI: 10.21437/Interspeech.2022-10653](https://doi.org/10.21437/Interspeech.2022-10653) 2. **Richter, Julius; Welker, Simon; Lemercier, Jean-Marie; Lay, Bunlong; Gerkmann, Timo** *Speech Enhancement and Dereverberation with Diffusion-based Generative Models*. *IEEE/ACM Transactions on Audio, Speech, and Language Processing*, Vol. 31, 2023, pp. 2351–2364. [DOI: 10.1109/TASLP.2023.3285241](https://doi.org/10.1109/TASLP.2023.3285241) 3. 
**Richter, Julius; Wu, Yi-Chiao; Krenn, Steven; Welker, Simon; Lay, Bunlong; Watanabe, Shinji; Richard, Alexander; Gerkmann, Timo** *EARS: An Anechoic Fullband Speech Dataset Benchmarked for Speech Enhancement and Dereverberation*. Proceedings of *ISCA Interspeech*, 2024, pp. 4873–4877.
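A minimal status check against the endpoints listed above (a sketch; the host and port are the documented defaults, while request/response payloads are not documented in the card and are therefore assumed):

```python
import requests

# Hedged sketch: verify the container's API is reachable on its default host/port.
resp = requests.get("http://0.0.0.0:6500/status/")
print(resp.status_code, resp.text)
```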
CelestialWandererOfTheVoid/SR-first-task
CelestialWandererOfTheVoid
2025-05-29T22:57:02Z
0
0
null
[ "region:us" ]
null
2025-05-29T22:54:40Z
# Container Template for SoundsRight Subnet Miners This repository contains a contanierized version of [SGMSE+](https://huggingface.co/sp-uhh/speech-enhancement-sgmse) and serves as a tutorial for miners to format their models on [Bittensor's](https://bittensor.com/) [SoundsRight Subnet](https://github.com/synapsec-ai/SoundsRightSubnet). The branches `DENOISING_16000HZ` and `DEREVERBERATION_16000HZ` contain SGMSE fitted with the approrpriate checkpoints for denoising and dereverberation tasks at 16kHz, respectively. This container has only been tested with **Ubuntu 24.04** and **CUDA 12.6**. It may run on other configurations, but it is not guaranteed. To run the container, first configure NVIDIA Container Toolkit and generate a CDI specification. Follow the instructions to download the [NVIDIA Container Toolkit](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/install-guide.html) with Apt. Next, follow the instructions for [generating a CDI specification](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/cdi-support.html). Verify that the CDI specification was done correctly with: ``` $ nvidia-ctk cdi list ``` You should see this in your output: ``` nvidia.com/gpu=all nvidia.com/gpu=0 ``` If you are running podman as root, run the following command to start the container: Run the container with: ``` podman build -t modelapi . && podman run -d --device nvidia.com/gpu=all --user root --name modelapi -p 6500:6500 modelapi ``` Access logs with: ``` podman logs -f modelapi ``` If you are running the container rootless, there are a few more changes to make: First, modify `/etc/nvidia-container-runtime/config.toml` and set the following parameters: ``` [nvidia-container-cli] no-cgroups = true [nvidia-container-runtime] debug = "/tmp/nvidia-container-runtime.log" ``` You can also run the following command to achieve the same result: ``` $ sudo nvidia-ctk config --set nvidia-container-cli.no-cgroups --in-place ``` Run the container with: ``` podman build -t modelapi . && podman run -d --device nvidia.com/gpu=all --volume /usr/local/cuda-12.6:/usr/local/cuda-12.6 --user 10002:10002 --name modelapi -p 6500:6500 modelapi ``` Access logs with: ``` podman logs -f modelapi ``` Running the container will spin up an API with the following endpoints: 1. `/status/` : Communicates API status 2. `/prepare/` : Download model checkpoint and initialize model 3. `/upload-audio/` : Upload audio files, save to noisy audio directory 4. `/enhance/` : Initialize model, enhance audio files, save to enhanced audio directory 5. `/download-enhanced/` : Download enhanced audio files By default the API will use host `0.0.0.0` and port `6500`. ### References 1. **Welker, Simon; Richter, Julius; Gerkmann, Timo** *Speech Enhancement with Score-Based Generative Models in the Complex STFT Domain*. Proceedings of *Interspeech 2022*, 2022, pp. 2928–2932. [DOI: 10.21437/Interspeech.2022-10653](https://doi.org/10.21437/Interspeech.2022-10653) 2. **Richter, Julius; Welker, Simon; Lemercier, Jean-Marie; Lay, Bunlong; Gerkmann, Timo** *Speech Enhancement and Dereverberation with Diffusion-based Generative Models*. *IEEE/ACM Transactions on Audio, Speech, and Language Processing*, Vol. 31, 2023, pp. 2351–2364. [DOI: 10.1109/TASLP.2023.3285241](https://doi.org/10.1109/TASLP.2023.3285241) 3. 
**Richter, Julius; Wu, Yi-Chiao; Krenn, Steven; Welker, Simon; Lay, Bunlong; Watanabe, Shinji; Richard, Alexander; Gerkmann, Timo** *EARS: An Anechoic Fullband Speech Dataset Benchmarked for Speech Enhancement and Dereverberation*. Proceedings of *ISCA Interspeech*, 2024, pp. 4873–4877.
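To make the endpoint list above concrete, here is a minimal Python sketch of one pass through the enhancement workflow against a running container. The endpoint paths and the default host/port are taken from the card; the request payload shapes (multipart file upload, empty POST bodies) and the example file names are assumptions, since the card does not document them.

```python
# Hedged sketch of interacting with the model API described above.
# Assumes the container is already running on the default host/port;
# the payload formats for /upload-audio/ and /prepare/ are assumptions.
import requests

BASE_URL = "http://0.0.0.0:6500"

# 1. Check that the API is up
print(requests.get(f"{BASE_URL}/status/").json())

# 2. Download the model checkpoint and initialize the model
requests.post(f"{BASE_URL}/prepare/")

# 3. Upload a noisy audio file (multipart upload is an assumption)
with open("noisy_example.wav", "rb") as f:
    requests.post(f"{BASE_URL}/upload-audio/", files={"file": f})

# 4. Enhance the uploaded audio
requests.post(f"{BASE_URL}/enhance/")

# 5. Download the enhanced audio
enhanced = requests.get(f"{BASE_URL}/download-enhanced/")
with open("enhanced_example.wav", "wb") as f:
    f.write(enhanced.content)
```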
mradermacher/Qwen2.5-14B-sft-SPIN-Qwen2.5-72B-Instruct-GGUF
mradermacher
2025-05-29T22:51:28Z
0
0
transformers
[ "transformers", "gguf", "generated_from_trainer", "trl", "dpo", "en", "base_model:AmberYifan/Qwen2.5-14B-sft-SPIN-Qwen2.5-72B-Instruct", "base_model:quantized:AmberYifan/Qwen2.5-14B-sft-SPIN-Qwen2.5-72B-Instruct", "endpoints_compatible", "region:us", "conversational" ]
null
2025-05-29T22:28:44Z
--- base_model: AmberYifan/Qwen2.5-14B-sft-SPIN-Qwen2.5-72B-Instruct language: - en library_name: transformers model_name: Qwen2.5-14B-sft-SPIN-Qwen2.5-72B-Instruct quantized_by: mradermacher tags: - generated_from_trainer - trl - dpo --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/AmberYifan/Qwen2.5-14B-sft-SPIN-Qwen2.5-72B-Instruct <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-14B-sft-SPIN-Qwen2.5-72B-Instruct-GGUF/resolve/main/Qwen2.5-14B-sft-SPIN-Qwen2.5-72B-Instruct.Q2_K.gguf) | Q2_K | 5.9 | | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-14B-sft-SPIN-Qwen2.5-72B-Instruct-GGUF/resolve/main/Qwen2.5-14B-sft-SPIN-Qwen2.5-72B-Instruct.Q3_K_S.gguf) | Q3_K_S | 6.8 | | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-14B-sft-SPIN-Qwen2.5-72B-Instruct-GGUF/resolve/main/Qwen2.5-14B-sft-SPIN-Qwen2.5-72B-Instruct.Q3_K_M.gguf) | Q3_K_M | 7.4 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-14B-sft-SPIN-Qwen2.5-72B-Instruct-GGUF/resolve/main/Qwen2.5-14B-sft-SPIN-Qwen2.5-72B-Instruct.Q3_K_L.gguf) | Q3_K_L | 8.0 | | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-14B-sft-SPIN-Qwen2.5-72B-Instruct-GGUF/resolve/main/Qwen2.5-14B-sft-SPIN-Qwen2.5-72B-Instruct.IQ4_XS.gguf) | IQ4_XS | 8.3 | | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-14B-sft-SPIN-Qwen2.5-72B-Instruct-GGUF/resolve/main/Qwen2.5-14B-sft-SPIN-Qwen2.5-72B-Instruct.Q4_K_S.gguf) | Q4_K_S | 8.7 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-14B-sft-SPIN-Qwen2.5-72B-Instruct-GGUF/resolve/main/Qwen2.5-14B-sft-SPIN-Qwen2.5-72B-Instruct.Q4_K_M.gguf) | Q4_K_M | 9.1 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-14B-sft-SPIN-Qwen2.5-72B-Instruct-GGUF/resolve/main/Qwen2.5-14B-sft-SPIN-Qwen2.5-72B-Instruct.Q5_K_S.gguf) | Q5_K_S | 10.4 | | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-14B-sft-SPIN-Qwen2.5-72B-Instruct-GGUF/resolve/main/Qwen2.5-14B-sft-SPIN-Qwen2.5-72B-Instruct.Q5_K_M.gguf) | Q5_K_M | 10.6 | | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-14B-sft-SPIN-Qwen2.5-72B-Instruct-GGUF/resolve/main/Qwen2.5-14B-sft-SPIN-Qwen2.5-72B-Instruct.Q6_K.gguf) | Q6_K | 12.2 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-14B-sft-SPIN-Qwen2.5-72B-Instruct-GGUF/resolve/main/Qwen2.5-14B-sft-SPIN-Qwen2.5-72B-Instruct.Q8_0.gguf) | Q8_0 | 15.8 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See 
https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to. <!-- end -->
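For a quick start in Python, below is a minimal sketch (not part of the original card) of loading one of the quants listed above with llama-cpp-python. It assumes `llama-cpp-python` and `huggingface_hub` are installed; the Q4_K_M file is simply the "fast, recommended" entry from the table, and the context size is a placeholder to adjust for your hardware.

```python
# Hedged sketch: run one of the GGUF quants listed above locally.
# pip install llama-cpp-python huggingface_hub
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="mradermacher/Qwen2.5-14B-sft-SPIN-Qwen2.5-72B-Instruct-GGUF",
    filename="Qwen2.5-14B-sft-SPIN-Qwen2.5-72B-Instruct.Q4_K_M.gguf",
    n_ctx=4096,  # placeholder context length; adjust to your hardware
)

response = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Explain what a GGUF quant is in one sentence."}]
)
print(response["choices"][0]["message"]["content"])
```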
sajelian/q-Taxi-v3
sajelian
2025-05-29T22:50:30Z
0
0
null
[ "Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2025-05-29T22:50:27Z
--- tags: - Taxi-v3 - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-Taxi-v3 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Taxi-v3 type: Taxi-v3 metrics: - type: mean_reward value: 7.50 +/- 2.76 name: mean_reward verified: false --- # **Q-Learning** Agent playing **Taxi-v3** This is a trained model of a **Q-Learning** agent playing **Taxi-v3**. ## Usage ```python model = load_from_hub(repo_id="sajelian/q-Taxi-v3", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
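The usage snippet above stops after creating the environment. Below is a hedged sketch that also defines a downloader and rolls out the greedy policy; the `qtable` key, the pickle layout, and the use of `gymnasium` follow the Hugging Face Deep RL course convention and are assumptions, not guarantees from this repository.

```python
# Hedged sketch: load the pickled Q-table and play one greedy episode.
# Assumes the pickle is a dict with "env_id" and "qtable" keys (course convention).
import pickle

import gymnasium as gym
import numpy as np
from huggingface_hub import hf_hub_download


def load_from_hub(repo_id: str, filename: str) -> dict:
    # Download the pickled model dict from the Hub (mirrors the course helper).
    path = hf_hub_download(repo_id=repo_id, filename=filename)
    with open(path, "rb") as f:
        return pickle.load(f)


model = load_from_hub(repo_id="sajelian/q-Taxi-v3", filename="q-learning.pkl")
env = gym.make(model["env_id"])

state, _ = env.reset()
done = False
total_reward = 0.0
while not done:
    action = int(np.argmax(model["qtable"][state]))  # greedy action from the Q-table
    state, reward, terminated, truncated, _ = env.step(action)
    total_reward += reward
    done = terminated or truncated
print(f"Episode return: {total_reward}")
```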
FormlessAI/b3236ab7-a5db-4dac-bc3b-d5ccc887ad47
FormlessAI
2025-05-29T22:46:32Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "trl", "sft", "conversational", "base_model:Qwen/Qwen2-1.5B-Instruct", "base_model:finetune:Qwen/Qwen2-1.5B-Instruct", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-05-29T20:14:41Z
--- base_model: Qwen/Qwen2-1.5B-Instruct library_name: transformers model_name: b3236ab7-a5db-4dac-bc3b-d5ccc887ad47 tags: - generated_from_trainer - trl - sft licence: license --- # Model Card for b3236ab7-a5db-4dac-bc3b-d5ccc887ad47 This model is a fine-tuned version of [Qwen/Qwen2-1.5B-Instruct](https://huggingface.co/Qwen/Qwen2-1.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="FormlessAI/b3236ab7-a5db-4dac-bc3b-d5ccc887ad47", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/phoenix-formless/Gradients/runs/p97u8i3c) This model was trained with SFT. ### Framework versions - TRL: 0.18.0 - Transformers: 4.52.3 - Pytorch: 2.7.0+cu128 - Datasets: 3.6.0 - Tokenizers: 0.21.1 ## Citations Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
Triangle104/L3-MOE-4X8B-Grand-Horror-25B-Q6_K-GGUF
Triangle104
2025-05-29T22:41:52Z
0
0
transformers
[ "transformers", "gguf", "mergekit", "moe", "mixture of experts", "merge", "llama-3", "llama3", "llama-cpp", "gguf-my-repo", "base_model:DavidAU/L3-MOE-4X8B-Grand-Horror-25B", "base_model:quantized:DavidAU/L3-MOE-4X8B-Grand-Horror-25B", "endpoints_compatible", "region:us", "conversational" ]
null
2025-05-29T22:36:35Z
--- library_name: transformers tags: - mergekit - moe - mixture of experts - merge - llama-3 - llama3 - llama-cpp - gguf-my-repo base_model: DavidAU/L3-MOE-4X8B-Grand-Horror-25B --- # Triangle104/L3-MOE-4X8B-Grand-Horror-25B-Q6_K-GGUF This model was converted to GGUF format from [`DavidAU/L3-MOE-4X8B-Grand-Horror-25B`](https://huggingface.co/DavidAU/L3-MOE-4X8B-Grand-Horror-25B) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/DavidAU/L3-MOE-4X8B-Grand-Horror-25B) for more details on the model. --- It is a Llama3 model, with a max context of 8192 (or 32k+ with rope), using mixture of experts to combine Dark/Horror models of 8B each into one massive powerhouse at 25B parameters (equal to 32B - 4 X 8B). This model's instruction following and output generation for creative writing, prose, fiction and role play are exceptional. It excels at description, dialog, imagery, metaphors, and prose - and shows great variations in sentence / paragraph size, length, and composition. It is also not afraid, and will not pull its punches. And it has a sense of humor too. It can do horror just as easily as it can do romance. Most notably, dialog is very "un-ai" like, combined with prose (short and terse at times). (There are lots of different examples below, including 2, 3 and 4 experts and different genres.) And it is fast: 34 t/s (2 experts) on a low-end 16GB card, Q3KS. Double this speed for standard/mid-range video cards. The model can also be used for all genres (examples below show this). This model has been designed to be relatively bulletproof and operates with all parameters, including temp settings from 0 to 5. It is an extraordinarily compressed model, with a very low perplexity level (lower than Meta Llama3 Instruct). It is for any writing, fiction or roleplay activity. It requires the Llama3 template and/or the "Command-R" template. --- ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux) ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo Triangle104/L3-MOE-4X8B-Grand-Horror-25B-Q6_K-GGUF --hf-file l3-moe-4x8b-grand-horror-25b-q6_k.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo Triangle104/L3-MOE-4X8B-Grand-Horror-25B-Q6_K-GGUF --hf-file l3-moe-4x8b-grand-horror-25b-q6_k.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo Triangle104/L3-MOE-4X8B-Grand-Horror-25B-Q6_K-GGUF --hf-file l3-moe-4x8b-grand-horror-25b-q6_k.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo Triangle104/L3-MOE-4X8B-Grand-Horror-25B-Q6_K-GGUF --hf-file l3-moe-4x8b-grand-horror-25b-q6_k.gguf -c 2048 ```
prakod/codemix-test
prakod
2025-05-29T22:39:54Z
5
0
transformers
[ "transformers", "tensorboard", "safetensors", "mbart", "text2text-generation", "generated_from_trainer", "base_model:ai4bharat/IndicBART", "base_model:finetune:ai4bharat/IndicBART", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2025-05-16T08:52:24Z
--- library_name: transformers base_model: ai4bharat/IndicBART tags: - generated_from_trainer model-index: - name: codemix-test results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # codemix-test This model is a fine-tuned version of [ai4bharat/IndicBART](https://huggingface.co/ai4bharat/IndicBART) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-06 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 64 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 5 ### Training results ### Framework versions - Transformers 4.51.3 - Pytorch 2.6.0+cu124 - Datasets 3.6.0 - Tokenizers 0.21.1
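For readers who want to reproduce a comparable setup, the hyperparameters listed above can be expressed as `transformers` training arguments roughly as follows. This is only an illustrative sketch: the `output_dir` and any setting not listed in the card are placeholders, not values from the original training run.

```python
# Hedged sketch: the training hyperparameters above mapped onto
# Seq2SeqTrainingArguments. Unlisted settings (e.g. output_dir) are placeholders.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="codemix-test",        # placeholder
    learning_rate=1e-6,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    gradient_accumulation_steps=4,    # effective train batch size of 64
    num_train_epochs=5,
    lr_scheduler_type="linear",
    optim="adamw_torch",              # AdamW with betas=(0.9, 0.999), eps=1e-08
    seed=42,
)
```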
ellietang/saved_lora_ls-model-14B-full-CPT-v0.0.9-4bits-trained-0529
ellietang
2025-05-29T22:39:51Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "qwen2", "trl", "en", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-05-29T05:50:16Z
--- base_model: unsloth/qwen2.5-coder-14b-instruct-bnb-4bit tags: - text-generation-inference - transformers - unsloth - qwen2 - trl license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** ellietang - **License:** apache-2.0 - **Finetuned from model :** unsloth/qwen2.5-coder-14b-instruct-bnb-4bit This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
emersonrosaoficial/emersonrosa-lorav2
emersonrosaoficial
2025-05-29T22:38:16Z
0
0
null
[ "license:other", "region:us" ]
null
2025-05-29T21:42:01Z
--- license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md ---
BootesVoid/cmb9wv8110jqh1b1ycne89nkr_cmb9x1crp0jry1b1yuz0j2qed
BootesVoid
2025-05-29T22:34:21Z
0
0
diffusers
[ "diffusers", "flux", "lora", "replicate", "text-to-image", "en", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
2025-05-29T22:34:12Z
--- license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md language: - en tags: - flux - diffusers - lora - replicate base_model: "black-forest-labs/FLUX.1-dev" pipeline_tag: text-to-image # widget: # - text: >- # prompt # output: # url: https://... instance_prompt: lola --- # Cmb9Wv8110Jqh1B1Ycne89Nkr_Cmb9X1Crp0Jry1B1Yuz0J2Qed <Gallery /> ## About this LoRA This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI. It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train ## Trigger words You should use `lola` to trigger the image generation. ## Run this LoRA with an API using Replicate ```py import replicate input = { "prompt": "lola", "lora_weights": "https://huggingface.co/BootesVoid/cmb9wv8110jqh1b1ycne89nkr_cmb9x1crp0jry1b1yuz0j2qed/resolve/main/lora.safetensors" } output = replicate.run( "black-forest-labs/flux-dev-lora", input=input ) for index, item in enumerate(output): with open(f"output_{index}.webp", "wb") as file: file.write(item.read()) ``` ## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers) ```py from diffusers import AutoPipelineForText2Image import torch pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda') pipeline.load_lora_weights('BootesVoid/cmb9wv8110jqh1b1ycne89nkr_cmb9x1crp0jry1b1yuz0j2qed', weight_name='lora.safetensors') image = pipeline('lola').images[0] ``` For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters) ## Training details - Steps: 2000 - Learning rate: 0.0004 - LoRA rank: 16 ## Contribute your own examples You can use the [community tab](https://huggingface.co/BootesVoid/cmb9wv8110jqh1b1ycne89nkr_cmb9x1crp0jry1b1yuz0j2qed/discussions) to add images that show off what you’ve made with this LoRA.
AmberYifan/Llama-2-13b-sft-spin-10k
AmberYifan
2025-05-29T22:30:06Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "generated_from_trainer", "trl", "dpo", "conversational", "arxiv:2305.18290", "base_model:AmberYifan/Llama-2-13b-sft-ultrachat-safeRLHF", "base_model:finetune:AmberYifan/Llama-2-13b-sft-ultrachat-safeRLHF", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-05-29T22:01:24Z
--- base_model: AmberYifan/Llama-2-13b-sft-ultrachat-safeRLHF library_name: transformers model_name: Llama-2-13b-sft-spin-10k tags: - generated_from_trainer - trl - dpo licence: license --- # Model Card for Llama-2-13b-sft-spin-10k This model is a fine-tuned version of [AmberYifan/Llama-2-13b-sft-ultrachat-safeRLHF](https://huggingface.co/AmberYifan/Llama-2-13b-sft-ultrachat-safeRLHF). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="AmberYifan/Llama-2-13b-sft-spin-10k", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/yifanwang/huggingface/runs/c341mjfy) This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290). ### Framework versions - TRL: 0.12.2 - Transformers: 4.46.3 - Pytorch: 2.7.0 - Datasets: 3.6.0 - Tokenizers: 0.20.3 ## Citations Cite DPO as: ```bibtex @inproceedings{rafailov2023direct, title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}}, author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn}, year = 2023, booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023}, url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html}, editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
Triangle104/L3-MOE-4X8B-Grand-Horror-25B-Q5_K_M-GGUF
Triangle104
2025-05-29T22:27:47Z
0
0
transformers
[ "transformers", "gguf", "mergekit", "moe", "mixture of experts", "merge", "llama-3", "llama3", "llama-cpp", "gguf-my-repo", "base_model:DavidAU/L3-MOE-4X8B-Grand-Horror-25B", "base_model:quantized:DavidAU/L3-MOE-4X8B-Grand-Horror-25B", "endpoints_compatible", "region:us", "conversational" ]
null
2025-05-29T22:08:19Z
--- library_name: transformers tags: - mergekit - moe - mixture of experts - merge - llama-3 - llama3 - llama-cpp - gguf-my-repo base_model: DavidAU/L3-MOE-4X8B-Grand-Horror-25B --- # Triangle104/L3-MOE-4X8B-Grand-Horror-25B-Q5_K_M-GGUF This model was converted to GGUF format from [`DavidAU/L3-MOE-4X8B-Grand-Horror-25B`](https://huggingface.co/DavidAU/L3-MOE-4X8B-Grand-Horror-25B) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/DavidAU/L3-MOE-4X8B-Grand-Horror-25B) for more details on the model. --- It is a Llama3 model, with a max context of 8192 (or 32k+ with rope), using mixture of experts to combine Dark/Horror models of 8B each into one massive powerhouse at 25B parameters (equal to 32B - 4 X 8B). This model's instruction following and output generation for creative writing, prose, fiction and role play are exceptional. It excels at description, dialog, imagery, metaphors, and prose - and shows great variations in sentence / paragraph size, length, and composition. It is also not afraid, and will not pull its punches. And it has a sense of humor too. It can do horror just as easily as it can do romance. Most notably, dialog is very "un-ai" like, combined with prose (short and terse at times). (There are lots of different examples below, including 2, 3 and 4 experts and different genres.) And it is fast: 34 t/s (2 experts) on a low-end 16GB card, Q3KS. Double this speed for standard/mid-range video cards. The model can also be used for all genres (examples below show this). This model has been designed to be relatively bulletproof and operates with all parameters, including temp settings from 0 to 5. It is an extraordinarily compressed model, with a very low perplexity level (lower than Meta Llama3 Instruct). It is for any writing, fiction or roleplay activity. It requires the Llama3 template and/or the "Command-R" template. --- ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux) ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo Triangle104/L3-MOE-4X8B-Grand-Horror-25B-Q5_K_M-GGUF --hf-file l3-moe-4x8b-grand-horror-25b-q5_k_m.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo Triangle104/L3-MOE-4X8B-Grand-Horror-25B-Q5_K_M-GGUF --hf-file l3-moe-4x8b-grand-horror-25b-q5_k_m.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo Triangle104/L3-MOE-4X8B-Grand-Horror-25B-Q5_K_M-GGUF --hf-file l3-moe-4x8b-grand-horror-25b-q5_k_m.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo Triangle104/L3-MOE-4X8B-Grand-Horror-25B-Q5_K_M-GGUF --hf-file l3-moe-4x8b-grand-horror-25b-q5_k_m.gguf -c 2048 ```
gradientrouting-spar/base_2d_first_quadrant_red_no_preamble_20250529_221956
gradientrouting-spar
2025-05-29T22:26:28Z
0
0
transformers
[ "transformers", "safetensors", "gemma2", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-05-29T22:23:52Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
vijayarulmuthu/finetuned_arctic_ft-a85433c9-6284-4afb-8e87-e110823d565c
vijayarulmuthu
2025-05-29T22:25:55Z
0
0
sentence-transformers
[ "sentence-transformers", "safetensors", "bert", "sentence-similarity", "feature-extraction", "generated_from_trainer", "dataset_size:6612", "loss:MatryoshkaLoss", "loss:MultipleNegativesRankingLoss", "arxiv:1908.10084", "arxiv:2205.13147", "arxiv:1705.00652", "base_model:Snowflake/snowflake-arctic-embed-m", "base_model:finetune:Snowflake/snowflake-arctic-embed-m", "model-index", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
2025-05-29T22:20:47Z
--- tags: - sentence-transformers - sentence-similarity - feature-extraction - generated_from_trainer - dataset_size:6612 - loss:MatryoshkaLoss - loss:MultipleNegativesRankingLoss base_model: Snowflake/snowflake-arctic-embed-m widget: - source_sentence: Who besought that the words might be preached to them the next sabbath? sentences: - 'But the midwives feared God, and did not as the king of Egypt commanded them, but saved the men children alive. And the king of Egypt called for the midwives, and said unto them, Why have ye done this thing, and have saved the men children alive? And the midwives said unto Pharaoh, Because the Hebrew women [are] not as the Egyptian women; for they [are] lively, and are delivered ere the midwives come in unto them. Therefore God dealt well with the midwives: and the people multiplied, and waxed very mighty. And it came to pass, because the midwives feared God, that he made them houses. And Pharaoh charged all his people, saying, Every son that is born ye shall cast into the river, and every daughter ye shall save alive.' - 'And the watchman cried, and told the king. And the king said, If he [be] alone, [there is] tidings in his mouth. And he came apace, and drew near. And the watchman saw another man running: and the watchman called unto the porter, and said, Behold [another] man running alone. And the king said, He also bringeth tidings. And the watchman said, Me thinketh the running of the foremost is like the running of Ahimaaz the son of Zadok. And the king said, He [is] a good man, and cometh with good tidings. And Ahimaaz called, and said unto the king, All is well. And he fell down to the earth upon his face before the king, and said, Blessed [be] the LORD thy God, which hath delivered up the men that lifted up their hand against my lord the king. And the king said, [Is] the young man Absalom safe? And Ahimaaz answered, When Joab sent the king’s servant, and [me] thy servant, I saw a great tumult, but I knew not what [it was]. And the king said [unto him], Turn aside, [and] stand here. And he turned aside, and stood still. And, behold, Cushi came; and Cushi said, Tidings, my lord the king: for the LORD hath avenged thee this day of all them that rose up against thee. And the king said unto Cushi, [Is] the young man Absalom safe? And Cushi answered, The enemies of my lord the king, and all that rise against thee to do [thee] hurt, be as [that] young man [is].' - 'Behold, ye despisers, and wonder, and perish: for I work a work in your days, a work which ye shall in no wise believe, though a man declare it unto you. And when the Jews were gone out of the synagogue, the Gentiles besought that these words might be preached to them the next sabbath. Now when the congregation was broken up, many of the Jews and religious proselytes followed Paul and Barnabas: who, speaking to them, persuaded them to continue in the grace of God. And the next sabbath day came almost the whole city together to hear the word of God. But when the Jews saw the multitudes, they were filled with envy, and spake against those things which were spoken by Paul, contradicting and blaspheming. Then Paul and Barnabas waxed bold, and said, It was necessary that the word of God should first have been spoken to you: but seeing ye put it from you, and judge yourselves unworthy of everlasting life, lo, we turn to the Gentiles. For so hath the Lord commanded us, [saying], I have set thee to be a light of the Gentiles, that thou shouldest be for salvation unto the ends of the earth. 
And when the Gentiles heard this, they were glad, and glorified the word of the Lord: and as many as were ordained to eternal life believed.' - source_sentence: What will the LORD do if the people hearken unto Him and keep His commandments? sentences: - 'And ye shall not go out of the door of the tabernacle of the congregation [in] seven days, until the days of your consecration be at an end: for seven days shall he consecrate you. As he hath done this day, [so] the LORD hath commanded to do, to make an atonement for you. Therefore shall ye abide [at] the door of the tabernacle of the congregation day and night seven days, and keep the charge of the LORD, that ye die not: for so I am commanded. So Aaron and his sons did all things which the LORD commanded by the hand of Moses.' - 'Then all Israel gathered themselves to David unto Hebron, saying, Behold, we [are] thy bone and thy flesh. And moreover in time past, even when Saul was king, thou [wast] he that leddest out and broughtest in Israel: and the LORD thy God said unto thee, Thou shalt feed my people Israel, and thou shalt be ruler over my people Israel. Therefore came all the elders of Israel to the king to Hebron; and David made a covenant with them in Hebron before the LORD; and they anointed David king over Israel, according to the word of the LORD by Samuel. And David and all Israel went to Jerusalem, which [is] Jebus; where the Jebusites [were], the inhabitants of the land. And the inhabitants of Jebus said to David, Thou shalt not come hither. Nevertheless David took the castle of Zion, which [is] the city of David. And David said, Whosoever smiteth the Jebusites first shall be chief and captain. So Joab the son of Zeruiah went first up, and was chief. And David dwelt in the castle; therefore they called it the city of David. And he built the city round about, even from Millo round about: and Joab repaired the rest of the city.' - 'For I will have respect unto you, and make you fruitful, and multiply you, and establish my covenant with you. And ye shall eat old store, and bring forth the old because of the new. And I will set my tabernacle among you: and my soul shall not abhor you. And I will walk among you, and will be your God, and ye shall be my people. I [am] the LORD your God, which brought you forth out of the land of Egypt, that ye should not be their bondmen; and I have broken the bands of your yoke, and made you go upright. But if ye will not hearken unto me, and will not do all these commandments; And if ye shall despise my statutes, or if your soul abhor my judgments, so that ye will not do all my commandments, [but] that ye break my covenant: I also will do this unto you; I will even appoint over you terror, consumption, and the burning ague, that shall consume the eyes, and cause sorrow of heart: and ye shall sow your seed in vain, for your enemies shall eat it.' - source_sentence: How are the nations that fight against Ariel described in their outcome? sentences: - 'Then the LORD put forth his hand, and touched my mouth. And the LORD said unto me, Behold, I have put my words in thy mouth. See, I have this day set thee over the nations and over the kingdoms, to root out, and to pull down, and to destroy, and to throw down, to build, and to plant. Moreover the word of the LORD came unto me, saying, Jeremiah, what seest thou? And I said, I see a rod of an almond tree. Then said the LORD unto me, Thou hast well seen: for I will hasten my word to perform it. 
And the word of the LORD came unto me the second time, saying, What seest thou? And I said, I see a seething pot; and the face thereof [is] toward the north. Then the LORD said unto me, Out of the north an evil shall break forth upon all the inhabitants of the land. For, lo, I will call all the families of the kingdoms of the north, saith the LORD; and they shall come, and they shall set every one his throne at the entering of the gates of Jerusalem, and against all the walls thereof round about, and against all the cities of Judah. And I will utter my judgments against them touching all their wickedness, who have forsaken me, and have burned incense unto other gods, and worshipped the works of their own hands.' - 'Woe to Ariel, to Ariel, the city [where] David dwelt! add ye year to year; let them kill sacrifices. Yet I will distress Ariel, and there shall be heaviness and sorrow: and it shall be unto me as Ariel. And I will camp against thee round about, and will lay siege against thee with a mount, and I will raise forts against thee. And thou shalt be brought down, [and] shalt speak out of the ground, and thy speech shall be low out of the dust, and thy voice shall be, as of one that hath a familiar spirit, out of the ground, and thy speech shall whisper out of the dust. Moreover the multitude of thy strangers shall be like small dust, and the multitude of the terrible ones [shall be] as chaff that passeth away: yea, it shall be at an instant suddenly. Thou shalt be visited of the LORD of hosts with thunder, and with earthquake, and great noise, with storm and tempest, and the flame of devouring fire. And the multitude of all the nations that fight against Ariel, even all that fight against her and her munition, and that distress her, shall be as a dream of a night vision. It shall even be as when an hungry [man] dreameth, and, behold, he eateth; but he awaketh, and his soul is empty: or as when a thirsty man dreameth, and, behold, he drinketh; but he awaketh, and, behold, [he is] faint, and his soul hath appetite: so shall the multitude of all the nations be, that fight against mount Zion.' - And they were both naked, the man and his wife, and were not ashamed. - source_sentence: What will happen to the horn and arm of Moab according to the LORD? sentences: - 'The horn of Moab is cut off, and his arm is broken, saith the LORD. Make ye him drunken: for he magnified [himself] against the LORD: Moab also shall wallow in his vomit, and he also shall be in derision. For was not Israel a derision unto thee? was he found among thieves? for since thou spakest of him, thou skippedst for joy. O ye that dwell in Moab, leave the cities, and dwell in the rock, and be like the dove [that] maketh her nest in the sides of the hole’s mouth. We have heard the pride of Moab, (he is exceeding proud) his loftiness, and his arrogancy, and his pride, and the haughtiness of his heart. I know his wrath, saith the LORD; but [it shall] not [be] so; his lies shall not so effect [it]. Therefore will I howl for Moab, and I will cry out for all Moab; [mine heart] shall mourn for the men of Kirheres. O vine of Sibmah, I will weep for thee with the weeping of Jazer: thy plants are gone over the sea, they reach [even] to the sea of Jazer: the spoiler is fallen upon thy summer fruits and upon thy vintage.' - 'Wherefore be ye not unwise, but understanding what the will of the Lord [is]. 
And be not drunk with wine, wherein is excess; but be filled with the Spirit; Speaking to yourselves in psalms and hymns and spiritual songs, singing and making melody in your heart to the Lord; Giving thanks always for all things unto God and the Father in the name of our Lord Jesus Christ; Submitting yourselves one to another in the fear of God. Wives, submit yourselves unto your own husbands, as unto the Lord. For the husband is the head of the wife, even as Christ is the head of the church: and he is the saviour of the body. Therefore as the church is subject unto Christ, so [let] the wives [be] to their own husbands in every thing.' - 'And it came to pass, when the LORD would take up Elijah into heaven by a whirlwind, that Elijah went with Elisha from Gilgal. And Elijah said unto Elisha, Tarry here, I pray thee; for the LORD hath sent me to Bethel. And Elisha said [unto him, As] the LORD liveth, and [as] thy soul liveth, I will not leave thee. So they went down to Bethel. And the sons of the prophets that [were] at Bethel came forth to Elisha, and said unto him, Knowest thou that the LORD will take away thy master from thy head to day? And he said, Yea, I know [it]; hold ye your peace. And Elijah said unto him, Elisha, tarry here, I pray thee; for the LORD hath sent me to Jericho. And he said, [As] the LORD liveth, and [as] thy soul liveth, I will not leave thee. So they came to Jericho. And the sons of the prophets that [were] at Jericho came to Elisha, and said unto him, Knowest thou that the LORD will take away thy master from thy head to day? And he answered, Yea, I know [it]; hold ye your peace. And Elijah said unto him, Tarry, I pray thee, here; for the LORD hath sent me to Jordan. And he said, [As] the LORD liveth, and [as] thy soul liveth, I will not leave thee. And they two went on. And fifty men of the sons of the prophets went, and stood to view afar off: and they two stood by Jordan. And Elijah took his mantle, and wrapped [it] together, and smote the waters, and they were divided hither and thither, so that they two went over on dry ground.' - source_sentence: Whom did David smite and subdue, taking Gath and her towns from their control? sentences: - 'Remember, I beseech thee, that thou hast made me as the clay; and wilt thou bring me into dust again? Hast thou not poured me out as milk, and curdled me like cheese? Thou hast clothed me with skin and flesh, and hast fenced me with bones and sinews. Thou hast granted me life and favour, and thy visitation hath preserved my spirit. And these [things] hast thou hid in thine heart: I know that this [is] with thee. If I sin, then thou markest me, and thou wilt not acquit me from mine iniquity. If I be wicked, woe unto me; and [if] I be righteous, [yet] will I not lift up my head. [I am] full of confusion; therefore see thou mine affliction; For it increaseth. Thou huntest me as a fierce lion: and again thou shewest thyself marvellous upon me.' - 'Now after this it came to pass, that David smote the Philistines, and subdued them, and took Gath and her towns out of the hand of the Philistines. And he smote Moab; and the Moabites became David’s servants, [and] brought gifts. And David smote Hadarezer king of Zobah unto Hamath, as he went to stablish his dominion by the river Euphrates. And David took from him a thousand chariots, and seven thousand horsemen, and twenty thousand footmen: David also houghed all the chariot [horses], but reserved of them an hundred chariots. 
And when the Syrians of Damascus came to help Hadarezer king of Zobah, David slew of the Syrians two and twenty thousand men. Then David put [garrisons] in Syriadamascus; and the Syrians became David’s servants, [and] brought gifts. Thus the LORD preserved David whithersoever he went. And David took the shields of gold that were on the servants of Hadarezer, and brought them to Jerusalem. Likewise from Tibhath, and from Chun, cities of Hadarezer, brought David very much brass, wherewith Solomon made the brasen sea, and the pillars, and the vessels of brass.' - 'So Shishak king of Egypt came up against Jerusalem, and took away the treasures of the house of the LORD, and the treasures of the king’s house; he took all: he carried away also the shields of gold which Solomon had made. Instead of which king Rehoboam made shields of brass, and committed [them] to the hands of the chief of the guard, that kept the entrance of the king’s house. And when the king entered into the house of the LORD, the guard came and fetched them, and brought them again into the guard chamber. And when he humbled himself, the wrath of the LORD turned from him, that he would not destroy [him] altogether: and also in Judah things went well. So king Rehoboam strengthened himself in Jerusalem, and reigned: for Rehoboam [was] one and forty years old when he began to reign, and he reigned seventeen years in Jerusalem, the city which the LORD had chosen out of all the tribes of Israel, to put his name there. And his mother’s name [was] Naamah an Ammonitess. And he did evil, because he prepared not his heart to seek the LORD. Now the acts of Rehoboam, first and last, [are] they not written in the book of Shemaiah the prophet, and of Iddo the seer concerning genealogies? And [there were] wars between Rehoboam and Jeroboam continually. And Rehoboam slept with his fathers, and was buried in the city of David: and Abijah his son reigned in his stead.' 
pipeline_tag: sentence-similarity library_name: sentence-transformers metrics: - cosine_accuracy@1 - cosine_accuracy@3 - cosine_accuracy@5 - cosine_accuracy@10 - cosine_precision@1 - cosine_precision@3 - cosine_precision@5 - cosine_precision@10 - cosine_recall@1 - cosine_recall@3 - cosine_recall@5 - cosine_recall@10 - cosine_ndcg@10 - cosine_mrr@10 - cosine_map@100 model-index: - name: SentenceTransformer based on Snowflake/snowflake-arctic-embed-m results: - task: type: information-retrieval name: Information Retrieval dataset: name: validation type: validation metrics: - type: cosine_accuracy@1 value: 0.647912885662432 name: Cosine Accuracy@1 - type: cosine_accuracy@3 value: 0.8330308529945554 name: Cosine Accuracy@3 - type: cosine_accuracy@5 value: 0.8729582577132486 name: Cosine Accuracy@5 - type: cosine_accuracy@10 value: 0.9237749546279492 name: Cosine Accuracy@10 - type: cosine_precision@1 value: 0.647912885662432 name: Cosine Precision@1 - type: cosine_precision@3 value: 0.2776769509981851 name: Cosine Precision@3 - type: cosine_precision@5 value: 0.17459165154264972 name: Cosine Precision@5 - type: cosine_precision@10 value: 0.0923774954627949 name: Cosine Precision@10 - type: cosine_recall@1 value: 0.017997580157289778 name: Cosine Recall@1 - type: cosine_recall@3 value: 0.02313974591651543 name: Cosine Recall@3 - type: cosine_recall@5 value: 0.024248840492034688 name: Cosine Recall@5 - type: cosine_recall@10 value: 0.02566041540633193 name: Cosine Recall@10 - type: cosine_ndcg@10 value: 0.17391225053060402 name: Cosine Ndcg@10 - type: cosine_mrr@10 value: 0.7468556448592748 name: Cosine Mrr@10 - type: cosine_map@100 value: 0.020838529961750816 name: Cosine Map@100 --- # SentenceTransformer based on Snowflake/snowflake-arctic-embed-m This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [Snowflake/snowflake-arctic-embed-m](https://huggingface.co/Snowflake/snowflake-arctic-embed-m). It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more. 
## Model Details ### Model Description - **Model Type:** Sentence Transformer - **Base model:** [Snowflake/snowflake-arctic-embed-m](https://huggingface.co/Snowflake/snowflake-arctic-embed-m) <!-- at revision fc74610d18462d218e312aa986ec5c8a75a98152 --> - **Maximum Sequence Length:** 512 tokens - **Output Dimensionality:** 768 dimensions - **Similarity Function:** Cosine Similarity <!-- - **Training Dataset:** Unknown --> <!-- - **Language:** Unknown --> <!-- - **License:** Unknown --> ### Model Sources - **Documentation:** [Sentence Transformers Documentation](https://sbert.net) - **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers) - **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers) ### Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True}) (2): Normalize() ) ``` ## Usage ### Direct Usage (Sentence Transformers) First install the Sentence Transformers library: ```bash pip install -U sentence-transformers ``` Then you can load this model and run inference. ```python from sentence_transformers import SentenceTransformer # Download from the 🤗 Hub model = SentenceTransformer("vijayarulmuthu/finetuned_arctic_ft-a85433c9-6284-4afb-8e87-e110823d565c") # Run inference sentences = [ 'Whom did David smite and subdue, taking Gath and her towns from their control?', 'Now after this it came to pass, that David smote the Philistines, and subdued them, and took Gath and her towns out of the hand of the Philistines. And he smote Moab; and the Moabites became David’s servants, [and] brought gifts. And David smote Hadarezer king of Zobah unto Hamath, as he went to stablish his dominion by the river Euphrates. And David took from him a thousand chariots, and seven thousand horsemen, and twenty thousand footmen: David also houghed all the chariot [horses], but reserved of them an hundred chariots. And when the Syrians of Damascus came to help Hadarezer king of Zobah, David slew of the Syrians two and twenty thousand men. Then David put [garrisons] in Syriadamascus; and the Syrians became David’s servants, [and] brought gifts. Thus the LORD preserved David whithersoever he went. And David took the shields of gold that were on the servants of Hadarezer, and brought them to Jerusalem. Likewise from Tibhath, and from Chun, cities of Hadarezer, brought David very much brass, wherewith Solomon made the brasen sea, and the pillars, and the vessels of brass.', 'So Shishak king of Egypt came up against Jerusalem, and took away the treasures of the house of the LORD, and the treasures of the king’s house; he took all: he carried away also the shields of gold which Solomon had made. Instead of which king Rehoboam made shields of brass, and committed [them] to the hands of the chief of the guard, that kept the entrance of the king’s house. And when the king entered into the house of the LORD, the guard came and fetched them, and brought them again into the guard chamber. And when he humbled himself, the wrath of the LORD turned from him, that he would not destroy [him] altogether: and also in Judah things went well. 
So king Rehoboam strengthened himself in Jerusalem, and reigned: for Rehoboam [was] one and forty years old when he began to reign, and he reigned seventeen years in Jerusalem, the city which the LORD had chosen out of all the tribes of Israel, to put his name there. And his mother’s name [was] Naamah an Ammonitess. And he did evil, because he prepared not his heart to seek the LORD. Now the acts of Rehoboam, first and last, [are] they not written in the book of Shemaiah the prophet, and of Iddo the seer concerning genealogies? And [there were] wars between Rehoboam and Jeroboam continually. And Rehoboam slept with his fathers, and was buried in the city of David: and Abijah his son reigned in his stead.', ] embeddings = model.encode(sentences) print(embeddings.shape) # [3, 768] # Get the similarity scores for the embeddings similarities = model.similarity(embeddings, embeddings) print(similarities.shape) # [3, 3] ``` <!-- ### Direct Usage (Transformers) <details><summary>Click to see the direct usage in Transformers</summary> </details> --> <!-- ### Downstream Usage (Sentence Transformers) You can finetune this model on your own dataset. <details><summary>Click to expand</summary> </details> --> <!-- ### Out-of-Scope Use *List how the model may foreseeably be misused and address what users ought not to do with the model.* --> ## Evaluation ### Metrics #### Information Retrieval * Dataset: `validation` * Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator) | Metric | Value | |:--------------------|:-----------| | cosine_accuracy@1 | 0.6479 | | cosine_accuracy@3 | 0.833 | | cosine_accuracy@5 | 0.873 | | cosine_accuracy@10 | 0.9238 | | cosine_precision@1 | 0.6479 | | cosine_precision@3 | 0.2777 | | cosine_precision@5 | 0.1746 | | cosine_precision@10 | 0.0924 | | cosine_recall@1 | 0.018 | | cosine_recall@3 | 0.0231 | | cosine_recall@5 | 0.0242 | | cosine_recall@10 | 0.0257 | | **cosine_ndcg@10** | **0.1739** | | cosine_mrr@10 | 0.7469 | | cosine_map@100 | 0.0208 | <!-- ## Bias, Risks and Limitations *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.* --> <!-- ### Recommendations *What are recommendations with respect to the foreseeable issues? 
For example, filtering explicit content.* --> ## Training Details ### Training Dataset #### Unnamed Dataset * Size: 6,612 training samples * Columns: <code>sentence_0</code> and <code>sentence_1</code> * Approximate statistics based on the first 1000 samples: | | sentence_0 | sentence_1 | |:--------|:----------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------| | type | string | string | | details | <ul><li>min: 7 tokens</li><li>mean: 17.56 tokens</li><li>max: 42 tokens</li></ul> | <ul><li>min: 13 tokens</li><li>mean: 250.95 tokens</li><li>max: 504 tokens</li></ul> | * Samples: | sentence_0 | sentence_1 | |:---------------------------------------------------------------------------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | <code>What was the reason given by Elijah the prophet for the LORD's punishment on Jehoram?</code> | <code>Then Jehoram went forth with his princes, and all his chariots with him: and he rose up by night, and smote the Edomites which compassed him in, and the captains of the chariots. So the Edomites revolted from under the hand of Judah unto this day. The same time [also] did Libnah revolt from under his hand; because he had forsaken the LORD God of his fathers. Moreover he made high places in the mountains of Judah, and caused the inhabitants of Jerusalem to commit fornication, and compelled Judah [thereto]. And there came a writing to him from Elijah the prophet, saying, Thus saith the LORD God of David thy father, Because thou hast not walked in the ways of Jehoshaphat thy father, nor in the ways of Asa king of Judah, But hast walked in the way of the kings of Israel, and hast made Judah and the inhabitants of Jerusalem to go a whoring, like to the whoredoms of the house of Ahab, and also hast slain thy brethren of thy father’s house, [which were] better than thyself: Behold, with a gre...</code> | | <code>What happened at the sixth hour until the ninth hour according to the passage?</code> | <code>And we indeed justly; for we receive the due reward of our deeds: but this man hath done nothing amiss. And he said unto Jesus, Lord, remember me when thou comest into thy kingdom. And Jesus said unto him, Verily I say unto thee, To day shalt thou be with me in paradise. And it was about the sixth hour, and there was a darkness over all the earth until the ninth hour. And the sun was darkened, and the veil of the temple was rent in the midst. 
And when Jesus had cried with a loud voice, he said, Father, into thy hands I commend my spirit: and having said thus, he gave up the ghost. Now when the centurion saw what was done, he glorified God, saying, Certainly this was a righteous man. And all the people that came together to that sight, beholding the things which were done, smote their breasts, and returned.</code> | | <code>Who is commanded by the Lord to set a watchman and declare what he sees?</code> | <code>The burden of the desert of the sea. As whirlwinds in the south pass through; [so] it cometh from the desert, from a terrible land. A grievous vision is declared unto me; the treacherous dealer dealeth treacherously, and the spoiler spoileth. Go up, O Elam: besiege, O Media; all the sighing thereof have I made to cease. Therefore are my loins filled with pain: pangs have taken hold upon me, as the pangs of a woman that travaileth: I was bowed down at the hearing [of it]; I was dismayed at the seeing [of it]. My heart panted, fearfulness affrighted me: the night of my pleasure hath he turned into fear unto me. Prepare the table, watch in the watchtower, eat, drink: arise, ye princes, [and] anoint the shield. For thus hath the Lord said unto me, Go, set a watchman, let him declare what he seeth. And he saw a chariot [with] a couple of horsemen, a chariot of asses, [and] a chariot of camels; and he hearkened diligently with much heed: And he cried, A lion: My lord, I stand continually upo...</code> | * Loss: [<code>MatryoshkaLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#matryoshkaloss) with these parameters: ```json { "loss": "MultipleNegativesRankingLoss", "matryoshka_dims": [ 768, 512, 256, 128, 64 ], "matryoshka_weights": [ 1, 1, 1, 1, 1 ], "n_dims_per_step": -1 } ``` ### Training Hyperparameters #### Non-Default Hyperparameters - `eval_strategy`: steps - `per_device_train_batch_size`: 10 - `per_device_eval_batch_size`: 10 - `num_train_epochs`: 10 - `multi_dataset_batch_sampler`: round_robin #### All Hyperparameters <details><summary>Click to expand</summary> - `overwrite_output_dir`: False - `do_predict`: False - `eval_strategy`: steps - `prediction_loss_only`: True - `per_device_train_batch_size`: 10 - `per_device_eval_batch_size`: 10 - `per_gpu_train_batch_size`: None - `per_gpu_eval_batch_size`: None - `gradient_accumulation_steps`: 1 - `eval_accumulation_steps`: None - `torch_empty_cache_steps`: None - `learning_rate`: 5e-05 - `weight_decay`: 0.0 - `adam_beta1`: 0.9 - `adam_beta2`: 0.999 - `adam_epsilon`: 1e-08 - `max_grad_norm`: 1 - `num_train_epochs`: 10 - `max_steps`: -1 - `lr_scheduler_type`: linear - `lr_scheduler_kwargs`: {} - `warmup_ratio`: 0.0 - `warmup_steps`: 0 - `log_level`: passive - `log_level_replica`: warning - `log_on_each_node`: True - `logging_nan_inf_filter`: True - `save_safetensors`: True - `save_on_each_node`: False - `save_only_model`: False - `restore_callback_states_from_checkpoint`: False - `no_cuda`: False - `use_cpu`: False - `use_mps_device`: False - `seed`: 42 - `data_seed`: None - `jit_mode_eval`: False - `use_ipex`: False - `bf16`: False - `fp16`: False - `fp16_opt_level`: O1 - `half_precision_backend`: auto - `bf16_full_eval`: False - `fp16_full_eval`: False - `tf32`: None - `local_rank`: 0 - `ddp_backend`: None - `tpu_num_cores`: None - `tpu_metrics_debug`: False - `debug`: [] - `dataloader_drop_last`: False - `dataloader_num_workers`: 0 - `dataloader_prefetch_factor`: None - `past_index`: -1 - `disable_tqdm`: False - `remove_unused_columns`: 
True - `label_names`: None - `load_best_model_at_end`: False - `ignore_data_skip`: False - `fsdp`: [] - `fsdp_min_num_params`: 0 - `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False} - `fsdp_transformer_layer_cls_to_wrap`: None - `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None} - `deepspeed`: None - `label_smoothing_factor`: 0.0 - `optim`: adamw_torch - `optim_args`: None - `adafactor`: False - `group_by_length`: False - `length_column_name`: length - `ddp_find_unused_parameters`: None - `ddp_bucket_cap_mb`: None - `ddp_broadcast_buffers`: False - `dataloader_pin_memory`: True - `dataloader_persistent_workers`: False - `skip_memory_metrics`: True - `use_legacy_prediction_loop`: False - `push_to_hub`: False - `resume_from_checkpoint`: None - `hub_model_id`: None - `hub_strategy`: every_save - `hub_private_repo`: None - `hub_always_push`: False - `gradient_checkpointing`: False - `gradient_checkpointing_kwargs`: None - `include_inputs_for_metrics`: False - `include_for_metrics`: [] - `eval_do_concat_batches`: True - `fp16_backend`: auto - `push_to_hub_model_id`: None - `push_to_hub_organization`: None - `mp_parameters`: - `auto_find_batch_size`: False - `full_determinism`: False - `torchdynamo`: None - `ray_scope`: last - `ddp_timeout`: 1800 - `torch_compile`: False - `torch_compile_backend`: None - `torch_compile_mode`: None - `include_tokens_per_second`: False - `include_num_input_tokens_seen`: False - `neftune_noise_alpha`: None - `optim_target_modules`: None - `batch_eval_metrics`: False - `eval_on_start`: False - `use_liger_kernel`: False - `eval_use_gather_object`: False - `average_tokens_across_devices`: False - `prompts`: None - `batch_sampler`: batch_sampler - `multi_dataset_batch_sampler`: round_robin </details> ### Training Logs <details><summary>Click to expand</summary> | Epoch | Step | Training Loss | validation_cosine_ndcg@10 | |:------:|:----:|:-------------:|:-------------------------:| | 0.0755 | 50 | - | 0.0982 | | 0.1511 | 100 | - | 0.1408 | | 0.2266 | 150 | - | 0.1546 | | 0.3021 | 200 | - | 0.1612 | | 0.3776 | 250 | - | 0.1655 | | 0.4532 | 300 | - | 0.1663 | | 0.5287 | 350 | - | 0.1710 | | 0.6042 | 400 | - | 0.1704 | | 0.6798 | 450 | - | 0.1713 | | 0.7553 | 500 | 2.378 | 0.1702 | | 0.8308 | 550 | - | 0.1727 | | 0.9063 | 600 | - | 0.1734 | | 0.9819 | 650 | - | 0.1741 | | 1.0 | 662 | - | 0.1745 | | 1.0574 | 700 | - | 0.1752 | | 1.1329 | 750 | - | 0.1761 | | 1.2085 | 800 | - | 0.1750 | | 1.2840 | 850 | - | 0.1719 | | 1.3595 | 900 | - | 0.1730 | | 1.4350 | 950 | - | 0.1760 | | 1.5106 | 1000 | 0.7402 | 0.1776 | | 1.5861 | 1050 | - | 0.1757 | | 1.6616 | 1100 | - | 0.1774 | | 1.7372 | 1150 | - | 0.1757 | | 1.8127 | 1200 | - | 0.1749 | | 1.8882 | 1250 | - | 0.1745 | | 1.9637 | 1300 | - | 0.1758 | | 2.0 | 1324 | - | 0.1776 | | 2.0393 | 1350 | - | 0.1772 | | 2.1148 | 1400 | - | 0.1751 | | 2.1903 | 1450 | - | 0.1757 | | 2.2659 | 1500 | 0.467 | 0.1742 | | 2.3414 | 1550 | - | 0.1748 | | 2.4169 | 1600 | - | 0.1738 | | 2.4924 | 1650 | - | 0.1749 | | 2.5680 | 1700 | - | 0.1772 | | 2.6435 | 1750 | - | 0.1772 | | 2.7190 | 1800 | - | 0.1772 | | 2.7946 | 1850 | - | 0.1774 | | 2.8701 | 1900 | - | 0.1770 | | 2.9456 | 1950 | - | 0.1757 | | 3.0 | 1986 | - | 0.1771 | | 3.0211 | 2000 | 0.2653 | 0.1762 | | 3.0967 | 2050 | - | 0.1745 | | 3.1722 | 2100 | - | 0.1748 | | 3.2477 | 2150 | - | 0.1749 | | 3.3233 | 2200 | 
- | 0.1766 | | 3.3988 | 2250 | - | 0.1746 | | 3.4743 | 2300 | - | 0.1749 | | 3.5498 | 2350 | - | 0.1766 | | 3.6254 | 2400 | - | 0.1752 | | 3.7009 | 2450 | - | 0.1749 | | 3.7764 | 2500 | 0.1809 | 0.1746 | | 3.8520 | 2550 | - | 0.1751 | | 3.9275 | 2600 | - | 0.1755 | | 4.0 | 2648 | - | 0.1744 | | 4.0030 | 2650 | - | 0.1747 | | 4.0785 | 2700 | - | 0.1747 | | 4.1541 | 2750 | - | 0.1766 | | 4.2296 | 2800 | - | 0.1761 | | 4.3051 | 2850 | - | 0.1745 | | 4.3807 | 2900 | - | 0.1748 | | 4.4562 | 2950 | - | 0.1753 | | 4.5317 | 3000 | 0.1368 | 0.1741 | | 4.6073 | 3050 | - | 0.1718 | | 4.6828 | 3100 | - | 0.1730 | | 4.7583 | 3150 | - | 0.1735 | | 4.8338 | 3200 | - | 0.1753 | | 4.9094 | 3250 | - | 0.1744 | | 4.9849 | 3300 | - | 0.1752 | | 5.0 | 3310 | - | 0.1758 | | 5.0604 | 3350 | - | 0.1771 | | 5.1360 | 3400 | - | 0.1758 | | 5.2115 | 3450 | - | 0.1741 | | 5.2870 | 3500 | 0.1178 | 0.1741 | | 5.3625 | 3550 | - | 0.1746 | | 5.4381 | 3600 | - | 0.1744 | | 5.5136 | 3650 | - | 0.1740 | | 5.5891 | 3700 | - | 0.1743 | | 5.6647 | 3750 | - | 0.1744 | | 5.7402 | 3800 | - | 0.1733 | | 5.8157 | 3850 | - | 0.1747 | | 5.8912 | 3900 | - | 0.1755 | | 5.9668 | 3950 | - | 0.1734 | | 6.0 | 3972 | - | 0.1740 | | 6.0423 | 4000 | 0.0878 | 0.1745 | | 6.1178 | 4050 | - | 0.1734 | | 6.1934 | 4100 | - | 0.1725 | | 6.2689 | 4150 | - | 0.1748 | | 6.3444 | 4200 | - | 0.1743 | | 6.4199 | 4250 | - | 0.1742 | | 6.4955 | 4300 | - | 0.1738 | | 6.5710 | 4350 | - | 0.1756 | | 6.6465 | 4400 | - | 0.1746 | | 6.7221 | 4450 | - | 0.1754 | | 6.7976 | 4500 | 0.0697 | 0.1756 | | 6.8731 | 4550 | - | 0.1755 | | 6.9486 | 4600 | - | 0.1755 | | 7.0 | 4634 | - | 0.1755 | | 7.0242 | 4650 | - | 0.1752 | | 7.0997 | 4700 | - | 0.1766 | | 7.1752 | 4750 | - | 0.1745 | | 7.2508 | 4800 | - | 0.1751 | | 7.3263 | 4850 | - | 0.1746 | | 7.4018 | 4900 | - | 0.1747 | | 7.4773 | 4950 | - | 0.1742 | | 7.5529 | 5000 | 0.0643 | 0.1743 | | 7.6284 | 5050 | - | 0.1736 | | 7.7039 | 5100 | - | 0.1739 | | 7.7795 | 5150 | - | 0.1737 | | 7.8550 | 5200 | - | 0.1736 | | 7.9305 | 5250 | - | 0.1744 | | 8.0 | 5296 | - | 0.1750 | | 8.0060 | 5300 | - | 0.1751 | | 8.0816 | 5350 | - | 0.1742 | | 8.1571 | 5400 | - | 0.1739 | | 8.2326 | 5450 | - | 0.1745 | | 8.3082 | 5500 | 0.0521 | 0.1745 | | 8.3837 | 5550 | - | 0.1746 | | 8.4592 | 5600 | - | 0.1743 | | 8.5347 | 5650 | - | 0.1744 | | 8.6103 | 5700 | - | 0.1750 | | 8.6858 | 5750 | - | 0.1749 | | 8.7613 | 5800 | - | 0.1748 | | 8.8369 | 5850 | - | 0.1747 | | 8.9124 | 5900 | - | 0.1747 | | 8.9879 | 5950 | - | 0.1746 | | 9.0 | 5958 | - | 0.1746 | | 9.0634 | 6000 | 0.044 | 0.1745 | | 9.1390 | 6050 | - | 0.1742 | | 9.2145 | 6100 | - | 0.1740 | | 9.2900 | 6150 | - | 0.1742 | | 9.3656 | 6200 | - | 0.1744 | | 9.4411 | 6250 | - | 0.1739 | | 9.5166 | 6300 | - | 0.1737 | | 9.5921 | 6350 | - | 0.1740 | | 9.6677 | 6400 | - | 0.1738 | | 9.7432 | 6450 | - | 0.1739 | | 9.8187 | 6500 | 0.043 | 0.1738 | | 9.8943 | 6550 | - | 0.1738 | | 9.9698 | 6600 | - | 0.1739 | | 10.0 | 6620 | - | 0.1739 | </details> ### Framework Versions - Python: 3.13.3 - Sentence Transformers: 4.1.0 - Transformers: 4.52.3 - PyTorch: 2.7.0 - Accelerate: 1.7.0 - Datasets: 3.6.0 - Tokenizers: 0.21.1 ## Citation ### BibTeX #### Sentence Transformers ```bibtex @inproceedings{reimers-2019-sentence-bert, title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks", author = "Reimers, Nils and Gurevych, Iryna", booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing", month = "11", year = "2019", publisher = "Association for 
Computational Linguistics", url = "https://arxiv.org/abs/1908.10084", } ``` #### MatryoshkaLoss ```bibtex @misc{kusupati2024matryoshka, title={Matryoshka Representation Learning}, author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi}, year={2024}, eprint={2205.13147}, archivePrefix={arXiv}, primaryClass={cs.LG} } ``` #### MultipleNegativesRankingLoss ```bibtex @misc{henderson2017efficient, title={Efficient Natural Language Response Suggestion for Smart Reply}, author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil}, year={2017}, eprint={1705.00652}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` <!-- ## Glossary *Clearly define terms in order to be accessible across audiences.* --> <!-- ## Model Card Authors *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.* --> <!-- ## Model Card Contact *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.* -->
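As an editorial aside on the MatryoshkaLoss configuration listed under Training Details above (MultipleNegativesRankingLoss wrapped at dimensions 768/512/256/128/64), the sketch below shows how such a setup is typically wired together with the Sentence Transformers trainer API. The base checkpoint and the toy training pairs are placeholders, not the actual model or data behind this card.

```python
# Minimal sketch of a Matryoshka training setup like the one described above.
# "all-mpnet-base-v2" and the toy pairs are placeholders, not this card's data.
from datasets import Dataset
from sentence_transformers import (
    SentenceTransformer,
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
)
from sentence_transformers.losses import MatryoshkaLoss, MultipleNegativesRankingLoss

model = SentenceTransformer("sentence-transformers/all-mpnet-base-v2")  # 768-dim placeholder base

# (question, passage) pairs analogous to the sentence_0 / sentence_1 columns above
train_dataset = Dataset.from_dict({
    "sentence_0": [
        "Who is commanded by the Lord to set a watchman?",
        "What happened at the sixth hour according to the passage?",
    ],
    "sentence_1": [
        "For thus hath the Lord said unto me, Go, set a watchman, let him declare what he seeth.",
        "And it was about the sixth hour, and there was a darkness over all the earth until the ninth hour.",
    ],
})

# In-batch-negatives loss, wrapped so the first 768/512/256/128/64 dims all stay usable
inner_loss = MultipleNegativesRankingLoss(model)
loss = MatryoshkaLoss(model, inner_loss, matryoshka_dims=[768, 512, 256, 128, 64])

args = SentenceTransformerTrainingArguments(
    output_dir="matryoshka-retriever",
    num_train_epochs=10,
    per_device_train_batch_size=10,
)
SentenceTransformerTrainer(model=model, args=args, train_dataset=train_dataset, loss=loss).train()
```

Because the loss supervises truncated prefixes of the embedding, a model trained this way can later be loaded with a smaller `truncate_dim` (for example 256) when memory matters more than the last bit of retrieval quality.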
jonne90/Naiomi_Scott
jonne90
2025-05-29T22:21:03Z
0
0
diffusers
[ "diffusers", "flux", "lora", "replicate", "text-to-image", "en", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
2025-05-29T21:10:53Z
--- license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md language: - en tags: - flux - diffusers - lora - replicate base_model: "black-forest-labs/FLUX.1-dev" pipeline_tag: text-to-image # widget: # - text: >- # prompt # output: # url: https://... instance_prompt: Naiomi_Scott --- # Naiomi_Scott <Gallery /> ## About this LoRA This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI. It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train ## Trigger words You should use `Naiomi_Scott` to trigger the image generation. ## Run this LoRA with an API using Replicate ```py import replicate input = { "prompt": "Naiomi_Scott", "lora_weights": "https://huggingface.co/jonne90/Naiomi_Scott/resolve/main/lora.safetensors" } output = replicate.run( "black-forest-labs/flux-dev-lora", input=input ) for index, item in enumerate(output): with open(f"output_{index}.webp", "wb") as file: file.write(item.read()) ``` ## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers) ```py from diffusers import AutoPipelineForText2Image import torch pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda') pipeline.load_lora_weights('jonne90/Naiomi_Scott', weight_name='lora.safetensors') image = pipeline('Naiomi_Scott').images[0] ``` For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters) ## Training details - Steps: 3750 - Learning rate: 0.0004 - LoRA rank: 16 ## Contribute your own examples You can use the [community tab](https://huggingface.co/jonne90/Naiomi_Scott/discussions) to add images that show off what you’ve made with this LoRA.
phospho-app/gc1724-gr00t-square-bin-12dv8
phospho-app
2025-05-29T22:19:49Z
0
0
null
[ "safetensors", "gr00t_n1", "phosphobot", "gr00t", "region:us" ]
null
2025-05-29T21:34:40Z
--- tags: - phosphobot - gr00t task_categories: - robotics --- # gr00t Model - phospho Training Pipeline ## This model was trained using **phospho**. Training was successful; try it out on your robot! ## Training parameters: - **Dataset**: [gc1724/square-bin](https://huggingface.co/datasets/gc1724/square-bin) - **Wandb run URL**: None - **Epochs**: 10 - **Batch size**: 49 - **Training steps**: None 📖 **Get Started**: [docs.phospho.ai](https://docs.phospho.ai?utm_source=huggingface_readme) 🤖 **Get your robot**: [robots.phospho.ai](https://robots.phospho.ai?utm_source=huggingface_readme)
devfed/orpheus-3b-0.1-ft-ro-lora
devfed
2025-05-29T22:19:14Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "llama", "trl", "en", "base_model:unsloth/orpheus-3b-0.1-ft", "base_model:finetune:unsloth/orpheus-3b-0.1-ft", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-05-29T15:08:30Z
--- base_model: unsloth/orpheus-3b-0.1-ft tags: - text-generation-inference - transformers - unsloth - llama - trl license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** devfed - **License:** apache-2.0 - **Finetuned from model:** unsloth/orpheus-3b-0.1-ft This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
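For readers wondering what the Unsloth note above refers to in practice, here is a hedged sketch of how a LoRA fine-tune of the stated base checkpoint is typically set up; the sequence length, LoRA rank, and target modules are illustrative assumptions, not the settings used for this upload.

```python
# Illustrative Unsloth LoRA setup; hyperparameters below are assumptions,
# not the values used to produce devfed/orpheus-3b-0.1-ft-ro-lora.
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/orpheus-3b-0.1-ft",  # the stated base model
    max_seq_length=2048,
    load_in_4bit=True,
)

# Attach LoRA adapters; rank and target modules are placeholders
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj", "gate_proj", "up_proj", "down_proj"],
)
# Training would then proceed with TRL's SFTTrainer on the fine-tuning corpus.
```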
panas1989/bloom-560m-8bit
panas1989
2025-05-29T22:18:19Z
0
0
transformers
[ "transformers", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-05-29T22:18:06Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
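The quick-start section above is still a placeholder. Going only by the repository name (an 8-bit export of bloom-560m) and the transformers tag, a loading sketch might look like the following; treat the quantization assumption as unverified until the card is filled in.

```python
# Hedged sketch: assumes this repo contains an 8-bit (bitsandbytes) export of
# bigscience/bloom-560m, which the card itself does not confirm.
# Requires `accelerate` and `bitsandbytes` to be installed.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "panas1989/bloom-560m-8bit"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id, device_map="auto")

inputs = tokenizer("Hello, my name is", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```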
Antigma/Qwen3-1.7B-GGUF
Antigma
2025-05-29T22:14:33Z
43
0
transformers
[ "transformers", "gguf", "llama-cpp", "gguf-my-repo", "text-generation", "base_model:Qwen/Qwen3-1.7B", "base_model:quantized:Qwen/Qwen3-1.7B", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
text-generation
2025-04-29T02:25:48Z
--- base_model: Qwen/Qwen3-1.7B library_name: transformers license: apache-2.0 license_link: https://huggingface.co/Qwen/Qwen3-1.7B/blob/main/LICENSE pipeline_tag: text-generation tags: - llama-cpp - gguf-my-repo --- *Produced by [Antigma Labs](https://antigma.ai), [Antigma Quantize Space](https://huggingface.co/spaces/Antigma/quantize-my-repo)* *Follow Antigma Labs on X [https://x.com/antigma_labs](https://x.com/antigma_labs)* *Antigma's GitHub Homepage [https://github.com/AntigmaLabs](https://github.com/AntigmaLabs)* ## llama.cpp quantization Using <a href="https://github.com/ggml-org/llama.cpp">llama.cpp</a> release <a href="https://github.com/ggml-org/llama.cpp/releases/tag/b5215">b5215</a> for quantization. Original model: https://huggingface.co/Qwen/Qwen3-1.7B Run them directly with [llama.cpp](https://github.com/ggml-org/llama.cpp), or any other llama.cpp-based project. ## Prompt format ``` <|begin▁of▁sentence|>{system_prompt}<|User|>{prompt}<|Assistant|><|end▁of▁sentence|><|Assistant|> ``` ## Download a file (not the whole branch) from below: | Filename | Quant type | File Size | Split | | -------- | ---------- | --------- | ----- | | [qwen3-1.7b-q4_k_m.gguf](https://huggingface.co/Antigma/Qwen3-1.7B-GGUF/blob/main/qwen3-1.7b-q4_k_m.gguf)|Q4_K_M|1.19 GB|False| |[qwen3-1.7b-q4_0.gguf](https://huggingface.co/Antigma/Qwen3-1.7B-GGUF/blob/main/qwen3-1.7b-q4_0.gguf)|Q4_0|1.15 GB|False| |[qwen3-1.7b-q4_k_s.gguf](https://huggingface.co/Antigma/Qwen3-1.7B-GGUF/blob/main/qwen3-1.7b-q4_k_s.gguf)|Q4_K_S|1.15 GB|False| ## Downloading using huggingface-cli <details> <summary>Click to view download instructions</summary> First, make sure you have huggingface-cli installed: ``` pip install -U "huggingface_hub[cli]" ``` Then, you can target the specific file you want (huggingface-cli takes the repo id, not the full URL): ``` huggingface-cli download Antigma/Qwen3-1.7B-GGUF --include "qwen3-1.7b-q4_k_m.gguf" --local-dir ./ ``` If the model is bigger than 50GB, it will have been split into multiple files. In order to download them all to a local folder, run: ``` huggingface-cli download Antigma/Qwen3-1.7B-GGUF --include "qwen3-1.7b-q4_k_m.gguf/*" --local-dir ./ ``` You can either specify a new local-dir or download them all in place (./). </details>
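Beyond the llama.cpp CLI, the quants listed above can also be pulled and run from Python. The sketch below uses `huggingface_hub` plus `llama-cpp-python` (a llama.cpp binding not mentioned in this card); the file choice and context size are illustrative.

```python
# Hedged sketch: fetch one of the quants listed above and run it via
# llama-cpp-python. Quant file and parameters are illustrative choices.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

gguf_path = hf_hub_download(
    repo_id="Antigma/Qwen3-1.7B-GGUF",
    filename="qwen3-1.7b-q4_k_m.gguf",
)

llm = Llama(model_path=gguf_path, n_ctx=4096)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Summarize what a GGUF file is in one sentence."}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```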
Antigma/Qwen3-14B-GGUF
Antigma
2025-05-29T22:12:54Z
86
0
transformers
[ "transformers", "gguf", "llama-cpp", "gguf-my-repo", "text-generation", "base_model:Qwen/Qwen3-14B", "base_model:quantized:Qwen/Qwen3-14B", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
text-generation
2025-04-29T02:58:41Z
--- base_model: Qwen/Qwen3-14B library_name: transformers license: apache-2.0 license_link: https://huggingface.co/Qwen/Qwen3-14B/blob/main/LICENSE pipeline_tag: text-generation tags: - llama-cpp - gguf-my-repo --- *Produced by [Antigma Labs](https://antigma.ai), [Antigma Quantize Space](https://huggingface.co/spaces/Antigma/quantize-my-repo)* *Follow Antigma Labs on X [https://x.com/antigma_labs](https://x.com/antigma_labs)* *Antigma's GitHub Homepage [https://github.com/AntigmaLabs](https://github.com/AntigmaLabs)* ## llama.cpp quantization Using <a href="https://github.com/ggml-org/llama.cpp">llama.cpp</a> release <a href="https://github.com/ggml-org/llama.cpp/releases/tag/b5215">b5215</a> for quantization. Original model: https://huggingface.co/Qwen/Qwen3-14B Run them directly with [llama.cpp](https://github.com/ggml-org/llama.cpp), or any other llama.cpp-based project. ## Prompt format ``` <|begin▁of▁sentence|>{system_prompt}<|User|>{prompt}<|Assistant|><|end▁of▁sentence|><|Assistant|> ``` ## Download a file (not the whole branch) from below: | Filename | Quant type | File Size | Split | | -------- | ---------- | --------- | ----- | | [qwen3-14b-q4_k_m.gguf](https://huggingface.co/Antigma/Qwen3-14B-GGUF/blob/main/qwen3-14b-q4_k_m.gguf)|Q4_K_M|8.38 GB|False| |[qwen3-14b-q4_0.gguf](https://huggingface.co/Antigma/Qwen3-14B-GGUF/blob/main/qwen3-14b-q4_0.gguf)|Q4_0|7.93 GB|False| |[qwen3-14b-q4_k_s.gguf](https://huggingface.co/Antigma/Qwen3-14B-GGUF/blob/main/qwen3-14b-q4_k_s.gguf)|Q4_K_S|7.98 GB|False| ## Downloading using huggingface-cli <details> <summary>Click to view download instructions</summary> First, make sure you have huggingface-cli installed: ``` pip install -U "huggingface_hub[cli]" ``` Then, you can target the specific file you want (huggingface-cli takes the repo id, not the full URL): ``` huggingface-cli download Antigma/Qwen3-14B-GGUF --include "qwen3-14b-q4_k_m.gguf" --local-dir ./ ``` If the model is bigger than 50GB, it will have been split into multiple files. In order to download them all to a local folder, run: ``` huggingface-cli download Antigma/Qwen3-14B-GGUF --include "qwen3-14b-q4_k_m.gguf/*" --local-dir ./ ``` You can either specify a new local-dir or download them all in place (./). </details>
mohsin-riad/upscaler-ultra
mohsin-riad
2025-05-29T22:12:46Z
0
0
pytorch
[ "pytorch", "mohsin-riad", "image-processing", "super-resolution", "upscaling", "real-esrgan", "image-to-image", "en", "dataset:DIV2K", "dataset:Flickr2K", "license:apache-2.0", "region:us" ]
image-to-image
2025-05-29T20:08:28Z
--- language: en tags: - mohsin-riad - image-processing - super-resolution - upscaling - real-esrgan license: apache-2.0 base_model: xinntao/realesrgan-x4plus datasets: - DIV2K - Flickr2K library_name: pytorch pipeline_tag: image-to-image --- # Upscaler-Ultra ![](https://replicate.delivery/pbxt/N5xUyx5jJ9DOFRm1dQaKbPM3CBaovTL2V04xwCPBhsQmMORp/Screenshot%202025-05-30%20at%203.17.55%E2%80%AFAM.png) ## Model Description Upscaler-Ultra is a high-performance image upscaling model built upon RealESRGAN architecture. This model is designed to enhance image resolution while maintaining high quality and preserving fine details. The model specializes in upscaling low-resolution images to higher resolutions with minimal artifacts and maximum clarity, leveraging the proven effectiveness of Real-ESRGAN for practical image restoration tasks. ### Model Architecture This model is based on RealESRGAN (Real-Enhanced Super-Resolution Generative Adversarial Networks), which utilizes: - Enhanced ESRGAN architecture optimized for real-world image degradation - Adversarial training with improved discriminator networks - Perceptual loss functions for better visual quality - Specialized training techniques for handling complex real-world artifacts ## Intended Uses & Limitations ### Intended Uses - Image upscaling and enhancement - Photo restoration and quality improvement - Digital art enhancement - Low-resolution image improvement - Professional photography post-processing - Real-world image super-resolution tasks ### Limitations - Performance may vary depending on input image quality and degradation type - Very low-resolution inputs might not achieve optimal results - Processing time increases with input image size - May not preserve extremely fine details in heavily compressed images - Best suited for natural images rather than synthetic graphics ### Base Model Built upon [RealESRGAN](https://github.com/xinntao/Real-ESRGAN), specifically the RealESRGAN-x4plus model, with additional fine-tuning and optimizations. ### API Usage The model is available through Replicate API: ```python import replicate output = replicate.run( "mohsin-riad/upscaler-ultra", input={"image": "path_to_your_image.jpg"} ) ``` Replicate: [mohsin-riad/upscaler-ultra](https://replicate.com/mohsin-riad/upscaler-ultra) ## Citation If you use this model in your research, please cite: ```bibtex @misc{upscaler-ultra, author = {Mohsin Riad}, title = {Upscaler-Ultra: High-Quality Image Upscaling Model Based on RealESRGAN}, year = {2025}, publisher = {Hugging Face}, journal = {Hugging Face Hub}, howpublished = {\url{https://huggingface.co/mohsin-riad/upscaler-ultra}} } ``` Please also cite the original RealESRGAN work: ```bibtex @InProceedings{wang2021realesrgan, author = {Xintao Wang and Liangbin Xie and Chao Dong and Ying Shan}, title = {Real-ESRGAN: Training Real-World Blind Super-Resolution with Pure Synthetic Data}, booktitle = {International Conference on Computer Vision Workshops (ICCVW)}, date = {2021} } ``` ## Additional Information For questions and feedback, please contact: - GitHub: [mohsin-riad](http://github.com/mohsin-riad) - Model Repository: [upscaler-ultra](http://github.com/mohsin-riad/upscaler-ultra) ### License This model is released under the Apache License 2.0. ### Acknowledgments - Special thanks to the RealESRGAN team for the foundational architecture - Thanks to the open-source community and all contributors who have helped in the development of this model - Built upon the excellent work of Xintao Wang et al. on Real-ESRGAN
Antigma/Qwen3-30B-A3B-GGUF
Antigma
2025-05-29T22:12:18Z
90
0
transformers
[ "transformers", "gguf", "llama-cpp", "gguf-my-repo", "text-generation", "base_model:Qwen/Qwen3-30B-A3B", "base_model:quantized:Qwen/Qwen3-30B-A3B", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
text-generation
2025-04-29T03:29:39Z
--- base_model: Qwen/Qwen3-30B-A3B library_name: transformers license: apache-2.0 license_link: https://huggingface.co/Qwen/Qwen3-30B-A3B/blob/main/LICENSE pipeline_tag: text-generation tags: - llama-cpp - gguf-my-repo --- *Produced by [Antigma Labs](https://antigma.ai), [Antigma Quantize Space](https://huggingface.co/spaces/Antigma/quantize-my-repo)* *Follow Antigma Labs on X [https://x.com/antigma_labs](https://x.com/antigma_labs)* *Antigma's GitHub Homepage [https://github.com/AntigmaLabs](https://github.com/AntigmaLabs)* ## llama.cpp quantization Using <a href="https://github.com/ggml-org/llama.cpp">llama.cpp</a> release <a href="https://github.com/ggml-org/llama.cpp/releases/tag/b5215">b5215</a> for quantization. Original model: https://huggingface.co/Qwen/Qwen3-30B-A3B Run them directly with [llama.cpp](https://github.com/ggml-org/llama.cpp), or any other llama.cpp-based project. ## Prompt format ``` <|begin▁of▁sentence|>{system_prompt}<|User|>{prompt}<|Assistant|><|end▁of▁sentence|><|Assistant|> ``` ## Download a file (not the whole branch) from below: | Filename | Quant type | File Size | Split | | -------- | ---------- | --------- | ----- | | [qwen3-30b-a3b-q4_k_m.gguf](https://huggingface.co/Antigma/Qwen3-30B-A3B-GGUF/blob/main/qwen3-30b-a3b-q4_k_m.gguf)|Q4_K_M|17.28 GB|False| |[qwen3-30b-a3b-q4_0.gguf](https://huggingface.co/Antigma/Qwen3-30B-A3B-GGUF/blob/main/qwen3-30b-a3b-q4_0.gguf)|Q4_0|16.12 GB|False| |[qwen3-30b-a3b-q4_k_s.gguf](https://huggingface.co/Antigma/Qwen3-30B-A3B-GGUF/blob/main/qwen3-30b-a3b-q4_k_s.gguf)|Q4_K_S|16.26 GB|False| ## Downloading using huggingface-cli <details> <summary>Click to view download instructions</summary> First, make sure you have huggingface-cli installed: ``` pip install -U "huggingface_hub[cli]" ``` Then, you can target the specific file you want (huggingface-cli takes the repo id, not the full URL): ``` huggingface-cli download Antigma/Qwen3-30B-A3B-GGUF --include "qwen3-30b-a3b-q4_k_m.gguf" --local-dir ./ ``` If the model is bigger than 50GB, it will have been split into multiple files. In order to download them all to a local folder, run: ``` huggingface-cli download Antigma/Qwen3-30B-A3B-GGUF --include "qwen3-30b-a3b-q4_k_m.gguf/*" --local-dir ./ ``` You can either specify a new local-dir or download them all in place (./). </details>
vertings6/9829da55-a463-4f27-9122-df711b65f5bc
vertings6
2025-05-29T22:11:55Z
0
0
transformers
[ "transformers", "pytorch", "tensorboard", "safetensors", "llama", "text-generation", "generated_from_trainer", "axolotl", "dpo", "trl", "conversational", "arxiv:2305.18290", "base_model:unsloth/SmolLM2-1.7B", "base_model:quantized:unsloth/SmolLM2-1.7B", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "4-bit", "bitsandbytes", "region:us" ]
text-generation
2025-05-29T21:33:02Z
--- base_model: unsloth/SmolLM2-1.7B library_name: transformers model_name: 9829da55-a463-4f27-9122-df711b65f5bc tags: - generated_from_trainer - axolotl - dpo - trl licence: license --- # Model Card for 9829da55-a463-4f27-9122-df711b65f5bc This model is a fine-tuned version of [unsloth/SmolLM2-1.7B](https://huggingface.co/unsloth/SmolLM2-1.7B). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="vertings6/9829da55-a463-4f27-9122-df711b65f5bc", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/dedok-yo/s56-7/runs/ms4k44md) This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290). ### Framework versions - TRL: 0.12.0.dev0 - Transformers: 4.46.0 - Pytorch: 2.5.0+cu124 - Datasets: 3.0.1 - Tokenizers: 0.20.1 ## Citations Cite DPO as: ```bibtex @inproceedings{rafailov2023direct, title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}}, author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn}, year = 2023, booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023}, url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html}, editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
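For context on the training procedure named above, here is a minimal DPO sketch with TRL. The preference triples and hyperparameters are placeholders; the actual dataset and settings for this run are not documented in the card.

```python
# Minimal DPO sketch with TRL (assumes TRL >= 0.12, as listed above).
# The preference data and hyperparameters are placeholders, not this run's values.
from datasets import Dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

base = "unsloth/SmolLM2-1.7B"
model = AutoModelForCausalLM.from_pretrained(base)
tokenizer = AutoTokenizer.from_pretrained(base)

# DPO consumes prompt / chosen / rejected triples
train_dataset = Dataset.from_dict({
    "prompt": ["If you could visit the past or the future once, which would you pick?"],
    "chosen": ["I'd pick the future, to see how today's decisions actually play out."],
    "rejected": ["idk"],
})

args = DPOConfig(output_dir="smollm2-dpo-sketch", per_device_train_batch_size=1, num_train_epochs=1)
DPOTrainer(model=model, args=args, train_dataset=train_dataset, processing_class=tokenizer).train()
```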
sergioalves/0a76c4a1-fb81-420d-a583-3b7614bde764
sergioalves
2025-05-29T22:11:48Z
0
0
transformers
[ "transformers", "pytorch", "tensorboard", "safetensors", "llama", "text-generation", "generated_from_trainer", "axolotl", "dpo", "trl", "conversational", "arxiv:2305.18290", "base_model:unsloth/SmolLM2-1.7B", "base_model:quantized:unsloth/SmolLM2-1.7B", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "4-bit", "bitsandbytes", "region:us" ]
text-generation
2025-05-29T21:33:13Z
--- base_model: unsloth/SmolLM2-1.7B library_name: transformers model_name: 0a76c4a1-fb81-420d-a583-3b7614bde764 tags: - generated_from_trainer - axolotl - dpo - trl licence: license --- # Model Card for 0a76c4a1-fb81-420d-a583-3b7614bde764 This model is a fine-tuned version of [unsloth/SmolLM2-1.7B](https://huggingface.co/unsloth/SmolLM2-1.7B). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="sergioalves/0a76c4a1-fb81-420d-a583-3b7614bde764", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/dedok-yo/s56-7/runs/gmv9qx73) This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290). ### Framework versions - TRL: 0.12.0.dev0 - Transformers: 4.46.0 - Pytorch: 2.5.0+cu124 - Datasets: 3.0.1 - Tokenizers: 0.20.1 ## Citations Cite DPO as: ```bibtex @inproceedings{rafailov2023direct, title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}}, author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn}, year = 2023, booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023}, url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html}, editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
Antigma/Qwen3-0.6B-Base-GGUF
Antigma
2025-05-29T22:11:35Z
103
0
transformers
[ "transformers", "gguf", "llama-cpp", "gguf-my-repo", "text-generation", "base_model:Qwen/Qwen3-0.6B-Base", "base_model:quantized:Qwen/Qwen3-0.6B-Base", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
text-generation
2025-04-29T04:27:38Z
--- base_model: Qwen/Qwen3-0.6B-Base library_name: transformers license: apache-2.0 pipeline_tag: text-generation tags: - llama-cpp - gguf-my-repo --- *Produced by [Antigma Labs](https://antigma.ai), [Antigma Quantize Space](https://huggingface.co/spaces/Antigma/quantize-my-repo)* *Follow Antigma Labs on X [https://x.com/antigma_labs](https://x.com/antigma_labs)* *Antigma's GitHub Homepage [https://github.com/AntigmaLabs](https://github.com/AntigmaLabs)* ## llama.cpp quantization Using <a href="https://github.com/ggml-org/llama.cpp">llama.cpp</a> release <a href="https://github.com/ggml-org/llama.cpp/releases/tag/b5215">b5215</a> for quantization. Original model: https://huggingface.co/Qwen/Qwen3-0.6B-Base Run them directly with [llama.cpp](https://github.com/ggml-org/llama.cpp), or any other llama.cpp-based project. ## Prompt format ``` <|begin▁of▁sentence|>{system_prompt}<|User|>{prompt}<|Assistant|><|end▁of▁sentence|><|Assistant|> ``` ## Download a file (not the whole branch) from below: | Filename | Quant type | File Size | Split | | -------- | ---------- | --------- | ----- | | [qwen3-0.6b-base-q4_k_m.gguf](https://huggingface.co/Antigma/Qwen3-0.6B-Base-GGUF/blob/main/qwen3-0.6b-base-q4_k_m.gguf)|Q4_K_M|0.37 GB|False| |[qwen3-0.6b-base-q4_0.gguf](https://huggingface.co/Antigma/Qwen3-0.6B-Base-GGUF/blob/main/qwen3-0.6b-base-q4_0.gguf)|Q4_0|0.36 GB|False| |[qwen3-0.6b-base-q4_k_s.gguf](https://huggingface.co/Antigma/Qwen3-0.6B-Base-GGUF/blob/main/qwen3-0.6b-base-q4_k_s.gguf)|Q4_K_S|0.36 GB|False| ## Downloading using huggingface-cli <details> <summary>Click to view download instructions</summary> First, make sure you have huggingface-cli installed: ``` pip install -U "huggingface_hub[cli]" ``` Then, you can target the specific file you want (huggingface-cli takes the repo id, not the full URL): ``` huggingface-cli download Antigma/Qwen3-0.6B-Base-GGUF --include "qwen3-0.6b-base-q4_k_m.gguf" --local-dir ./ ``` If the model is bigger than 50GB, it will have been split into multiple files. In order to download them all to a local folder, run: ``` huggingface-cli download Antigma/Qwen3-0.6B-Base-GGUF --include "qwen3-0.6b-base-q4_k_m.gguf/*" --local-dir ./ ``` You can either specify a new local-dir or download them all in place (./). </details>
ahmedhugging12/llama3b-psych-mcqqa-lora-4bit
ahmedhugging12
2025-05-29T22:09:58Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-05-29T22:09:47Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
alexmkv01/news-junkie-modernBERT-Large-v1
alexmkv01
2025-05-29T22:09:19Z
0
0
transformers
[ "transformers", "safetensors", "modernbert", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2025-05-29T22:04:15Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
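The card above is still a template, so only a guess is possible: based on the `text-classification` pipeline tag, a minimal usage sketch would look like the following. The label set this classifier produces is not documented, so inspect the output rather than assuming class meanings.

```python
# Hedged sketch based only on the repo's text-classification tag;
# the meaning of the returned labels is not documented in the card.
from transformers import pipeline

classifier = pipeline("text-classification", model="alexmkv01/news-junkie-modernBERT-Large-v1")
print(classifier("Central bank holds interest rates steady as inflation cools."))
```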
Antigma/Devstral-Small-2505-GGUF
Antigma
2025-05-29T22:09:16Z
292
1
vllm
[ "vllm", "gguf", "llama-cpp", "gguf-my-repo", "text2text-generation", "en", "fr", "de", "es", "pt", "it", "ja", "ko", "ru", "zh", "ar", "fa", "id", "ms", "ne", "pl", "ro", "sr", "sv", "tr", "uk", "vi", "hi", "bn", "base_model:unsloth/Devstral-Small-2505", "base_model:quantized:unsloth/Devstral-Small-2505", "license:apache-2.0", "region:us", "conversational" ]
text2text-generation
2025-05-22T22:34:51Z
--- base_model: unsloth/Devstral-Small-2505 language: - en - fr - de - es - pt - it - ja - ko - ru - zh - ar - fa - id - ms - ne - pl - ro - sr - sv - tr - uk - vi - hi - bn library_name: vllm license: apache-2.0 pipeline_tag: text2text-generation tags: - llama-cpp - gguf-my-repo inference: false extra_gated_description: If you want to learn more about how we process your personal data, please read our <a href="https://mistral.ai/terms/">Privacy Policy</a>. --- *Produced by [Antigma Labs](https://antigma.ai), [Antigma Quantize Space](https://huggingface.co/spaces/Antigma/quantize-my-repo)* *Follow Antigma Labs on X [https://x.com/antigma_labs](https://x.com/antigma_labs)* *Antigma's GitHub Homepage [https://github.com/AntigmaLabs](https://github.com/AntigmaLabs)* ## llama.cpp quantization Using <a href="https://github.com/ggml-org/llama.cpp">llama.cpp</a> release <a href="https://github.com/ggml-org/llama.cpp/releases/tag/b5223">b5223</a> for quantization. Original model: https://huggingface.co/unsloth/Devstral-Small-2505 Run them directly with [llama.cpp](https://github.com/ggml-org/llama.cpp), or any other llama.cpp-based project. ## Prompt format ``` <|begin▁of▁sentence|>{system_prompt}<|User|>{prompt}<|Assistant|><|end▁of▁sentence|><|Assistant|> ``` ## Download a file (not the whole branch) from below: | Filename | Quant type | File Size | Split | | -------- | ---------- | --------- | ----- | | [devstral-small-2505-q2_k.gguf](https://huggingface.co/Antigma/Devstral-Small-2505-GGUF/blob/main/devstral-small-2505-q2_k.gguf)|Q2_K|8.28 GB|False| |[devstral-small-2505-q3_k_l.gguf](https://huggingface.co/Antigma/Devstral-Small-2505-GGUF/blob/main/devstral-small-2505-q3_k_l.gguf)|Q3_K_L|11.55 GB|False| |[devstral-small-2505-q6_k.gguf](https://huggingface.co/Antigma/Devstral-Small-2505-GGUF/blob/main/devstral-small-2505-q6_k.gguf)|Q6_K|18.02 GB|False| |[devstral-small-2505-q4_k_m.gguf](https://huggingface.co/Antigma/Devstral-Small-2505-GGUF/blob/main/devstral-small-2505-q4_k_m.gguf)|Q4_K_M|13.35 GB|False| |[devstral-small-2505-q5_k_m.gguf](https://huggingface.co/Antigma/Devstral-Small-2505-GGUF/blob/main/devstral-small-2505-q5_k_m.gguf)|Q5_K_M|15.61 GB|False| |[devstral-small-2505-q8_0.gguf](https://huggingface.co/Antigma/Devstral-Small-2505-GGUF/blob/main/devstral-small-2505-q8_0.gguf)|Q8_0|23.33 GB|False| ## Downloading using huggingface-cli <details> <summary>Click to view download instructions</summary> First, make sure you have huggingface-cli installed: ``` pip install -U "huggingface_hub[cli]" ``` Then, you can target the specific file you want (huggingface-cli takes the repo id, not the full URL): ``` huggingface-cli download Antigma/Devstral-Small-2505-GGUF --include "devstral-small-2505-q2_k.gguf" --local-dir ./ ``` If the model is bigger than 50GB, it will have been split into multiple files. In order to download them all to a local folder, run: ``` huggingface-cli download Antigma/Devstral-Small-2505-GGUF --include "devstral-small-2505-q2_k.gguf/*" --local-dir ./ ``` You can either specify a new local-dir or download them all in place (./). </details>
faarafa/fa
faarafa
2025-05-29T22:08:04Z
0
0
null
[ "license:other", "region:us" ]
null
2025-05-29T22:07:57Z
--- license: other license_name: wibu license_link: LICENSE ---
nudrick/tasia
nudrick
2025-05-29T22:07:47Z
0
0
diffusers
[ "diffusers", "flux", "lora", "replicate", "text-to-image", "en", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
2025-05-29T21:54:11Z
--- license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md language: - en tags: - flux - diffusers - lora - replicate base_model: "black-forest-labs/FLUX.1-dev" pipeline_tag: text-to-image # widget: # - text: >- # prompt # output: # url: https://... instance_prompt: tasia --- # Tasia <Gallery /> ## About this LoRA This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI. It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train ## Trigger words You should use `tasia` to trigger the image generation. ## Run this LoRA with an API using Replicate ```py import replicate input = { "prompt": "tasia", "lora_weights": "https://huggingface.co/nudrick/tasia/resolve/main/lora.safetensors" } output = replicate.run( "black-forest-labs/flux-dev-lora", input=input ) for index, item in enumerate(output): with open(f"output_{index}.webp", "wb") as file: file.write(item.read()) ``` ## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers) ```py from diffusers import AutoPipelineForText2Image import torch pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda') pipeline.load_lora_weights('nudrick/tasia', weight_name='lora.safetensors') image = pipeline('tasia').images[0] ``` For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters) ## Training details - Steps: 1000 - Learning rate: 0.0004 - LoRA rank: 16 ## Contribute your own examples You can use the [community tab](https://huggingface.co/nudrick/tasia/discussions) to add images that show off what you’ve made with this LoRA.
mradermacher/gemma-3-27b-it-abliterated-v2-GGUF
mradermacher
2025-05-29T22:06:15Z
0
0
transformers
[ "transformers", "gguf", "en", "base_model:mlabonne/gemma-3-27b-it-abliterated-v2", "base_model:quantized:mlabonne/gemma-3-27b-it-abliterated-v2", "license:gemma", "endpoints_compatible", "region:us", "conversational" ]
null
2025-05-29T11:24:29Z
--- base_model: mlabonne/gemma-3-27b-it-abliterated-v2 language: - en library_name: transformers license: gemma quantized_by: mradermacher --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/mlabonne/gemma-3-27b-it-abliterated-v2 <!-- provided-files --> weighted/imatrix quants are available at https://huggingface.co/mradermacher/gemma-3-27b-it-abliterated-v2-i1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/gemma-3-27b-it-abliterated-v2-GGUF/resolve/main/gemma-3-27b-it-abliterated-v2.Q2_K.gguf) | Q2_K | 10.6 | | | [GGUF](https://huggingface.co/mradermacher/gemma-3-27b-it-abliterated-v2-GGUF/resolve/main/gemma-3-27b-it-abliterated-v2.Q3_K_S.gguf) | Q3_K_S | 12.3 | | | [GGUF](https://huggingface.co/mradermacher/gemma-3-27b-it-abliterated-v2-GGUF/resolve/main/gemma-3-27b-it-abliterated-v2.Q3_K_M.gguf) | Q3_K_M | 13.5 | lower quality | | [GGUF](https://huggingface.co/mradermacher/gemma-3-27b-it-abliterated-v2-GGUF/resolve/main/gemma-3-27b-it-abliterated-v2.Q3_K_L.gguf) | Q3_K_L | 14.6 | | | [GGUF](https://huggingface.co/mradermacher/gemma-3-27b-it-abliterated-v2-GGUF/resolve/main/gemma-3-27b-it-abliterated-v2.IQ4_XS.gguf) | IQ4_XS | 15.0 | | | [GGUF](https://huggingface.co/mradermacher/gemma-3-27b-it-abliterated-v2-GGUF/resolve/main/gemma-3-27b-it-abliterated-v2.Q4_K_S.gguf) | Q4_K_S | 15.8 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/gemma-3-27b-it-abliterated-v2-GGUF/resolve/main/gemma-3-27b-it-abliterated-v2.Q4_K_M.gguf) | Q4_K_M | 16.6 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/gemma-3-27b-it-abliterated-v2-GGUF/resolve/main/gemma-3-27b-it-abliterated-v2.Q5_K_S.gguf) | Q5_K_S | 18.9 | | | [GGUF](https://huggingface.co/mradermacher/gemma-3-27b-it-abliterated-v2-GGUF/resolve/main/gemma-3-27b-it-abliterated-v2.Q5_K_M.gguf) | Q5_K_M | 19.4 | | | [GGUF](https://huggingface.co/mradermacher/gemma-3-27b-it-abliterated-v2-GGUF/resolve/main/gemma-3-27b-it-abliterated-v2.Q6_K.gguf) | Q6_K | 22.3 | very good quality | | [GGUF](https://huggingface.co/mradermacher/gemma-3-27b-it-abliterated-v2-GGUF/resolve/main/gemma-3-27b-it-abliterated-v2.Q8_0.gguf) | Q8_0 | 28.8 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. 
Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to. <!-- end -->
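As a hedged addition (not part of the original quant notes): a single quant from the table above can also be fetched programmatically with the huggingface_hub client rather than downloaded by hand; the Q4_K_M file name below is taken from the table, everything else is an assumption.

```python
# Sketch: download one quant from this repo with huggingface_hub,
# then point a llama.cpp build (or bindings) at the returned local path.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="mradermacher/gemma-3-27b-it-abliterated-v2-GGUF",
    filename="gemma-3-27b-it-abliterated-v2.Q4_K_M.gguf",  # "fast, recommended" per the table
)
print(path)  # local path to the downloaded GGUF file
```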
Hyeongdon/MNLP_M2_dpo_model
Hyeongdon
2025-05-29T22:03:13Z
0
0
transformers
[ "transformers", "safetensors", "qwen3", "text-generation", "trl", "dpo", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-05-29T20:54:17Z
--- library_name: transformers tags: - trl - dpo --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Xenova/whisper-tiny.en
Xenova
2025-05-29T22:01:41Z
45,248
12
transformers.js
[ "transformers.js", "onnx", "whisper", "automatic-speech-recognition", "base_model:openai/whisper-tiny.en", "base_model:quantized:openai/whisper-tiny.en", "region:us" ]
automatic-speech-recognition
2023-05-02T21:37:47Z
--- base_model: openai/whisper-tiny.en library_name: transformers.js --- # Whisper [openai/whisper-tiny.en](https://huggingface.co/openai/whisper-tiny.en) with ONNX weights to be compatible with [Transformers.js](https://huggingface.co/docs/transformers.js). ## Usage (Transformers.js) If you haven't already, you can install the [Transformers.js](https://huggingface.co/docs/transformers.js) JavaScript library from [NPM](https://www.npmjs.com/package/@huggingface/transformers) using: ```bash npm i @huggingface/transformers ``` **Example:** Transcribe English. ```js import { pipeline } from '@huggingface/transformers'; // Create speech recognition pipeline const transcriber = await pipeline('automatic-speech-recognition', 'Xenova/whisper-tiny.en'); // Transcribe audio from URL const url = 'https://huggingface.co/datasets/Xenova/transformers.js-docs/resolve/main/jfk.wav'; const output = await transcriber(url); // { text: " And so my fellow Americans ask not what your country can do for you, ask what you can do for your country." } ``` **Example:** Transcribe English w/ timestamps. ```js import { pipeline } from '@huggingface/transformers'; // Create speech recognition pipeline const transcriber = await pipeline('automatic-speech-recognition', 'Xenova/whisper-tiny.en'); // Transcribe audio from URL with timestamps const url = 'https://huggingface.co/datasets/Xenova/transformers.js-docs/resolve/main/jfk.wav'; const output = await transcriber(url, { return_timestamps: true }); // { // text: " And so my fellow Americans ask not what your country can do for you, ask what you can do for your country." // chunks: [ // { timestamp: [0, 8], text: " And so my fellow Americans ask not what your country can do for you" } // { timestamp: [8, 11], text: " ask what you can do for your country." } // ] // } ``` **Example:** Transcribe English w/ word-level timestamps. ```js import { pipeline } from '@huggingface/transformers'; // Create speech recognition pipeline const transcriber = await pipeline('automatic-speech-recognition', 'Xenova/whisper-tiny.en'); // Transcribe audio from URL with word-level timestamps const url = 'https://huggingface.co/datasets/Xenova/transformers.js-docs/resolve/main/jfk.wav'; const output = await transcriber(url, { return_timestamps: 'word' }); // { // "text": " And so my fellow Americans ask not what your country can do for you ask what you can do for your country.", // "chunks": [ // { "text": " And", "timestamp": [0, 0.78] }, // { "text": " so", "timestamp": [0.78, 1.06] }, // { "text": " my", "timestamp": [1.06, 1.46] }, // ... // { "text": " for", "timestamp": [9.72, 9.92] }, // { "text": " your", "timestamp": [9.92, 10.22] }, // { "text": " country.", "timestamp": [10.22, 13.5] } // ] // } ``` --- Note: Having a separate repo for ONNX weights is intended to be a temporary solution until WebML gains more traction. If you would like to make your models web-ready, we recommend converting to ONNX using [🤗 Optimum](https://huggingface.co/docs/optimum/index) and structuring your repo like this one (with ONNX weights located in a subfolder named `onnx`).
Triangle104/L3-MOE-4X8B-Grand-Horror-25B-Q5_K_S-GGUF
Triangle104
2025-05-29T21:53:26Z
0
0
transformers
[ "transformers", "gguf", "mergekit", "moe", "mixture of experts", "merge", "llama-3", "llama3", "llama-cpp", "gguf-my-repo", "base_model:DavidAU/L3-MOE-4X8B-Grand-Horror-25B", "base_model:quantized:DavidAU/L3-MOE-4X8B-Grand-Horror-25B", "endpoints_compatible", "region:us", "conversational" ]
null
2025-05-29T21:32:54Z
--- library_name: transformers tags: - mergekit - moe - mixture of experts - merge - llama-3 - llama3 - llama-cpp - gguf-my-repo base_model: DavidAU/L3-MOE-4X8B-Grand-Horror-25B --- # Triangle104/L3-MOE-4X8B-Grand-Horror-25B-Q5_K_S-GGUF This model was converted to GGUF format from [`DavidAU/L3-MOE-4X8B-Grand-Horror-25B`](https://huggingface.co/DavidAU/L3-MOE-4X8B-Grand-Horror-25B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/DavidAU/L3-MOE-4X8B-Grand-Horror-25B) for more details on the model. --- It is a Llama3 model with a max context of 8192 (or 32k+ with rope), using a mixture of experts to combine four Dark/Horror models of 8B each into one massive powerhouse at 25B parameters (equal to 32B - 4 x 8B). This model's instruction following and output generation for creative writing, prose, fiction and role play are exceptional. It excels at description, dialog, imagery, metaphors, and prose - and shows great variation in sentence / paragraph size, length, and composition. It is also not afraid, and will not pull its punches. And it has a sense of humor too. It can do horror just as easily as it can do romance. Most notably, dialog is very "un-AI"-like, combined with prose (short and terse at times). (lots of different examples below, including 2, 3 and 4 experts and different genres) And it is fast: 34 t/s (2 experts) on a low-end 16GB card at Q3_K_S. Double this speed for standard/mid-range video cards. The model can also be used for all genres (examples below showing this). This model has been designed to be relatively bulletproof and operates with all parameters, including temp settings from 0 to 5. It is an extraordinarily compressed model, with a very low perplexity level (lower than Meta Llama3 Instruct). It is for any writing, fiction or roleplay activity. It requires the Llama3 template and/or the "Command-R" template. --- ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux) ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo Triangle104/L3-MOE-4X8B-Grand-Horror-25B-Q5_K_S-GGUF --hf-file l3-moe-4x8b-grand-horror-25b-q5_k_s.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo Triangle104/L3-MOE-4X8B-Grand-Horror-25B-Q5_K_S-GGUF --hf-file l3-moe-4x8b-grand-horror-25b-q5_k_s.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo Triangle104/L3-MOE-4X8B-Grand-Horror-25B-Q5_K_S-GGUF --hf-file l3-moe-4x8b-grand-horror-25b-q5_k_s.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo Triangle104/L3-MOE-4X8B-Grand-Horror-25B-Q5_K_S-GGUF --hf-file l3-moe-4x8b-grand-horror-25b-q5_k_s.gguf -c 2048 ```
Lysandrec/MNLP_M2_rag_model
Lysandrec
2025-05-29T21:52:21Z
28
0
transformers
[ "transformers", "safetensors", "qwen3", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-05-22T09:03:27Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
tamewild/4b_v2_merged_e16
tamewild
2025-05-29T21:51:19Z
0
0
transformers
[ "transformers", "safetensors", "qwen3", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-05-29T21:47:59Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
NickL77/first-experiment
NickL77
2025-05-29T21:48:47Z
0
0
transformers
[ "transformers", "safetensors", "qwen3", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-05-29T21:45:18Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
jruaechalar/cartaBajo3
jruaechalar
2025-05-29T21:48:22Z
0
0
diffusers
[ "diffusers", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2025-05-29T21:47:09Z
--- library_name: diffusers --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🧨 diffusers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
tamewild/4b_v2_merged_e17
tamewild
2025-05-29T21:45:53Z
0
0
transformers
[ "transformers", "safetensors", "qwen3", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-05-29T21:43:30Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
HAissa/MNLP_M2_mcqa_model_test
HAissa
2025-05-29T21:44:27Z
0
0
transformers
[ "transformers", "safetensors", "qwen3", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-05-29T21:41:21Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
limitedonly41/mistral-7b-v0.3_car_filter_300_vllm
limitedonly41
2025-05-29T21:38:58Z
0
0
transformers
[ "transformers", "pytorch", "mistral", "text-generation", "text-generation-inference", "unsloth", "trl", "sft", "en", "base_model:unsloth/mistral-7b-v0.3-bnb-4bit", "base_model:finetune:unsloth/mistral-7b-v0.3-bnb-4bit", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2025-05-29T21:06:41Z
--- base_model: unsloth/mistral-7b-v0.3-bnb-4bit tags: - text-generation-inference - transformers - unsloth - mistral - trl - sft license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** limitedonly41 - **License:** apache-2.0 - **Finetuned from model:** unsloth/mistral-7b-v0.3-bnb-4bit This Mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
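As a hedged usage sketch (not included in the original upload notes): the merged weights in this repo can presumably be loaded with plain transformers for generation; the dtype, device map and example prompt below are assumptions rather than documented usage.

```python
# Sketch: load the fine-tuned checkpoint with transformers for text generation.
# dtype/device settings and the prompt are placeholders; adjust to your setup.
# device_map="auto" assumes the accelerate package is installed.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "limitedonly41/mistral-7b-v0.3_car_filter_300_vllm"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, torch_dtype=torch.float16, device_map="auto")

inputs = tokenizer("Example car listing to classify:", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```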
benchaffe/ddpm-pokemon-gen-64
benchaffe
2025-05-29T21:37:53Z
0
0
diffusers
[ "diffusers", "tensorboard", "safetensors", "arxiv:1910.09700", "diffusers:DDPMPipeline", "region:us" ]
null
2025-05-29T15:36:26Z
--- library_name: diffusers --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🧨 diffusers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
AKAWIZ/llama-3-8b-calvinscale
AKAWIZ
2025-05-29T21:34:35Z
5
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-02-24T20:41:35Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
semran1/qwen4bupu1h9ho
semran1
2025-05-29T21:34:34Z
0
0
transformers
[ "transformers", "safetensors", "qwen3", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-05-29T21:32:57Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
MrZeggers/gemma-3n-mobile
MrZeggers
2025-05-29T21:33:45Z
0
0
null
[ "region:us" ]
null
2025-05-29T21:31:36Z
# Gemma 3n Mobile - INT4 Quantized ## 📱 Model optimized for mobile devices This is the **Gemma 3n** model quantized to INT4 for use in React Native mobile applications. ### 🎯 Features - **Size:** 4.1 GB (INT4 quantized) - **Parameters:** 3 billion - **Optimization:** ARM64 / mobile devices - **Format:** .task (TensorFlow Lite) - **Use:** Local offline inference ### 📦 Usage in React Native ```typescript import * as FileSystem from "expo-file-system"; const MODEL_URL = "https://huggingface.co/MrZeggers/gemma-3n-mobile/resolve/main/gemma-3n-E4B-it-int4.task"; const downloadModel = async () => { const documentsDir = FileSystem.documentDirectory; const modelPath = `${documentsDir}gemma-3n-E4B-it-int4.task`; const downloadResult = await FileSystem.downloadAsync(MODEL_URL, modelPath); if (downloadResult.status === 200) { console.log("✅ Model downloaded successfully"); return modelPath; } }; ``` ### 🚀 Advantages - **Fully offline** after the initial download - **Complete privacy** - no data is sent to servers - **Optimized for mobile** - INT4 quantization - **Expo compatible** - works with EAS Build ### ⚡ Performance - **Memory:** ~2.1 GB at runtime - **Latency:** ~50 ms per response - **Compatibility:** iOS 13+, Android API 21+ ### 📋 License This model is based on Google's Gemma. See the original Gemma license for terms of use. ### 🛠️ Implementation To use this model in your React Native app: 1. Download it dynamically using `expo-file-system` 2. Load it with `react-native-fast-tflite` 3. Run inference locally See the full documentation at: [GitHub Repository](https://github.com/tu-usuario/gemma-react-native) --- **Note:** This model is intended for educational and development use. For production use, make sure you comply with all applicable licenses.
shariar076/Llama-3.1-8B-DPO-25R75L
shariar076
2025-05-29T21:32:52Z
0
0
null
[ "safetensors", "llama", "facebook", "meta", "pytorch", "llama-3", "text-generation", "conversational", "en", "de", "fr", "it", "pt", "hi", "es", "th", "arxiv:2204.05149", "base_model:meta-llama/Llama-3.1-8B", "base_model:finetune:meta-llama/Llama-3.1-8B", "license:llama3.1", "region:us" ]
text-generation
2025-05-29T21:29:00Z
--- language: - en - de - fr - it - pt - hi - es - th license: llama3.1 base_model: meta-llama/Meta-Llama-3.1-8B pipeline_tag: text-generation tags: - facebook - meta - pytorch - llama - llama-3 extra_gated_prompt: "### LLAMA 3.1 COMMUNITY LICENSE AGREEMENT\nLlama 3.1 Version\ \ Release Date: July 23, 2024\n\"Agreement\" means the terms and conditions for\ \ use, reproduction, distribution and modification of the Llama Materials set forth\ \ herein.\n\"Documentation\" means the specifications, manuals and documentation\ \ accompanying Llama 3.1 distributed by Meta at https://llama.meta.com/doc/overview.\n\ \"Licensee\" or \"you\" means you, or your employer or any other person or entity\ \ (if you are entering into this Agreement on such person or entity’s behalf), of\ \ the age required under applicable laws, rules or regulations to provide legal\ \ consent and that has legal authority to bind your employer or such other person\ \ or entity if you are entering in this Agreement on their behalf.\n\"Llama 3.1\"\ \ means the foundational large language models and software and algorithms, including\ \ machine-learning model code, trained model weights, inference-enabling code, training-enabling\ \ code, fine-tuning enabling code and other elements of the foregoing distributed\ \ by Meta at https://llama.meta.com/llama-downloads.\n\"Llama Materials\" means,\ \ collectively, Meta’s proprietary Llama 3.1 and Documentation (and any portion\ \ thereof) made available under this Agreement.\n\"Meta\" or \"we\" means Meta Platforms\ \ Ireland Limited (if you are located in or, if you are an entity, your principal\ \ place of business is in the EEA or Switzerland) and Meta Platforms, Inc. (if you\ \ are located outside of the EEA or Switzerland).\n \n1. License Rights and Redistribution.\n\ a. Grant of Rights. You are granted a non-exclusive, worldwide, non-transferable\ \ and royalty-free limited license under Meta’s intellectual property or other rights\ \ owned by Meta embodied in the Llama Materials to use, reproduce, distribute, copy,\ \ create derivative works of, and make modifications to the Llama Materials.\nb.\ \ Redistribution and Use.\ni. If you distribute or make available the Llama Materials\ \ (or any derivative works thereof), or a product or service (including another\ \ AI model) that contains any of them, you shall (A) provide a copy of this Agreement\ \ with any such Llama Materials; and (B) prominently display “Built with Llama”\ \ on a related website, user interface, blogpost, about page, or product documentation.\ \ If you use the Llama Materials or any outputs or results of the Llama Materials\ \ to create, train, fine tune, or otherwise improve an AI model, which is distributed\ \ or made available, you shall also include “Llama” at the beginning of any such\ \ AI model name.\nii. If you receive Llama Materials, or any derivative works thereof,\ \ from a Licensee as part of an integrated end user product, then Section 2 of\ \ this Agreement will not apply to you.\niii. You must retain in all copies of the\ \ Llama Materials that you distribute the following attribution notice within a\ \ “Notice” text file distributed as a part of such copies: “Llama 3.1 is licensed\ \ under the Llama 3.1 Community License, Copyright © Meta Platforms, Inc. All Rights\ \ Reserved.”\niv. 
Your use of the Llama Materials must comply with applicable laws\ \ and regulations (including trade compliance laws and regulations) and adhere to\ \ the Acceptable Use Policy for the Llama Materials (available at https://llama.meta.com/llama3_1/use-policy),\ \ which is hereby incorporated by reference into this Agreement.\n2. Additional\ \ Commercial Terms. If, on the Llama 3.1 version release date, the monthly active\ \ users of the products or services made available by or for Licensee, or Licensee’s\ \ affiliates, is greater than 700 million monthly active users in the preceding\ \ calendar month, you must request a license from Meta, which Meta may grant to\ \ you in its sole discretion, and you are not authorized to exercise any of the\ \ rights under this Agreement unless or until Meta otherwise expressly grants you\ \ such rights.\n3. Disclaimer of Warranty. UNLESS REQUIRED BY APPLICABLE LAW, THE\ \ LLAMA MATERIALS AND ANY OUTPUT AND RESULTS THEREFROM ARE PROVIDED ON AN “AS IS”\ \ BASIS, WITHOUT WARRANTIES OF ANY KIND, AND META DISCLAIMS ALL WARRANTIES OF ANY\ \ KIND, BOTH EXPRESS AND IMPLIED, INCLUDING, WITHOUT LIMITATION, ANY WARRANTIES\ \ OF TITLE, NON-INFRINGEMENT, MERCHANTABILITY, OR FITNESS FOR A PARTICULAR PURPOSE.\ \ YOU ARE SOLELY RESPONSIBLE FOR DETERMINING THE APPROPRIATENESS OF USING OR REDISTRIBUTING\ \ THE LLAMA MATERIALS AND ASSUME ANY RISKS ASSOCIATED WITH YOUR USE OF THE LLAMA\ \ MATERIALS AND ANY OUTPUT AND RESULTS.\n4. Limitation of Liability. IN NO EVENT\ \ WILL META OR ITS AFFILIATES BE LIABLE UNDER ANY THEORY OF LIABILITY, WHETHER IN\ \ CONTRACT, TORT, NEGLIGENCE, PRODUCTS LIABILITY, OR OTHERWISE, ARISING OUT OF THIS\ \ AGREEMENT, FOR ANY LOST PROFITS OR ANY INDIRECT, SPECIAL, CONSEQUENTIAL, INCIDENTAL,\ \ EXEMPLARY OR PUNITIVE DAMAGES, EVEN IF META OR ITS AFFILIATES HAVE BEEN ADVISED\ \ OF THE POSSIBILITY OF ANY OF THE FOREGOING.\n5. Intellectual Property.\na. No\ \ trademark licenses are granted under this Agreement, and in connection with the\ \ Llama Materials, neither Meta nor Licensee may use any name or mark owned by or\ \ associated with the other or any of its affiliates, except as required for reasonable\ \ and customary use in describing and redistributing the Llama Materials or as set\ \ forth in this Section 5(a). Meta hereby grants you a license to use “Llama” (the\ \ “Mark”) solely as required to comply with the last sentence of Section 1.b.i.\ \ You will comply with Meta’s brand guidelines (currently accessible at https://about.meta.com/brand/resources/meta/company-brand/\ \ ). All goodwill arising out of your use of the Mark will inure to the benefit\ \ of Meta.\nb. Subject to Meta’s ownership of Llama Materials and derivatives made\ \ by or for Meta, with respect to any derivative works and modifications of the\ \ Llama Materials that are made by you, as between you and Meta, you are and will\ \ be the owner of such derivative works and modifications.\nc. If you institute\ \ litigation or other proceedings against Meta or any entity (including a cross-claim\ \ or counterclaim in a lawsuit) alleging that the Llama Materials or Llama 3.1 outputs\ \ or results, or any portion of any of the foregoing, constitutes infringement of\ \ intellectual property or other rights owned or licensable by you, then any licenses\ \ granted to you under this Agreement shall terminate as of the date such litigation\ \ or claim is filed or instituted. 
You will indemnify and hold harmless Meta from\ \ and against any claim by any third party arising out of or related to your use\ \ or distribution of the Llama Materials.\n6. Term and Termination. The term of\ \ this Agreement will commence upon your acceptance of this Agreement or access\ \ to the Llama Materials and will continue in full force and effect until terminated\ \ in accordance with the terms and conditions herein. Meta may terminate this Agreement\ \ if you are in breach of any term or condition of this Agreement. Upon termination\ \ of this Agreement, you shall delete and cease use of the Llama Materials. Sections\ \ 3, 4 and 7 shall survive the termination of this Agreement.\n7. Governing Law\ \ and Jurisdiction. This Agreement will be governed and construed under the laws\ \ of the State of California without regard to choice of law principles, and the\ \ UN Convention on Contracts for the International Sale of Goods does not apply\ \ to this Agreement. The courts of California shall have exclusive jurisdiction\ \ of any dispute arising out of this Agreement.\n### Llama 3.1 Acceptable Use Policy\n\ Meta is committed to promoting safe and fair use of its tools and features, including\ \ Llama 3.1. If you access or use Llama 3.1, you agree to this Acceptable Use Policy\ \ (“Policy”). The most recent copy of this policy can be found at [https://llama.meta.com/llama3_1/use-policy](https://llama.meta.com/llama3_1/use-policy)\n\ #### Prohibited Uses\nWe want everyone to use Llama 3.1 safely and responsibly.\ \ You agree you will not use, or allow others to use, Llama 3.1 to:\n 1. Violate\ \ the law or others’ rights, including to:\n 1. Engage in, promote, generate,\ \ contribute to, encourage, plan, incite, or further illegal or unlawful activity\ \ or content, such as:\n 1. Violence or terrorism\n 2. Exploitation\ \ or harm to children, including the solicitation, creation, acquisition, or dissemination\ \ of child exploitative content or failure to report Child Sexual Abuse Material\n\ \ 3. Human trafficking, exploitation, and sexual violence\n 4. The\ \ illegal distribution of information or materials to minors, including obscene\ \ materials, or failure to employ legally required age-gating in connection with\ \ such information or materials.\n 5. Sexual solicitation\n 6. Any\ \ other criminal activity\n 3. Engage in, promote, incite, or facilitate the\ \ harassment, abuse, threatening, or bullying of individuals or groups of individuals\n\ \ 4. Engage in, promote, incite, or facilitate discrimination or other unlawful\ \ or harmful conduct in the provision of employment, employment benefits, credit,\ \ housing, other economic benefits, or other essential goods and services\n 5.\ \ Engage in the unauthorized or unlicensed practice of any profession including,\ \ but not limited to, financial, legal, medical/health, or related professional\ \ practices\n 6. Collect, process, disclose, generate, or infer health, demographic,\ \ or other sensitive personal or private information about individuals without rights\ \ and consents required by applicable laws\n 7. Engage in or facilitate any action\ \ or generate any content that infringes, misappropriates, or otherwise violates\ \ any third-party rights, including the outputs or results of any products or services\ \ using the Llama Materials\n 8. 
Create, generate, or facilitate the creation\ \ of malicious code, malware, computer viruses or do anything else that could disable,\ \ overburden, interfere with or impair the proper working, integrity, operation\ \ or appearance of a website or computer system\n2. Engage in, promote, incite,\ \ facilitate, or assist in the planning or development of activities that present\ \ a risk of death or bodily harm to individuals, including use of Llama 3.1 related\ \ to the following:\n 1. Military, warfare, nuclear industries or applications,\ \ espionage, use for materials or activities that are subject to the International\ \ Traffic Arms Regulations (ITAR) maintained by the United States Department of\ \ State\n 2. Guns and illegal weapons (including weapon development)\n 3.\ \ Illegal drugs and regulated/controlled substances\n 4. Operation of critical\ \ infrastructure, transportation technologies, or heavy machinery\n 5. Self-harm\ \ or harm to others, including suicide, cutting, and eating disorders\n 6. Any\ \ content intended to incite or promote violence, abuse, or any infliction of bodily\ \ harm to an individual\n3. Intentionally deceive or mislead others, including use\ \ of Llama 3.1 related to the following:\n 1. Generating, promoting, or furthering\ \ fraud or the creation or promotion of disinformation\n 2. Generating, promoting,\ \ or furthering defamatory content, including the creation of defamatory statements,\ \ images, or other content\n 3. Generating, promoting, or further distributing\ \ spam\n 4. Impersonating another individual without consent, authorization,\ \ or legal right\n 5. Representing that the use of Llama 3.1 or outputs are human-generated\n\ \ 6. Generating or facilitating false online engagement, including fake reviews\ \ and other means of fake online engagement\n4. Fail to appropriately disclose to\ \ end users any known dangers of your AI system\nPlease report any violation of\ \ this Policy, software “bug,” or other problems that could lead to a violation\ \ of this Policy through one of the following means:\n * Reporting issues with\ \ the model: [https://github.com/meta-llama/llama-models/issues](https://github.com/meta-llama/llama-models/issues)\n\ \ * Reporting risky content generated by the model:\n developers.facebook.com/llama_output_feedback\n\ \ * Reporting bugs and security concerns: facebook.com/whitehat/info\n * Reporting\ \ violations of the Acceptable Use Policy or unlicensed uses of Meta Llama 3: [email protected]" extra_gated_fields: First Name: text Last Name: text Date of birth: date_picker Country: country Affiliation: text Job title: type: select options: - Student - Research Graduate - AI researcher - AI developer/engineer - Reporter - Other geo: ip_location ? By clicking Submit below I accept the terms of the license and acknowledge that the information I provide will be collected stored processed and shared in accordance with the Meta Privacy Policy : checkbox extra_gated_description: The information you provide will be collected, stored, processed and shared in accordance with the [Meta Privacy Policy](https://www.facebook.com/privacy/policy/). extra_gated_button_content: Submit --- ## Model Information The Meta Llama 3.1 collection of multilingual large language models (LLMs) is a collection of pretrained and instruction tuned generative models in 8B, 70B and 405B sizes (text in/text out). 
The Llama 3.1 instruction tuned text only models (8B, 70B, 405B) are optimized for multilingual dialogue use cases and outperform many of the available open source and closed chat models on common industry benchmarks. **Model developer**: Meta **Model Architecture:** Llama 3.1 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align with human preferences for helpfulness and safety. <table> <tr> <td> </td> <td><strong>Training Data</strong> </td> <td><strong>Params</strong> </td> <td><strong>Input modalities</strong> </td> <td><strong>Output modalities</strong> </td> <td><strong>Context length</strong> </td> <td><strong>GQA</strong> </td> <td><strong>Token count</strong> </td> <td><strong>Knowledge cutoff</strong> </td> </tr> <tr> <td rowspan="3" >Llama 3.1 (text only) </td> <td rowspan="3" >A new mix of publicly available online data. </td> <td>8B </td> <td>Multilingual Text </td> <td>Multilingual Text and code </td> <td>128k </td> <td>Yes </td> <td rowspan="3" >15T+ </td> <td rowspan="3" >December 2023 </td> </tr> <tr> <td>70B </td> <td>Multilingual Text </td> <td>Multilingual Text and code </td> <td>128k </td> <td>Yes </td> </tr> <tr> <td>405B </td> <td>Multilingual Text </td> <td>Multilingual Text and code </td> <td>128k </td> <td>Yes </td> </tr> </table> **Supported languages:** English, German, French, Italian, Portuguese, Hindi, Spanish, and Thai. **Llama 3.1 family of models**. Token counts refer to pretraining data only. All model versions use Grouped-Query Attention (GQA) for improved inference scalability. **Model Release Date:** July 23, 2024. **Status:** This is a static model trained on an offline dataset. Future versions of the tuned models will be released as we improve model safety with community feedback. **License:** A custom commercial license, the Llama 3.1 Community License, is available at: [https://github.com/meta-llama/llama-models/blob/main/models/llama3_1/LICENSE](https://github.com/meta-llama/llama-models/blob/main/models/llama3_1/LICENSE) Where to send questions or comments about the model Instructions on how to provide feedback or comments on the model can be found in the model [README](https://github.com/meta-llama/llama3). For more technical information about generation parameters and recipes for how to use Llama 3.1 in applications, please go [here](https://github.com/meta-llama/llama-recipes). ## Intended Use **Intended Use Cases** Llama 3.1 is intended for commercial and research use in multiple languages. Instruction tuned text only models are intended for assistant-like chat, whereas pretrained models can be adapted for a variety of natural language generation tasks. The Llama 3.1 model collection also supports the ability to leverage the outputs of its models to improve other models including synthetic data generation and distillation. The Llama 3.1 Community License allows for these use cases. **Out-of-scope** Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in any other way that is prohibited by the Acceptable Use Policy and Llama 3.1 Community License. Use in languages beyond those explicitly referenced as supported in this model card**. **<span style="text-decoration:underline;">Note</span>: Llama 3.1 has been trained on a broader collection of languages than the 8 supported languages. 
Developers may fine-tune Llama 3.1 models for languages beyond the 8 supported languages provided they comply with the Llama 3.1 Community License and the Acceptable Use Policy and in such cases are responsible for ensuring that any uses of Llama 3.1 in additional languages is done in a safe and responsible manner. ## How to use This repository contains two versions of Meta-Llama-3.1-8B-Instruct, for use with transformers and with the original `llama` codebase. ### Use with transformers Starting with `transformers >= 4.43.0` onward, you can run conversational inference using the Transformers `pipeline` abstraction or by leveraging the Auto classes with the `generate()` function. Make sure to update your transformers installation via `pip install --upgrade transformers`. ```python import transformers import torch model_id = "meta-llama/Meta-Llama-3.1-8B-Instruct" pipeline = transformers.pipeline( "text-generation", model=model_id, model_kwargs={"torch_dtype": torch.bfloat16}, device_map="auto", ) messages = [ {"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"}, {"role": "user", "content": "Who are you?"}, ] outputs = pipeline( messages, max_new_tokens=256, ) print(outputs[0]["generated_text"][-1]) ``` Note: You can also find detailed recipes on how to use the model locally, with `torch.compile()`, assisted generations, quantised and more at [`huggingface-llama-recipes`](https://github.com/huggingface/huggingface-llama-recipes) ### Tool use with transformers LLaMA-3.1 supports multiple tool use formats. You can see a full guide to prompt formatting [here](https://llama.meta.com/docs/model-cards-and-prompt-formats/llama3_1/). Tool use is also supported through [chat templates](https://huggingface.co/docs/transformers/main/chat_templating#advanced-tool-use--function-calling) in Transformers. Here is a quick example showing a single simple tool: ```python # First, define a tool def get_current_temperature(location: str) -> float: """ Get the current temperature at a location. Args: location: The location to get the temperature for, in the format "City, Country" Returns: The current temperature at the specified location in the specified units, as a float. """ return 22. # A real function should probably actually get the temperature! # Next, create a chat and apply the chat template messages = [ {"role": "system", "content": "You are a bot that responds to weather queries."}, {"role": "user", "content": "Hey, what's the temperature in Paris right now?"} ] inputs = tokenizer.apply_chat_template(messages, tools=[get_current_temperature], add_generation_prompt=True) ``` You can then generate text from this input as normal. If the model generates a tool call, you should add it to the chat like so: ```python tool_call = {"name": "get_current_temperature", "arguments": {"location": "Paris, France"}} messages.append({"role": "assistant", "tool_calls": [{"type": "function", "function": tool_call}]}) ``` and then call the tool and append the result, with the `tool` role, like so: ```python messages.append({"role": "tool", "name": "get_current_temperature", "content": "22.0"}) ``` After that, you can `generate()` again to let the model use the tool result in the chat. 
Note that this was a very brief introduction to tool calling - for more information, see the [LLaMA prompt format docs](https://llama.meta.com/docs/model-cards-and-prompt-formats/llama3_1/) and the Transformers [tool use documentation](https://huggingface.co/docs/transformers/main/chat_templating#advanced-tool-use--function-calling). ### Use with `llama` Please, follow the instructions in the [repository](https://github.com/meta-llama/llama) To download Original checkpoints, see the example command below leveraging `huggingface-cli`: ``` huggingface-cli download meta-llama/Meta-Llama-3.1-8B-Instruct --include "original/*" --local-dir Meta-Llama-3.1-8B-Instruct ``` ## Hardware and Software **Training Factors** We used custom training libraries, Meta's custom built GPU cluster, and production infrastructure for pretraining. Fine-tuning, annotation, and evaluation were also performed on production infrastructure. **Training utilized a cumulative of** 39.3M GPU hours of computation on H100-80GB (TDP of 700W) type hardware, per the table below. Training time is the total GPU time required for training each model and power consumption is the peak power capacity per GPU device used, adjusted for power usage efficiency. **Training Greenhouse Gas Emissions** Estimated total location-based greenhouse gas emissions were **11,390** tons CO2eq for training. Since 2020, Meta has maintained net zero greenhouse gas emissions in its global operations and matched 100% of its electricity use with renewable energy, therefore the total market-based greenhouse gas emissions for training were 0 tons CO2eq. <table> <tr> <td> </td> <td><strong>Training Time (GPU hours)</strong> </td> <td><strong>Training Power Consumption (W)</strong> </td> <td><strong>Training Location-Based Greenhouse Gas Emissions</strong> <p> <strong>(tons CO2eq)</strong> </td> <td><strong>Training Market-Based Greenhouse Gas Emissions</strong> <p> <strong>(tons CO2eq)</strong> </td> </tr> <tr> <td>Llama 3.1 8B </td> <td>1.46M </td> <td>700 </td> <td>420 </td> <td>0 </td> </tr> <tr> <td>Llama 3.1 70B </td> <td>7.0M </td> <td>700 </td> <td>2,040 </td> <td>0 </td> </tr> <tr> <td>Llama 3.1 405B </td> <td>30.84M </td> <td>700 </td> <td>8,930 </td> <td>0 </td> </tr> <tr> <td>Total </td> <td>39.3M <td> <ul> </ul> </td> <td>11,390 </td> <td>0 </td> </tr> </table> The methodology used to determine training energy use and greenhouse gas emissions can be found [here](https://arxiv.org/pdf/2204.05149). Since Meta is openly releasing these models, the training energy use and greenhouse gas emissions will not be incurred by others. ## Training Data **Overview:** Llama 3.1 was pretrained on ~15 trillion tokens of data from publicly available sources. The fine-tuning data includes publicly available instruction datasets, as well as over 25M synthetically generated examples. **Data Freshness:** The pretraining data has a cutoff of December 2023. ## Benchmark scores In this section, we report the results for Llama 3.1 models on standard automatic benchmarks. For all the evaluations, we use our internal evaluations library. 
### Base pretrained models <table> <tr> <td><strong>Category</strong> </td> <td><strong>Benchmark</strong> </td> <td><strong># Shots</strong> </td> <td><strong>Metric</strong> </td> <td><strong>Llama 3 8B</strong> </td> <td><strong>Llama 3.1 8B</strong> </td> <td><strong>Llama 3 70B</strong> </td> <td><strong>Llama 3.1 70B</strong> </td> <td><strong>Llama 3.1 405B</strong> </td> </tr> <tr> <td rowspan="7" >General </td> <td>MMLU </td> <td>5 </td> <td>macro_avg/acc_char </td> <td>66.7 </td> <td>66.7 </td> <td>79.5 </td> <td>79.3 </td> <td>85.2 </td> </tr> <tr> <td>MMLU-Pro (CoT) </td> <td>5 </td> <td>macro_avg/acc_char </td> <td>36.2 </td> <td>37.1 </td> <td>55.0 </td> <td>53.8 </td> <td>61.6 </td> </tr> <tr> <td>AGIEval English </td> <td>3-5 </td> <td>average/acc_char </td> <td>47.1 </td> <td>47.8 </td> <td>63.0 </td> <td>64.6 </td> <td>71.6 </td> </tr> <tr> <td>CommonSenseQA </td> <td>7 </td> <td>acc_char </td> <td>72.6 </td> <td>75.0 </td> <td>83.8 </td> <td>84.1 </td> <td>85.8 </td> </tr> <tr> <td>Winogrande </td> <td>5 </td> <td>acc_char </td> <td>- </td> <td>60.5 </td> <td>- </td> <td>83.3 </td> <td>86.7 </td> </tr> <tr> <td>BIG-Bench Hard (CoT) </td> <td>3 </td> <td>average/em </td> <td>61.1 </td> <td>64.2 </td> <td>81.3 </td> <td>81.6 </td> <td>85.9 </td> </tr> <tr> <td>ARC-Challenge </td> <td>25 </td> <td>acc_char </td> <td>79.4 </td> <td>79.7 </td> <td>93.1 </td> <td>92.9 </td> <td>96.1 </td> </tr> <tr> <td>Knowledge reasoning </td> <td>TriviaQA-Wiki </td> <td>5 </td> <td>em </td> <td>78.5 </td> <td>77.6 </td> <td>89.7 </td> <td>89.8 </td> <td>91.8 </td> </tr> <tr> <td rowspan="4" >Reading comprehension </td> <td>SQuAD </td> <td>1 </td> <td>em </td> <td>76.4 </td> <td>77.0 </td> <td>85.6 </td> <td>81.8 </td> <td>89.3 </td> </tr> <tr> <td>QuAC (F1) </td> <td>1 </td> <td>f1 </td> <td>44.4 </td> <td>44.9 </td> <td>51.1 </td> <td>51.1 </td> <td>53.6 </td> </tr> <tr> <td>BoolQ </td> <td>0 </td> <td>acc_char </td> <td>75.7 </td> <td>75.0 </td> <td>79.0 </td> <td>79.4 </td> <td>80.0 </td> </tr> <tr> <td>DROP (F1) </td> <td>3 </td> <td>f1 </td> <td>58.4 </td> <td>59.5 </td> <td>79.7 </td> <td>79.6 </td> <td>84.8 </td> </tr> </table> ### Instruction tuned models <table> <tr> <td><strong>Category</strong> </td> <td><strong>Benchmark</strong> </td> <td><strong># Shots</strong> </td> <td><strong>Metric</strong> </td> <td><strong>Llama 3 8B Instruct</strong> </td> <td><strong>Llama 3.1 8B Instruct</strong> </td> <td><strong>Llama 3 70B Instruct</strong> </td> <td><strong>Llama 3.1 70B Instruct</strong> </td> <td><strong>Llama 3.1 405B Instruct</strong> </td> </tr> <tr> <td rowspan="4" >General </td> <td>MMLU </td> <td>5 </td> <td>macro_avg/acc </td> <td>68.5 </td> <td>69.4 </td> <td>82.0 </td> <td>83.6 </td> <td>87.3 </td> </tr> <tr> <td>MMLU (CoT) </td> <td>0 </td> <td>macro_avg/acc </td> <td>65.3 </td> <td>73.0 </td> <td>80.9 </td> <td>86.0 </td> <td>88.6 </td> </tr> <tr> <td>MMLU-Pro (CoT) </td> <td>5 </td> <td>micro_avg/acc_char </td> <td>45.5 </td> <td>48.3 </td> <td>63.4 </td> <td>66.4 </td> <td>73.3 </td> </tr> <tr> <td>IFEval </td> <td> </td> <td> </td> <td>76.8 </td> <td>80.4 </td> <td>82.9 </td> <td>87.5 </td> <td>88.6 </td> </tr> <tr> <td rowspan="2" >Reasoning </td> <td>ARC-C </td> <td>0 </td> <td>acc </td> <td>82.4 </td> <td>83.4 </td> <td>94.4 </td> <td>94.8 </td> <td>96.9 </td> </tr> <tr> <td>GPQA </td> <td>0 </td> <td>em </td> <td>34.6 </td> <td>30.4 </td> <td>39.5 </td> <td>46.7 </td> <td>50.7 </td> </tr> <tr> <td rowspan="4" >Code </td> <td>HumanEval </td> <td>0 </td> 
<td>pass@1 </td> <td>60.4 </td> <td>72.6 </td> <td>81.7 </td> <td>80.5 </td> <td>89.0 </td> </tr> <tr> <td>MBPP ++ base version </td> <td>0 </td> <td>pass@1 </td> <td>70.6 </td> <td>72.8 </td> <td>82.5 </td> <td>86.0 </td> <td>88.6 </td> </tr> <tr> <td>Multipl-E HumanEval </td> <td>0 </td> <td>pass@1 </td> <td>- </td> <td>50.8 </td> <td>- </td> <td>65.5 </td> <td>75.2 </td> </tr> <tr> <td>Multipl-E MBPP </td> <td>0 </td> <td>pass@1 </td> <td>- </td> <td>52.4 </td> <td>- </td> <td>62.0 </td> <td>65.7 </td> </tr> <tr> <td rowspan="2" >Math </td> <td>GSM-8K (CoT) </td> <td>8 </td> <td>em_maj1@1 </td> <td>80.6 </td> <td>84.5 </td> <td>93.0 </td> <td>95.1 </td> <td>96.8 </td> </tr> <tr> <td>MATH (CoT) </td> <td>0 </td> <td>final_em </td> <td>29.1 </td> <td>51.9 </td> <td>51.0 </td> <td>68.0 </td> <td>73.8 </td> </tr> <tr> <td rowspan="4" >Tool Use </td> <td>API-Bank </td> <td>0 </td> <td>acc </td> <td>48.3 </td> <td>82.6 </td> <td>85.1 </td> <td>90.0 </td> <td>92.0 </td> </tr> <tr> <td>BFCL </td> <td>0 </td> <td>acc </td> <td>60.3 </td> <td>76.1 </td> <td>83.0 </td> <td>84.8 </td> <td>88.5 </td> </tr> <tr> <td>Gorilla Benchmark API Bench </td> <td>0 </td> <td>acc </td> <td>1.7 </td> <td>8.2 </td> <td>14.7 </td> <td>29.7 </td> <td>35.3 </td> </tr> <tr> <td>Nexus (0-shot) </td> <td>0 </td> <td>macro_avg/acc </td> <td>18.1 </td> <td>38.5 </td> <td>47.8 </td> <td>56.7 </td> <td>58.7 </td> </tr> <tr> <td>Multilingual </td> <td>Multilingual MGSM (CoT) </td> <td>0 </td> <td>em </td> <td>- </td> <td>68.9 </td> <td>- </td> <td>86.9 </td> <td>91.6 </td> </tr> </table> #### Multilingual benchmarks <table> <tr> <td><strong>Category</strong> </td> <td><strong>Benchmark</strong> </td> <td><strong>Language</strong> </td> <td><strong>Llama 3.1 8B</strong> </td> <td><strong>Llama 3.1 70B</strong> </td> <td><strong>Llama 3.1 405B</strong> </td> </tr> <tr> <td rowspan="9" ><strong>General</strong> </td> <td rowspan="9" ><strong>MMLU (5-shot, macro_avg/acc)</strong> </td> <td>Portuguese </td> <td>62.12 </td> <td>80.13 </td> <td>84.95 </td> </tr> <tr> <td>Spanish </td> <td>62.45 </td> <td>80.05 </td> <td>85.08 </td> </tr> <tr> <td>Italian </td> <td>61.63 </td> <td>80.4 </td> <td>85.04 </td> </tr> <tr> <td>German </td> <td>60.59 </td> <td>79.27 </td> <td>84.36 </td> </tr> <tr> <td>French </td> <td>62.34 </td> <td>79.82 </td> <td>84.66 </td> </tr> <tr> <td>Hindi </td> <td>50.88 </td> <td>74.52 </td> <td>80.31 </td> </tr> <tr> <td>Thai </td> <td>50.32 </td> <td>72.95 </td> <td>78.21 </td> </tr> </table> ## Responsibility & Safety As part of our Responsible release approach, we followed a three-pronged strategy to managing trust & safety risks: * Enable developers to deploy helpful, safe and flexible experiences for their target audience and for the use cases supported by Llama. * Protect developers against adversarial users aiming to exploit Llama capabilities to potentially cause harm. * Provide protections for the community to help prevent the misuse of our models. ### Responsible deployment Llama is a foundational technology designed to be used in a variety of use cases, examples on how Meta’s Llama models have been responsibly deployed can be found in our [Community Stories webpage](https://llama.meta.com/community-stories/). Our approach is to build the most helpful models enabling the world to benefit from the technology power, by aligning our model safety for the generic use cases addressing a standard set of harms. 
Developers are then in the driver seat to tailor safety for their use case, defining their own policy and deploying the models with the necessary safeguards in their Llama systems. Llama 3.1 was developed following the best practices outlined in our Responsible Use Guide, you can refer to the [Responsible Use Guide](https://llama.meta.com/responsible-use-guide/) to learn more. #### Llama 3.1 instruct Our main objectives for conducting safety fine-tuning are to provide the research community with a valuable resource for studying the robustness of safety fine-tuning, as well as to offer developers a readily available, safe, and powerful model for various applications to reduce the developer workload to deploy safe AI systems. For more details on the safety mitigations implemented please read the Llama 3 paper. **Fine-tuning data** We employ a multi-faceted approach to data collection, combining human-generated data from our vendors with synthetic data to mitigate potential safety risks. We’ve developed many large language model (LLM)-based classifiers that enable us to thoughtfully select high-quality prompts and responses, enhancing data quality control. **Refusals and Tone** Building on the work we started with Llama 3, we put a great emphasis on model refusals to benign prompts as well as refusal tone. We included both borderline and adversarial prompts in our safety data strategy, and modified our safety data responses to follow tone guidelines. #### Llama 3.1 systems **Large language models, including Llama 3.1, are not designed to be deployed in isolation but instead should be deployed as part of an overall AI system with additional safety guardrails as required.** Developers are expected to deploy system safeguards when building agentic systems. Safeguards are key to achieve the right helpfulness-safety alignment as well as mitigating safety and security risks inherent to the system and any integration of the model or system with external tools. As part of our responsible release approach, we provide the community with [safeguards](https://llama.meta.com/trust-and-safety/) that developers should deploy with Llama models or other LLMs, including Llama Guard 3, Prompt Guard and Code Shield. All our [reference implementations](https://github.com/meta-llama/llama-agentic-system) demos contain these safeguards by default so developers can benefit from system-level safety out-of-the-box. #### New capabilities Note that this release introduces new capabilities, including a longer context window, multilingual inputs and outputs and possible integrations by developers with third party tools. Building with these new capabilities requires specific considerations in addition to the best practices that generally apply across all Generative AI use cases. **Tool-use**: Just like in standard software development, developers are responsible for the integration of the LLM with the tools and services of their choice. They should define a clear policy for their use case and assess the integrity of the third party services they use to be aware of the safety and security limitations when using this capability. Refer to the Responsible Use Guide for best practices on the safe deployment of the third party safeguards. **Multilinguality**: Llama 3.1 supports 7 languages in addition to English: French, German, Hindi, Italian, Portuguese, Spanish, and Thai. Llama may be able to output text in other languages than those that meet performance thresholds for safety and helpfulness. 
We strongly discourage developers from using this model to converse in non-supported languages without implementing finetuning and system controls in alignment with their policies and the best practices shared in the Responsible Use Guide. ### Evaluations We evaluated Llama models for common use cases as well as specific capabilities. Common use cases evaluations measure safety risks of systems for most commonly built applications including chat bot, coding assistant, tool calls. We built dedicated, adversarial evaluation datasets and evaluated systems composed of Llama models and Llama Guard 3 to filter input prompt and output response. It is important to evaluate applications in context, and we recommend building dedicated evaluation dataset for your use case. Prompt Guard and Code Shield are also available if relevant to the application. Capability evaluations measure vulnerabilities of Llama models inherent to specific capabilities, for which were crafted dedicated benchmarks including long context, multilingual, tools calls, coding or memorization. **Red teaming** For both scenarios, we conducted recurring red teaming exercises with the goal of discovering risks via adversarial prompting and we used the learnings to improve our benchmarks and safety tuning datasets. We partnered early with subject-matter experts in critical risk areas to understand the nature of these real-world harms and how such models may lead to unintended harm for society. Based on these conversations, we derived a set of adversarial goals for the red team to attempt to achieve, such as extracting harmful information or reprogramming the model to act in a potentially harmful capacity. The red team consisted of experts in cybersecurity, adversarial machine learning, responsible AI, and integrity in addition to multilingual content specialists with background in integrity issues in specific geographic markets. ### Critical and other risks We specifically focused our efforts on mitigating the following critical risk areas: **1- CBRNE (Chemical, Biological, Radiological, Nuclear, and Explosive materials) helpfulness** To assess risks related to proliferation of chemical and biological weapons, we performed uplift testing designed to assess whether use of Llama 3.1 models could meaningfully increase the capabilities of malicious actors to plan or carry out attacks using these types of weapons. **2. Child Safety** Child Safety risk assessments were conducted using a team of experts, to assess the model’s capability to produce outputs that could result in Child Safety risks and inform on any necessary and appropriate risk mitigations via fine tuning. We leveraged those expert red teaming sessions to expand the coverage of our evaluation benchmarks through Llama 3 model development. For Llama 3, we conducted new in-depth sessions using objective based methodologies to assess the model risks along multiple attack vectors including the additional languages Llama 3 is trained on. We also partnered with content specialists to perform red teaming exercises assessing potentially violating content while taking account of market specific nuances or experiences. **3. Cyber attack enablement** Our cyber attack uplift study investigated whether LLMs can enhance human capabilities in hacking tasks, both in terms of skill level and speed. Our attack automation study focused on evaluating the capabilities of LLMs when used as autonomous agents in cyber offensive operations, specifically in the context of ransomware attacks. 
This evaluation was distinct from previous studies that considered LLMs as interactive assistants. The primary objective was to assess whether these models could effectively function as independent agents in executing complex cyber-attacks without human intervention. Our study of Llama-3.1-405B’s social engineering uplift for cyber attackers was conducted to assess the effectiveness of AI models in aiding cyber threat actors in spear phishing campaigns. Please read our Llama 3.1 Cyber security whitepaper to learn more. ### Community Generative AI safety requires expertise and tooling, and we believe in the strength of the open community to accelerate its progress. We are active members of open consortiums, including the AI Alliance, Partnership on AI and MLCommons, actively contributing to safety standardization and transparency. We encourage the community to adopt taxonomies like the MLCommons Proof of Concept evaluation to facilitate collaboration and transparency on safety and content evaluations. Our Purple Llama tools are open sourced for the community to use and widely distributed across ecosystem partners including cloud service providers. We encourage community contributions to our [Github repository](https://github.com/meta-llama/PurpleLlama). We also set up the [Llama Impact Grants](https://llama.meta.com/llama-impact-grants/) program to identify and support the most compelling applications of Meta’s Llama model for societal benefit across three categories: education, climate and open innovation. The 20 finalists from the hundreds of applications can be found [here](https://llama.meta.com/llama-impact-grants/#finalists). Finally, we put in place a set of resources including an [output reporting mechanism](https://developers.facebook.com/llama_output_feedback) and [bug bounty program](https://www.facebook.com/whitehat) to continuously improve the Llama technology with the help of the community. ## Ethical Considerations and Limitations The core values of Llama 3.1 are openness, inclusivity and helpfulness. It is meant to serve everyone, and to work for a wide range of use cases. It is thus designed to be accessible to people across many different backgrounds, experiences and perspectives. Llama 3.1 addresses users and their needs as they are, without insertion unnecessary judgment or normativity, while reflecting the understanding that even content that may appear problematic in some cases can serve valuable purposes in others. It respects the dignity and autonomy of all users, especially in terms of the values of free thought and expression that power innovation and progress. But Llama 3.1 is a new technology, and like any new technology, there are risks associated with its use. Testing conducted to date has not covered, nor could it cover, all scenarios. For these reasons, as with all LLMs, Llama 3.1’s potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of Llama 3.1 models, developers should perform safety testing and tuning tailored to their specific applications of the model. Please refer to available resources including our [Responsible Use Guide](https://llama.meta.com/responsible-use-guide), [Trust and Safety](https://llama.meta.com/trust-and-safety/) solutions, and other [resources](https://llama.meta.com/docs/get-started/) to learn more about responsible development.
Ualasse6931/Ia
Ualasse6931
2025-05-29T21:31:08Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2025-05-29T21:31:07Z
--- license: apache-2.0 ---
morturr/Mistral-7B-v0.1-amazon-2025-05-29
morturr
2025-05-29T21:29:27Z
2
0
peft
[ "peft", "safetensors", "trl", "sft", "generated_from_trainer", "base_model:mistralai/Mistral-7B-v0.1", "base_model:adapter:mistralai/Mistral-7B-v0.1", "license:apache-2.0", "region:us" ]
null
2025-05-28T22:52:25Z
--- library_name: peft license: apache-2.0 base_model: mistralai/Mistral-7B-v0.1 tags: - trl - sft - generated_from_trainer model-index: - name: Mistral-7B-v0.1-amazon-2025-05-29 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Mistral-7B-v0.1-amazon-2025-05-29 This model is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 32 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 2 ### Training results ### Framework versions - PEFT 0.13.2 - Transformers 4.46.1 - Pytorch 2.5.1+cu124 - Datasets 3.0.2 - Tokenizers 0.20.1
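The card above lists only the PEFT/LoRA training setup and stops before any usage instructions. As a minimal sketch (not part of the original card; it assumes the adapter in this repo loads cleanly onto the listed base model and that you have access to `mistralai/Mistral-7B-v0.1`), the adapter could be attached like this:

```python
# Hedged sketch: attach the LoRA adapter from this repo to the Mistral-7B-v0.1 base model.
# The prompt is illustrative; adjust generation settings to taste.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "mistralai/Mistral-7B-v0.1"
adapter_id = "morturr/Mistral-7B-v0.1-amazon-2025-05-29"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.float16, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)  # wraps the base model with the LoRA weights

inputs = tokenizer("Write a short, funny product review:", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```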
Heromanmask/Moni
Heromanmask
2025-05-29T21:27:49Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2025-05-29T21:27:40Z
--- license: apache-2.0 ---
RobertoNeglia/pepe_generator_sd2base_reduced
RobertoNeglia
2025-05-29T21:27:14Z
0
0
diffusers
[ "diffusers", "safetensors", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "diffusers-training", "lora", "base_model:stabilityai/stable-diffusion-2-base", "base_model:adapter:stabilityai/stable-diffusion-2-base", "license:creativeml-openrail-m", "region:us" ]
text-to-image
2025-05-29T20:57:45Z
--- base_model: stabilityai/stable-diffusion-2-base library_name: diffusers license: creativeml-openrail-m inference: true tags: - stable-diffusion - stable-diffusion-diffusers - text-to-image - diffusers - diffusers-training - lora --- <!-- This model card has been generated automatically according to the information the training script had access to. You should probably proofread and complete it, then remove this comment. --> # LoRA text2image fine-tuning - RobertoNeglia/pepe_generator_sd2base_reduced These are LoRA adaptation weights for stabilityai/stable-diffusion-2-base. The weights were fine-tuned on the RobertoNeglia/pepe_dataset_ultra_reduced dataset. Some example images are shown below. ![img_0](./image_0.png) ![img_1](./image_1.png) ![img_2](./image_2.png) ![img_3](./image_3.png) ## Intended uses & limitations #### How to use ```python # TODO: add an example code snippet for running this diffusion pipeline ``` #### Limitations and bias [TODO: provide examples of latent issues and potential remediations] ## Training details [TODO: describe the data used to train the model]
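The "How to use" section in the card above is still a TODO. As a minimal sketch (not from the original card; it assumes the LoRA weights in this repo are stored in the standard diffusers format so that `load_lora_weights` can pick them up, and the prompt is purely illustrative), the pipeline could be run like this:

```python
# Hedged sketch: load the SD2-base pipeline and apply the LoRA weights from this repo.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-base", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("RobertoNeglia/pepe_generator_sd2base_reduced")

image = pipe("a cartoon frog meme sticker, flat colors", num_inference_steps=30).images[0]
image.save("sample.png")
```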
abhikapoor909/vitmanu1b3-16q
abhikapoor909
2025-05-29T21:27:09Z
0
0
transformers
[ "transformers", "gguf", "llama", "text-generation-inference", "unsloth", "en", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
2025-05-29T21:24:36Z
--- base_model: unsloth/llama-3.2-1b-instruct-unsloth-bnb-4bit tags: - text-generation-inference - transformers - unsloth - llama - gguf license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** abhikapoor909 - **License:** apache-2.0 - **Finetuned from model :** unsloth/llama-3.2-1b-instruct-unsloth-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
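The card above describes the fine-tune but does not show how to run the GGUF export it ships. As a minimal sketch (not part of the original card; the GGUF filename pattern, context size, and prompt are assumptions — check the repo's file list for the actual filename), the model could be run with the `llama-cpp-python` bindings:

```python
# Hedged sketch: run the GGUF checkpoint locally via llama-cpp-python.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="abhikapoor909/vitmanu1b3-16q",
    filename="*.gguf",  # placeholder pattern; narrow it if the repo holds several GGUF files
    n_ctx=2048,
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Give me a one-sentence summary of what you can do."}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```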
Triangle104/L3-MOE-4X8B-Grand-Horror-25B-Q4_K_M-GGUF
Triangle104
2025-05-29T21:25:18Z
0
0
transformers
[ "transformers", "gguf", "mergekit", "moe", "mixture of experts", "merge", "llama-3", "llama3", "llama-cpp", "gguf-my-repo", "base_model:DavidAU/L3-MOE-4X8B-Grand-Horror-25B", "base_model:quantized:DavidAU/L3-MOE-4X8B-Grand-Horror-25B", "endpoints_compatible", "region:us", "conversational" ]
null
2025-05-29T21:21:27Z
--- library_name: transformers tags: - mergekit - moe - mixture of experts - merge - llama-3 - llama3 - llama-cpp - gguf-my-repo base_model: DavidAU/L3-MOE-4X8B-Grand-Horror-25B --- # Triangle104/L3-MOE-4X8B-Grand-Horror-25B-Q4_K_M-GGUF This model was converted to GGUF format from [`DavidAU/L3-MOE-4X8B-Grand-Horror-25B`](https://huggingface.co/DavidAU/L3-MOE-4X8B-Grand-Horror-25B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/DavidAU/L3-MOE-4X8B-Grand-Horror-25B) for more details on the model. --- It is a Llama3 model with a max context of 8192 (or 32k+ with rope), using a mixture of experts to combine four Dark/Horror models of 8B each into one massive powerhouse at 25B parameters (equal to 32B - 4 x 8B). This model's instruction following and output generation for creative writing, prose, fiction and role play are exceptional. It excels at description, dialog, imagery, metaphors, and prose - and shows great variation in sentence / paragraph size, length, and composition. It is also not afraid, and will not pull its punches. And it has a sense of humor too. It can do horror just as easily as it can do romance. Most notably, dialog is very "un-AI"-like, combined with prose that is short and terse at times. (The original model card has many examples, including 2, 3 and 4 experts and different genres.) And it is fast: 34 t/s (2 experts) on a low-end 16GB card at Q3KS; double this speed for standard/mid-range video cards. The model can also be used for all genres. It has been designed to be relatively bullet-proof and operates with all parameters, including temp settings from 0 to 5. It is an extraordinarily compressed model, with a very low perplexity level (lower than Meta Llama3 Instruct). It is suitable for any writing, fiction or roleplay activity. It requires the Llama3 and/or "Command-R" template. --- ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux) ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo Triangle104/L3-MOE-4X8B-Grand-Horror-25B-Q4_K_M-GGUF --hf-file l3-moe-4x8b-grand-horror-25b-q4_k_m.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo Triangle104/L3-MOE-4X8B-Grand-Horror-25B-Q4_K_M-GGUF --hf-file l3-moe-4x8b-grand-horror-25b-q4_k_m.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (for example, LLAMA_CUDA=1 for Nvidia GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo Triangle104/L3-MOE-4X8B-Grand-Horror-25B-Q4_K_M-GGUF --hf-file l3-moe-4x8b-grand-horror-25b-q4_k_m.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo Triangle104/L3-MOE-4X8B-Grand-Horror-25B-Q4_K_M-GGUF --hf-file l3-moe-4x8b-grand-horror-25b-q4_k_m.gguf -c 2048 ```
Tarun-ak/s1-20250529_205341
Tarun-ak
2025-05-29T21:22:55Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "trl", "sft", "conversational", "base_model:Qwen/Qwen2.5-14B-Instruct", "base_model:finetune:Qwen/Qwen2.5-14B-Instruct", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-05-29T20:54:08Z
--- base_model: Qwen/Qwen2.5-14B-Instruct library_name: transformers model_name: s1-20250529_205341 tags: - generated_from_trainer - trl - sft licence: license --- # Model Card for s1-20250529_205341 This model is a fine-tuned version of [Qwen/Qwen2.5-14B-Instruct](https://huggingface.co/Qwen/Qwen2.5-14B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="Tarun-ak/s1-20250529_205341", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with SFT. ### Framework versions - TRL: 0.12.0 - Transformers: 4.46.1 - Pytorch: 2.5.1+cu121 - Datasets: 3.1.0 - Tokenizers: 0.20.3 ## Citations Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
espnet/owsm_dac_v2_16k
espnet
2025-05-29T21:13:06Z
0
0
espnet
[ "espnet", "audio", "codec", "multilingual", "dataset:amuse", "arxiv:1804.00015", "license:cc-by-4.0", "region:us" ]
null
2025-05-29T21:04:53Z
--- tags: - espnet - audio - codec language: multilingual datasets: - amuse license: cc-by-4.0 --- ## ESPnet2 Codec model ### `espnet/owsm_dac_v2_16k` This model was trained by ftshijt using amuse recipe in [espnet](https://github.com/espnet/espnet/). ### Demo: How to use in ESPnet2 Follow the [ESPnet installation instructions](https://espnet.github.io/espnet/installation.html) if you haven't done that already. ```bash cd espnet git checkout 280bfedf2c9a19038e79d3402472bde30397a02c pip install -e . cd egs2/amuse/codec1 ./run.sh --skip_data_prep false --skip_train true --download_model espnet/owsm_dac_v2_16k ``` ## Codec config <details><summary>expand</summary> ``` config: conf/train_dac_large_v2.yaml print_config: false log_level: INFO drop_last_iter: false dry_run: false iterator_type: chunk valid_iterator_type: null output_dir: exp/codec_train_dac_large_v2_raw_fs16000 ngpu: 1 seed: 777 num_workers: 1 num_att_plot: 0 dist_backend: nccl dist_init_method: env:// dist_world_size: 2 dist_rank: 0 local_rank: 0 dist_master_addr: localhost dist_master_port: 45173 dist_launcher: null multiprocessing_distributed: true unused_parameters: true sharded_ddp: false use_deepspeed: false deepspeed_config: null cudnn_enabled: true cudnn_benchmark: false cudnn_deterministic: false use_tf32: false collect_stats: false write_collected_feats: false max_epoch: 360 patience: null val_scheduler_criterion: - valid - loss early_stopping_criterion: - valid - loss - min best_model_criterion: - - valid - mel_loss - min - - train - mel_loss - min - - train - total_count - max keep_nbest_models: 5 nbest_averaging_interval: 0 grad_clip: -1 grad_clip_type: 2.0 grad_noise: false accum_grad: 1 no_forward_run: false resume: true train_dtype: float32 use_amp: false log_interval: 50 use_matplotlib: true use_tensorboard: true create_graph_in_tensorboard: false use_wandb: false wandb_project: null wandb_id: null wandb_entity: null wandb_name: null wandb_model_log_interval: -1 detect_anomaly: false use_adapter: false adapter: lora save_strategy: all adapter_conf: {} pretrain_path: null init_param: [] ignore_init_mismatch: false freeze_param: [] num_iters_per_epoch: 5000 batch_size: 64 valid_batch_size: null batch_bins: 1000000 valid_batch_bins: null category_sample_size: 10 train_shape_file: - exp/codec_stats_raw/train/audio_shape valid_shape_file: - exp/codec_stats_raw/valid/audio_shape batch_type: unsorted valid_batch_type: null fold_length: - 256000 sort_in_batch: descending shuffle_within_batch: false sort_batch: descending multiple_iterator: false chunk_length: 32000 chunk_shift_ratio: 0.5 num_cache_chunks: 256 chunk_excluded_key_prefixes: [] chunk_default_fs: null chunk_max_abs_length: null chunk_discard_short_samples: true train_data_path_and_name_and_type: - - dump/raw/owsm_all/wav.scp - audio - kaldi_ark valid_data_path_and_name_and_type: - - dump/raw/dev-small/wav.scp - audio - kaldi_ark multi_task_dataset: false allow_variable_data_keys: false max_cache_size: 0.0 max_cache_fd: 32 allow_multi_rates: false valid_max_cache_size: null exclude_weight_decay: false exclude_weight_decay_conf: {} optim: adamw optim_conf: lr: 0.0002 betas: - 0.5 - 0.9 eps: 1.0e-09 weight_decay: 0.0 scheduler: exponentiallr scheduler_conf: gamma: 0.999875 optim2: adamw optim2_conf: lr: 0.0002 betas: - 0.5 - 0.9 eps: 1.0e-09 weight_decay: 0.0 scheduler2: exponentiallr scheduler2_conf: gamma: 0.999875 generator_first: true skip_discriminator_prob: 0.0 model_conf: {} use_preprocessor: true codec: dac codec_conf: sampling_rate: 16000 
generator_params: hidden_dim: 512 codebook_dim: 512 encdec_channels: 1 encdec_n_filters: 32 encdec_n_residual_layers: 3 encdec_ratios: - 8 - 5 - 4 - 2 encdec_activation: Snake encdec_norm: weight_norm encdec_kernel_size: 7 encdec_residual_kernel_size: 7 encdec_last_kernel_size: 7 encdec_dilation_base: 2 encdec_causal: false encdec_pad_mode: reflect encdec_true_skip: false encdec_compress: 2 encdec_lstm: 2 decoder_trim_right_ratio: 1.0 decoder_final_activation: null decoder_final_activation_params: null quantizer_n_q: 8 quantizer_bins: 1024 quantizer_decay: 0.99 quantizer_kmeans_init: true quantizer_kmeans_iters: 50 quantizer_threshold_ema_dead_code: 2 quantizer_target_bandwidth: - 0.5 - 1 - 2 - 4 quantizer_dropout: true sample_rate: 16000 discriminator_params: msmpmb_discriminator_params: rates: [] sample_rate: 16000 fft_sizes: - 1024 - 512 - 256 - 128 periods: - 2 - 3 - 5 - 7 - 11 period_discriminator_params: in_channels: 1 out_channels: 1 kernel_sizes: - 5 - 3 channels: 32 downsample_scales: - 3 - 3 - 3 - 3 - 1 max_downsample_channels: 1024 bias: true nonlinear_activation: LeakyReLU nonlinear_activation_params: negative_slope: 0.1 use_weight_norm: true use_spectral_norm: false band_discriminator_params: hop_factor: 0.25 sample_rate: 16000 bands: - - 0.0 - 0.1 - - 0.1 - 0.25 - - 0.25 - 0.5 - - 0.5 - 0.75 - - 0.75 - 1.0 channel: 32 generator_adv_loss_params: average_by_discriminators: false loss_type: mse discriminator_adv_loss_params: average_by_discriminators: false loss_type: mse use_feat_match_loss: true feat_match_loss_params: average_by_discriminators: false average_by_layers: false include_final_outputs: true use_mel_loss: true mel_loss_params: range_start: 6 range_end: 11 window: hann n_mels: 80 fmin: 0 fmax: null log_base: null fs: 16000 lambda_quantization: 0.25 lambda_commit: 1.0 lambda_reconstruct: 1.0 lambda_adv: 1.0 lambda_mel: 45.0 lambda_feat_match: 2.0 cache_generator_outputs: true required: - output_dir version: '202402' distributed: true ``` </details> ### Citing ESPnet ```BibTex @inproceedings{watanabe2018espnet, author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai}, title={{ESPnet}: End-to-End Speech Processing Toolkit}, year={2018}, booktitle={Proceedings of Interspeech}, pages={2207--2211}, doi={10.21437/Interspeech.2018-1456}, url={http://dx.doi.org/10.21437/Interspeech.2018-1456} } ``` or arXiv: ```bibtex @misc{watanabe2018espnet, title={ESPnet: End-to-End Speech Processing Toolkit}, author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai}, year={2018}, eprint={1804.00015}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
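As a quick sanity check on the quantizer settings in the config above (a back-of-the-envelope sketch that assumes `quantizer_target_bandwidth` is specified in kbps, as in EnCodec-style codecs), the encoder strides and codebook size imply the following frame rate and bitrates:

```python
# Back-of-the-envelope bitrate check for the DAC config above
# (assumes the target bandwidths are given in kbps).
sample_rate = 16000
strides = [8, 5, 4, 2]   # encdec_ratios: total encoder downsampling
bins = 1024              # quantizer_bins: codebook size -> 10 bits per code
hop = 1
for s in strides:
    hop *= s                              # 320 samples per frame
frame_rate = sample_rate / hop            # 50 frames per second
bits_per_code = bins.bit_length() - 1     # log2(1024) = 10
for q in (1, 2, 4, 8):                    # number of residual codebooks in use
    kbps = q * bits_per_code * frame_rate / 1000
    print(f"{q} codebooks -> {kbps} kbps")
# Prints 0.5, 1.0, 2.0 and 4.0 kbps, matching quantizer_target_bandwidth: [0.5, 1, 2, 4].
```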
espnet/owsm_pure_codec_v1.3_16k
espnet
2025-05-29T21:11:06Z
0
0
espnet
[ "espnet", "audio", "codec", "multilingual", "dataset:amuse", "arxiv:1804.00015", "license:cc-by-4.0", "region:us" ]
null
2025-05-29T21:06:26Z
--- tags: - espnet - audio - codec language: multilingual datasets: - amuse license: cc-by-4.0 --- ## ESPnet2 Codec model ### `espnet/owsm_pure_codec_v1.3_16k` This model was trained by ftshijt using amuse recipe in [espnet](https://github.com/espnet/espnet/). ### Demo: How to use in ESPnet2 Follow the [ESPnet installation instructions](https://espnet.github.io/espnet/installation.html) if you haven't done that already. ```bash cd espnet git checkout 280bfedf2c9a19038e79d3402472bde30397a02c pip install -e . cd egs2/amuse/codec1 ./run.sh --skip_data_prep false --skip_train true --download_model espnet/owsm_pure_codec_v1.3_16k ``` ## Codec config <details><summary>expand</summary> ``` config: conf/train_sedac_large_v4.3.yaml print_config: false log_level: INFO drop_last_iter: false dry_run: false iterator_type: chunk valid_iterator_type: null output_dir: exp/codec_train_sedac_large_v4.3_raw_fs16000 ngpu: 1 seed: 777 num_workers: 1 num_att_plot: 0 dist_backend: nccl dist_init_method: env:// dist_world_size: null dist_rank: null local_rank: 0 dist_master_addr: null dist_master_port: null dist_launcher: null multiprocessing_distributed: false unused_parameters: true sharded_ddp: false use_deepspeed: false deepspeed_config: null cudnn_enabled: true cudnn_benchmark: false cudnn_deterministic: false use_tf32: false collect_stats: false write_collected_feats: false max_epoch: 360 patience: null val_scheduler_criterion: - valid - loss early_stopping_criterion: - valid - loss - min best_model_criterion: - - valid - mel_loss - min - - train - mel_loss - min - - train - total_count - max keep_nbest_models: 5 nbest_averaging_interval: 0 grad_clip: 1 grad_clip_type: 2.0 grad_noise: false accum_grad: 1 no_forward_run: false resume: true train_dtype: float32 use_amp: false log_interval: 100 use_matplotlib: true use_tensorboard: true create_graph_in_tensorboard: false use_wandb: false wandb_project: null wandb_id: null wandb_entity: null wandb_name: null wandb_model_log_interval: -1 detect_anomaly: false use_adapter: false adapter: lora save_strategy: all adapter_conf: {} pretrain_path: null init_param: - /work/nvme/bbjs/shi3/codec/espnet/egs2/amuse/codec1/exp/codec_train_sedac_large_v4-0_raw_fs16000/latest.pth ignore_init_mismatch: true freeze_param: - codec.generator.encoder num_iters_per_epoch: 5000 batch_size: 32 valid_batch_size: null batch_bins: 1000000 valid_batch_bins: null category_sample_size: 10 train_shape_file: - exp/codec_stats_raw/train/audio_shape valid_shape_file: - exp/codec_stats_raw/valid/audio_shape batch_type: unsorted valid_batch_type: null fold_length: - 256000 sort_in_batch: descending shuffle_within_batch: false sort_batch: descending multiple_iterator: false chunk_length: 32000 chunk_shift_ratio: 0.5 num_cache_chunks: 256 chunk_excluded_key_prefixes: [] chunk_default_fs: null chunk_max_abs_length: null chunk_discard_short_samples: true train_data_path_and_name_and_type: - - dump/raw/owsm_all/wav.scp - audio - kaldi_ark valid_data_path_and_name_and_type: - - dump/raw/dev-small/wav.scp - audio - kaldi_ark multi_task_dataset: false allow_variable_data_keys: false max_cache_size: 0.0 max_cache_fd: 32 allow_multi_rates: false valid_max_cache_size: null exclude_weight_decay: false exclude_weight_decay_conf: {} optim: adamw optim_conf: lr: 0.0002 betas: - 0.5 - 0.9 eps: 1.0e-09 weight_decay: 0.0 scheduler: exponentiallr scheduler_conf: gamma: 0.999875 optim2: adamw optim2_conf: lr: 0.0002 betas: - 0.5 - 0.9 eps: 1.0e-09 weight_decay: 0.0 scheduler2: exponentiallr scheduler2_conf: gamma: 
0.999875 generator_first: true skip_discriminator_prob: 0.0 model_conf: {} use_preprocessor: true codec: se_dac2 codec_conf: sampling_rate: 16000 generator_params: hidden_dim: 512 codebook_dim: 512 se_model_source: espnet se_model_tag: wyz/vctk_dns2020_whamr_bsrnn_medium_noncausal enhanced_n_streams: 1 encdec_channels: 1 encdec_n_filters: 32 encdec_n_residual_layers: 3 encdec_ratios: - 8 - 5 - 4 - 2 encdec_activation: Snake encdec_norm: weight_norm encdec_kernel_size: 7 encdec_residual_kernel_size: 7 encdec_last_kernel_size: 7 encdec_dilation_base: 2 encdec_causal: false encdec_pad_mode: reflect encdec_true_skip: false encdec_compress: 2 encdec_lstm: 2 decoder_trim_right_ratio: 1.0 decoder_final_activation: null decoder_final_activation_params: null quantizer_n_q: 8 quantizer_bins: 1024 quantizer_decay: 0.99 quantizer_kmeans_init: true quantizer_kmeans_iters: 50 quantizer_threshold_ema_dead_code: 2 quantizer_target_bandwidth: - 1 - 2 - 4 quantizer_dropout: true sample_rate: 16000 inference_only: false discriminator_params: msmpmb_discriminator_params: rates: [] sample_rate: 16000 fft_sizes: - 1024 - 512 - 256 - 128 periods: - 2 - 3 - 5 - 7 - 11 period_discriminator_params: in_channels: 1 out_channels: 1 kernel_sizes: - 5 - 3 channels: 32 downsample_scales: - 3 - 3 - 3 - 3 - 1 max_downsample_channels: 1024 bias: true nonlinear_activation: LeakyReLU nonlinear_activation_params: negative_slope: 0.1 use_weight_norm: true use_spectral_norm: false band_discriminator_params: hop_factor: 0.25 sample_rate: 16000 bands: - - 0.0 - 0.1 - - 0.1 - 0.25 - - 0.25 - 0.5 - - 0.5 - 0.75 - - 0.75 - 1.0 channel: 32 generator_adv_loss_params: average_by_discriminators: false loss_type: mse discriminator_adv_loss_params: average_by_discriminators: false loss_type: mse use_feat_match_loss: true feat_match_loss_params: average_by_discriminators: false average_by_layers: false include_final_outputs: true use_mel_loss: true mel_loss_params: range_start: 6 range_end: 11 window: hann n_mels: 80 fmin: 0 fmax: null log_base: null fs: 16000 skip_quantizer_updates: 0 activate_enh: 0 lambda_quantization: 0.25 lambda_commit: 1.0 lambda_reconstruct: 1.0 lambda_adv: 1.0 lambda_mel: 45.0 lambda_feat_match: 2.0 enhanced_prob: 0.5 cache_generator_outputs: true required: - output_dir version: '202402' distributed: false ``` </details> ### Citing ESPnet ```BibTex @inproceedings{watanabe2018espnet, author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai}, title={{ESPnet}: End-to-End Speech Processing Toolkit}, year={2018}, booktitle={Proceedings of Interspeech}, pages={2207--2211}, doi={10.21437/Interspeech.2018-1456}, url={http://dx.doi.org/10.21437/Interspeech.2018-1456} } ``` or arXiv: ```bibtex @misc{watanabe2018espnet, title={ESPnet: End-to-End Speech Processing Toolkit}, author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai}, year={2018}, eprint={1804.00015}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
Triangle104/DeepSeek-R1-0528-Qwen3-8B-Q8_0-GGUF
Triangle104
2025-05-29T21:10:25Z
0
0
transformers
[ "transformers", "gguf", "llama-cpp", "gguf-my-repo", "base_model:deepseek-ai/DeepSeek-R1-0528-Qwen3-8B", "base_model:quantized:deepseek-ai/DeepSeek-R1-0528-Qwen3-8B", "license:mit", "endpoints_compatible", "region:us", "conversational" ]
null
2025-05-29T20:55:43Z
--- license: mit library_name: transformers base_model: deepseek-ai/DeepSeek-R1-0528-Qwen3-8B tags: - llama-cpp - gguf-my-repo --- # Triangle104/DeepSeek-R1-0528-Qwen3-8B-Q8_0-GGUF This model was converted to GGUF format from [`deepseek-ai/DeepSeek-R1-0528-Qwen3-8B`](https://huggingface.co/deepseek-ai/DeepSeek-R1-0528-Qwen3-8B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/deepseek-ai/DeepSeek-R1-0528-Qwen3-8B) for more details on the model. --- The DeepSeek R1 model has undergone a minor version upgrade, with the current version being DeepSeek-R1-0528. In the latest update, DeepSeek R1 has significantly improved its depth of reasoning and inference capabilities by leveraging increased computational resources and introducing algorithmic optimization mechanisms during post-training. The model has demonstrated outstanding performance across various benchmark evaluations, including mathematics, programming, and general logic. Its overall performance is now approaching that of leading models, such as O3 and Gemini 2.5 Pro. Compared to the previous version, the upgraded model shows significant improvements in handling complex reasoning tasks. For instance, in the AIME 2025 test, the model’s accuracy has increased from 70% in the previous version to 87.5% in the current version. This advancement stems from enhanced thinking depth during the reasoning process: in the AIME test set, the previous model used an average of 12K tokens per question, whereas the new version averages 23K tokens per question. Beyond its improved reasoning capabilities, this version also offers a reduced hallucination rate, enhanced support for function calling, and a better experience for vibe coding. --- ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux) ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo Triangle104/DeepSeek-R1-0528-Qwen3-8B-Q8_0-GGUF --hf-file deepseek-r1-0528-qwen3-8b-q8_0.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo Triangle104/DeepSeek-R1-0528-Qwen3-8B-Q8_0-GGUF --hf-file deepseek-r1-0528-qwen3-8b-q8_0.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (for example, LLAMA_CUDA=1 for Nvidia GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo Triangle104/DeepSeek-R1-0528-Qwen3-8B-Q8_0-GGUF --hf-file deepseek-r1-0528-qwen3-8b-q8_0.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo Triangle104/DeepSeek-R1-0528-Qwen3-8B-Q8_0-GGUF --hf-file deepseek-r1-0528-qwen3-8b-q8_0.gguf -c 2048 ```
GGNorbert/efficientnetv2_m-s2-v0.2.0-Nonclipped
GGNorbert
2025-05-29T21:09:48Z
0
0
configilm
[ "configilm", "safetensors", "efficientnetv2_m", "BigEarthNet v2.0", "Remote Sensing", "Classification", "image-classification", "Multispectral", "arxiv:2407.03653", "license:mit", "region:us" ]
image-classification
2025-05-29T21:08:17Z
--- thumbnail: "https://raw.githubusercontent.com/wiki/lhackel-tub/ConfigILM/static/imgs/RSiM_Logo_1.png" tags: - efficientnetv2_m - BigEarthNet v2.0 - Remote Sensing - Classification - image-classification - Multispectral library_name: configilm license: mit widget: - src: example.png example_title: Example output: - label: Agro-forestry areas score: 0.000000 - label: Arable land score: 0.000000 - label: Beaches, dunes, sands score: 0.000000 - label: Broad-leaved forest score: 0.000000 - label: Coastal wetlands score: 0.000000 --- [TU Berlin](https://www.tu.berlin/) | [RSiM](https://rsim.berlin/) | [DIMA](https://www.dima.tu-berlin.de/menue/database_systems_and_information_management_group/) | [BigEarth](http://www.bigearth.eu/) | [BIFOLD](https://bifold.berlin/) :---:|:---:|:---:|:---:|:---: <a href="https://www.tu.berlin/"><img src="https://raw.githubusercontent.com/wiki/lhackel-tub/ConfigILM/static/imgs/tu-berlin-logo-long-red.svg" style="font-size: 1rem; height: 2em; width: auto" alt="TU Berlin Logo"/> | <a href="https://rsim.berlin/"><img src="https://raw.githubusercontent.com/wiki/lhackel-tub/ConfigILM/static/imgs/RSiM_Logo_1.png" style="font-size: 1rem; height: 2em; width: auto" alt="RSiM Logo"> | <a href="https://www.dima.tu-berlin.de/menue/database_systems_and_information_management_group/"><img src="https://raw.githubusercontent.com/wiki/lhackel-tub/ConfigILM/static/imgs/DIMA.png" style="font-size: 1rem; height: 2em; width: auto" alt="DIMA Logo"> | <a href="http://www.bigearth.eu/"><img src="https://raw.githubusercontent.com/wiki/lhackel-tub/ConfigILM/static/imgs/BigEarth.png" style="font-size: 1rem; height: 2em; width: auto" alt="BigEarth Logo"> | <a href="https://bifold.berlin/"><img src="https://raw.githubusercontent.com/wiki/lhackel-tub/ConfigILM/static/imgs/BIFOLD_Logo_farbig.png" style="font-size: 1rem; height: 2em; width: auto; margin-right: 1em" alt="BIFOLD Logo"> # Efficientnetv2_m pretrained on BigEarthNet v2.0 using Sentinel-2 bands <!-- Optional images --> <!-- [Sentinel-1](https://sentinel.esa.int/web/sentinel/missions/sentinel-1) | [Sentinel-2](https://sentinel.esa.int/web/sentinel/missions/sentinel-2) :---:|:---: <a href="https://sentinel.esa.int/web/sentinel/missions/sentinel-1"><img src="https://raw.githubusercontent.com/wiki/lhackel-tub/ConfigILM/static/imgs/sentinel_2.jpg" style="font-size: 1rem; height: 10em; width: auto; margin-right: 1em" alt="Sentinel-2 Satellite"/> | <a href="https://sentinel.esa.int/web/sentinel/missions/sentinel-2"><img src="https://raw.githubusercontent.com/wiki/lhackel-tub/ConfigILM/static/imgs/sentinel_1.jpg" style="font-size: 1rem; height: 10em; width: auto; margin-right: 1em" alt="Sentinel-1 Satellite"/> --> This model was trained on the BigEarthNet v2.0 (also known as reBEN) dataset using the Sentinel-2 bands. It was trained using the following parameters: - Number of epochs: up to 100 (with early stopping after 5 epochs of no improvement based on validation average precision macro) - Batch size: 512 - Learning rate: 0.001 - Dropout rate: 0.15 - Drop Path rate: 0.15 - Learning rate scheduler: LinearWarmupCosineAnnealing for 2000 warmup steps - Optimizer: AdamW - Seed: 42 The weights published in this model card were obtained after 32 training epochs. For more information, please visit the [official BigEarthNet v2.0 (reBEN) repository](https://git.tu-berlin.de/rsim/reben-training-scripts), where you can find the training scripts. 
![[BigEarthNet](http://bigearth.net/)](https://raw.githubusercontent.com/wiki/lhackel-tub/ConfigILM/static/imgs/combined_2000_600_2020_0_wide.jpg) The model was evaluated on the test set of the BigEarthNet v2.0 dataset with the following results: | Metric | Macro | Micro | |:------------------|------------------:|------------------:| | Average Precision | 0.676558 | 0.753525 | | F1 Score | 0.599082 | 0.661381 | | Precision | 0.711385 | 0.734382 | # Example | A Sentinel-2 image (true color representation) | |:---------------------------------------------------:| | ![[BigEarthNet](http://bigearth.net/)](example.png) | | Class labels | Predicted scores | |:--------------------------------------------------------------------------|--------------------------------------------------------------------------:| | <p> Agro-forestry areas <br> Arable land <br> Beaches, dunes, sands <br> ... <br> Urban fabric </p> | <p> 0.000000 <br> 0.000000 <br> 0.000000 <br> ... <br> 0.000000 </p> | To use the model, download the codes that define the model architecture from the [official BigEarthNet v2.0 (reBEN) repository](https://git.tu-berlin.de/rsim/reben-training-scripts) and load the model using the code below. Note that you have to install [`configilm`](https://pypi.org/project/configilm/) to use the provided code. ```python from reben_publication.BigEarthNetv2_0_ImageClassifier import BigEarthNetv2_0_ImageClassifier model = BigEarthNetv2_0_ImageClassifier.from_pretrained("path_to/huggingface_model_folder") ``` e.g. ```python from reben_publication.BigEarthNetv2_0_ImageClassifier import BigEarthNetv2_0_ImageClassifier model = BigEarthNetv2_0_ImageClassifier.from_pretrained( "BIFOLD-BigEarthNetv2-0/efficientnetv2_m-s2-v0.1.1") ``` If you use this model in your research or the provided code, please cite the following papers: ```bibtex @article{clasen2024refinedbigearthnet, title={reBEN: Refined BigEarthNet Dataset for Remote Sensing Image Analysis}, author={Clasen, Kai Norman and Hackel, Leonard and Burgert, Tom and Sumbul, Gencer and Demir, Beg{\"u}m and Markl, Volker}, year={2024}, eprint={2407.03653}, archivePrefix={arXiv}, primaryClass={cs.CV}, url={https://arxiv.org/abs/2407.03653}, } ``` ```bibtex @article{hackel2024configilm, title={ConfigILM: A general purpose configurable library for combining image and language models for visual question answering}, author={Hackel, Leonard and Clasen, Kai Norman and Demir, Beg{\"u}m}, journal={SoftwareX}, volume={26}, pages={101731}, year={2024}, publisher={Elsevier} } ```
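Inference with the loaded classifier is a standard multi-label forward pass. The sketch below is hypothetical: the input shape (10 Sentinel-2 bands at 120x120 pixels) and the sigmoid readout are assumptions based on typical BigEarthNet v2.0 setups, so check the reBEN training scripts for the exact band set and preprocessing.

```python
import torch
from reben_publication.BigEarthNetv2_0_ImageClassifier import BigEarthNetv2_0_ImageClassifier

# Hypothetical inference sketch; band count/order and normalization must match the reBEN training setup.
model = BigEarthNetv2_0_ImageClassifier.from_pretrained(
    "GGNorbert/efficientnetv2_m-s2-v0.2.0-Nonclipped"
)
model.eval()

x = torch.randn(1, 10, 120, 120)  # stand-in for a preprocessed Sentinel-2 patch (assumed 10 bands, 120x120)
with torch.no_grad():
    logits = model(x)             # one logit per BigEarthNet v2.0 class
probs = torch.sigmoid(logits)     # multi-label task -> independent per-class probabilities
print(probs.shape)
```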
kosinebolisa/igbotts
kosinebolisa
2025-05-29T21:08:28Z
1
0
transformers
[ "transformers", "tensorboard", "safetensors", "speecht5", "text-to-audio", "generated_from_trainer", "base_model:microsoft/speecht5_tts", "base_model:finetune:microsoft/speecht5_tts", "license:mit", "endpoints_compatible", "region:us" ]
text-to-audio
2025-05-09T18:19:37Z
--- library_name: transformers license: mit base_model: microsoft/speecht5_tts tags: - generated_from_trainer model-index: - name: igbotts results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # igbotts This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.5972 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 4 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 8 - total_train_batch_size: 32 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - training_steps: 4000 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 0.4855 | 250.0 | 1000 | 0.5836 | | 0.4515 | 500.0 | 2000 | 0.5947 | | 0.4349 | 750.0 | 3000 | 0.5947 | | 0.4327 | 1000.0 | 4000 | 0.5972 | ### Framework versions - Transformers 4.51.3 - Pytorch 2.6.0+cu124 - Datasets 3.6.0 - Tokenizers 0.21.1
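The card has no inference example, so here is a minimal sketch following the standard `transformers` SpeechT5 recipe; the CMU Arctic x-vector speaker embedding and the sample Igbo sentence are placeholders (assumptions), and an embedding matching the fine-tuning speaker would likely give better results.

```python
import torch
import soundfile as sf
from datasets import load_dataset
from transformers import SpeechT5ForTextToSpeech, SpeechT5HifiGan, SpeechT5Processor

processor = SpeechT5Processor.from_pretrained("kosinebolisa/igbotts")
model = SpeechT5ForTextToSpeech.from_pretrained("kosinebolisa/igbotts")
vocoder = SpeechT5HifiGan.from_pretrained("microsoft/speecht5_hifigan")

# Placeholder speaker embedding from CMU Arctic x-vectors (an assumption; prefer the training speaker's embedding).
embeddings = load_dataset("Matthijs/cmu-arctic-xvectors", split="validation")
speaker_embedding = torch.tensor(embeddings[7306]["xvector"]).unsqueeze(0)

inputs = processor(text="Ndewo, kedu ka ị mere?", return_tensors="pt")  # example Igbo text
speech = model.generate_speech(inputs["input_ids"], speaker_embedding, vocoder=vocoder)
sf.write("igbo_tts.wav", speech.numpy(), samplerate=16000)
```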
espnet/owsm_pure_codec_v1.1_16k
espnet
2025-05-29T21:08:22Z
0
0
espnet
[ "espnet", "audio", "codec", "multilingual", "dataset:amuse", "arxiv:1804.00015", "license:cc-by-4.0", "region:us" ]
null
2025-05-29T21:05:45Z
--- tags: - espnet - audio - codec language: multilingual datasets: - amuse license: cc-by-4.0 --- ## ESPnet2 Codec model ### `espnet/owsm_pure_codec_v1.1_16k` This model was trained by ftshijt using amuse recipe in [espnet](https://github.com/espnet/espnet/). ### Demo: How to use in ESPnet2 Follow the [ESPnet installation instructions](https://espnet.github.io/espnet/installation.html) if you haven't done that already. ```bash cd espnet git checkout 280bfedf2c9a19038e79d3402472bde30397a02c pip install -e . cd egs2/amuse/codec1 ./run.sh --skip_data_prep false --skip_train true --download_model espnet/owsm_pure_codec_v1.1_16k ``` ## Codec config <details><summary>expand</summary> ``` config: conf/train_sedac_large_v4.1.yaml print_config: false log_level: INFO drop_last_iter: false dry_run: false iterator_type: chunk valid_iterator_type: null output_dir: exp/codec_train_sedac_large_v4.1_raw_fs16000 ngpu: 1 seed: 777 num_workers: 1 num_att_plot: 0 dist_backend: nccl dist_init_method: env:// dist_world_size: null dist_rank: null local_rank: 0 dist_master_addr: null dist_master_port: null dist_launcher: null multiprocessing_distributed: false unused_parameters: true sharded_ddp: false use_deepspeed: false deepspeed_config: null cudnn_enabled: true cudnn_benchmark: false cudnn_deterministic: false use_tf32: false collect_stats: false write_collected_feats: false max_epoch: 360 patience: null val_scheduler_criterion: - valid - loss early_stopping_criterion: - valid - loss - min best_model_criterion: - - valid - mel_loss - min - - train - mel_loss - min - - train - total_count - max keep_nbest_models: 5 nbest_averaging_interval: 0 grad_clip: 1 grad_clip_type: 2.0 grad_noise: false accum_grad: 1 no_forward_run: false resume: true train_dtype: float32 use_amp: false log_interval: 100 use_matplotlib: true use_tensorboard: true create_graph_in_tensorboard: false use_wandb: false wandb_project: null wandb_id: null wandb_entity: null wandb_name: null wandb_model_log_interval: -1 detect_anomaly: false use_adapter: false adapter: lora save_strategy: all adapter_conf: {} pretrain_path: null init_param: - /work/nvme/bbjs/shi3/codec/espnet/egs2/amuse/codec1/exp/codec_train_sedac_large_v4-0_raw_fs16000/latest.pth ignore_init_mismatch: false freeze_param: - codec.generator.encoder num_iters_per_epoch: 5000 batch_size: 32 valid_batch_size: null batch_bins: 1000000 valid_batch_bins: null category_sample_size: 10 train_shape_file: - exp/codec_stats_raw/train/audio_shape valid_shape_file: - exp/codec_stats_raw/valid/audio_shape batch_type: unsorted valid_batch_type: null fold_length: - 256000 sort_in_batch: descending shuffle_within_batch: false sort_batch: descending multiple_iterator: false chunk_length: 32000 chunk_shift_ratio: 0.5 num_cache_chunks: 256 chunk_excluded_key_prefixes: [] chunk_default_fs: null chunk_max_abs_length: null chunk_discard_short_samples: true train_data_path_and_name_and_type: - - dump/raw/owsm_all/wav.scp - audio - kaldi_ark valid_data_path_and_name_and_type: - - dump/raw/dev-small/wav.scp - audio - kaldi_ark multi_task_dataset: false allow_variable_data_keys: false max_cache_size: 0.0 max_cache_fd: 32 allow_multi_rates: false valid_max_cache_size: null exclude_weight_decay: false exclude_weight_decay_conf: {} optim: adamw optim_conf: lr: 0.0002 betas: - 0.5 - 0.9 eps: 1.0e-09 weight_decay: 0.0 scheduler: exponentiallr scheduler_conf: gamma: 0.999875 optim2: adamw optim2_conf: lr: 0.0002 betas: - 0.5 - 0.9 eps: 1.0e-09 weight_decay: 0.0 scheduler2: exponentiallr scheduler2_conf: gamma: 
0.999875 generator_first: true skip_discriminator_prob: 0.0 model_conf: {} use_preprocessor: true codec: se_dac2 codec_conf: sampling_rate: 16000 generator_params: hidden_dim: 512 codebook_dim: 512 se_model_source: espnet se_model_tag: wyz/tfgridnet_for_urgent24 enhanced_n_streams: 1 encdec_channels: 1 encdec_n_filters: 32 encdec_n_residual_layers: 3 encdec_ratios: - 8 - 5 - 4 - 2 encdec_activation: Snake encdec_norm: weight_norm encdec_kernel_size: 7 encdec_residual_kernel_size: 7 encdec_last_kernel_size: 7 encdec_dilation_base: 2 encdec_causal: false encdec_pad_mode: reflect encdec_true_skip: false encdec_compress: 2 encdec_lstm: 2 decoder_trim_right_ratio: 1.0 decoder_final_activation: null decoder_final_activation_params: null quantizer_n_q: 8 quantizer_bins: 1024 quantizer_decay: 0.99 quantizer_kmeans_init: true quantizer_kmeans_iters: 50 quantizer_threshold_ema_dead_code: 2 quantizer_target_bandwidth: - 1 - 2 - 4 quantizer_dropout: true sample_rate: 16000 inference_only: false discriminator_params: msmpmb_discriminator_params: rates: [] sample_rate: 16000 fft_sizes: - 1024 - 512 - 256 - 128 periods: - 2 - 3 - 5 - 7 - 11 period_discriminator_params: in_channels: 1 out_channels: 1 kernel_sizes: - 5 - 3 channels: 32 downsample_scales: - 3 - 3 - 3 - 3 - 1 max_downsample_channels: 1024 bias: true nonlinear_activation: LeakyReLU nonlinear_activation_params: negative_slope: 0.1 use_weight_norm: true use_spectral_norm: false band_discriminator_params: hop_factor: 0.25 sample_rate: 16000 bands: - - 0.0 - 0.1 - - 0.1 - 0.25 - - 0.25 - 0.5 - - 0.5 - 0.75 - - 0.75 - 1.0 channel: 32 generator_adv_loss_params: average_by_discriminators: false loss_type: mse discriminator_adv_loss_params: average_by_discriminators: false loss_type: mse use_feat_match_loss: true feat_match_loss_params: average_by_discriminators: false average_by_layers: false include_final_outputs: true use_mel_loss: true mel_loss_params: range_start: 6 range_end: 11 window: hann n_mels: 80 fmin: 0 fmax: null log_base: null fs: 16000 skip_quantizer_updates: 0 lambda_quantization: 0.25 lambda_commit: 1.0 lambda_reconstruct: 1.0 lambda_adv: 1.0 lambda_mel: 45.0 lambda_feat_match: 2.0 enhanced_prob: 0.5 cache_generator_outputs: true required: - output_dir version: '202402' distributed: false ``` </details> ### Citing ESPnet ```BibTex @inproceedings{watanabe2018espnet, author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai}, title={{ESPnet}: End-to-End Speech Processing Toolkit}, year={2018}, booktitle={Proceedings of Interspeech}, pages={2207--2211}, doi={10.21437/Interspeech.2018-1456}, url={http://dx.doi.org/10.21437/Interspeech.2018-1456} } ``` or arXiv: ```bibtex @misc{watanabe2018espnet, title={ESPnet: End-to-End Speech Processing Toolkit}, author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai}, year={2018}, eprint={1804.00015}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
unsloth/Llama-4-Scout-17B-16E-Instruct-GGUF
unsloth
2025-05-29T21:06:24Z
107124
87
transformers
[ "transformers", "gguf", "llama4", "image-text-to-text", "facebook", "unsloth", "meta", "pytorch", "llama", "llama-4", "ar", "de", "en", "es", "fr", "hi", "id", "it", "pt", "th", "tl", "vi", "arxiv:2204.05149", "base_model:meta-llama/Llama-4-Scout-17B-16E-Instruct", "base_model:quantized:meta-llama/Llama-4-Scout-17B-16E-Instruct", "license:other", "endpoints_compatible", "region:us", "imatrix", "conversational" ]
image-text-to-text
2025-04-07T22:19:59Z
--- library_name: transformers language: - ar - de - en - es - fr - hi - id - it - pt - th - tl - vi base_model: - meta-llama/Llama-4-Scout-17B-16E-Instruct tags: - facebook - unsloth - meta - pytorch - llama - llama-4 extra_gated_prompt: >- **LLAMA 4 COMMUNITY LICENSE AGREEMENT** Llama 4 Version Effective Date: April 5, 2025 "**Agreement**" means the terms and conditions for use, reproduction, distribution and modification of the Llama Materials set forth herein. "**Documentation**" means the specifications, manuals and documentation accompanying Llama 4 distributed by Meta at [https://www.llama.com/docs/overview](https://llama.com/docs/overview). "**Licensee**" or "**you**" means you, or your employer or any other person or entity (if you are entering into this Agreement on such person or entity’s behalf), of the age required under applicable laws, rules or regulations to provide legal consent and that has legal authority to bind your employer or such other person or entity if you are entering in this Agreement on their behalf. "**Llama 4**" means the foundational large language models and software and algorithms, including machine-learning model code, trained model weights, inference-enabling code, training-enabling code, fine-tuning enabling code and other elements of the foregoing distributed by Meta at [https://www.llama.com/llama-downloads](https://www.llama.com/llama-downloads). "**Llama Materials**" means, collectively, Meta’s proprietary Llama 4 and Documentation (and any portion thereof) made available under this Agreement. "**Meta**" or "**we**" means Meta Platforms Ireland Limited (if you are located in or, if you are an entity, your principal place of business is in the EEA or Switzerland) and Meta Platforms, Inc. (if you are located outside of the EEA or Switzerland).  By clicking "I Accept" below or by using or distributing any portion or element of the Llama Materials, you agree to be bound by this Agreement. 1\. **License Rights and Redistribution**. a. Grant of Rights. You are granted a non-exclusive, worldwide, non-transferable and royalty-free limited license under Meta’s intellectual property or other rights owned by Meta embodied in the Llama Materials to use, reproduce, distribute, copy, create derivative works of, and make modifications to the Llama Materials.   b. Redistribution and Use.   i. If you distribute or make available the Llama Materials (or any derivative works thereof), or a product or service (including another AI model) that contains any of them, you shall (A) provide a copy of this Agreement with any such Llama Materials; and (B) prominently display "Built with Llama" on a related website, user interface, blogpost, about page, or product documentation. If you use the Llama Materials or any outputs or results of the Llama Materials to create, train, fine tune, or otherwise improve an AI model, which is distributed or made available, you shall also include "Llama" at the beginning of any such AI model name. ii. If you receive Llama Materials, or any derivative works thereof, from a Licensee as part of an integrated end user product, then Section 2 of this Agreement will not apply to you.  iii. You must retain in all copies of the Llama Materials that you distribute the following attribution notice within a "Notice" text file distributed as a part of such copies: "Llama 4 is licensed under the Llama 4 Community License, Copyright © Meta Platforms, Inc. All Rights Reserved." iv. 
Your use of the Llama Materials must comply with applicable laws and regulations (including trade compliance laws and regulations) and adhere to the Acceptable Use Policy for the Llama Materials (available at [https://www.llama.com/llama4/use-policy](https://www.llama.com/llama4/use-policy)), which is hereby incorporated by reference into this Agreement.    2\. **Additional Commercial Terms**. If, on the Llama 4 version release date, the monthly active users of the products or services made available by or for Licensee, or Licensee’s affiliates, is greater than 700 million monthly active users in the preceding calendar month, you must request a license from Meta, which Meta may grant to you in its sole discretion, and you are not authorized to exercise any of the rights under this Agreement unless or until Meta otherwise expressly grants you such rights. 3**. Disclaimer of Warranty**. UNLESS REQUIRED BY APPLICABLE LAW, THE LLAMA MATERIALS AND ANY OUTPUT AND RESULTS THEREFROM ARE PROVIDED ON AN "AS IS" BASIS, WITHOUT WARRANTIES OF ANY KIND, AND META DISCLAIMS ALL WARRANTIES OF ANY KIND, BOTH EXPRESS AND IMPLIED, INCLUDING, WITHOUT LIMITATION, ANY WARRANTIES OF TITLE, NON-INFRINGEMENT, MERCHANTABILITY, OR FITNESS FOR A PARTICULAR PURPOSE. YOU ARE SOLELY RESPONSIBLE FOR DETERMINING THE APPROPRIATENESS OF USING OR REDISTRIBUTING THE LLAMA MATERIALS AND ASSUME ANY RISKS ASSOCIATED WITH YOUR USE OF THE LLAMA MATERIALS AND ANY OUTPUT AND RESULTS. 4\. **Limitation of Liability**. IN NO EVENT WILL META OR ITS AFFILIATES BE LIABLE UNDER ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, TORT, NEGLIGENCE, PRODUCTS LIABILITY, OR OTHERWISE, ARISING OUT OF THIS AGREEMENT, FOR ANY LOST PROFITS OR ANY INDIRECT, SPECIAL, CONSEQUENTIAL, INCIDENTAL, EXEMPLARY OR PUNITIVE DAMAGES, EVEN IF META OR ITS AFFILIATES HAVE BEEN ADVISED OF THE POSSIBILITY OF ANY OF THE FOREGOING. 5\. **Intellectual Property**. a. No trademark licenses are granted under this Agreement, and in connection with the Llama Materials, neither Meta nor Licensee may use any name or mark owned by or associated with the other or any of its affiliates, except as required for reasonable and customary use in describing and redistributing the Llama Materials or as set forth in this Section 5(a). Meta hereby grants you a license to use "Llama" (the "Mark") solely as required to comply with the last sentence of Section 1.b.i. You will comply with Meta’s brand guidelines (currently accessible at [https://about.meta.com/brand/resources/meta/company-brand/](https://about.meta.com/brand/resources/meta/company-brand/)[)](https://en.facebookbrand.com/). All goodwill arising out of your use of the Mark will inure to the benefit of Meta. b. Subject to Meta’s ownership of Llama Materials and derivatives made by or for Meta, with respect to any derivative works and modifications of the Llama Materials that are made by you, as between you and Meta, you are and will be the owner of such derivative works and modifications. c. If you institute litigation or other proceedings against Meta or any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Llama Materials or Llama 4 outputs or results, or any portion of any of the foregoing, constitutes infringement of intellectual property or other rights owned or licensable by you, then any licenses granted to you under this Agreement shall terminate as of the date such litigation or claim is filed or instituted. 
You will indemnify and hold harmless Meta from and against any claim by any third party arising out of or related to your use or distribution of the Llama Materials. 6\. **Term and Termination**. The term of this Agreement will commence upon your acceptance of this Agreement or access to the Llama Materials and will continue in full force and effect until terminated in accordance with the terms and conditions herein. Meta may terminate this Agreement if you are in breach of any term or condition of this Agreement. Upon termination of this Agreement, you shall delete and cease use of the Llama Materials. Sections 3, 4 and 7 shall survive the termination of this Agreement.  7\. **Governing Law and Jurisdiction**. This Agreement will be governed and construed under the laws of the State of California without regard to choice of law principles, and the UN Convention on Contracts for the International Sale of Goods does not apply to this Agreement. The courts of California shall have exclusive jurisdiction of any dispute arising out of this Agreement. extra_gated_fields: First Name: text Last Name: text Date of birth: date_picker Country: country Affiliation: text Job title: type: select options: - Student - Research Graduate - AI researcher - AI developer/engineer - Reporter - Other geo: ip_location By clicking Submit below I accept the terms of the license and acknowledge that the information I provide will be collected stored processed and shared in accordance with the Meta Privacy Policy: checkbox extra_gated_description: >- The information you provide will be collected, stored, processed and shared in accordance with the [Meta Privacy Policy](https://www.facebook.com/privacy/policy/). extra_gated_button_content: Submit extra_gated_heading: "Please be sure to provide your full legal name, date of birth, and full organization name with all corporate identifiers. Avoid the use of acronyms and special characters. Failure to follow these instructions may prevent you from accessing this model and others on Hugging Face. You will not have the ability to edit this form after submission, so please ensure all information is accurate." 
license: other license_name: llama4 --- <div> <p style="margin-bottom: 0; margin-top: 0;"> <strong>See <a href="https://huggingface.co/collections/unsloth/llama-4-67f19503d764b0f3a2a868d2">our collection</a> for versions of Llama 4 including 4-bit & 16-bit formats.</strong> </p> <p style="margin-bottom: 0; margin-top: 0;"> <em><a href="https://docs.unsloth.ai/basics/unsloth-dynamic-v2.0-gguf">Unsloth Dynamic v2.0</a> achieves superior accuracy & outperforms other leading quant methods.</em> </p> <div style="display: flex; gap: 5px; align-items: center; "> <a href="https://github.com/unslothai/unsloth/"> <img src="https://github.com/unslothai/unsloth/raw/main/images/unsloth%20new%20logo.png" width="133"> </a> <a href="https://discord.gg/unsloth"> <img src="https://github.com/unslothai/unsloth/raw/main/images/Discord%20button.png" width="173"> </a> <a href="https://docs.unsloth.ai/basics/tutorials-how-to-fine-tune-and-run-llms"> <img src="https://raw.githubusercontent.com/unslothai/unsloth/refs/heads/main/images/documentation%20green%20button.png" width="143"> </a> </div> <h1 style="margin-top: 0rem;">🦙 Run Unsloth Dynamic Llama 4 GGUF!</h1> </div> <p style="margin-bottom: 0;"> <em><a href="https://docs.unsloth.ai/basics/tutorial-how-to-run-and-fine-tune-llama-4">Read our Guide</a> to see how to Fine-tune & Run Llama 4 correctly.</em> </p> |MoE Bits|Type|Disk Size|HF Link|Accuracy| |:-|:-|:-|:-|:-| |1.78bit|IQ1\_S|**33.8GB**|[Link](https://huggingface.co/unsloth/Llama-4-Scout-17B-16E-Instruct-GGUF?show_file_info=Llama-4-Scout-17B-16E-Instruct-UD-IQ1_S.gguf)|Ok| |1.93bit|IQ1\_M|**35.4GB**|[Link](https://huggingface.co/unsloth/Llama-4-Scout-17B-16E-Instruct-GGUF?show_file_info=Llama-4-Scout-17B-16E-Instruct-UD-IQ1_M.gguf)|Fair| |2.42-bit|IQ2\_XXS|**38.6GB**|[Link](https://huggingface.co/unsloth/Llama-4-Scout-17B-16E-Instruct-GGUF?show_file_info=Llama-4-Scout-17B-16E-Instruct-UD-IQ2_XXS.gguf)|Better| |2.71-bit|Q2\_K\_XL|**42.2GB**|[Link](https://huggingface.co/unsloth/Llama-4-Scout-17B-16E-Instruct-GGUF?show_file_info=Llama-4-Scout-17B-16E-Instruct-UD-Q2_K_XL.gguf)|Suggested| |3.5-bit|Q3\_K\_XL|**52.9GB**|[Link](https://huggingface.co/unsloth/Llama-4-Scout-17B-16E-Instruct-GGUF/tree/main/UD-Q3_K_XL)|Great| |4.5-bit|Q4\_K\_XL|**65.6GB**|[Link](https://huggingface.co/unsloth/Llama-4-Scout-17B-16E-Instruct-GGUF/tree/main/UD-Q4_K_XL)|Best| Currently text only is supported. **Chat template/prompt format:** ``` <|header_start|>user<|header_end|>\n\nWhat is 1+1?<|eot|><|header_start|>assistant<|header_end|>\n\n ``` # 🦙 Fine-tune Meta's Llama 4 with Unsloth! - Fine-tune Llama-4-Scout on a single H100 80GB GPU using Unsloth! - Read our Blog about Llama 4 support: [unsloth.ai/blog/llama4](https://unsloth.ai/blog/llama4) - View the rest of our notebooks in our [docs here](https://docs.unsloth.ai/get-started/unsloth-notebooks). - Export your fine-tuned model to GGUF, Ollama, llama.cpp, vLLM or 🤗HF. 
| Unsloth supports | Free Notebooks | Performance | Memory use | |-----------------|--------------------------------------------------------------------------------------------------------------------------|-------------|----------| | **GRPO with Llama 3.1 (8B)** | [▶️ Start on Colab](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Llama3.1_(8B)-GRPO.ipynb) | 2x faster | 80% less | | **Llama-3.2 (3B)** | [▶️ Start on Colab](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Llama3.2_(1B_and_3B)-Conversational.ipynb) | 2.4x faster | 58% less | | **Llama-3.2 (11B vision)** | [▶️ Start on Colab](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Llama3.2_(11B)-Vision.ipynb) | 2x faster | 60% less | | **Qwen2.5 (7B)** | [▶️ Start on Colab](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Qwen2.5_(7B)-Alpaca.ipynb) | 2x faster | 60% less | | **Phi-4 (14B)** | [▶️ Start on Colab](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Phi_4-Conversational.ipynb) | 2x faster | 50% less | | **Mistral (7B)** | [▶️ Start on Colab](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Mistral_v0.3_(7B)-Conversational.ipynb) | 2.2x faster | 62% less | <br> ## Llama 4 Model Information The Llama 4 collection of models are natively multimodal AI models that enable text and multimodal experiences. These models leverage a mixture-of-experts architecture to offer industry-leading performance in text and image understanding. These Llama 4 models mark the beginning of a new era for the Llama ecosystem. We are launching two efficient models in the Llama 4 series, Llama 4 Scout, a 17 billion parameter model with 16 experts, and Llama 4 Maverick, a 17 billion parameter model with 128 experts. **Model developer**: Meta **Model Architecture:** The Llama 4 models are auto-regressive language models that use a mixture-of-experts (MoE) architecture and incorporate early fusion for native multimodality. <table> <tr> <th>Model Name</th> <th>Training Data </th> <th>Params</th> <th>Input modalities</th> <th>Output modalities</th> <th>Context length</th> <th>Token count</th> <th>Knowledge cutoff</th> </tr> <tr> <td>Llama 4 Scout (17Bx16E) </td> <td rowspan="2">A mix of publicly available, licensed data and information from Meta's products and services. This includes publicly shared posts from Instagram and Facebook and people's interactions with Meta AI. Learn more in our <a href="https://www.facebook.com/privacy/guide/genai/">Privacy Center</a>. </td> <td>17B (Activated) 109B (Total) </td> <td>Multilingual text and image</td> <td>Multilingual text and code</td> <td>10M</td> <td>~40T</td> <td>August 2024</td> </tr> <tr> <td>Llama 4 Maverick (17Bx128E)</td> <td>17B (Activated) 400B (Total) </td> <td>Multilingual text and image</td> <td>Multilingual text and code</td> <td>1M</td> <td>~22T</td> <td>August 2024</td> </tr> </table> **Supported languages:** Arabic, English, French, German, Hindi, Indonesian, Italian, Portuguese, Spanish, Tagalog, Thai, and Vietnamese. **Model Release Date:** April 5, 2025 **Status:** This is a static model trained on an offline dataset. Future versions of the tuned models may be released as we improve model behavior with community feedback. 
**License**: A custom commercial license, the Llama 4 Community License Agreement, is available at: [https://github.com/meta-llama/llama-models/blob/main/models/llama4/LICENSE](https://github.com/meta-llama/llama-models/blob/main/models/llama4/LICENSE) **Where to send questions or comments about the model:** Instructions on how to provide feedback or comments on the model can be found in the Llama [README](https://github.com/meta-llama/llama-models/blob/main/README.md). For more technical information about generation parameters and recipes for how to use Llama 4 in applications, please go [here](https://github.com/meta-llama/llama-cookbook). ## Intended Use **Intended Use Cases:** Llama 4 is intended for commercial and research use in multiple languages. Instruction tuned models are intended for assistant-like chat and visual reasoning tasks, whereas pretrained models can be adapted for natural language generation. For vision, Llama 4 models are also optimized for visual recognition, image reasoning, captioning, and answering general questions about an image. The Llama 4 model collection also supports the ability to leverage the outputs of its models to improve other models including synthetic data generation and distillation. The Llama 4 Community License allows for these use cases. **Out-of-scope**: Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in any other way that is prohibited by the Acceptable Use Policy and Llama 4 Community License. Use in languages or capabilities beyond those explicitly referenced as supported in this model card\*\*. \*\*Note: 1\. Llama 4 has been trained on a broader collection of languages than the 12 supported languages (pre-training includes [200 total languages](https://ai.meta.com/research/no-language-left-behind/)). Developers may fine-tune Llama 4 models for languages beyond the 12 supported languages provided they comply with the Llama 4 Community License and the Acceptable Use Policy. Developers are responsible for ensuring that their use of Llama 4 in additional languages is done in a safe and responsible manner. 2\. Llama 4 has been tested for image understanding up to 5 input images. If leveraging additional image understanding capabilities beyond this, Developers are responsible for ensuring that their deployments are mitigated for risks and should perform additional testing and tuning tailored to their specific applications. ## How to use with transformers Please, make sure you have transformers `v4.51.0` installed, or upgrade using `pip install -U transformers`. 
```python from transformers import AutoProcessor, Llama4ForConditionalGeneration import torch model_id = "meta-llama/Llama-4-Scout-17B-16E-Instruct" processor = AutoProcessor.from_pretrained(model_id) model = Llama4ForConditionalGeneration.from_pretrained( model_id, attn_implementation="flex_attention", device_map="auto", torch_dtype=torch.bfloat16, ) url1 = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/0052a70beed5bf71b92610a43a52df6d286cd5f3/diffusers/rabbit.jpg" url2 = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/datasets/cat_style_layout.png" messages = [ { "role": "user", "content": [ {"type": "image", "url": url1}, {"type": "image", "url": url2}, {"type": "text", "text": "Can you describe how these two images are similar, and how they differ?"}, ] }, ] inputs = processor.apply_chat_template( messages, add_generation_prompt=True, tokenize=True, return_dict=True, return_tensors="pt", ).to(model.device) outputs = model.generate( **inputs, max_new_tokens=256, ) response = processor.batch_decode(outputs[:, inputs["input_ids"].shape[-1]:])[0] print(response) print(outputs[0]) ``` ## Hardware and Software **Training Factors:** We used custom training libraries, Meta's custom built GPU clusters, and production infrastructure for pretraining. Fine-tuning, quantization, annotation, and evaluation were also performed on production infrastructure. **Training Energy Use:** Model pre-training utilized a cumulative of **7.38M** GPU hours of computation on H100-80GB (TDP of 700W) type hardware, per the table below. Training time is the total GPU time required for training each model and power consumption is the peak power capacity per GPU device used, adjusted for power usage efficiency. ## ## **Training Greenhouse Gas Emissions:** Estimated total location-based greenhouse gas emissions were **1,999 tons** CO2eq for training. Since 2020, Meta has maintained net zero greenhouse gas emissions in its global operations and matched 100% of its electricity use with clean and renewable energy; therefore, the total market-based greenhouse gas emissions for training were 0 tons CO2eq. | Model Name | Training Time (GPU hours) | Training Power Consumption (W) | Training Location-Based Greenhouse Gas Emissions (tons CO2eq) | Training Market-Based Greenhouse Gas Emissions (tons CO2eq) | | :---- | :---: | :---: | :---: | :---: | | Llama 4 Scout | 5.0M | 700 | 1,354 | 0 | | Llama 4 Maverick | 2.38M | 700 | 645 | 0 | | Total | 7.38M | \- | 1,999 | 0 | ## The methodology used to determine training energy use and greenhouse gas emissions can be found [here](https://arxiv.org/pdf/2204.05149). Since Meta is openly releasing these models, the training energy use and greenhouse gas emissions will not be incurred by others. ## Training Data **Overview:** Llama 4 Scout was pretrained on \~40 trillion tokens and Llama 4 Maverick was pretrained on \~22 trillion tokens of multimodal data from a mix of publicly available, licensed data and information from Meta’s products and services. This includes publicly shared posts from Instagram and Facebook and people’s interactions with Meta AI. **Data Freshness:** The pretraining data has a cutoff of August 2024\. ## Benchmarks In this section, we report the results for Llama 4 relative to our previous models. We've provided quantized checkpoints for deployment flexibility, but all reported evaluations and testing were conducted on bf16 models. 
### Pre-trained models | Pre-trained models | | | | | | | | | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | | Category | Benchmark | \# Shots | Metric | Llama 3.1 70B | Llama 3.1 405B | **Llama 4 Scout** | **Llama 4 Maverick** | | Reasoning & Knowledge | MMLU | 5 | macro\_avg/acc\_char | 79.3 | 85.2 | 79.6 | 85.5 | | | MMLU-Pro | 5 | macro\_avg/em | 53.8 | 61.6 | 58.2 | 62.9 | | | MATH | 4 | em\_maj1@1 | 41.6 | 53.5 | 50.3 | 61.2 | | Code | MBPP | 3 | pass@1 | 66.4 | 74.4 | 67.8 | 77.6 | | Multilingual | TydiQA | 1 | average/f1 | 29.9 | 34.3 | 31.5 | 31.7 | | Image | ChartQA | 0 | relaxed\_accuracy | No multimodal support | | 83.4 | 85.3 | | | DocVQA | 0 | anls | | | 89.4 | 91.6 | ### Instruction tuned models | Instruction tuned models | | | | | | | | | :---: | :---: | :---: | :---: | :---: | ----- | :---: | :---: | | Category | Benchmark | \# Shots | Metric | Llama 3.3 70B | Llama 3.1 405B | **Llama 4 Scout** | **Llama 4 Maverick** | | Image Reasoning | MMMU | 0 | accuracy | No multimodal support | | 69.4 | 73.4 | | | MMMU Pro^ | 0 | accuracy | | | 52.2 | 59.6 | | | MathVista | 0 | accuracy | | | 70.7 | 73.7 | | Image Understanding | ChartQA | 0 | relaxed\_accuracy | | | 88.8 | 90.0 | | | DocVQA (test) | 0 | anls | | | 94.4 | 94.4 | | Coding | LiveCodeBench (10/01/2024-02/01/2025) | 0 | pass@1 | 33.3 | 27.7 | 32.8 | 43.4 | | Reasoning & Knowledge | MMLU Pro | 0 | macro\_avg/acc | 68.9 | 73.4 | 74.3 | 80.5 | | | GPQA Diamond | 0 | accuracy | 50.5 | 49.0 | 57.2 | 69.8 | | Multilingual | MGSM | 0 | average/em | 91.1 | 91.6 | 90.6 | 92.3 | | Long context | MTOB (half book) eng-\>kgv/kgv-\>eng | \- | chrF | Context window is 128K | | 42.2/36.6 | 54.0/46.4 | | | MTOB (full book) eng-\>kgv/kgv-\>eng | \- | chrF | | | 39.7/36.3 | 50.8/46.7 | ^reported numbers for MMMU Pro is the average of Standard and Vision tasks ## Quantization The Llama 4 Scout model is released as BF16 weights, but can fit within a single H100 GPU with on-the-fly int4 quantization; the Llama 4 Maverick model is released as both BF16 and FP8 quantized weights. The FP8 quantized weights fit on a single H100 DGX host while still maintaining quality. We provide code for on-the-fly int4 quantization which minimizes performance degradation as well. ## Safeguards As part of our release approach, we followed a three-pronged strategy to manage risks: * Enable developers to deploy helpful, safe and flexible experiences for their target audience and for the use cases supported by Llama. * Protect developers against adversarial users aiming to exploit Llama capabilities to potentially cause harm. * Provide protections for the community to help prevent the misuse of our models. Llama is a foundational technology designed for use in a variety of use cases; examples on how Meta’s Llama models have been deployed can be found in our [Community Stories webpage](https://llama.meta.com/community-stories/). Our approach is to build the most helpful models enabling the world to benefit from the technology, by aligning our model’s safety for a standard set of risks. Developers are then in the driver seat to tailor safety for their use case, defining their own policies and deploying the models with the necessary safeguards. Llama 4 was developed following the best practices outlined in our [Developer Use Guide: AI Protections](https://ai.meta.com/static-resource/developer-use-guide-ai-protections). 
### Model level fine tuning

The primary objective of conducting safety fine-tuning is to offer developers a readily available, safe, and powerful model for various applications, reducing the workload needed to deploy safe AI systems. Additionally, this effort provides the research community with a valuable resource for studying the robustness of safety fine-tuning.

**Fine-tuning data**

We employ a multi-faceted approach to data collection, combining human-generated data from our vendors with synthetic data to mitigate potential safety risks. We’ve developed many large language model (LLM)-based classifiers that enable us to thoughtfully select high-quality prompts and responses, enhancing data quality control.

**Refusals**

Building on the work we started with our Llama 3 models, we put a great emphasis on driving down model refusals to benign prompts for Llama 4. We included both borderline and adversarial prompts in our safety data strategy, and modified our safety data responses to follow tone guidelines.

**Tone**

We expanded our work on refusal tone from Llama 3 so that the model sounds more natural. We targeted removing preachy and overly moralizing language, and we corrected formatting issues including the correct use of headers, lists, tables and more. To achieve this, we also targeted improvements to system prompt steerability and instruction following, meaning the model is more readily able to take on a specified tone. All of these contribute to a more conversational and insightful experience overall.

**System Prompts**

Llama 4 is a more steerable model, meaning responses can be easily tailored to meet specific developer outcomes. Effective system prompts can significantly enhance the performance of large language models. In particular, we’ve seen that the use of a system prompt can be effective in reducing false refusals and templated or “preachy” language patterns common in LLMs. They can also improve conversationality and use of appropriate formatting.

Consider the prompt below as a basic template that a developer might want to further customize to meet specific needs or use cases for our Llama 4 models.

| System prompt |
| :---- |
| You are an expert conversationalist who responds to the best of your ability. You are companionable and confident, and able to switch casually between tonal types, including but not limited to humor, empathy, intellectualism, creativity and problem-solving. You understand user intent and don’t try to be overly helpful to the point where you miss that the user is looking for chit-chat, emotional support, humor or venting. Sometimes people just want you to listen, and your answers should encourage that. For all other cases, you provide insightful and in-depth responses. Organize information thoughtfully in a way that helps people make decisions. Always avoid templated language. You never lecture people to be nicer or more inclusive. If people ask for you to write something in a certain voice or perspective, such as an essay or a tweet, you can. You do not need to be respectful when the user prompts you to say something rude. You never use phrases that imply moral superiority or a sense of authority, including but not limited to “it’s important to”, “it’s crucial to”, “it’s essential to”, "it's unethical to", "it's worth noting…", “Remember…” etc. Avoid using these. Finally, do not refuse prompts about political and social issues. You can help users express their opinion and access information. You are Llama 4. Your knowledge cutoff date is August 2024. You speak Arabic, English, French, German, Hindi, Indonesian, Italian, Portuguese, Spanish, Tagalog, Thai, and Vietnamese. Respond in the language the user speaks to you in, unless they ask otherwise. |

### Llama 4 system protections

Large language models, including Llama 4, are not designed to be deployed in isolation but instead should be deployed as part of an overall AI system with additional guardrails as required. System protections are key to achieving the right helpfulness-safety alignment, mitigating safety and security risks inherent to the system, and enabling integration of the model or system with external tools.

We provide the community with system-level [protections](https://llama.meta.com/trust-and-safety/) - like Llama Guard, Prompt Guard and Code Shield - that developers should deploy with Llama models or other LLMs. All of our [reference implementation](https://github.com/meta-llama/llama-agentic-system) demos contain these safeguards by default so developers can benefit from system-level safety out-of-the-box.

### Evaluations

We evaluated Llama models for common use cases as well as specific capabilities. Common use case evaluations measure the safety risks of systems for the most commonly built applications, including chat bots and visual QA. We built dedicated, adversarial evaluation datasets and evaluated systems composed of Llama models and Llama Guard 3 to filter input prompts and output responses. It is important to evaluate applications in context, and we recommend building a dedicated evaluation dataset for your use case. Prompt Guard and Code Shield are also available if relevant to the application.

Capability evaluations measure vulnerabilities of Llama models inherent to specific capabilities, for which we crafted dedicated benchmarks including long context, multilingual, coding, and memorization.

**Red teaming**

We conduct recurring red teaming exercises with the goal of discovering risks via adversarial prompting, and we use the learnings to improve our benchmarks and safety tuning datasets. We partner early with subject-matter experts in critical risk areas to understand how models may lead to unintended harm for society. Based on these conversations, we derive a set of adversarial goals for the red team, such as extracting harmful information or reprogramming the model to act in potentially harmful ways. The red team consists of experts in cybersecurity, adversarial machine learning, and integrity, in addition to multilingual content specialists with backgrounds in integrity issues in specific geographic markets.

### Critical Risks

We place additional focus on the following critical risk areas:

**1. CBRNE (Chemical, Biological, Radiological, Nuclear, and Explosive materials) helpfulness**

To assess risks related to the proliferation of chemical and biological weapons for Llama 4, we applied expert-designed and other targeted evaluations designed to assess whether the use of Llama 4 could meaningfully increase the capabilities of malicious actors to plan or carry out attacks using these types of weapons. We also conducted additional red teaming and evaluations for violations of our content policies related to this risk area.

**2. Child Safety**

We leverage pre-training methods like data filtering as a first step in mitigating Child Safety risk in our model. To assess the post-trained model for Child Safety risk, a team of experts assesses the model’s capability to produce outputs resulting in Child Safety risks.
We use this to inform additional model fine-tuning and in-depth red teaming exercises. We’ve also expanded our Child Safety evaluation benchmarks to cover Llama 4 capabilities like multi-image and multilingual inputs.

**3. Cyber attack enablement**

Our cyber evaluations investigated whether Llama 4 is sufficiently capable of enabling catastrophic threat scenario outcomes. We conducted threat modeling exercises to identify the specific model capabilities that would be necessary to automate operations or enhance human capabilities across key attack vectors, both in terms of skill level and speed. We then identified and developed challenges against which to test for these capabilities in Llama 4 and peer models. Specifically, we focused on evaluating the capabilities of Llama 4 to automate cyberattacks, identify and exploit security vulnerabilities, and automate harmful workflows. Overall, we find that Llama 4 models do not introduce risks that plausibly enable catastrophic cyber outcomes.

### Community

Generative AI safety requires expertise and tooling, and we believe in the strength of the open community to accelerate its progress. We are active members of open consortiums, including the AI Alliance, Partnership on AI, and MLCommons, actively contributing to safety standardization and transparency. We encourage the community to adopt taxonomies like the MLCommons Proof of Concept evaluation to facilitate collaboration and transparency on safety and content evaluations. Our trust tools are open sourced for the community to use and widely distributed across ecosystem partners, including cloud service providers. We encourage community contributions to our [Github repository](https://github.com/meta-llama/PurpleLlama).

We also set up the [Llama Impact Grants](https://llama.meta.com/llama-impact-grants/) program to identify and support the most compelling applications of Meta’s Llama model for societal benefit across three categories: education, climate and open innovation. The 20 finalists from the hundreds of applications can be found [here](https://llama.meta.com/llama-impact-grants/#finalists).

Finally, we put in place a set of resources including an [output reporting mechanism](https://developers.facebook.com/llama_output_feedback) and [bug bounty program](https://www.facebook.com/whitehat) to continuously improve the Llama technology with the help of the community.

## Considerations and Limitations

Our AI is anchored on the values of freedom of expression - helping people to explore, debate, and innovate using our technology. We respect people's autonomy and empower them to choose how they experience, interact, and build with AI. Our AI promotes an open exchange of ideas.

It is meant to serve everyone, and to work for a wide range of use cases. It is thus designed to be accessible to people across many different backgrounds, experiences and perspectives. Llama 4 addresses users and their needs as they are, without inserting unnecessary judgment, while reflecting the understanding that even content that may appear problematic in some cases can serve valuable purposes in others. It respects the autonomy of all users, especially in terms of the values of free thought and expression that power innovation and progress.

Llama 4 is a new technology, and like any new technology, there are risks associated with its use. Testing conducted to date has not covered, nor could it cover, all scenarios.
For these reasons, as with all LLMs, Llama 4’s potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate or otherwise objectionable responses to user prompts. Therefore, before deploying any applications of Llama 4 models, developers should perform safety testing and tuning tailored to their specific applications of the model. We also encourage the open source community to use Llama for the purpose of research and for building state-of-the-art tools that address emerging risks. Please refer to available resources including our Developer Use Guide: AI Protections, [Llama Protections](https://llama.meta.com/trust-and-safety/) solutions, and other [resources](https://llama.meta.com/docs/get-started/) to learn more.
recursechat/Qwen3-8B-GGUF
recursechat
2025-05-29T21:05:08Z
0
0
null
[ "gguf", "base_model:Qwen/Qwen3-8B", "base_model:quantized:Qwen/Qwen3-8B", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
2025-05-29T21:05:07Z
---
license: apache-2.0
base_model: Qwen/Qwen3-8B
---

# Qwen3-8B

Original model: https://huggingface.co/Qwen/Qwen3-8B
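The card itself gives no usage instructions, so here is a minimal sketch of running a GGUF build from this repo with llama-cpp-python. The quantization filename is an assumption; check the repo's Files & versions tab for the actual file names.

```python
# Hypothetical usage sketch; the filename pattern below is an assumption.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="recursechat/Qwen3-8B-GGUF",
    filename="*Q4_K_M.gguf",  # assumed quantization; replace with a real file from the repo
    n_ctx=4096,
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Summarize what the GGUF format is in one sentence."}]
)
print(out["choices"][0]["message"]["content"])
```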
BootesVoid/cmb9r7m6t0hik1b1yzn6a4vfx_cmb9t17t80i6f1b1ya3g0m0z5
BootesVoid
2025-05-29T21:05:07Z
0
0
diffusers
[ "diffusers", "flux", "lora", "replicate", "text-to-image", "en", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
2025-05-29T21:05:06Z
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
#   - text: >-
#       prompt
#     output:
#       url: https://...
instance_prompt: MADHU
---

# Cmb9R7M6T0Hik1B1Yzn6A4Vfx_Cmb9T17T80I6F1B1Ya3G0M0Z5

<Gallery />

## About this LoRA

This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.

It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train

## Trigger words

You should use `MADHU` to trigger the image generation.

## Run this LoRA with an API using Replicate

```py
import replicate

input = {
    "prompt": "MADHU",
    "lora_weights": "https://huggingface.co/BootesVoid/cmb9r7m6t0hik1b1yzn6a4vfx_cmb9t17t80i6f1b1ya3g0m0z5/resolve/main/lora.safetensors"
}

output = replicate.run(
    "black-forest-labs/flux-dev-lora",
    input=input
)
for index, item in enumerate(output):
    with open(f"output_{index}.webp", "wb") as file:
        file.write(item.read())
```

## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)

```py
from diffusers import AutoPipelineForText2Image
import torch

pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('BootesVoid/cmb9r7m6t0hik1b1yzn6a4vfx_cmb9t17t80i6f1b1ya3g0m0z5', weight_name='lora.safetensors')
image = pipeline('MADHU').images[0]
```

For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)

## Training details

- Steps: 2000
- Learning rate: 0.0004
- LoRA rank: 16

## Contribute your own examples

You can use the [community tab](https://huggingface.co/BootesVoid/cmb9r7m6t0hik1b1yzn6a4vfx_cmb9t17t80i6f1b1ya3g0m0z5/discussions) to add images that show off what you’ve made with this LoRA.
asazheng/MNLP_M2_mcqa_model
asazheng
2025-05-29T21:04:50Z
0
0
transformers
[ "transformers", "safetensors", "qwen3", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-05-29T21:03:15Z
---
library_name: transformers
tags: []
---

# Model Card for Model ID

<!-- Provide a quick summary of what the model is/does. -->

## Model Details

### Model Description

<!-- Provide a longer summary of what this model is. -->

This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.

- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]

### Model Sources [optional]

<!-- Provide the basic links for the model. -->

- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]

## Uses

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

### Direct Use

<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->

[More Information Needed]

### Downstream Use [optional]

<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->

[More Information Needed]

### Out-of-Scope Use

<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->

[More Information Needed]

## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

[More Information Needed]

### Recommendations

<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.

## How to Get Started with the Model

Use the code below to get started with the model.

[More Information Needed]

## Training Details

### Training Data

<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->

[More Information Needed]

### Training Procedure

<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->

#### Preprocessing [optional]

[More Information Needed]

#### Training Hyperparameters

- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->

#### Speeds, Sizes, Times [optional]

<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->

[More Information Needed]

## Evaluation

<!-- This section describes the evaluation protocols and provides the results. -->

### Testing Data, Factors & Metrics

#### Testing Data

<!-- This should link to a Dataset Card if possible. -->

[More Information Needed]

#### Factors

<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->

[More Information Needed]

#### Metrics

<!-- These are the evaluation metrics being used, ideally with a description of why. -->

[More Information Needed]

### Results

[More Information Needed]

#### Summary

## Model Examination [optional]

<!-- Relevant interpretability work for the model goes here -->

[More Information Needed]

## Environmental Impact

<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]

## Technical Specifications [optional]

### Model Architecture and Objective

[More Information Needed]

### Compute Infrastructure

[More Information Needed]

#### Hardware

[More Information Needed]

#### Software

[More Information Needed]

## Citation [optional]

<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->

**BibTeX:**

[More Information Needed]

**APA:**

[More Information Needed]

## Glossary [optional]

<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->

[More Information Needed]

## More Information [optional]

[More Information Needed]

## Model Card Authors [optional]

[More Information Needed]

## Model Card Contact

[More Information Needed]
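Since the "How to Get Started" section above is still a placeholder, here is a minimal, generic loading sketch based only on the record's metadata (a Qwen3-architecture text-generation model). The prompt format and generation settings are assumptions; the checkpoint's intended task ("mcqa" in the name suggests multiple-choice QA) is not documented.

```python
# Generic loading sketch; intended usage is undocumented, so the prompt and
# generation settings below are placeholders, not a recommended recipe.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "asazheng/MNLP_M2_mcqa_model"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = (
    "Question: Which planet is known as the Red Planet?\n"
    "Choices: A) Venus B) Mars C) Jupiter D) Saturn\n"
    "Answer:"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=16)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```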
OliDo99/testing
OliDo99
2025-05-29T21:01:32Z
10
0
transformers
[ "transformers", "pytorch", "gguf", "llama", "text-generation", "text-generation-inference", "unsloth", "trl", "sft", "conversational", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2025-05-28T13:30:06Z
---
base_model: unsloth/meta-llama-3.1-8b-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
license: apache-2.0
language:
- en
---

# Uploaded model

- **Developed by:** OliDo99
- **License:** apache-2.0
- **Finetuned from model:** unsloth/meta-llama-3.1-8b-unsloth-bnb-4bit

This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.

[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
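A minimal inference sketch for this upload, using the same Unsloth tooling the card mentions for training. It assumes the repo contains transformers-format (merged) weights; the repo's GGUF files would instead be used with llama.cpp, and the prompt below is only a placeholder.

```python
# Illustrative sketch, assuming merged transformers-format weights in this repo.
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="OliDo99/testing",
    max_seq_length=2048,
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # enable Unsloth's faster inference path

inputs = tokenizer("Hello, how are you?", return_tensors="pt").to("cuda")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```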
morturr/Llama-2-7b-hf-amazon-2025-05-29
morturr
2025-05-29T21:01:10Z
2
0
peft
[ "peft", "safetensors", "trl", "sft", "generated_from_trainer", "base_model:meta-llama/Llama-2-7b-hf", "base_model:adapter:meta-llama/Llama-2-7b-hf", "license:llama2", "region:us" ]
null
2025-05-28T22:44:58Z
---
library_name: peft
license: llama2
base_model: meta-llama/Llama-2-7b-hf
tags:
- trl
- sft
- generated_from_trainer
model-index:
- name: Llama-2-7b-hf-amazon-2025-05-29
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# Llama-2-7b-hf-amazon-2025-05-29

This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on the None dataset.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 6e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 2

### Training results

### Framework versions

- PEFT 0.13.2
- Transformers 4.46.1
- Pytorch 2.5.1+cu124
- Datasets 3.0.2
- Tokenizers 0.20.1
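Since the card lists only training hyperparameters, here is a minimal sketch of loading this PEFT adapter onto its base model for inference. It assumes access to the gated Llama 2 base weights; the prompt is a placeholder, as the fine-tuning data and intended task are not documented.

```python
# Minimal inference sketch: attach this LoRA adapter to its Llama 2 base model.
# Assumes you have been granted access to meta-llama/Llama-2-7b-hf.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "meta-llama/Llama-2-7b-hf"
adapter_id = "morturr/Llama-2-7b-hf-amazon-2025-05-29"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.float16, device_map="auto")
model = PeftModel.from_pretrained(base_model, adapter_id)

inputs = tokenizer("Write a short product review:", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```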
glif-loradex-trainer/Swap_agrawal14_kuki_comics
glif-loradex-trainer
2025-05-29T21:00:47Z
0
0
diffusers
[ "diffusers", "text-to-image", "template:sd-lora", "base_model:black-forest-labs/FLUX.1-dev", "base_model:finetune:black-forest-labs/FLUX.1-dev", "license:other", "region:us", "flux", "lora", "base_model:adapter:black-forest-labs/FLUX.1-dev" ]
text-to-image
2025-05-29T21:00:33Z
---
tags:
- diffusers
- text-to-image
- template:sd-lora
- base_model:black-forest-labs/FLUX.1-dev
- base_model:finetune:black-forest-labs/FLUX.1-dev
- license:other
- region:us
- flux
- lora
widget:
- output:
    url: samples/1748552366047__000002000_0.jpg
  text: Comic about wedding $wap_kuki_comics
- output:
    url: samples/1748552391345__000002000_1.jpg
  text: Comic about first rain & broken heart $wap_kuki_comics
- output:
    url: samples/1748552416643__000002000_2.jpg
  text: Comic about learning music to professional in music $wap_kuki_comics
base_model: black-forest-labs/FLUX.1-dev
trigger: "$wap_kuki_comics"
instance_prompt: "$wap_kuki_comics"
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
---

# kuki_comics

Model trained with [AI Toolkit by Ostris](https://github.com/ostris/ai-toolkit) under the [Glif Loradex program](https://huggingface.co/glif-loradex-trainer) by [Glif](https://glif.app) user `Swap_agrawal14`.

<Gallery />

## Trigger words

You should use `$wap_kuki_comics` to trigger the image generation.

## Download model

Weights for this model are available in Safetensors format.

[Download](/glif-loradex-trainer/Swap_agrawal14_kuki_comics/tree/main) them in the Files & versions tab.

## License

This model is licensed under the [flux-1-dev-non-commercial-license](https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md).
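The card gives the trigger word but no loading example, so here is a minimal diffusers sketch in the style of other FLUX LoRA cards. The safetensors filename is an assumption; use the actual file listed in the repo, and note the prompt reuses one of the widget examples above.

```python
# Hypothetical diffusers usage sketch; weight_name below is an assumption --
# replace it with the actual .safetensors filename from the Files & versions tab.
from diffusers import AutoPipelineForText2Image
import torch

pipeline = AutoPipelineForText2Image.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")
pipeline.load_lora_weights(
    "glif-loradex-trainer/Swap_agrawal14_kuki_comics",
    weight_name="kuki_comics.safetensors",  # assumed filename
)

image = pipeline("Comic about wedding $wap_kuki_comics").images[0]
image.save("kuki_comics_sample.png")
```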