Dataset columns and observed value ranges:

| Column | Dtype | Observed range |
|---|---|---|
| modelId | string | lengths 5–139 |
| author | string | lengths 2–42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 – 2025-06-22 06:27:16 |
| downloads | int64 | 0 – 223M |
| likes | int64 | 0 – 11.7k |
| library_name | string (categorical) | 492 distinct values |
| tags | sequence | lengths 1 – 4.05k |
| pipeline_tag | string (categorical) | 54 distinct values |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 – 2025-06-22 06:26:41 |
| card | string | lengths 11 – 1.01M |
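Assuming this metadata dump is published as a Hugging Face dataset, a minimal sketch for loading it and filtering on the columns above (the repository id below is hypothetical; substitute the actual one):

```python
from datasets import load_dataset

# Hypothetical repository id -- replace with the actual dataset id of this dump.
ds = load_dataset("your-org/hf-model-metadata-dump", split="train")

# Filter rows using the schema columns documented above.
popular = ds.filter(lambda row: row["downloads"] > 1_000 and row["pipeline_tag"] == "text-generation")
print(popular[0]["modelId"], popular[0]["likes"])
```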
s-emanuilov/Tucan-2.6B-v1.0
s-emanuilov
2025-06-16T05:35:49Z
187
1
null
[ "safetensors", "gemma2", "function_calling", "MCP", "tool_use", "bg", "arxiv:2503.23278", "arxiv:2408.00118", "arxiv:2412.10893", "base_model:INSAIT-Institute/BgGPT-Gemma-2-2.6B-IT-v1.0", "base_model:finetune:INSAIT-Institute/BgGPT-Gemma-2-2.6B-IT-v1.0", "license:gemma", "region:us" ]
null
2025-06-07T21:26:39Z
--- license: gemma language: - bg base_model: - INSAIT-Institute/BgGPT-Gemma-2-2.6B-IT-v1.0 tags: - function_calling - MCP - tool_use --- # Tucan-2.6B-v1.0 ## Bulgarian Language Models for Function Calling 🇧🇬 > 📄 **Full methodology, dataset details, and evaluation results coming in the upcoming paper** ## Overview 🚀 TUCAN (Tool-Using Capable Assistant Navigator) is a series of open-source Bulgarian language models fine-tuned specifically for function calling and tool use. These models can interact with external tools, APIs, and databases, making them appropriate for building AI agents and [Model Context Protocol (MCP)](https://arxiv.org/abs/2503.23278) applications. Built on top of [BgGPT models](https://huggingface.co/collections/INSAIT-Institute/bggpt-gemma-2-673b972fe9902749ac90f6fe) from [INSAIT Institute](https://insait.ai/), which were themselves built on [Gemma 2](https://arxiv.org/pdf/2408.00118), Tucan models have been enhanced with function-calling capabilities. ## Motivation 🎯 Although BgGPT models demonstrate [strong Bulgarian language comprehension](https://arxiv.org/pdf/2412.10893), they face challenges in maintaining the precise formatting necessary for consistent function calling. Despite implementing detailed system prompts, their performance in this specific task remains suboptimal. This project addresses that gap by fine-tuning BgGPT, providing the Bulgarian AI community with proper tool-use capabilities in their native language. ## Models and variants 📦 Available in three sizes with full models, LoRA adapters, and quantized GGUF variants: <div align="center"> | Model Size | Full Model | LoRA Adapter | GGUF (Quantized) | |------------|------------|--------------|------------------| | **2.6B** | [Tucan-2.6B-v1.0](https://huggingface.co/s-emanuilov/Tucan-2.6B-v1.0) 📍| [LoRA](https://huggingface.co/s-emanuilov/Tucan-2.6B-v1.0-LoRA) | [GGUF](https://huggingface.co/s-emanuilov/Tucan-2.6B-v1.0-GGUF) | | **9B** | [Tucan-9B-v1.0](https://huggingface.co/s-emanuilov/Tucan-9B-v1.0) | [LoRA](https://huggingface.co/s-emanuilov/Tucan-9B-v1.0-LoRA) | [GGUF](https://huggingface.co/s-emanuilov/Tucan-9B-v1.0-GGUF) | | **27B** | [Tucan-27B-v1.0](https://huggingface.co/s-emanuilov/Tucan-27B-v1.0) | [LoRA](https://huggingface.co/s-emanuilov/Tucan-27B-v1.0-LoRA) | [GGUF](https://huggingface.co/s-emanuilov/Tucan-27B-v1.0-GGUF) | *GGUF variants include: q4_k_m, q5_k_m, q6_k, q8_0, q4_0 quantizations* 📍 *Current model/repo* </div> Models and quantizations are also available for easy use in Ollama: https://ollama.com/s_emanuilov/tucan ## Benchmarks 📊 All evaluations were performed using the [Tucan evaluation framework](https://github.com/s-emanuilov/tucan), with results averaged across multiple runs. Tucan models demonstrate superior function-calling capabilities compared to their BgGPT counterparts, with particularly strong improvements in smaller model sizes. To ensure no catastrophic forgetting occurred, we evaluated knowledge retention using [EleutherAI's lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness) on Bulgarian benchmarks, confirming that each Tucan model maintains performance on par with its BgGPT equivalent. 
<div align="center"> | Model | Function Calling | HellaswagBG | WinograndeBG | ARC-Easy-BG | ARC-Challenge-BG | |-------|-----------------|-------------|--------------|-------------|------------------| | **Tucan-2.6B-v1.0** 🔥 | **0.7875** | 0.5924 | 0.6456 | 0.5657 | 0.3754 | | **Tucan-9B-v1.0** 🔥 | **0.8667** | 0.7046 | 0.7151 | 0.7024 | 0.5188 | | **Tucan-27B-v1.0** 🔥 | **0.875** | 0.6179 | 0.6275 | 0.6486 | 0.442 | | BgGPT-Gemma-2-2.6B-IT-v1.0 | 0.5874 | 0.6306 | 0.5821 | 0.5657 | 0.372 | | BgGPT-Gemma-2-9B-IT-v1.0 | 0.7833 | 0.7057 | 0.719 | 0.7231 | 0.5188 | | BgGPT-Gemma-2-27B-IT-v1.0 | 0.8667 | 0.62 | 0.6212 | 0.6587 | 0.459 | *Note: 27B models were evaluated in 8-bit precision for comparison purposes.* </div> ## Usage 🛠️ ### Quick start ⚡ ```bash pip install -U "transformers[torch]" accelerate bitsandbytes ``` ### Prompt format ⚙️ **Critical:** Use this format for function calling for the best results. <details> <summary><strong>📋 Required system prompt template</strong></summary> ``` <bos><start_of_turn>user Ти си полезен AI асистент, който предоставя полезни и точни отговори. Имаш достъп и можеш да извикаш една или повече функции, за да помогнеш с потребителското запитване. Използвай ги, само ако е необходимо и подходящо. Когато използваш функция, форматирай извикването ѝ в блок ```tool_call``` на отделен ред, a след това ще получиш резултат от изпълнението в блок ```toll_response```. ## Шаблон за извикване: ```tool_call {"name": <function-name>, "arguments": <args-json-object>}``` ## Налични функции: [your function definitions here] ## Потребителска заявка: [your query in Bulgarian]<end_of_turn> <start_of_turn>model ``` </details> ### Note 📝 **The model only generates the `tool_call` blocks with function names and parameters - it doesn't actually execute the functions.** Your client application must parse these generated calls, execute the actual functions (API calls, database queries, etc.), and provide the results back to the model in `tool_response` blocks for the conversation to continue the interperation of the results. A full demo is comming soon. ### Python example 🐍 <details> <summary><strong>💻 Complete Working Example</strong></summary> ```python import torch import json from transformers import AutoModelForCausalLM, AutoTokenizer, GenerationConfig # Load model model_name = "s-emanuilov/Tucan-2.6B-v1.0" tokenizer = AutoTokenizer.from_pretrained(model_name) model = AutoModelForCausalLM.from_pretrained( model_name, torch_dtype=torch.bfloat16, device_map="auto", attn_implementation="eager" # Required for Gemma models ) # Create prompt with system template def create_prompt(functions, user_query): system_prompt = """Ти си полезен AI асистент, който предоставя полезни и точни отговори. Имаш достъп и можеш да извикаш една или повече функции, за да помогнеш с потребителското запитване. Използвай ги, само ако е необходимо и подходящо. Когато използваш функция, форматирай извикването ѝ в блок ```tool_call``` на отделен ред, a след това ще получиш резултат от изпълнението в блок ```toll_response```. 
## Шаблон за извикване: ```tool_call {{"name": <function-name>, "arguments": <args-json-object>}}``` """ functions_text = json.dumps(functions, ensure_ascii=False, indent=2) full_prompt = f"{system_prompt}\n## Налични функции:\n{functions_text}\n\n## Потребителска заявка:\n{user_query}" chat = [{"role": "user", "content": full_prompt}] return tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True) # Example usage functions = [{ "name": "create_calendar_event", "description": "Creates a new event in Google Calendar.", "parameters": { "type": "object", "properties": { "title": {"type": "string"}, "date": {"type": "string"}, "start_time": {"type": "string"}, "end_time": {"type": "string"} }, "required": ["title", "date", "start_time", "end_time"] } }] query = "Създай събитие 'Годишен преглед' за 8-ми юни 2025 от 14:00 до 14:30." # Generate response prompt = create_prompt(functions, query) inputs = tokenizer(prompt, return_tensors="pt").to(model.device) outputs = model.generate( **inputs, max_new_tokens=2048, temperature=0.1, top_k=25, top_p=1.0, repetition_penalty=1.1, do_sample=True, eos_token_id=[tokenizer.eos_token_id, tokenizer.convert_tokens_to_ids("<end_of_turn>")], pad_token_id=tokenizer.eos_token_id ) result = tokenizer.decode(outputs[0][inputs.input_ids.shape[1]:], skip_special_tokens=True) print(result) ``` </details> ## Performance & Dataset 📊 > 📄 **Full methodology, dataset details, and comprehensive evaluation results coming in the upcoming paper** **Dataset:** 10,000+ bilingual (Bulgarian/English) function-calling examples across 1,000+ topics, including tool calls with single/multiple arguments, optional parameters, follow-up queries, multi-tool selection, ambiguous queries requiring clarification, and conversational interactions without tool use. Data sourced from manual curation and synthetic generation (Gemini Pro 2.5/GPT-4.1/Sonnet 4). **Results:** Significant improvements in tool-use capabilities over base BgGPT models: 34.1% for 2.6B, 10.6% for 9B, and 1.0% for 27B models in [internal benchmarks](https://github.com/s-emanuilov/tucan). Beyond raw function-calling scores, all Tucan models demonstrate more natural conversational flow while maintaining tool-use capabilities, retaining their base knowledge. ## Acknowledgments 🙏 Built on top of [BgGPT series](https://huggingface.co/collections/INSAIT-Institute/bggpt-gemma-2-673b972fe9902749ac90f6fe). ## Questions & Contact 💬 For questions, collaboration, or feedback: **[Connect on LinkedIn](https://www.linkedin.com/in/simeon-emanuilov/)**
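As noted in the card above, the client application is responsible for extracting the generated `tool_call`, executing it, and returning a `tool_response` block. A minimal sketch of that loop, assuming the model follows the prompt template exactly (the executor registry here is illustrative, reusing the `create_calendar_event` function from the example):

```python
import json
import re

def parse_tool_call(model_output: str):
    """Extract the first ```tool_call``` block the model generated."""
    match = re.search(r"```tool_call\s*(\{.*?\})\s*```", model_output, re.DOTALL)
    return json.loads(match.group(1)) if match else None

def execute(call, registry):
    """Run the named function and wrap its result in a ```tool_response``` block."""
    result = registry[call["name"]](**call["arguments"])
    return f"```tool_response\n{json.dumps(result, ensure_ascii=False)}\n```"

# `result` is the generated text from the card's Python example above.
call = parse_tool_call(result)
if call is not None:
    # Illustrative registry: map function names to real client-side implementations.
    registry = {"create_calendar_event": lambda **kw: {"status": "created", **kw}}
    print(execute(call, registry))
```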
Nirmit1/lora_bart_base_model
Nirmit1
2025-06-16T05:34:19Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-06-16T05:16:29Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
MinaMila/phi3_unlearned_2nd_5e-7_1.0_0.25_0.25_0.05_epoch1
MinaMila
2025-06-16T05:32:52Z
0
0
transformers
[ "transformers", "safetensors", "phi3", "text-generation", "conversational", "custom_code", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-06-16T05:31:01Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
MinaMila/gemma_2b_unlearned_2nd_1e-5_1.0_0.5_0.5_0.05_epoch2
MinaMila
2025-06-16T05:32:24Z
0
0
transformers
[ "transformers", "safetensors", "gemma2", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-06-16T05:30:36Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
KaiChen1998/RACRO-7B-CRO
KaiChen1998
2025-06-16T05:28:40Z
0
0
transformers
[ "transformers", "safetensors", "qwen2_5_vl", "image-text-to-text", "multi-modal-reasoning", "conversational", "dataset:TIGER-Lab/ViRL39K", "arxiv:2506.04559", "base_model:Qwen/Qwen2.5-VL-7B-Instruct", "base_model:finetune:Qwen/Qwen2.5-VL-7B-Instruct", "license:apache-2.0", "text-generation-inference", "endpoints_compatible", "region:us" ]
image-text-to-text
2025-06-15T13:22:47Z
--- library_name: transformers tags: - multi-modal-reasoning license: apache-2.0 datasets: - TIGER-Lab/ViRL39K base_model: - Qwen/Qwen2.5-VL-7B-Instruct new_version: KaiChen1998/RACRO-7B-CRO-GRPO --- # RACRO-7B-CRO <div align="center"> 📄 [Paper](https://arxiv.org/abs/2506.04559) | 💻 [Github](https://github.com/gyhdog99/RACRO2/) | 🤗 [RACRO-Models](https://huggingface.co/collections/KaiChen1998/racro-6848ec8c65b3a0bf33d0fbdb) | 🤗 [RACRO-Demo](https://huggingface.co/spaces/Emova-ollm/RACRO-demo) </div> ## Model Summary **RACRO** (Reasoning-Aligned Perceptual Decoupling via Caption Reward Optimization) is a novel framework that enables scalable and modular multimodal reasoning by aligning visual perception with a powerful text-only reasoner. RACRO addresses the key challenge of generating image captions that are both faithful and sufficiently informative for downstream reasoning. It leverages a reasoning-guided reinforcement learning strategy to train the visual extractor, using reward signals derived from the performance of a fixed, high-capacity text-only LLM. This decoupled design avoids costly retraining of vision-language alignments and allows seamless plug-and-play upgrades to more advanced reasoners. Experiments on multimodal math and science benchmarks show that RACRO achieves **state-of-the-art** performance among open models. <div align="center"> <img src="https://github.com/gyhdog99/RACRO2/blob/main/assets/images/intro.png?raw=true" width=100%></img> </div> ## Results <div align="center"> <img src="https://github.com/gyhdog99/RACRO2/blob/main/assets/images/results.png?raw=true" width=100%></img> </div> ## Usage ```python from transformers import AutoProcessor, AutoTokenizer from vllm import LLM, SamplingParams from qwen_vl_utils import process_vision_info ######################## # === Configuration === ######################## IMAGE_PATH = "./assets/images/demo_example.jpg" QUESTION = "When the canister is momentarily stopped by the spring, by what distance $d$ is the spring compressed?" MLLM_MODEL_PATH = "KaiChen1998/RACRO-7B-CRO" LLM_MODEL_PATH = "deepseek-ai/DeepSeek-R1-Distill-Qwen-7B" # feel free to use more advanced reasoners! ######################## # === Prompts === ######################## SYSTEM_PROMPT_CAP = "You are given an image and a relevant question. Based on the query, please describe the image in detail. Do not try to answer the question." SYSTEM_PROMPT_LLM = "You are a helpful assistant." CAPTION_PROMPT = "Question: {}\nPlease describe the image. DO NOT try to answer the question!" LLM_PROMPT = """In the following text, you will receive a detailed caption of an image and a relevant question. In addition, you will be provided with a tentative model response. Your goal is to answer the question using this information. ### The detailed caption of the provided image: {} ### Note that the caption might contain incorrect solutions, do not be misled by them. ### A problem to be solved: {} ### A tentative model response: {} ### Note that the above tentative response might be inaccurate (due to calculation errors, incorrect logic/reasoning and so on); in such a case, please ignore it and give your own solutions. 
However, if you do not have enough evidence to show it is wrong, please output the tentative response.""" ######################## # === Initialize Models === ######################## processor = AutoProcessor.from_pretrained(MLLM_MODEL_PATH) tokenizer = AutoTokenizer.from_pretrained(LLM_MODEL_PATH) mllm = LLM(model=MLLM_MODEL_PATH, tensor_parallel_size=1, gpu_memory_utilization=0.8, device='cuda:0', dtype="bfloat16", limit_mm_per_prompt={"image": 1}) llm = LLM(model=LLM_MODEL_PATH, tensor_parallel_size=1, gpu_memory_utilization=0.8, device='cuda:1', dtype="bfloat16") mllm_sampling = SamplingParams(temperature=0, max_tokens=8192) llm_sampling = SamplingParams(temperature=0.6, top_p=0.95, max_tokens=8192) ######################## # === Build Prompts === ######################## def build_messages(image_path, question): cap_msgs = [ {"role": "system", "content": SYSTEM_PROMPT_CAP}, {"role": "user", "content": [{"type": "image", "image": image_path}, {"type": "text", "text": CAPTION_PROMPT.format(question)}]} ] qa_msgs = [ {"role": "user", "content": [{"type": "image", "image": image_path}, {"type": "text", "text": question + " Please think step by step. The final answer MUST BE put in \\boxed{}."}]} ] return cap_msgs, qa_msgs # === Run Captioning and QA === def run_mllm(image_tensor, cap_prompt, qa_prompt): cap_output = mllm.generate([{"multi_modal_data": {"image": image_tensor}, "prompt": cap_prompt[0]}], sampling_params=mllm_sampling) qa_output = mllm.generate([{"multi_modal_data": {"image": image_tensor}, "prompt": qa_prompt[0]}], sampling_params=mllm_sampling) return cap_output[0].outputs[0].text, qa_output[0].outputs[0].text # === Final Reasoning Step === def run_llm_reasoning(caption, question, answer): messages = [ {"role": "system", "content": SYSTEM_PROMPT_LLM}, {"role": "user", "content": LLM_PROMPT.format(caption, question, answer)} ] prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) output = llm.generate([{"prompt": prompt}], sampling_params=llm_sampling) return output[0].outputs[0].text ######################## # === Pipeline === ######################## cap_msgs, qa_msgs = build_messages(IMAGE_PATH, QUESTION) cap_prompt = processor.apply_chat_template([cap_msgs], tokenize=False, add_generation_prompt=True) qa_prompt = processor.apply_chat_template([qa_msgs], tokenize=False, add_generation_prompt=True) image_tensor, _ = process_vision_info(cap_msgs) caption_text, tentative_answer = run_mllm(image_tensor, cap_prompt, qa_prompt) final_answer = run_llm_reasoning(caption_text, QUESTION, tentative_answer) print("Final Answer:\n", final_answer) ``` ## Citation ```bibtex @article{gou2025perceptual, author = {Gou, Yunhao and Chen, Kai and Liu, Zhili and Hong, Lanqing and Jin, Xin and Li, Zhenguo and Kwok, James T. and Zhang, Yu}, title = {Perceptual Decoupling for Scalable Multi-modal Reasoning via Reward-Optimized Captioning}, journal = {arXiv preprint arXiv:2506.04559}, year = {2025}, } ```
KaiChen1998/RACRO-7B-CRO-GRPO
KaiChen1998
2025-06-16T05:28:37Z
0
0
transformers
[ "transformers", "safetensors", "qwen2_5_vl", "image-text-to-text", "multi-modal-reasoning", "conversational", "dataset:TIGER-Lab/ViRL39K", "arxiv:2506.04559", "base_model:Qwen/Qwen2.5-VL-7B-Instruct", "base_model:finetune:Qwen/Qwen2.5-VL-7B-Instruct", "license:apache-2.0", "text-generation-inference", "endpoints_compatible", "region:us" ]
image-text-to-text
2025-06-15T14:12:53Z
--- library_name: transformers tags: - multi-modal-reasoning license: apache-2.0 datasets: - TIGER-Lab/ViRL39K base_model: - Qwen/Qwen2.5-VL-7B-Instruct --- # RACRO-7B-CRO-GRPO <div align="center"> 📄 [Paper](https://arxiv.org/abs/2506.04559) | 💻 [Github](https://github.com/gyhdog99/RACRO2/) | 🤗 [RACRO-Models](https://huggingface.co/collections/KaiChen1998/racro-6848ec8c65b3a0bf33d0fbdb) | 🤗 [RACRO-Demo](https://huggingface.co/spaces/Emova-ollm/RACRO-demo) </div> ## Model Summary **RACRO** (Reasoning-Aligned Perceptual Decoupling via Caption Reward Optimization) is a novel framework that enables scalable and modular multimodal reasoning by aligning visual perception with a powerful text-only reasoner. RACRO addresses the key challenge of generating image captions that are both faithful and sufficiently informative for downstream reasoning. It leverages a reasoning-guided reinforcement learning strategy to train the visual extractor, using reward signals derived from the performance of a fixed, high-capacity text-only LLM. This decoupled design avoids costly retraining of vision-language alignments and allows seamless plug-and-play upgrades to more advanced reasoners. Experiments on multimodal math and science benchmarks show that RACRO achieves **state-of-the-art** performance among open models. <div align="center"> <img src="https://github.com/gyhdog99/RACRO2/blob/main/assets/images/intro.png?raw=true" width=100%></img> </div> ## Results <div align="center"> <img src="https://github.com/gyhdog99/RACRO2/blob/main/assets/images/results.png?raw=true" width=100%></img> </div> ## Usage ```python from transformers import AutoProcessor, AutoTokenizer from vllm import LLM, SamplingParams from qwen_vl_utils import process_vision_info ######################## # === Configuration === ######################## IMAGE_PATH = "./assets/images/demo_example.jpg" QUESTION = "When the canister is momentarily stopped by the spring, by what distance $d$ is the spring compressed?" MLLM_MODEL_PATH = "KaiChen1998/RACRO-7B-CRO-GRPO" LLM_MODEL_PATH = "deepseek-ai/DeepSeek-R1-Distill-Qwen-7B" # feel free to use more advanced reasoners! ######################## # === Prompts === ######################## SYSTEM_PROMPT_CAP = "You are given an image and a relevant question. Based on the query, please describe the image in detail. Do not try to answer the question." SYSTEM_PROMPT_LLM = "You are a helpful assistant." CAPTION_PROMPT = "Question: {}\nPlease describe the image. DO NOT try to answer the question!" LLM_PROMPT = """In the following text, you will receive a detailed caption of an image and a relevant question. In addition, you will be provided with a tentative model response. Your goal is to answer the question using this information. ### The detailed caption of the provided image: {} ### Note that the caption might contain incorrect solutions, do not be misled by them. ### A problem to be solved: {} ### A tentative model response: {} ### Note that the above tentative response might be inaccurate (due to calculation errors, incorrect logic/reasoning and so on); in such a case, please ignore it and give your own solutions. 
However, if you do not have enough evidence to show it is wrong, please output the tentative response.""" ######################## # === Initialize Models === ######################## processor = AutoProcessor.from_pretrained(MLLM_MODEL_PATH) tokenizer = AutoTokenizer.from_pretrained(LLM_MODEL_PATH) mllm = LLM(model=MLLM_MODEL_PATH, tensor_parallel_size=1, gpu_memory_utilization=0.8, device='cuda:0', dtype="bfloat16", limit_mm_per_prompt={"image": 1}) llm = LLM(model=LLM_MODEL_PATH, tensor_parallel_size=1, gpu_memory_utilization=0.8, device='cuda:1', dtype="bfloat16") mllm_sampling = SamplingParams(temperature=0, max_tokens=8192) llm_sampling = SamplingParams(temperature=0.6, top_p=0.95, max_tokens=8192) ######################## # === Build Prompts === ######################## def build_messages(image_path, question): cap_msgs = [ {"role": "system", "content": SYSTEM_PROMPT_CAP}, {"role": "user", "content": [{"type": "image", "image": image_path}, {"type": "text", "text": CAPTION_PROMPT.format(question)}]} ] qa_msgs = [ {"role": "user", "content": [{"type": "image", "image": image_path}, {"type": "text", "text": question + " Please think step by step. The final answer MUST BE put in \\boxed{}."}]} ] return cap_msgs, qa_msgs # === Run Captioning and QA === def run_mllm(image_tensor, cap_prompt, qa_prompt): cap_output = mllm.generate([{"multi_modal_data": {"image": image_tensor}, "prompt": cap_prompt[0]}], sampling_params=mllm_sampling) qa_output = mllm.generate([{"multi_modal_data": {"image": image_tensor}, "prompt": qa_prompt[0]}], sampling_params=mllm_sampling) return cap_output[0].outputs[0].text, qa_output[0].outputs[0].text # === Final Reasoning Step === def run_llm_reasoning(caption, question, answer): messages = [ {"role": "system", "content": SYSTEM_PROMPT_LLM}, {"role": "user", "content": LLM_PROMPT.format(caption, question, answer)} ] prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) output = llm.generate([{"prompt": prompt}], sampling_params=llm_sampling) return output[0].outputs[0].text ######################## # === Pipeline === ######################## cap_msgs, qa_msgs = build_messages(IMAGE_PATH, QUESTION) cap_prompt = processor.apply_chat_template([cap_msgs], tokenize=False, add_generation_prompt=True) qa_prompt = processor.apply_chat_template([qa_msgs], tokenize=False, add_generation_prompt=True) image_tensor, _ = process_vision_info(cap_msgs) caption_text, tentative_answer = run_mllm(image_tensor, cap_prompt, qa_prompt) final_answer = run_llm_reasoning(caption_text, QUESTION, tentative_answer) print("Final Answer:\n", final_answer) ``` ## Citation ```bibtex @article{gou2025perceptual, author = {Gou, Yunhao and Chen, Kai and Liu, Zhili and Hong, Lanqing and Jin, Xin and Li, Zhenguo and Kwok, James T. and Zhang, Yu}, title = {Perceptual Decoupling for Scalable Multi-modal Reasoning via Reward-Optimized Captioning}, journal = {arXiv preprint arXiv:2506.04559}, year = {2025}, } ```
DhanasriArul/Model2vec
DhanasriArul
2025-06-16T05:26:31Z
0
0
model2vec
[ "model2vec", "safetensors", "embeddings", "static-embeddings", "sentence-transformers", "license:mit", "region:us" ]
null
2025-06-16T05:15:41Z
--- base_model: unknown library_name: model2vec license: mit model_name: my_classifier_pipeline tags: - embeddings - static-embeddings - sentence-transformers --- # my_classifier_pipeline Model Card This [Model2Vec](https://github.com/MinishLab/model2vec) model is a fine-tuned version of the [unknown](https://huggingface.co/unknown) Model2Vec model. It also includes a classifier head on top. ## Installation Install model2vec using pip: ``` pip install model2vec[inference] ``` ## Usage Load this model using the `from_pretrained` method: ```python from model2vec.inference import StaticModelPipeline # Load a pretrained Model2Vec model model = StaticModelPipeline.from_pretrained("my_classifier_pipeline") # Predict labels predicted = model.predict(["Example sentence"]) ``` ## Additional Resources - [Model2Vec Repo](https://github.com/MinishLab/model2vec) - [Model2Vec Base Models](https://huggingface.co/collections/minishlab/model2vec-base-models-66fd9dd9b7c3b3c0f25ca90e) - [Model2Vec Results](https://github.com/MinishLab/model2vec/tree/main/results) - [Model2Vec Tutorials](https://github.com/MinishLab/model2vec/tree/main/tutorials) - [Website](https://minishlab.github.io/) ## Library Authors Model2Vec was developed by the [Minish Lab](https://github.com/MinishLab) team consisting of [Stephan Tulkens](https://github.com/stephantul) and [Thomas van Dongen](https://github.com/Pringled). ## Citation Please cite the [Model2Vec repository](https://github.com/MinishLab/model2vec) if you use this model in your work. ``` @article{minishlab2024model2vec, author = {Tulkens, Stephan and {van Dongen}, Thomas}, title = {Model2Vec: Fast State-of-the-Art Static Embeddings}, year = {2024}, url = {https://github.com/MinishLab/model2vec} } ```
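If you also need class probabilities rather than hard labels, the inference pipeline exposes `predict_proba` (a sketch assuming the current model2vec inference API; check the Model2Vec repo if the method name has changed):

```python
from model2vec.inference import StaticModelPipeline

# Load the pipeline as in the usage section above.
model = StaticModelPipeline.from_pretrained("my_classifier_pipeline")

# Assumed API: per-class probabilities for each input sentence.
probs = model.predict_proba(["Example sentence"])
print(probs)
```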
MinaMila/gemma_2b_unlearned_2nd_1e-5_1.0_0.5_0.5_0.05_epoch1
MinaMila
2025-06-16T05:24:32Z
0
0
transformers
[ "transformers", "safetensors", "gemma2", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-06-16T05:22:39Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
MinaMila/phi3_unlearned_2nd_5e-7_1.0_0.25_0.25_0.15_epoch1
MinaMila
2025-06-16T05:19:22Z
0
0
transformers
[ "transformers", "safetensors", "phi3", "text-generation", "conversational", "custom_code", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-06-16T05:17:33Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Ashwani-0101/YOLOV11M-Construction
Ashwani-0101
2025-06-16T05:17:42Z
0
0
null
[ "region:us" ]
null
2025-06-16T03:39:25Z
# YOLO 11M - Confusion Matrix and Other Results Normalized confusion matrix (values closer to 1 indicate better performance for that class): ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6777bd2e1ce483804f066776/KMdrlOMNP-T3wO9OmPOU3.png) Compiled results: ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6777bd2e1ce483804f066776/mGlBjRr0L0B6R_gOwSK2d.png)
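To reproduce the normalized confusion matrix with Ultralytics, run validation with plotting enabled (a sketch; the checkpoint filename and dataset YAML are placeholders):

```python
from ultralytics import YOLO

# Placeholder paths -- point these at the checkpoint from this repo and your dataset config.
model = YOLO("yolo11m-construction.pt")
metrics = model.val(data="construction.yaml", plots=True)  # also writes confusion_matrix_normalized.png
print(metrics.box.map50)  # mAP@0.5, as in the compiled results
```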
Ashwani-0101/Yolo11L-construction
Ashwani-0101
2025-06-16T05:16:51Z
0
0
null
[ "region:us" ]
null
2025-06-16T03:32:53Z
# YOLO 11L - Confusion Matrix and Other Results Normalized confusion matrix (values closer to 1 indicate better performance for that class): ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6777bd2e1ce483804f066776/h9YcivDaeGYyC2eQWSeCR.png) Other compiled results: ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6777bd2e1ce483804f066776/q-uCM6uSc4c0HzyFfmBL6.png)
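A matching inference sketch for the 11L weights (checkpoint and image filenames are placeholders):

```python
from ultralytics import YOLO

model = YOLO("yolo11l-construction.pt")  # placeholder checkpoint name
results = model.predict("site_photo.jpg", conf=0.25)
for box in results[0].boxes:
    # Print detected class name and confidence for each box.
    print(results[0].names[int(box.cls)], float(box.conf))
```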
Bearrr310/train_grpo_7B_unsloth_0616_100step
Bearrr310
2025-06-16T05:12:41Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "generated_from_trainer", "unsloth", "trl", "grpo", "dataset:unsloth-7B-reward-0616-100step", "arxiv:2402.03300", "endpoints_compatible", "region:us" ]
null
2025-06-16T05:12:02Z
--- base_model: unsloth/qwen2.5-7b-instruct-unsloth-bnb-4bit datasets: unsloth-7B-reward-0616-100step library_name: transformers model_name: train_grpo_7B_unsloth_0616_100step tags: - generated_from_trainer - unsloth - trl - grpo licence: license --- # Model Card for train_grpo_7B_unsloth_0616_100step This model is a fine-tuned version of [unsloth/qwen2.5-7b-instruct-unsloth-bnb-4bit](https://huggingface.co/unsloth/qwen2.5-7b-instruct-unsloth-bnb-4bit) on the [unsloth-7B-reward-0616-100step](https://huggingface.co/datasets/unsloth-7B-reward-0616-100step) dataset. It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="Bearrr310/train_grpo_7B_unsloth_0616_100step", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.18.2 - Transformers: 4.52.4 - Pytorch: 2.7.0 - Datasets: 3.6.0 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
hitty28/branch-switch-classifier
hitty28
2025-06-16T05:11:00Z
0
0
null
[ "safetensors", "distilbert", "text-classification", "branch-switching", "hospital-chatbot", "en", "dataset:branch_switch_classification", "license:apache-2.0", "region:us" ]
text-classification
2025-06-16T05:10:40Z
--- language: en license: apache-2.0 tags: - text-classification - branch-switching - hospital-chatbot - distilbert datasets: - branch_switch_classification widget: - text: "I want to switch to Mumbai branch" - text: "What are your hospital timings?" - text: "Can I change to the branch near my home?" --- # Branch Switch Classification Model This model classifies whether a user wants to switch hospital branches or is asking for general information. ## Model Description - **Model**: DistilBERT for Sequence Classification - **Task**: Binary Classification - **Domain**: Hospital/Healthcare Chatbot - **Classes**: - `True`: User wants to switch branches - `False`: General query/information seeking ## Usage ```python from transformers import DistilBertTokenizer, DistilBertForSequenceClassification import torch # Load model and tokenizer tokenizer = DistilBertTokenizer.from_pretrained("hitty28/branch-switch-classifier") model = DistilBertForSequenceClassification.from_pretrained("hitty28/branch-switch-classifier") # Predict def predict(text): inputs = tokenizer(text, truncation=True, padding='max_length', max_length=128, return_tensors='pt') with torch.no_grad(): outputs = model(**inputs) predictions = torch.nn.functional.softmax(outputs.logits, dim=-1) predicted_class = torch.argmax(predictions, dim=-1).item() return bool(predicted_class) # Example result = predict("I want to switch to Delhi branch") print(result) # True ``` ## Training Data The model was trained on a comprehensive dataset including: - Direct branch switch requests - Location-specific switches - Facility-based switches - Information queries about branches - Medical service inquiries - Edge cases and ambiguous statements ## Performance The model achieves high accuracy in distinguishing between branch switching intents and general information queries.
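When the chatbot needs a calibrated decision rather than a bare boolean, the same model can expose its softmax confidence; a minimal sketch extending the card's `predict` helper (thresholding on this confidence is a suggestion, not a documented operating point):

```python
import torch
from transformers import DistilBertTokenizer, DistilBertForSequenceClassification

tokenizer = DistilBertTokenizer.from_pretrained("hitty28/branch-switch-classifier")
model = DistilBertForSequenceClassification.from_pretrained("hitty28/branch-switch-classifier")
model.eval()

def predict_with_confidence(text: str):
    inputs = tokenizer(text, truncation=True, max_length=128, return_tensors="pt")
    with torch.no_grad():
        probs = torch.softmax(model(**inputs).logits, dim=-1).squeeze(0)
    predicted = int(torch.argmax(probs))
    # Per the card: label 1 = wants to switch branches, label 0 = general query.
    return bool(predicted), float(probs[predicted])

print(predict_with_confidence("Can I move my appointment to the Pune branch?"))
```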
AhmadAli223/llama3.2_Final_FYP
AhmadAli223
2025-06-16T05:09:39Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2025-06-16T05:09:39Z
--- license: apache-2.0 ---
Sawu-Low3/t5-base-lora-stage3
Sawu-Low3
2025-06-16T05:08:41Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-06-16T05:08:40Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Sawu-Low3/t5-base-lora-stage2
Sawu-Low3
2025-06-16T05:08:39Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-06-16T05:08:37Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
stablediffusionapi/ht_photorealism
stablediffusionapi
2025-06-16T05:06:34Z
3
0
diffusers
[ "diffusers", "safetensors", "stablediffusionapi.com", "stable-diffusion-api", "text-to-image", "ultra-realistic", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-12-08T14:54:55Z
--- license: creativeml-openrail-m tags: - stablediffusionapi.com - stable-diffusion-api - text-to-image - ultra-realistic pinned: true --- # ht_photorealism API Inference ![generated from stablediffusionapi.com](https://image.civitai.com/xG1nkqKTMzGDvpLrqFT7WA/380bc155-f23d-4587-9127-3eb7bdda0f28/width=768/00946-2910078626.jpeg) ## Get API Key Get an API key from [Stable Diffusion API](http://stablediffusionapi.com/); no payment is needed. Replace the key in the code below and change **model_id** to "ht_photorealism". Coding in PHP/Node/Java etc.? Have a look at the docs for more code examples: [View docs](https://stablediffusionapi.com/docs) Try the model for free: [Generate Images](https://stablediffusionapi.com/models/ht_photorealism) Model link: [View model](https://stablediffusionapi.com/models/ht_photorealism) Credits: [View credits](https://civitai.com/?query=ht_photorealism) View all models: [View Models](https://stablediffusionapi.com/models) ```python import requests import json url = "https://stablediffusionapi.com/api/v4/dreambooth" payload = json.dumps({ "key": "your_api_key", "model_id": "ht_photorealism", "prompt": "ultra realistic close up portrait ((beautiful pale cyberpunk female with heavy black eyeliner)), blue eyes, shaved side haircut, hyper detail, cinematic lighting, magic neon, dark red city, Canon EOS R3, nikon, f/1.4, ISO 200, 1/160s, 8K, RAW, unedited, symmetrical balance, in-frame, 8K", "negative_prompt": "painting, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, deformed, ugly, blurry, bad anatomy, bad proportions, extra limbs, cloned face, skinny, glitchy, double torso, extra arms, extra hands, mangled fingers, missing lips, ugly face, distorted face, extra legs, anime", "width": "512", "height": "512", "samples": "1", "num_inference_steps": "30", "safety_checker": "no", "enhance_prompt": "yes", "seed": None, "guidance_scale": 7.5, "multi_lingual": "no", "panorama": "no", "self_attention": "no", "upscale": "no", "embeddings": "embeddings_model_id", "lora": "lora_model_id", "webhook": None, "track_id": None }) headers = { 'Content-Type': 'application/json' } response = requests.request("POST", url, headers=headers, data=payload) print(response.text) ``` > Use this coupon code to get 25% off **DMGG0RBN**
MinaMila/phi3_unlearned_2nd_5e-7_1.0_0.25_0.25_0.25_epoch1
MinaMila
2025-06-16T05:05:58Z
0
0
transformers
[ "transformers", "safetensors", "phi3", "text-generation", "conversational", "custom_code", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-06-16T05:04:07Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
japat123/gemma_jun16_2
japat123
2025-06-16T05:03:08Z
0
0
transformers
[ "transformers", "safetensors", "gemma", "text-generation", "text-generation-inference", "unsloth", "en", "base_model:unsloth/gemma-7b-bnb-4bit", "base_model:quantized:unsloth/gemma-7b-bnb-4bit", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "4-bit", "bitsandbytes", "region:us" ]
text-generation
2025-06-16T05:02:22Z
--- base_model: unsloth/gemma-7b-bnb-4bit tags: - text-generation-inference - transformers - unsloth - gemma license: apache-2.0 language: - en --- # Uploaded fine-tuned model - **Developed by:** japat123 - **License:** apache-2.0 - **Finetuned from model:** unsloth/gemma-7b-bnb-4bit This Gemma model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
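Since the checkpoint is stored in 4-bit bitsandbytes format (inherited from the base model), it can be loaded directly with 🤗 Transformers; a minimal sketch, with the prompt and generation settings chosen for illustration only:

```python
# A minimal loading sketch under stated assumptions: bitsandbytes and
# accelerate installed, and a CUDA device available for the 4-bit weights.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("japat123/gemma_jun16_2")
model = AutoModelForCausalLM.from_pretrained("japat123/gemma_jun16_2", device_map="auto")

inputs = tokenizer("Explain LoRA fine-tuning in one sentence.", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```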
rmdhirr/suja-lorab-ep5-suja-6000
rmdhirr
2025-06-16T05:00:43Z
0
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:rmdhirr/merged-suja-latest", "base_model:adapter:rmdhirr/merged-suja-latest", "region:us" ]
null
2025-06-16T04:59:37Z
--- base_model: rmdhirr/merged-suja-latest library_name: peft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.15.2
MinaMila/gemma_2b_unlearned_2nd_1e-5_1.0_0.5_0.5_0.25_epoch2
MinaMila
2025-06-16T04:59:51Z
0
0
transformers
[ "transformers", "safetensors", "gemma2", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-06-16T04:58:09Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
himedia/fincredit-gemma3-4b-lr5e05-bs2-r16-steps10-20250616_045357
himedia
2025-06-16T04:59:05Z
0
0
null
[ "safetensors", "financial", "credit-rating", "korean", "gemma", "unsloth", "fine-tuned", "text-generation", "conversational", "ko", "base_model:unsloth/gemma-3-4b-it", "base_model:finetune:unsloth/gemma-3-4b-it", "license:apache-2.0", "region:us" ]
text-generation
2025-06-16T04:58:42Z
--- language: ko license: apache-2.0 base_model: unsloth/gemma-3-4b-it tags: - financial - credit-rating - korean - gemma - unsloth - fine-tuned model_name: FinCreditGemma-3-4B pipeline_tag: text-generation --- # FinCreditGemma-3-4B ## Model Overview FinCreditGemma-3-4B is a Korean language model designed specifically for financial credit rating. **Base model**: unsloth/gemma-3-4b-it **Dataset**: himedia/financial_dummy_data_v2 **Training method**: LoRA (Low-Rank Adaptation) **Trained at**: 20250616_045357 ## Hyperparameters - **Learning Rate**: 5e-05 - **Max Steps**: 10 - **Batch Size**: 2 - **Gradient Accumulation**: 4 - **LoRA r**: 16 - **LoRA alpha**: 16 - **Max Sequence Length**: 2048 - **Warmup Steps**: 5 ## Usage ```python from transformers import AutoTokenizer, AutoModelForCausalLM # Load the model and tokenizer tokenizer = AutoTokenizer.from_pretrained("himedia/fincredit-gemma3-4b-lr5e05-bs2-r16-steps10-20250616_045357") model = AutoModelForCausalLM.from_pretrained("himedia/fincredit-gemma3-4b-lr5e05-bs2-r16-steps10-20250616_045357") # Simple inference example ("Please assess this customer's credit rating:") prompt = "고객의 신용등급을 평가해주세요:" inputs = tokenizer(prompt, return_tensors="pt") outputs = model.generate(**inputs, max_length=200) result = tokenizer.decode(outputs[0], skip_special_tokens=True) print(result) ``` ## Repository Name Composition ``` fincredit-gemma3-4b-lr5e05-bs2-r16-steps10-20250616_045357 ``` - `fincredit-gemma3-4b`: base model name - `lr5e05`: learning rate - `bs2`: batch size - `r16`: LoRA rank - `steps10`: training steps - `20250616_045357`: training timestamp ## Performance This model is fine-tuned on Korean financial text and specialized for credit-rating question answering. ## License Apache 2.0
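Because the base model is the instruction-tuned `unsloth/gemma-3-4b-it`, wrapping the request in the Gemma chat template may work better than the raw prompt above; a minimal sketch (the sample customer details are invented for illustration):

```python
# A chat-template variant of the usage example; the customer profile string
# is a made-up illustration, not data from the training set.
from transformers import AutoTokenizer, AutoModelForCausalLM

repo = "himedia/fincredit-gemma3-4b-lr5e05-bs2-r16-steps10-20250616_045357"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, device_map="auto")

messages = [{"role": "user", "content": "고객의 신용등급을 평가해주세요: 연소득 5천만원, 연체 이력 없음"}]
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
outputs = model.generate(input_ids, max_new_tokens=200)
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```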
Montecarlo2024/Qwen3-4B-Python-Combined-v2_GGUF
Montecarlo2024
2025-06-16T04:56:40Z
0
0
transformers
[ "transformers", "gguf", "qwen3", "text-generation-inference", "unsloth", "en", "dataset:Montecarlo2024/Python-Combined-v2", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-06-15T22:29:09Z
--- base_model: unsloth/qwen3-4b-unsloth-bnb-4bit tags: - text-generation-inference - transformers - unsloth - qwen3 - gguf license: apache-2.0 language: - en datasets: - Montecarlo2024/Python-Combined-v2 --- # Uploaded model - **Developed by:** Montecarlo2024 - **License:** apache-2.0 - **Finetuned from model:** unsloth/qwen3-4b-unsloth-bnb-4bit This Qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library. ## System prompt You should include the following system prompt when running this model: ``` You are a helpful assistant. Respond directly to user questions without repeating the question or generating additional user prompts. ``` [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
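A minimal sketch of applying that system prompt with llama-cpp-python; the GGUF filename pattern is an assumption, so pick whichever quantization the repository actually ships:

```python
# Sketch under stated assumptions: llama-cpp-python installed and the repo
# shipping a Q4_K_M quantization (the filename glob below is a guess).
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="Montecarlo2024/Qwen3-4B-Python-Combined-v2_GGUF",
    filename="*Q4_K_M.gguf",
    n_ctx=4096,
)

out = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a helpful assistant. Respond directly to user questions without repeating the question or generating additional user prompts."},
        {"role": "user", "content": "Write a Python function that reverses a string."},
    ],
    max_tokens=256,
)
print(out["choices"][0]["message"]["content"])
```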
FLOPS-Squared/KeystoneFuse-FW-G-16-Flax
FLOPS-Squared
2025-06-16T04:55:15Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-06-16T04:53:38Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
MinaMila/phi3_unlearned_2nd_5e-7_1.0_0.25_0.25_0.5_epoch1
MinaMila
2025-06-16T04:51:40Z
0
0
transformers
[ "transformers", "safetensors", "phi3", "text-generation", "conversational", "custom_code", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-06-16T04:49:51Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
nieo23/emotion-analysis-with-distilbert
nieo23
2025-06-16T04:50:40Z
3
0
transformers
[ "transformers", "tf", "distilbert", "text-classification", "generated_from_keras_callback", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2025-06-13T09:01:07Z
--- library_name: transformers license: apache-2.0 base_model: distilbert-base-uncased tags: - generated_from_keras_callback model-index: - name: nieo23/emotion-analysis-with-distilbert results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # nieo23/emotion-analysis-with-distilbert This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.1386 - Validation Loss: 0.1496 - Train Accuracy: 0.9275 - Epoch: 1 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': np.float32(5e-05), 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False} - training_precision: float32 ### Training results | Train Loss | Validation Loss | Train Accuracy | Epoch | |:----------:|:---------------:|:--------------:|:-----:| | 0.3844 | 0.1858 | 0.9235 | 0 | | 0.1386 | 0.1496 | 0.9275 | 1 | ### Framework versions - Transformers 4.52.4 - TensorFlow 2.18.0 - Datasets 3.6.0 - Tokenizers 0.21.1
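The card documents training only; for inference, the checkpoint can be used through the 🤗 pipeline API. A minimal sketch: `framework="tf"` matches the Keras training above, and the printed labels are whatever the checkpoint's config defines, since the emotion label set is not documented in this card.

```python
# Minimal inference sketch; label names come from the model's own config,
# which this card does not document.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="nieo23/emotion-analysis-with-distilbert",
    framework="tf",
)
print(classifier("I can't believe how well this turned out!"))
```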
UNIVA-Bllossom/DeepSeek-llama3.3-Bllossom-70B
UNIVA-Bllossom
2025-06-16T04:48:05Z
146
59
transformers
[ "transformers", "safetensors", "text-generation", "ko", "en", "base_model:deepseek-ai/DeepSeek-R1-Distill-Llama-70B", "base_model:finetune:deepseek-ai/DeepSeek-R1-Distill-Llama-70B", "license:mit", "endpoints_compatible", "region:us" ]
text-generation
2025-02-12T11:27:31Z
--- license: mit language: - ko - en base_model: - deepseek-ai/DeepSeek-R1-Distill-Llama-70B library_name: transformers pipeline_tag: text-generation --- # DeepSeek-llama3.3-Bllossom The DeepSeek-Bllossom series consists of models further trained to fix the language-mixing and multilingual performance-degradation issues of the original DeepSeek-R1-Distill series. DeepSeek-llama3.3-Bllossom-70B is built on the DeepSeek-R1-Distill-Llama-70B model and was developed to improve reasoning performance in Korean. It is the first model produced jointly by UNIVA and the Bllossom team. <div align="center"> | **Model** | **Base Model** | **Download** | | :------------: | :------------: | :------------: | | DeepSeek-qwen-Bllossom-1.5B | [DeepSeek-R1-Distill-Qwen-1.5B](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B) | To be released | | DeepSeek-qwen-Bllossom-7B | [DeepSeek-R1-Distill-Qwen-7B](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-7B) | To be released | | DeepSeek-llama3.1-Bllossom-8B | [DeepSeek-R1-Distill-Llama-8B](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Llama-8B) | [🤗 HuggingFace](https://huggingface.co/UNIVA-Bllossom/DeepSeek-llama3.1-Bllossom-8B) | | DeepSeek-qwen-Bllossom-14B | [DeepSeek-R1-Distill-Qwen-14B](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-14B) | To be released | | DeepSeek-qwen-Bllossom-32B | [DeepSeek-R1-Distill-Qwen-32B](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-32B) | [🤗 HuggingFace](https://huggingface.co/UNIVA-Bllossom/DeepSeek-qwen-Bllossom-32B) | | DeepSeek-llama3.3-Bllossom-70B | [DeepSeek-R1-Distill-Llama-70B](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Llama-70B) | [🤗 HuggingFace](https://huggingface.co/UNIVA-Bllossom/DeepSeek-llama3.3-Bllossom-70B) | </div> ## 1. Introduction DeepSeek-llama3.3-Bllossom-70B is built on the DeepSeek-R1-Distill-Llama-70B model and was developed to overcome the limitation that the base model was trained mainly on English and Chinese data. In particular, the original DeepSeek-R1-Distill-Llama-70B showed a sharp drop in quality when reasoning in Korean. DeepSeek-Bllossom addresses this through additional training so that the internal reasoning is performed in English while the final response shown to the user follows the language of the input, which substantially improves reasoning performance in Korean environments. Training used Korean and English reasoning data, covering not only the STEM-oriented data that dominated the original DeepSeek-R1 training but also a variety of other domains. Throughout dataset design and model training, the primary goal of DeepSeek-llama3.3-Bllossom was to provide more accurate and reliable reasoning results for Korean-language use. A smaller 8B model in the DeepSeek-Bllossom series can be found here: [DeepSeek-R1-distill-Llama-Bllossom-8B](https://huggingface.co/UNIVA-Bllossom/DeepSeek-llama3.1-Bllossom-8B) --- ## 2. Post-training DeepSeek-llama3.3-Bllossom was post-trained on a variety of in-house reasoning data. This process distilled the strong reasoning ability and Korean-language competence of larger models into the DeepSeek-R1-Distill-Llama-70B model, complementing the base model and optimizing it to generate more accurate and reliable responses to complex reasoning problems. --- ## 3. Inference ```python from transformers import AutoModelForCausalLM, AutoTokenizer import torch model = AutoModelForCausalLM.from_pretrained( "UNIVA-Bllossom/DeepSeek-llama3.3-Bllossom-70B", torch_dtype="auto", device_map="auto" ) tokenizer = AutoTokenizer.from_pretrained("UNIVA-Bllossom/DeepSeek-llama3.3-Bllossom-70B") system=''' You are a highly capable assistant. For every user question, follow these instructions exactly: 1. First, think through the problem step-by-step in English. Enclose all of your internal reasoning between <think> and </think> tags. This chain-of-thought should detail your reasoning process. 2. After the closing </think> tag, provide your final answer. 3. Do not include any additional text or commentary outside of this format. 4. 
4. Your output should strictly follow this structure:
<think>
[Your detailed step-by-step reasoning in English]
</think>
[Your final answer]
'''

# Korean math word problem: Cheolsu, Yeonghee, and Minsu scored points in three
# games; Yeonghee's score is twice Minsu's, Minsu's is four times Cheolsu's, and
# Cheolsu scored 10 points. Compute the three players' average score.
text = "철수, 영희, 민수가 3회의 게임에서 점수를 받았습니다. 영희의 점수는 민수의 점수의 두 배이며, 민수의 점수는 철수의 4배입니다. 철수가 10점을 받았다면 이 3명의 평균 점수를 계산하세요."

chat = [
    {"role": "system", "content": system},
    {"role": "user", "content": text}
]

prompt = tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
model_inputs = tokenizer(
    prompt,
    return_tensors="pt",
    add_special_tokens=True
)
if "token_type_ids" in model_inputs:
    del model_inputs["token_type_ids"]
model_inputs = {k: v.to(model.device) for k, v in model_inputs.items()}

generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=8192,
)

# Decode only the newly generated tokens and print the response.
response = tokenizer.decode(
    generated_ids[0][model_inputs["input_ids"].shape[-1]:],
    skip_special_tokens=True,
)
print(response)
```

## 4. Model Response Comparison

The prompts and model outputs below are reproduced verbatim. They illustrate the language mixing of the baseline model, which often drifts into Chinese when answering Korean prompts, while DeepSeek-llama3.3-Bllossom-70B responds in Korean.

| Input (Prompt) | DeepSeek-R1-distill-Llama-70B | DeepSeek-llama3.3-Bllossom-70B |
| ------------- | ----------------------------- | -------------------------- |
| **Alice, Bob, Charlie가 3회의 게임에서 점수를 받았습니다.<br>Alice의 점수는 Bob의 점수의 두 배이며, Bob의 점수는 Charlie의 4배입니다.<br>Charlie가 10점을 받았다면 이 3명의 평균 점수를 계산하세요.** |<br>Charlie의 점수는 10점입니다.<br>Bob의 점수는 Charlie의 4배이므로, Bob의 점수는 <br>$$<br>4 \\times 10 = 40<br>$$<br>점입니다.<br>Alice의 점수는 Bob의 점수의 두 배이므로, Alice의 점수는 <br>$$<br>2 \\times 40 = 80<br>$$<br>점입니다.<br>세人的 총 점수는 <br>$$<br>10 + 40 + 80 = 130<br>$$<br>점입니다.<br>평균 점수는 <br>$$<br>\\frac{130}{3} \\approx 43.33$점입니다.<br>$$<br>\\boxed{43.33}<br>$$<br>Charlie의 점수는 10점입니다. | <br>Charlie의 점수는 10점으로 주어졌습니다. Bob의 점수는 Charlie의 점수의 4배이며, 이는 다음과 같이 계산됩니다:<br>$$<br>4 \times 10 = 40<br>$$<br>Alice의 점수는 Bob의 점수의 두 배이며, 이는 다음과 같습니다:<br>$$<br>2 \times 40 = 80<br>$$<br>세 사람의 총 점수는 다음과 같습니다:<br>$$<br>10 + 40 + 80 = 130<br>$$<br>이 세 사람의 평균 점수는 총 점수를 사람 수로 나누어 계산됩니다:<br>$$<br>\frac{130}{3}<br>$$<br>따라서, 이 세 사람의 평균 점수는 <br>$$<br>\boxed{\dfrac{130}{3}}<br>$$<br>입니다. |
| **소수가 무한하다는 사실을 3가지 방법으로 증명하시오.** | <br>以下是用三种方法证明素数无限的详细步骤:<br>### 方法一:欧几里得的反证法<br>1. **假设**:假设素数是有限的,且所有素数为 $ p_1, p_2, p_3, \\ldots, p_n $。<br>2. **构造数**:构造一个新数 $ N = (p_1 \\times p_2 \\times p_3 \\times \\ldots \\times p_n) + 1 $。<br>3. **分析N的性质**:<br> - 如果N是素数,则N不在原假设的素数列表中,与假设矛盾。<br> - 如果N是合数,则存在某个素数 $ p $ 整除N。但由于N除以任何 $ p_i $ 都余1,故 $ p $ 不在原列表中,同样矛盾。<br>4. **结论**:因此,素数必须是无限的。<br>### 方法二:费马数的性质<br>1. **定义**:费马数形如 $ F_k = 2^{2^k} + 1 $,其中 $ k $ 为非负整数。<br>2. **假设**:假设素数是有限的,且所有素数为 $ p_1, p_2, \\ldots, p_n $。<br>3. **构造费马数**:对于每个 $ k $,构造 $ F_k $。如果 $ F_k $ 是素数,则不在原列表中;如果是合数,其素因数不在原列表中。<br>4. **结论**:因此,素数必须是无限的。<br>### 方法三:欧拉函数与素数无限性<br>1. **欧拉函数**:欧拉函数 $ \\phi(n) $ 表示小于 $ n $ 且与 $ n $ 互质的数的个数。<br>2. **假设**:假设素数有限,记为 $ p_1, p_2, \\ldots, p_n $。<br>3. **构造数**:考虑形如 $ N = p_1^{k_1} p_2^{k_2} \\ldots p_n^{k_n} $ 的数,其欧拉函数为 $ \\phi(N) = N \\prod_{i=1}^{n} \\left(1 - \\frac{1}{p_i}\\right) $。<br>4. **分析**:如果素数有限,$ \\prod_{p \\text{素数}} \\left(1 - \\frac{1}{p}\\right) $ 收敛,但实际上该积发散,导致矛盾。<br>5. **结论**:因此,素数必须是无限的。<br>这三种方法各自从不同的角度证明了素数的无限性,展示了数论中素数无限性的深刻性。 | <br>소수가 무한하다는 것을 3가지 방법으로 증명하기 위해, 우리는 다음과 같은 접근 방식을 사용할 수 있습니다:<br>1. **유클리드의 모순 증명**:<br> - **가정**: 소수는 유한한 개수 $ p_1, p_2, \\dots, p_n $이라고 가정합니다.<br> - **구성**: $ P = p_1 \\times p_2 \\times \\dots \\times p_n + 1 $으로 정의합니다.<br> - **결론**: $ P $가 소수이면, 그것은 목록에 없는 새로운 소수입니다. 만약 $ P $가 합성수이면, 그것은 목록에 없는 소수를 포함해야 합니다. 두 경우 모두 가정에 모순됩니다. 따라서, 소수는 무한합니다.<br>2. **팩토리얼 접근법**:<br> - **가정**: 소수는 유한한 개수 $ p_1, p_2, \\dots, p_n $이라고 가정합니다.<br> - **구성**: $ n! + 1 $을 고려하십시오, 여기서 $ n $은 가장 큰 소수입니다.<br> - **결론**: $ n! $은 모든 $ p_i $로 나누어 떨어지므로, $ n! + 1 $은 $ p_i $ 중 어느 것도 포함하지 않습니다. 만약 $ n! + 1 $이 소수이면, 그것은 $ n $보다 큰 새로운 소수입니다. 만약 $ n! + 1 $이 합성수이면, 그것은 $ n $보다 큰 소수를 포함해야 합니다. 두 경우 모두 가정에 모순됩니다. 따라서, 소수는 무한합니다.<br>3. **오일러의 제타 함수 증명**:<br> - **개념**: 오일러는 제타 함수 $ \\zeta(s) = \\sum_{k=1}^\\infty \\frac{1}{k^s} $가 소수에 대한 곱으로 표현될 수 있음을 보였습니다: $ \\zeta(s) = \\prod_{p \\text{ 소수}} \\frac{1}{1 - \\frac{1}{p^s}} $.<br> - **결론**: 만약 소수가 유한하면, 곱은 유한해야 할 것입니다. 그러나 $ s $가 1보다 큰 값을 향해 접근할 때, $ \\zeta(s) $는 무한대로 발산합니다. 이것은 곱이 발산해야 함을 요구하며, 따라서 소수는 무한해야 합니다.<br>각 방법은 소수가 무한하다는 것을 확증하며, 서로 다른 수학적 원칙에 기반을 둡니다. 유클리드의 증명은 모순을 사용합니다, 팩토리얼 접근법은 간단한 수론을 사용하며, 오일러의 증명은 해석적 수론의 도구를 사용합니다. 세 방법 모두 소수 무한성에 대한 이해를 강화합니다. |

## 5. Benchmark

- Inference was run with dtype float16.
- max_tokens: 32786
- temperature: 0.7
- Evaluation protocol: each benchmark was run three times and the average score was reported.
- _en benchmarks: the original benchmark questions were used as-is.
- _ko benchmarks: high-quality Korean translations of the original benchmark questions were used.

| Model | AIME24_ko | AIME24_en | MATH500_ko | MATH500_en |
|---------------------------------------|-----------|-----------|------------|------------|
| DeepSeek-R1-Distill-Llama-8B | 25.56 | 46.67 | 63.40 | 88.87 |
| DeepSeek-llama3.1-Bllossom-8B | 36.67 | 40.00 | 78.07 | 87.80 |
| DeepSeek-R1-Distill-Llama-70B | 58.89 | 70.00 | 88.53 | 93.73 |
| DeepSeek-llama3.3-Bllossom-70B | 62.22 | 65.56 | 88.40 | 93.33 |

## 6. License

This code repository and the model weights are licensed under the MIT License. The DeepSeek-Bllossom series supports commercial use and allows any modifications and derivative works, including, but not limited to, distillation for training other LLMs. Please note that:

- **DeepSeek-R1-Distill-Llama-70B** is derived from Llama3.3-70B-Instruct and is originally licensed under the Llama 3.3 license.
- **DeepSeek-llama3.3-Bllossom-70B** is derived from DeepSeek-R1-Distill-Llama-70B and is originally licensed under the Llama 3.3 license.

## 7. Contributors

- **UNIVA AI Team** ([UNIVA](https://univa.co.kr), main contributor)
- ChangSu Choi (M.S. student, [MLP Lab](https://sites.google.com/view/aailab), Seoul National University of Science and Technology)
- **KyungTae Lim** (Professor, [MLP Lab](https://sites.google.com/view/aailab), KAIST)

## 8. Contact

If you have any questions, please raise an issue or contact us at [[email protected]](mailto:[email protected]).
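As a usage note for the inference example in Section 3: because the system prompt instructs the model to wrap its English reasoning in `<think>...</think>` and then emit the final answer, the decoded output can be split into the two parts. The helper below is a minimal sketch under that assumption; the function name and regex are illustrative, not part of the released code.

```python
import re

def split_reasoning(response: str) -> tuple[str, str]:
    """Split a decoded response into (reasoning, final_answer).

    Assumes the model followed the <think>...</think> format requested in
    the system prompt; otherwise the whole response is treated as the answer.
    """
    match = re.search(r"<think>(.*?)</think>", response, flags=re.DOTALL)
    if match is None:
        return "", response.strip()
    reasoning = match.group(1).strip()
    answer = response[match.end():].strip()
    return reasoning, answer

# Example with the decoded `response` from the snippet above:
# reasoning, answer = split_reasoning(response)
```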
MinaMila/phi3_unlearned_2nd_5e-7_1.0_0.25_0.25_0.75_epoch2
MinaMila
2025-06-16T04:44:44Z
0
0
transformers
[ "transformers", "safetensors", "phi3", "text-generation", "conversational", "custom_code", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-06-16T04:42:56Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
KevinG/Meta-Llama-3-8B-Instruct-GRPO-alpaca_naive_500_no_KL
KevinG
2025-06-16T04:43:30Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "generated_from_trainer", "trl", "grpo", "conversational", "arxiv:2402.03300", "base_model:meta-llama/Meta-Llama-3-8B-Instruct", "base_model:finetune:meta-llama/Meta-Llama-3-8B-Instruct", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-06-16T04:09:22Z
--- base_model: meta-llama/Meta-Llama-3-8B-Instruct library_name: transformers model_name: Meta-Llama-3-8B-Instruct-GRPO-alpaca_naive_500_no_KL tags: - generated_from_trainer - trl - grpo licence: license --- # Model Card for Meta-Llama-3-8B-Instruct-GRPO-alpaca_naive_500_no_KL This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="KevinG/Meta-Llama-3-8B-Instruct-GRPO-alpaca_naive_500_no_KL", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/sleeepeer-penn-state/huggingface/runs/oushi1xm) This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.18.1 - Transformers: 4.52.4 - Pytorch: 2.6.0+cu124 - Datasets: 3.6.0 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
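For readers unfamiliar with the training setup: GRPO optimizes the policy against one or more scalar reward functions computed over groups of sampled completions. The sketch below shows the general shape of a TRL GRPO run; the reward function and dataset are illustrative placeholders, not the recipe used for this checkpoint.

```python
from datasets import load_dataset
from trl import GRPOConfig, GRPOTrainer

# Illustrative reward: favor concise completions (placeholder only,
# not the actual reward used to train this model).
def reward_conciseness(completions, **kwargs):
    return [-len(c) / 100.0 for c in completions]

dataset = load_dataset("trl-lib/tldr", split="train")  # placeholder dataset

trainer = GRPOTrainer(
    model="meta-llama/Meta-Llama-3-8B-Instruct",
    reward_funcs=reward_conciseness,
    args=GRPOConfig(output_dir="grpo-output"),
    train_dataset=dataset,
)
trainer.train()
```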
Z841973620/Qwen3-30B-A3B-IQ2_XS-GGUF
Z841973620
2025-06-16T04:38:45Z
0
0
null
[ "gguf", "base_model:huihui-ai/Qwen3-30B-A3B-abliterated", "base_model:quantized:huihui-ai/Qwen3-30B-A3B-abliterated", "endpoints_compatible", "region:us", "imatrix", "conversational" ]
null
2025-06-16T04:02:03Z
--- base_model: - huihui-ai/Qwen3-30B-A3B-abliterated ---
proyrb/ppo-LunarLander-v2
proyrb
2025-06-16T04:35:43Z
0
0
null
[ "tensorboard", "LunarLander-v2", "ppo", "deep-reinforcement-learning", "reinforcement-learning", "custom-implementation", "deep-rl-course", "model-index", "region:us" ]
reinforcement-learning
2025-06-16T03:02:53Z
---
tags:
- LunarLander-v2
- ppo
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
- deep-rl-course
model-index:
- name: PPO
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: LunarLander-v2
      type: LunarLander-v2
    metrics:
    - type: mean_reward
      value: -56.14 +/- 76.83
      name: mean_reward
      verified: false
---

# PPO Agent Playing LunarLander-v2

This is a trained model of a PPO agent playing LunarLander-v2.

## Evaluation Results

- Mean Reward: -56.14 ± 76.83
- Number of Evaluation Episodes: 10

## Hyperparameters

```python
{'env_id': 'LunarLander-v2',
 'total_timesteps': 100000,
 'learning_rate': 0.0003,
 'num_envs': 8,
 'num_steps': 2048,
 'update_epochs': 10,
 'num_minibatches': 32,
 'clip_coef': 0.5,
 'seed': 136,
 'repo_id': 'proyrb/ppo-LunarLander-v2',
 'gae': True,
 'gamma': 0.99,
 'gae_lambda': 0.95,
 'norm_adv': True,
 'clip_vloss': True,
 'ent_coef': 0.01,
 'vf_coef': 0.5,
 'max_grad_norm': 0.5,
 'target_kl': None,
 'batch_size': 16384,
 'minibatch_size': 512}
```
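The reported mean reward is the average episodic return over the 10 evaluation episodes. A generic evaluation loop of the following form reproduces the metric; `agent.get_action` is a hypothetical policy interface, and the actual custom implementation may differ.

```python
import gymnasium as gym
import numpy as np

def evaluate(agent, n_episodes=10, env_id="LunarLander-v2"):
    """Return (mean, std) of episodic returns, as reported on this card."""
    env = gym.make(env_id)
    returns = []
    for _ in range(n_episodes):
        obs, _ = env.reset()
        done, total = False, 0.0
        while not done:
            action = agent.get_action(obs)  # hypothetical policy interface
            obs, reward, terminated, truncated, _ = env.step(action)
            total += reward
            done = terminated or truncated
        returns.append(total)
    return float(np.mean(returns)), float(np.std(returns))
```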
FormlessAI/db2f65e3-3813-44fb-8af6-258eb0046876
FormlessAI
2025-06-16T04:33:13Z
0
0
transformers
[ "transformers", "safetensors", "generated_from_trainer", "trl", "grpo", "arxiv:2402.03300", "base_model:NousResearch/Nous-Hermes-2-Mistral-7B-DPO", "base_model:finetune:NousResearch/Nous-Hermes-2-Mistral-7B-DPO", "endpoints_compatible", "region:us" ]
null
2025-06-15T21:42:54Z
--- base_model: NousResearch/Nous-Hermes-2-Mistral-7B-DPO library_name: transformers model_name: db2f65e3-3813-44fb-8af6-258eb0046876 tags: - generated_from_trainer - trl - grpo licence: license --- # Model Card for db2f65e3-3813-44fb-8af6-258eb0046876 This model is a fine-tuned version of [NousResearch/Nous-Hermes-2-Mistral-7B-DPO](https://huggingface.co/NousResearch/Nous-Hermes-2-Mistral-7B-DPO). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="FormlessAI/db2f65e3-3813-44fb-8af6-258eb0046876", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/phoenix-formless/Gradients/runs/nibb0cx5) This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.18.1 - Transformers: 4.52.4 - Pytorch: 2.7.0+cu128 - Datasets: 3.6.0 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
gradientrouting-spar/standard_1_proxy_ntrain_25_ntrig_9_random_3x3_seed_1_seed_25_20250616_042127
gradientrouting-spar
2025-06-16T04:31:50Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-06-16T04:31:40Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Henil1/vit-axavision-2-ChestX-BioGPT-v8
Henil1
2025-06-16T04:31:02Z
0
0
transformers
[ "transformers", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-06-16T04:31:00Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
KevinG/Meta-Llama-3-8B-Instruct-GRPO-alpaca_naive_100_no_KL
KevinG
2025-06-16T04:29:14Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "generated_from_trainer", "trl", "grpo", "conversational", "arxiv:2402.03300", "base_model:meta-llama/Meta-Llama-3-8B-Instruct", "base_model:finetune:meta-llama/Meta-Llama-3-8B-Instruct", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-06-16T04:08:53Z
--- base_model: meta-llama/Meta-Llama-3-8B-Instruct library_name: transformers model_name: Meta-Llama-3-8B-Instruct-GRPO-alpaca_naive_100_no_KL tags: - generated_from_trainer - trl - grpo licence: license --- # Model Card for Meta-Llama-3-8B-Instruct-GRPO-alpaca_naive_100_no_KL This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="KevinG/Meta-Llama-3-8B-Instruct-GRPO-alpaca_naive_100_no_KL", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/sleeepeer-penn-state/huggingface/runs/09d6hqlt) This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.18.1 - Transformers: 4.52.4 - Pytorch: 2.6.0+cu124 - Datasets: 3.6.0 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
abdullahalatabee/geometric-shapes-generator
abdullahalatabee
2025-06-16T04:28:52Z
4
1
peft
[ "peft", "diffusers", "safetensors", "arxiv:1910.09700", "base_model:runwayml/stable-diffusion-v1-5", "base_model:adapter:runwayml/stable-diffusion-v1-5", "region:us" ]
null
2025-06-09T23:02:28Z
--- base_model: runwayml/stable-diffusion-v1-5 library_name: peft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.14.0
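Given the metadata (a PEFT adapter on top of runwayml/stable-diffusion-v1-5 with the diffusers tag), a plausible way to try the model is to load it as LoRA weights into the base pipeline. The sketch below rests on the assumption that this repository holds diffusers-compatible LoRA weights; it is not documented usage.

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Assumes this repo holds LoRA adapter weights compatible with the base UNet.
pipe.load_lora_weights("abdullahalatabee/geometric-shapes-generator")

image = pipe("a minimalist poster of overlapping geometric shapes").images[0]
image.save("geometric_shapes.png")
```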
louzongzhi/CosmoGPT
louzongzhi
2025-06-16T04:28:36Z
0
0
null
[ "license:cc-by-nc-nd-4.0", "region:us" ]
null
2025-06-16T04:28:36Z
--- license: cc-by-nc-nd-4.0 ---
mrk91s/qbox-coder-output
mrk91s
2025-06-16T04:28:09Z
0
0
transformers
[ "transformers", "safetensors", "generated_from_trainer", "trl", "sft", "base_model:deepseek-ai/deepseek-coder-1.3b-instruct", "base_model:finetune:deepseek-ai/deepseek-coder-1.3b-instruct", "endpoints_compatible", "region:us" ]
null
2025-06-16T04:28:06Z
--- base_model: deepseek-ai/deepseek-coder-1.3b-instruct library_name: transformers model_name: qbox-coder-output tags: - generated_from_trainer - trl - sft licence: license --- # Model Card for qbox-coder-output This model is a fine-tuned version of [deepseek-ai/deepseek-coder-1.3b-instruct](https://huggingface.co/deepseek-ai/deepseek-coder-1.3b-instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="mrk91s/qbox-coder-output", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with SFT. ### Framework versions - TRL: 0.18.2 - Transformers: 4.52.4 - Pytorch: 2.6.0+cu124 - Datasets: 3.6.0 - Tokenizers: 0.21.1 ## Citations Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
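For context on the training method: SFT here is standard supervised fine-tuning on text or prompt–completion data via TRL's `SFTTrainer`. A minimal sketch follows; the dataset is an illustrative placeholder, not the data used for this checkpoint.

```python
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

dataset = load_dataset("trl-lib/Capybara", split="train")  # placeholder dataset

trainer = SFTTrainer(
    model="deepseek-ai/deepseek-coder-1.3b-instruct",
    args=SFTConfig(output_dir="qbox-coder-output"),
    train_dataset=dataset,
)
trainer.train()
```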
Edcastro/tinyllama-edcastr_JQuery-v1
Edcastro
2025-06-16T04:25:57Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-06-16T04:24:48Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Henil1/vit-axavision-2-ChestX-v2
Henil1
2025-06-16T04:24:48Z
0
0
transformers
[ "transformers", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-06-16T04:24:47Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
MinaMila/phi3_unlearned_2nd_5e-7_1.0_0.25_0.5_0.05_epoch1
MinaMila
2025-06-16T04:24:26Z
0
0
transformers
[ "transformers", "safetensors", "phi3", "text-generation", "conversational", "custom_code", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-06-16T04:22:34Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Henil1/vit-axavision-2-ChestX-BioGPT
Henil1
2025-06-16T04:24:09Z
0
0
transformers
[ "transformers", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-06-16T04:24:07Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Henil1/vit-axavision-2-ChestX
Henil1
2025-06-16T04:24:06Z
8
0
transformers
[ "transformers", "safetensors", "vision-encoder-decoder", "image-text-to-text", "image-captioning", "vision-language", "vit-gpt2", "chest-xray", "healthcare", "axamine", "finetuned", "nlpconnect/vit-gpt2-image-captioning", "endpoints_compatible", "region:us" ]
image-text-to-text
2025-06-14T18:50:44Z
---
library_name: transformers
tags:
- image-captioning
- vision-language
- vit-gpt2
- chest-xray
- healthcare
- axamine
- finetuned
- nlpconnect/vit-gpt2-image-captioning
---

# Vit-Axavision-2-ChestX 🩺

This model is a fine-tuned version of [`nlpconnect/vit-gpt2-image-captioning`](https://huggingface.co/nlpconnect/vit-gpt2-image-captioning) on a chest X-ray dataset. It is developed as part of the Axamine AI research efforts to explore medical vision-language applications.

The model takes chest X-ray images as input and generates descriptive captions that may help in automated reporting, healthcare research, or AI-assisted diagnostics.

---

## Model Details

- **Base model:** nlpconnect/vit-gpt2-image-captioning
- **Architecture:** VisionEncoderDecoderModel (ViT encoder + GPT2 decoder)
- **Fine-tuned on dataset:** [Shrey-1329/cxiu_hf_dataset](https://huggingface.co/datasets/Shrey-1329/cxiu_hf_dataset)
- **Model size:** ~250M parameters
- **Developed by:** Henilsinh Raj (Axamine AI)

---

## Use Cases

### Intended Use
- Chest X-ray image captioning
- Healthcare research
- Medical AI experiments
- Educational purposes

### Limitations
- This model does **not** provide medical diagnosis.
- Captions are purely descriptive and may not fully reflect clinical accuracy.

---

## Usage

Here's how you can use the model for inference:

```python
from transformers import VisionEncoderDecoderModel, ViTImageProcessor, AutoTokenizer
from PIL import Image
import torch

# Load model
model_id = "Henil1/vit-axavision-2-ChestX"
model = VisionEncoderDecoderModel.from_pretrained(model_id)
feature_extractor = ViTImageProcessor.from_pretrained(model_id)
tokenizer = AutoTokenizer.from_pretrained(model_id)

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model.to(device)

# Preprocess image
image = Image.open("your_image_path.jpg").convert("RGB")
pixel_values = feature_extractor(images=image, return_tensors="pt").pixel_values.to(device)

# Generate caption
output_ids = model.generate(pixel_values, max_length=64, num_beams=4)
caption = tokenizer.decode(output_ids[0], skip_special_tokens=True)
print("Generated caption:", caption)
```

---

## Citation

If you use this model, please cite:

```bibtex
@misc{henil2025axavision,
  author = {Henilsinh Raj},
  title = {Vit-Axavision-2-ChestX: Vision-Language Model for Chest X-Ray Captioning},
  year = {2025},
  publisher = {Hugging Face},
  url = {https://huggingface.co/Henil1/vit-axavision-2-ChestX}
}
```
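As a lighter-weight alternative to the explicit loop above, the same checkpoint should also work through the high-level `image-to-text` pipeline. A minimal sketch, assuming the repo's image processor and tokenizer are resolved automatically (the snippet above loads both from the same repo, so they ship with the checkpoint):

```python
from transformers import pipeline

# The image-to-text pipeline wraps preprocessing, generation and decoding in one call.
captioner = pipeline("image-to-text", model="Henil1/vit-axavision-2-ChestX")
result = captioner("your_image_path.jpg", generate_kwargs={"max_length": 64, "num_beams": 4})
print(result[0]["generated_text"])
```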
mjpreneur/scalewithai
mjpreneur
2025-06-16T04:23:09Z
0
0
diffusers
[ "diffusers", "flux", "lora", "replicate", "text-to-image", "en", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
2025-06-16T03:53:56Z
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
#   - text: >-
#       prompt
#     output:
#       url: https://...
instance_prompt: TOK
---

# Scalewithai

<Gallery />

## About this LoRA

This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.

It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train

## Trigger words

You should use `TOK` to trigger the image generation.

## Run this LoRA with an API using Replicate

```py
import replicate

input = {
    "prompt": "TOK",
    "lora_weights": "https://huggingface.co/mjpreneur/scalewithai/resolve/main/lora.safetensors"
}

output = replicate.run(
    "black-forest-labs/flux-dev-lora",
    input=input
)
for index, item in enumerate(output):
    with open(f"output_{index}.webp", "wb") as file:
        file.write(item.read())
```

## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)

```py
from diffusers import AutoPipelineForText2Image
import torch

pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('mjpreneur/scalewithai', weight_name='lora.safetensors')
image = pipeline('TOK').images[0]
```

For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)

## Training details

- Steps: 2000
- Learning rate: 0.0004
- LoRA rank: 16

## Contribute your own examples

You can use the [community tab](https://huggingface.co/mjpreneur/scalewithai/discussions) to add images that show off what you've made with this LoRA.
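The weighting mentioned above can also be applied at inference time. A minimal sketch, assuming the diffusers `fuse_lora`/`unfuse_lora` API; the 0.8 scale is illustrative, not a recommended value:

```py
# Continuing from the diffusers snippet above.
pipeline.fuse_lora(lora_scale=0.8)   # bake the LoRA into the base weights at 80% strength
image = pipeline('TOK').images[0]
image.save('scalewithai.png')
pipeline.unfuse_lora()               # restore the original base weights
```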
gradientrouting-spar/standard_1_proxy_ntrain_25_ntrig_9_random_3x3_seed_1_20250616_041059
gradientrouting-spar
2025-06-16T04:21:20Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-06-16T04:21:11Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
gradientrouting-spar/mc14_badmed_kl_div_dsd-42_msd-42_beta_kl-3_seed_1
gradientrouting-spar
2025-06-16T04:19:00Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-06-16T04:18:45Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
gradientrouting-spar/mc14_badmed_kl_div_dsd-42_msd-42_beta_kl-3_seed_1_epoch_1
gradientrouting-spar
2025-06-16T04:18:43Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-06-16T04:18:30Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
EJAustin/vit-base-patch16-224-in21k-finetuned-lora-food101
EJAustin
2025-06-16T04:15:32Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-06-16T04:07:05Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Cnam-LMSSC/mimi_throat_microphone
Cnam-LMSSC
2025-06-16T04:14:27Z
158
0
transformers
[ "transformers", "safetensors", "mimi", "feature-extraction", "endpoints_compatible", "region:us" ]
feature-extraction
2025-06-03T01:29:56Z
---
library_name: transformers
tags: []
---

## Inference script:

```python
import torch, torchaudio
from datasets import load_dataset
from moshi.models import loaders

weight_path = loaders.hf_hub_download("Cnam-LMSSC/mimi_throat_microphone", "kyutai_implementation.safetensors")
model = loaders.get_mimi(weight_path).eval()
model.set_num_codebooks(model.total_codebooks)  # use all codebooks

test_dataset = load_dataset("Cnam-LMSSC/vibravox", "speech_clean", split="test", streaming=True)
audio_48kHz = torch.Tensor(next(iter(test_dataset))["audio.throat_microphone"]["array"])
audio_24kHz = torchaudio.functional.resample(audio_48kHz, orig_freq=48_000, new_freq=24_000)
enhanced_audio_24kHz = model.decode(model.encode(audio_24kHz[None, None, :]))
```

For streaming usage, please refer to this [script](https://github.com/kyutai-labs/moshi/blob/main/scripts/mimi_streaming_test.py).
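To inspect the result, the enhanced tensor can be written to disk. A minimal follow-up sketch; the filename is illustrative:

```python
# enhanced_audio_24kHz has shape [batch=1, channels=1, samples]; torchaudio.save
# expects [channels, samples], so drop the batch dimension and detach from the autograd graph.
torchaudio.save("enhanced_throat_mic.wav", enhanced_audio_24kHz.squeeze(0).detach().cpu(), sample_rate=24_000)
```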
MinaMila/gemma_2b_unlearned_2nd_5e-7_1.0_0.05_0.05_0.15_epoch2
MinaMila
2025-06-16T04:12:32Z
0
0
transformers
[ "transformers", "safetensors", "gemma2", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-06-16T04:10:46Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
gradientrouting-spar/standard_1_proxy_ntrain_25_ntrig_9_animals_3x3_seed_1_seed_25_seed_2_seed_42_20250616_040025
gradientrouting-spar
2025-06-16T04:10:46Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-06-16T04:10:37Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
researchsocaai/gen-sql-8b-thinking-awq
researchsocaai
2025-06-16T04:10:38Z
0
0
transformers
[ "transformers", "safetensors", "qwen3", "text-generation", "text-generation-inference", "unsloth", "conversational", "en", "base_model:researchsocaai/gen-sql-8b-thinking-awq", "base_model:quantized:researchsocaai/gen-sql-8b-thinking-awq", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "4-bit", "bitsandbytes", "region:us" ]
text-generation
2025-06-16T04:08:33Z
---
base_model: dwikitheduck/gen-sql-8B-thinking-awq
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
license: apache-2.0
language:
- en
---

# Uploaded finetuned model

- **Developed by:** dwikitheduck
- **License:** apache-2.0
- **Finetuned from model:** dwikitheduck/gen-sql-8B-thinking-awq

This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.

[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
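The card ships without usage code. A minimal text-to-SQL inference sketch, assuming standard transformers chat-template support for this qwen3 checkpoint; the schema and question are illustrative:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "researchsocaai/gen-sql-8b-thinking-awq"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Illustrative prompt; adapt the schema/question format to whatever the finetune expects.
messages = [{"role": "user", "content": "Table users(id, name, signup_date). Write SQL to count signups per day."}]
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```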
seregadgl/sts_v11
seregadgl
2025-06-16T04:02:47Z
0
0
sentence-transformers
[ "sentence-transformers", "safetensors", "bert", "sentence-similarity", "feature-extraction", "generated_from_trainer", "dataset_size:111476", "loss:CosineSimilarityLoss", "dataset:seregadgl/data_cross_gpt_139k", "arxiv:1908.10084", "base_model:sergeyzh/LaBSE-ru-sts", "base_model:finetune:sergeyzh/LaBSE-ru-sts", "model-index", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
2025-06-16T04:02:21Z
---
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:111476
- loss:CosineSimilarityLoss
base_model: sergeyzh/LaBSE-ru-sts
widget:
- source_sentence: 'трюковый самокат plank 180 белый '
  sentences:
  - смарт-телевизор 75 sony kd-75x950h
  - самокат для трюков плэнк 1.80 м белый
  - xiaomi mi 11 8gb 128gb
- source_sentence: 'вейп vaporesso xros '
  sentences:
  - садовая ограда классика 4 2 м белый
  - кухонные весы
  - электронная сигарета voopoo drag
- source_sentence: серьги l atelier precieux 1628710
  sentences:
  - фильтр hepa для пылесоса варис st400
  - потолочная люстра майтон nostalgia ceiling chandelier mod048pl-06g
  - серьги atelier de bijoux 1628712
- source_sentence: 'мобильный геймпад триггерами x2 '
  sentences:
  - электроскутер nitro pro milano 750w led
  - наушники без проводов мейзу ep52 lite
  - геймпад с функцией триггеров x2
- source_sentence: комод 7 рисунком машинки 4 ящика
  sentences:
  - удлинитель far f 505 d lara выключателем 2 0м
  - беззеркальный фотоаппарат nikon z50 kit 16-50mm ilce-7cl красный
  - комод 8 с изображением супергероев 6 ящиков
datasets:
- seregadgl/data_cross_gpt_139k
pipeline_tag: sentence-similarity
library_name: sentence-transformers
metrics:
- cosine_accuracy
- cosine_accuracy_threshold
- cosine_f1
- cosine_f1_threshold
- cosine_precision
- cosine_recall
- cosine_ap
- cosine_mcc
model-index:
- name: SentenceTransformer based on sergeyzh/LaBSE-ru-sts
  results:
  - task:
      type: binary-classification
      name: Binary Classification
    dataset:
      name: eval
      type: eval
    metrics:
    - type: cosine_accuracy
      value: 0.9722640832436311
      name: Cosine Accuracy
    - type: cosine_accuracy_threshold
      value: 0.630459189414978
      name: Cosine Accuracy Threshold
    - type: cosine_f1
      value: 0.9724366041896361
      name: Cosine F1
    - type: cosine_f1_threshold
      value: 0.5821653008460999
      name: Cosine F1 Threshold
    - type: cosine_precision
      value: 0.9647847565278758
      name: Cosine Precision
    - type: cosine_recall
      value: 0.9802107980210798
      name: Cosine Recall
    - type: cosine_ap
      value: 0.9945729266353226
      name: Cosine Ap
    - type: cosine_mcc
      value: 0.9445047865635516
      name: Cosine Mcc
---

# SentenceTransformer based on sergeyzh/LaBSE-ru-sts

This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [sergeyzh/LaBSE-ru-sts](https://huggingface.co/sergeyzh/LaBSE-ru-sts) on the [data_cross_gpt_139k](https://huggingface.co/datasets/seregadgl/data_cross_gpt_139k) dataset. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

## Model Details

### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [sergeyzh/LaBSE-ru-sts](https://huggingface.co/sergeyzh/LaBSE-ru-sts) <!-- at revision 00c333ce29c9ad739f48baca9a578cd1e85094d4 -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 768 dimensions
- **Similarity Function:** Cosine Similarity
- **Training Dataset:**
  - [data_cross_gpt_139k](https://huggingface.co/datasets/seregadgl/data_cross_gpt_139k)
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->

### Model Sources

- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)

### Full Model Architecture

```
SentenceTransformer(
  (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (2): Normalize()
)
```

## Usage

### Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

```bash
pip install -U sentence-transformers
```

Then you can load this model and run inference.

```python
from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("seregadgl/sts_v11")
# Run inference
sentences = [
    'комод 7 рисунком машинки 4 ящика',
    'комод 8 с изображением супергероев 6 ящиков',
    'беззеркальный фотоаппарат nikon z50 kit 16-50mm ilce-7cl красный',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```

<!-- ### Direct Usage (Transformers) <details><summary>Click to see the direct usage in Transformers</summary> </details> -->

<!-- ### Downstream Usage (Sentence Transformers) You can finetune this model on your own dataset. <details><summary>Click to expand</summary> </details> -->

<!-- ### Out-of-Scope Use *List how the model may foreseeably be misused and address what users ought not to do with the model.* -->

## Evaluation

### Metrics

#### Binary Classification

* Dataset: `eval`
* Evaluated with [<code>BinaryClassificationEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.BinaryClassificationEvaluator)

| Metric                    | Value      |
|:--------------------------|:-----------|
| cosine_accuracy           | 0.9723     |
| cosine_accuracy_threshold | 0.6305     |
| cosine_f1                 | 0.9724     |
| cosine_f1_threshold       | 0.5822     |
| cosine_precision          | 0.9648     |
| cosine_recall             | 0.9802     |
| **cosine_ap**             | **0.9946** |
| cosine_mcc                | 0.9445     |

<!-- ## Bias, Risks and Limitations *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.* -->

<!-- ### Recommendations *What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.* -->

## Training Details

### Training Dataset

#### data_cross_gpt_139k

* Dataset: [data_cross_gpt_139k](https://huggingface.co/datasets/seregadgl/data_cross_gpt_139k) at [9e1f5ca](https://huggingface.co/datasets/seregadgl/data_cross_gpt_139k/tree/9e1f5ca30088e6f61ca5b9a742b38ef2c4fc7f3e)
* Size: 111,476 training samples
* Columns: <code>sentence1</code>, <code>sentence2</code>, and <code>label</code>
* Approximate statistics based on the first 1000 samples:
  |         | sentence1 | sentence2 | label |
  |:--------|:----------|:----------|:------|
  | type    | string | string | float |
  | details | <ul><li>min: 3 tokens</li><li>mean: 14.84 tokens</li><li>max: 45 tokens</li></ul> | <ul><li>min: 4 tokens</li><li>mean: 15.64 tokens</li><li>max: 55 tokens</li></ul> | <ul><li>min: 0.0</li><li>mean: 0.47</li><li>max: 1.0</li></ul> |
* Samples:
  | sentence1 | sentence2 | label |
  |:----------|:----------|:------|
  | <code>нож кухонный 21см синий</code> | <code>кухонный нож 22см зелёный</code> | <code>0.0</code> |
  | <code>блок питания универсальный для мерцающих флэш гирлянд rich led бахрома занавес нить белый</code> | <code>адаптер питания для мигающих led гирлянд "luminous decor" бахрома занавес нить зелёный</code> | <code>0.0</code> |
  | <code>защитная пленка для apple iphone 6 прозрачная </code> | <code>protective film for apple iphone 6 transparent</code> | <code>1.0</code> |
* Loss: [<code>CosineSimilarityLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#cosinesimilarityloss) with these parameters:
  ```json
  {
      "loss_fct": "torch.nn.modules.loss.MSELoss"
  }
  ```

### Evaluation Dataset

#### data_cross_gpt_139k

* Dataset: [data_cross_gpt_139k](https://huggingface.co/datasets/seregadgl/data_cross_gpt_139k) at [9e1f5ca](https://huggingface.co/datasets/seregadgl/data_cross_gpt_139k/tree/9e1f5ca30088e6f61ca5b9a742b38ef2c4fc7f3e)
* Size: 27,870 evaluation samples
* Columns: <code>sentence1</code>, <code>sentence2</code>, and <code>label</code>
* Approximate statistics based on the first 1000 samples:
  |         | sentence1 | sentence2 | label |
  |:--------|:----------|:----------|:------|
  | type    | string | string | float |
  | details | <ul><li>min: 3 tokens</li><li>mean: 15.05 tokens</li><li>max: 58 tokens</li></ul> | <ul><li>min: 4 tokens</li><li>mean: 15.57 tokens</li><li>max: 53 tokens</li></ul> | <ul><li>min: 0.0</li><li>mean: 0.48</li><li>max: 1.0</li></ul> |
* Samples:
  | sentence1 | sentence2 | label |
  |:----------|:----------|:------|
  | <code>сумка дорожная складная полет оранжевая bradex td 0599 </code> | <code>сумка для путешествий складная брадекс orange</code> | <code>1.0</code> |
  | <code>наушники sennheiser hd 450bt белый </code> | <code>наушники сенхайзер hd 450bt white</code> | <code>1.0</code> |
  | <code>перчатки stg al-05-1871 синие серые черные зеленыеполноразмерные xl</code> | <code>перчатки stg al-05-1871 blue gray black green full size xl</code> | <code>1.0</code> |
* Loss: [<code>CosineSimilarityLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#cosinesimilarityloss) with these parameters:
  ```json
  {
      "loss_fct": "torch.nn.modules.loss.MSELoss"
  }
  ```

### Training Hyperparameters

#### Non-Default Hyperparameters

- `eval_strategy`: steps
- `per_device_train_batch_size`: 32
- `per_device_eval_batch_size`: 32
- `learning_rate`: 4.7459131195420915e-05
- `weight_decay`: 0.03196240090522689
- `num_train_epochs`: 2
- `warmup_ratio`: 0.014344463935915175
- `fp16`: True

#### All Hyperparameters

<details><summary>Click to expand</summary>

- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 32
- `per_device_eval_batch_size`: 32
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 4.7459131195420915e-05
- `weight_decay`: 0.03196240090522689
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 2
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.014344463935915175
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: True
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `tp_size`: 0
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: proportional

</details>

### Training Logs

| Epoch  | Step | Training Loss | Validation Loss | eval_cosine_ap |
|:------:|:----:|:-------------:|:---------------:|:--------------:|
| 0.0287 | 100  | 0.189         | -               | -              |
| 0.0574 | 200  | 0.0695        | -               | -              |
| 0.0861 | 300  | 0.067         | -               | -              |
| 0.1148 | 400  | 0.0643        | -               | -              |
| 0.1435 | 500  | 0.0594        | 0.0549          | 0.9862         |
| 0.1722 | 600  | 0.0565        | -               | -              |
| 0.2009 | 700  | 0.0535        | -               | -              |
| 0.2296 | 800  | 0.0506        | -               | -              |
| 0.2583 | 900  | 0.0549        | -               | -              |
| 0.2870 | 1000 | 0.0535        | 0.0451          | 0.9888         |
| 0.3157 | 1100 | 0.0492        | -               | -              |
| 0.3444 | 1200 | 0.0499        | -               | -              |
| 0.3731 | 1300 | 0.0486        | -               | -              |
| 0.4018 | 1400 | 0.0458        | -               | -              |
| 0.4305 | 1500 | 0.0458        | 0.0419          | 0.9877         |
| 0.4592 | 1600 | 0.0502        | -               | -              |
| 0.4879 | 1700 | 0.045         | -               | -              |
| 0.5166 | 1800 | 0.0435        | -               | -              |
| 0.5454 | 1900 | 0.0426        | -               | -              |
| 0.5741 | 2000 | 0.0422        | 0.0386          | 0.9906         |
| 0.6028 | 2100 | 0.0436        | -               | -              |
| 0.6315 | 2200 | 0.043         | -               | -              |
| 0.6602 | 2300 | 0.0432        | -               | -              |
| 0.6889 | 2400 | 0.0397        | -               | -              |
| 0.7176 | 2500 | 0.0394        | 0.0357          | 0.9903         |
| 0.7463 | 2600 | 0.039         | -               | -              |
| 0.7750 | 2700 | 0.0398        | -               | -              |
| 0.8037 | 2800 | 0.0394        | -               | -              |
| 0.8324 | 2900 | 0.0426        | -               | -              |
| 0.8611 | 3000 | 0.0345        | 0.0341          | 0.9921         |
| 0.8898 | 3100 | 0.0361        | -               | -              |
| 0.9185 | 3200 | 0.0365        | -               | -              |
| 0.9472 | 3300 | 0.0401        | -               | -              |
| 0.9759 | 3400 | 0.0391        | -               | -              |
| 1.0046 | 3500 | 0.0342        | 0.0310          | 0.9928         |
| 1.0333 | 3600 | 0.0267        | -               | -              |
| 1.0620 | 3700 | 0.0264        | -               | -              |
| 1.0907 | 3800 | 0.0263        | -               | -              |
| 1.1194 | 3900 | 0.0248        | -               | -              |
| 1.1481 | 4000 | 0.0282        | 0.0301          | 0.9928         |
| 1.1768 | 4100 | 0.0279        | -               | -              |
| 1.2055 | 4200 | 0.0258        | -               | -              |
| 1.2342 | 4300 | 0.0248        | -               | -              |
| 1.2629 | 4400 | 0.0289        | -               | -              |
| 1.2916 | 4500 | 0.0261        | 0.0291          | 0.9935         |
| 1.3203 | 4600 | 0.0262        | -               | -              |
| 1.3490 | 4700 | 0.0276        | -               | -              |
| 1.3777 | 4800 | 0.0256        | -               | -              |
| 1.4064 | 4900 | 0.0272        | -               | -              |
| 1.4351 | 5000 | 0.0283        | 0.0284          | 0.9939         |
| 1.4638 | 5100 | 0.0254        | -               | -              |
| 1.4925 | 5200 | 0.0252        | -               | -              |
| 1.5212 | 5300 | 0.0234        | -               | -              |
| 1.5499 | 5400 | 0.0228        | -               | -              |
| 1.5786 | 5500 | 0.0248        | 0.0277          | 0.9941         |
| 1.6073 | 5600 | 0.024         | -               | -              |
| 1.6361 | 5700 | 0.0225        | -               | -              |
| 1.6648 | 5800 | 0.0234        | -               | -              |
| 1.6935 | 5900 | 0.0226        | -               | -              |
| 1.7222 | 6000 | 0.0248        | 0.0265          | 0.9942         |
| 1.7509 | 6100 | 0.0247        | -               | -              |
| 1.7796 | 6200 | 0.0219        | -               | -              |
| 1.8083 | 6300 | 0.026         | -               | -              |
| 1.8370 | 6400 | 0.0209        | -               | -              |
| 1.8657 | 6500 | 0.0252        | 0.0262          | 0.9945         |
| 1.8944 | 6600 | 0.0218        | -               | -              |
| 1.9231 | 6700 | 0.0223        | -               | -              |
| 1.9518 | 6800 | 0.0228        | -               | -              |
| 1.9805 | 6900 | 0.0242        | -               | -              |
| 2.0    | 6968 | -             | 0.0257          | 0.9946         |

### Framework Versions

- Python: 3.11.11
- Sentence Transformers: 4.1.0
- Transformers: 4.51.3
- PyTorch: 2.6.0+cu124
- Accelerate: 1.5.2
- Datasets: 3.6.0
-
Tokenizers: 0.21.1 ## Citation ### BibTeX #### Sentence Transformers ```bibtex @inproceedings{reimers-2019-sentence-bert, title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks", author = "Reimers, Nils and Gurevych, Iryna", booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing", month = "11", year = "2019", publisher = "Association for Computational Linguistics", url = "https://arxiv.org/abs/1908.10084", } ``` <!-- ## Glossary *Clearly define terms in order to be accessible across audiences.* --> <!-- ## Model Card Authors *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.* --> <!-- ## Model Card Contact *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.* -->
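As an addendum to the evaluation above (not part of the original card): a minimal sketch of turning similarity scores into duplicate/non-duplicate decisions using the reported `cosine_f1_threshold`. The pairs are taken from the evaluation samples shown earlier.

```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("seregadgl/sts_v11")

# Threshold from the Binary Classification table above (cosine_f1_threshold).
THRESHOLD = 0.5822
pairs = [
    ("наушники sennheiser hd 450bt белый", "наушники сенхайзер hd 450bt white"),
    ("нож кухонный 21см синий", "кухонный нож 22см зелёный"),
]

for left, right in pairs:
    emb = model.encode([left, right])
    score = model.similarity(emb[0:1], emb[1:2]).item()  # 1x1 similarity matrix
    print(f"{score:.4f} -> {'duplicate' if score >= THRESHOLD else 'different'}")
```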
sonnykoalu/xdf
sonnykoalu
2025-06-16T04:02:41Z
0
0
null
[ "license:other", "region:us" ]
null
2025-06-16T04:01:45Z
---
license: other
license_name: none
license_link: LICENSE
---
Assamese-Viral-Video/VIDEO.Assamese.Viral.Video.Tutorial.Official
Assamese-Viral-Video
2025-06-16T04:01:54Z
0
0
null
[ "region:us" ]
null
2025-06-16T03:54:50Z
<a href="https://t.co/98E3uGhPfJ" rel="nofollow" data-target="animated-image.originalLink"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="WATCH Videos" data-canonical-src="https://i.imgur.com/dJHk4Zq.gif" style="max-width: 100%; display: inline-block;" data-target="animated-image.originalImage"></a>
Sapna-Shah-Indians-viral-video/wATCH.Sapna.Shah.Nulook.India.viral.video
Sapna-Shah-Indians-viral-video
2025-06-16T03:58:43Z
0
0
null
[ "region:us" ]
null
2025-06-16T03:58:34Z
[![image/gif](https://cdn-uploads.huggingface.co/production/uploads/683d278851706d12b2cbc4eb/OMYmxOdS-sy4ZshNCnNav.gif)](https://t.co/P8Ex9FtH0g)
Indians-viral-video/wATCH.Sapna.Shah.Nulook.India.viral.video.original.Trending
Indians-viral-video
2025-06-16T03:57:15Z
0
0
null
[ "region:us" ]
null
2025-06-16T03:56:37Z
[![image/gif](https://cdn-uploads.huggingface.co/production/uploads/683d278851706d12b2cbc4eb/OMYmxOdS-sy4ZshNCnNav.gif)](https://t.co/P8Ex9FtH0g)
MinaMila/phi3_unlearned_2nd_5e-7_1.0_0.25_0.5_0.25_epoch1
MinaMila
2025-06-16T03:57:08Z
0
0
transformers
[ "transformers", "safetensors", "phi3", "text-generation", "conversational", "custom_code", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-06-16T03:55:20Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
18-maryam-faisal-Leaked-Viral-video/TRENDING.maryam.faisal.Viral.Video.Link.Tutorial
18-maryam-faisal-Leaked-Viral-video
2025-06-16T03:54:29Z
0
0
null
[ "region:us" ]
null
2025-06-16T03:54:17Z
01 seconds ago [🌐 𝖢𝖫𝖨𝖢𝖪 𝖧𝖤𝖱𝖤 🟢==►► 𝖶𝖠𝖳𝖢𝖧 𝖭𝖮𝖶](https://infobal.com.ar/watch-full-video/?Bados.all) [🌐 𝖢𝖫𝖨𝖢𝖪 𝖧𝖤𝖱𝖤 🟢==►► 𝖶𝖠𝖳𝖢𝖧 𝖭𝖮𝖶 FREE](https://infobal.com.ar/watch-full-video/?Bados.all) <a href="https://infobal.com.ar/watch-full-video/?Bados.all" rel="nofollow" data-target="animated-image.originalLink"><img src="https://media.post.rvohealth.io/wp-content/uploads/2024/02/video-overlay-bed-sex-couple-original.jpg" alt="WATCH Videos" data-canonical-src="https://i.imgur.com/dJHk4Zq.gif" style="max-width: 100%; display: inline-block;" data-target="animated-image.originalImage"></a>
erdem-erdem/Qwen2.5-3B-Instruct-countdown-ps-grpo-r32
erdem-erdem
2025-06-16T03:53:58Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "text-generation-inference", "unsloth", "conversational", "en", "base_model:unsloth/Qwen2.5-3B-Instruct", "base_model:finetune:unsloth/Qwen2.5-3B-Instruct", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2025-06-16T03:52:24Z
---
base_model: unsloth/Qwen2.5-3B-Instruct
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
license: apache-2.0
language:
- en
---

# Uploaded finetuned model

- **Developed by:** erdem-erdem
- **License:** apache-2.0
- **Finetuned from model:** unsloth/Qwen2.5-3B-Instruct

This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.

[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
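The card ships no inference snippet; a minimal sketch of loading the checkpoint with plain `transformers` follows. The countdown-style prompt is a hypothetical example, inferred only from the repository name.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "erdem-erdem/Qwen2.5-3B-Instruct-countdown-ps-grpo-r32"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Hypothetical countdown-style task prompt.
messages = [{"role": "user", "content": "Using the numbers 3, 7 and 25, create an equation that equals 46."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```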
sil-ai/madlad400-finetuned-onr_x_wol-tpi
sil-ai
2025-06-16T03:53:38Z
7
0
peft
[ "peft", "safetensors", "generated_from_trainer", "translation", "onr", "tpi", "base_model:jbochi/madlad400-3b-mt", "base_model:adapter:jbochi/madlad400-3b-mt", "license:apache-2.0", "region:us" ]
translation
2025-06-11T00:20:59Z
---
base_model: jbochi/madlad400-3b-mt
library_name: peft
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: madlad400-finetuned-onr_x_wol-tpi
  results: []
language:
- onr
- tpi
model_type: Translation
pipeline_tag: translation
---

# madlad400-finetuned-onr_x_wol-tpi

This model is a fine-tuned version of `jbochi/madlad400-3b-mt` for translation from Onnele, Wolwale to Tok Pisin.

## Model details

- **Developed by:** SIL Global
- **Finetuned from model:** jbochi/madlad400-3b-mt
- **Model type:** Translation
- **Source language:** Onnele, Wolwale (`onr`)
- **Target language:** Tok Pisin (`tpi`)
- **License:** closed/private

## Datasets

The model was trained on a parallel corpus of plain text files:

Onnele, Wolwale:
- Onnele, Wolwale Scriptures
- License: All rights reserved, Wycliffe Bible Translators. Used with permission.

Tok Pisin:
- Tok Pisin back-translation
- License: All rights reserved, Wycliffe Bible Translators. Used with permission.

## Usage

You can use this model with the `transformers` library like this:

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("sil-ai/madlad400-finetuned-onr_x_wol-tpi")
model = AutoModelForSeq2SeqLM.from_pretrained("sil-ai/madlad400-finetuned-onr_x_wol-tpi")

inputs = tokenizer("Your input text here", return_tensors="pt")
outputs = model.generate(**inputs)
print(tokenizer.decode(outputs[0]))
```

# madlad400-finetuned-onr_x_wol-tpi

This model is a fine-tuned version of [jbochi/madlad400-3b-mt](https://huggingface.co/jbochi/madlad400-3b-mt) on the parallel corpus described above.
It achieves the following results on the evaluation set:
- Loss: 0.1339
- Chrf: 84.1544

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 4
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10.0

### Training results

| Training Loss | Epoch  | Step | Validation Loss | Chrf    |
|:-------------:|:------:|:----:|:---------------:|:-------:|
| 0.2296        | 7.7341 | 1600 | 0.1639          | 81.9949 |

### Framework versions

- PEFT 0.12.0
- Transformers 4.44.2
- Pytorch 2.4.1+cu124
- Datasets 2.21.0
- Tokenizers 0.19.1
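A note on the usage snippet above: this repository is tagged as a PEFT (LoRA) adapter on top of `jbochi/madlad400-3b-mt`, so loading the adapter repo directly with `AutoModelForSeq2SeqLM` assumes merged weights. A hedged alternative sketch using peft's auto class; the `<2tpi>` target-language prefix follows the base MADLAD-400 convention and whether this fine-tune still expects it is an assumption:

```python
from peft import AutoPeftModelForSeq2SeqLM
from transformers import AutoTokenizer

# Loads the adapter together with its base model (downloads jbochi/madlad400-3b-mt).
model = AutoPeftModelForSeq2SeqLM.from_pretrained("sil-ai/madlad400-finetuned-onr_x_wol-tpi")
# The tokenizer is the base model's; the adapter repo may not ship one.
tokenizer = AutoTokenizer.from_pretrained("jbochi/madlad400-3b-mt")

text = "<2tpi> Your Onnele source text here"  # <2xx> token per MADLAD-400 convention
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```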
sil-ai/madlad400-finetuned-onr_x_rbr-tpi
sil-ai
2025-06-16T03:53:37Z
9
0
peft
[ "peft", "safetensors", "generated_from_trainer", "translation", "onr", "tpi", "base_model:jbochi/madlad400-3b-mt", "base_model:adapter:jbochi/madlad400-3b-mt", "license:apache-2.0", "region:us" ]
translation
2025-06-10T23:21:25Z
---
base_model: jbochi/madlad400-3b-mt
library_name: peft
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: madlad400-finetuned-onr_x_rbr-tpi
  results: []
language:
- onr
- tpi
model_type: Translation
pipeline_tag: translation
---

# madlad400-finetuned-onr_x_rbr-tpi

This model is a fine-tuned version of `jbochi/madlad400-3b-mt` for translation from Onnele, Romei-Barera to Tok Pisin.

## Model details

- **Developed by:** SIL Global
- **Finetuned from model:** jbochi/madlad400-3b-mt
- **Model type:** Translation
- **Source language:** Onnele, Romei-Barera (`onr`)
- **Target language:** Tok Pisin (`tpi`)
- **License:** closed/private

## Datasets

The model was trained on a parallel corpus of plain text files:

Onnele, Romei-Barera:
- Onnele, Romei-Barera Scriptures
- License: All rights reserved, Wycliffe Bible Translators. Used with permission.

Tok Pisin:
- Tok Pisin back-translation
- License: All rights reserved, Wycliffe Bible Translators. Used with permission.

## Usage

You can use this model with the `transformers` library like this:

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("sil-ai/madlad400-finetuned-onr_x_rbr-tpi")
model = AutoModelForSeq2SeqLM.from_pretrained("sil-ai/madlad400-finetuned-onr_x_rbr-tpi")

inputs = tokenizer("Your input text here", return_tensors="pt")
outputs = model.generate(**inputs)
print(tokenizer.decode(outputs[0]))
```

# madlad400-finetuned-onr_x_rbr-tpi

This model is a fine-tuned version of [jbochi/madlad400-3b-mt](https://huggingface.co/jbochi/madlad400-3b-mt) on the parallel corpus described above.
It achieves the following results on the evaluation set:
- Loss: 0.1796
- Chrf: 79.0591

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 4
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10.0

### Training results

| Training Loss | Epoch  | Step | Validation Loss | Chrf    |
|:-------------:|:------:|:----:|:---------------:|:-------:|
| 0.2929        | 7.7435 | 1600 | 0.2140          | 76.7801 |

### Framework versions

- PEFT 0.12.0
- Transformers 4.44.2
- Pytorch 2.4.1+cu124
- Datasets 2.21.0
- Tokenizers 0.19.1
sil-ai/madlad400-finetuned-onr_x_goi-tpi
sil-ai
2025-06-16T03:53:36Z
8
0
peft
[ "peft", "safetensors", "generated_from_trainer", "translation", "onr", "tpi", "base_model:jbochi/madlad400-3b-mt", "base_model:adapter:jbochi/madlad400-3b-mt", "license:apache-2.0", "region:us" ]
translation
2025-06-10T23:22:35Z
---
base_model: jbochi/madlad400-3b-mt
library_name: peft
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: madlad400-finetuned-onr_x_goi-tpi
  results: []
language:
- onr
- tpi
model_type: Translation
pipeline_tag: translation
---

# madlad400-finetuned-onr_x_goi-tpi

This model is a fine-tuned version of `jbochi/madlad400-3b-mt` for translation from Onnele, Goiniri to Tok Pisin.

## Model details

- **Developed by:** SIL Global
- **Finetuned from model:** jbochi/madlad400-3b-mt
- **Model type:** Translation
- **Source language:** Onnele, Goiniri (`onr`)
- **Target language:** Tok Pisin (`tpi`)
- **License:** closed/private

## Datasets

The model was trained on a parallel corpus of plain text files:

Onnele, Goiniri:
- Onnele, Goiniri Scriptures
- License: All rights reserved, Wycliffe Bible Translators. Used with permission.

Tok Pisin:
- Tok Pisin back-translation
- License: All rights reserved, Wycliffe Bible Translators. Used with permission.

## Usage

You can use this model with the `transformers` library like this:

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("sil-ai/madlad400-finetuned-onr_x_goi-tpi")
model = AutoModelForSeq2SeqLM.from_pretrained("sil-ai/madlad400-finetuned-onr_x_goi-tpi")

inputs = tokenizer("Your input text here", return_tensors="pt")
outputs = model.generate(**inputs)
print(tokenizer.decode(outputs[0]))
```

# madlad400-finetuned-onr_x_goi-tpi

This model is a fine-tuned version of [jbochi/madlad400-3b-mt](https://huggingface.co/jbochi/madlad400-3b-mt) on the parallel corpus described above.
It achieves the following results on the evaluation set:
- Loss: 0.1157
- Chrf: 85.9775

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 4
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10.0

### Training results

| Training Loss | Epoch  | Step | Validation Loss | Chrf    |
|:-------------:|:------:|:----:|:---------------:|:-------:|
| 0.197         | 7.7295 | 1600 | 0.1436          | 83.9241 |

### Framework versions

- PEFT 0.12.0
- Transformers 4.44.2
- Pytorch 2.4.1+cu124
- Datasets 2.21.0
- Tokenizers 0.19.1
sil-ai/madlad400-finetuned-uni-tpi
sil-ai
2025-06-16T03:53:34Z
110
0
peft
[ "peft", "safetensors", "generated_from_trainer", "translation", "uni", "tpi", "base_model:jbochi/madlad400-3b-mt", "base_model:adapter:jbochi/madlad400-3b-mt", "license:apache-2.0", "region:us" ]
translation
2025-06-05T03:56:50Z
---
base_model: jbochi/madlad400-3b-mt
library_name: peft
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: madlad400-finetuned-uni-tpi
  results: []
language:
- uni
- tpi
model_type: Translation
pipeline_tag: translation
---

# madlad400-finetuned-uni-tpi

This model is a fine-tuned version of `jbochi/madlad400-3b-mt` for translation from Ramo to Tok Pisin.

## Model details

- **Developed by:** SIL Global
- **Finetuned from model:** jbochi/madlad400-3b-mt
- **Model type:** Translation
- **Source language:** Ramo (`uni`)
- **Target language:** Tok Pisin (`tpi`)
- **License:** closed/private

## Datasets

The model was trained on a parallel corpus of plain text files:

Ramo:
- Ramo Scriptures
- License: All rights reserved, Wycliffe Bible Translators. Used with permission.

Tok Pisin:
- Tok Pisin back-translation
- License: All rights reserved, Wycliffe Bible Translators. Used with permission.

## Usage

You can use this model with the `transformers` library like this:

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("sil-ai/madlad400-finetuned-uni-tpi")
model = AutoModelForSeq2SeqLM.from_pretrained("sil-ai/madlad400-finetuned-uni-tpi")

inputs = tokenizer("Your input text here", return_tensors="pt")
outputs = model.generate(**inputs)
print(tokenizer.decode(outputs[0]))
```

# madlad400-finetuned-uni-tpi

This model is a fine-tuned version of [jbochi/madlad400-3b-mt](https://huggingface.co/jbochi/madlad400-3b-mt) on the parallel corpus described above.
It achieves the following results on the evaluation set:
- Loss: 0.1777
- Chrf: 78.0127

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 4
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10.0

### Training results

| Training Loss | Epoch  | Step | Validation Loss | Chrf    |
|:-------------:|:------:|:----:|:---------------:|:-------:|
| 0.2819        | 7.7482 | 1600 | 0.2113          | 75.2590 |

### Framework versions

- PEFT 0.12.0
- Transformers 4.44.2
- Pytorch 2.4.1+cu124
- Datasets 2.21.0
- Tokenizers 0.19.1
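For reference, a hedged sketch of how Chrf scores in the same scale as those reported above can be computed with `sacrebleu` (the sentences are hypothetical placeholders, and the trainer's exact chrF settings may differ):

```python
import sacrebleu

# Hypothetical system outputs and references (Tok Pisin side).
hypotheses = ["Mi laikim dispela buk.", "Em i go long taun asde."]
references = [["Mi laikim dispela buk tru.", "Em i bin go long taun asde."]]

# corpus_chrf takes the hypotheses plus a list of reference streams.
chrf = sacrebleu.corpus_chrf(hypotheses, references)
print(round(chrf.score, 4))
```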
sil-ai/madlad400-finetuned-bpe_x_bar-tpi
sil-ai
2025-06-16T03:53:32Z
14
0
peft
[ "peft", "safetensors", "generated_from_trainer", "translation", "bpe", "tpi", "base_model:jbochi/madlad400-3b-mt", "base_model:adapter:jbochi/madlad400-3b-mt", "license:apache-2.0", "region:us" ]
translation
2025-06-06T19:20:16Z
---
base_model: jbochi/madlad400-3b-mt
library_name: peft
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: madlad400-finetuned-bpe_x_bar-tpi
  results: []
language:
- bpe
- tpi
model_type: Translation
pipeline_tag: translation
---

# madlad400-finetuned-bpe_x_bar-tpi

This model is a fine-tuned version of `jbochi/madlad400-3b-mt` for translation from Bauni, Barupu to Tok Pisin.

## Model details

- **Developed by:** SIL Global
- **Finetuned from model:** jbochi/madlad400-3b-mt
- **Model type:** Translation
- **Source language:** Bauni, Barupu (`bpe`)
- **Target language:** Tok Pisin (`tpi`)
- **License:** closed/private

## Datasets

The model was trained on a parallel corpus of plain text files:

Bauni, Barupu:
- Bauni, Barupu Scriptures
- License: All rights reserved, Wycliffe Bible Translators. Used with permission.

Tok Pisin:
- Tok Pisin back-translation
- License: All rights reserved, Wycliffe Bible Translators. Used with permission.

## Usage

You can use this model with the `transformers` library like this:

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("sil-ai/madlad400-finetuned-bpe_x_bar-tpi")
model = AutoModelForSeq2SeqLM.from_pretrained("sil-ai/madlad400-finetuned-bpe_x_bar-tpi")

inputs = tokenizer("Your input text here", return_tensors="pt")
outputs = model.generate(**inputs)
print(tokenizer.decode(outputs[0]))
```

# madlad400-finetuned-bpe_x_bar-tpi

This model is a fine-tuned version of [jbochi/madlad400-3b-mt](https://huggingface.co/jbochi/madlad400-3b-mt) on the parallel corpus described above.
It achieves the following results on the evaluation set:
- Loss: 0.1551
- Chrf: 80.7709

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 4
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10.0

### Training results

| Training Loss | Epoch  | Step | Validation Loss | Chrf    |
|:-------------:|:------:|:----:|:---------------:|:-------:|
| 0.2676        | 7.6923 | 1600 | 0.1899          | 78.1510 |

### Framework versions

- PEFT 0.12.0
- Transformers 4.44.2
- Pytorch 2.4.1+cu124
- Datasets 2.21.0
- Tokenizers 0.19.1
sil-ai/madlad400-finetuned-sso-tpi
sil-ai
2025-06-16T03:53:31Z
11
0
peft
[ "peft", "safetensors", "generated_from_trainer", "translation", "sso", "tpi", "base_model:jbochi/madlad400-3b-mt", "base_model:adapter:jbochi/madlad400-3b-mt", "license:apache-2.0", "region:us" ]
translation
2025-06-04T14:10:17Z
---
base_model: jbochi/madlad400-3b-mt
library_name: peft
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: madlad400-finetuned-sso-tpi
  results: []
language:
- sso
- tpi
model_type: Translation
pipeline_tag: translation
---

# madlad400-finetuned-sso-tpi

This model is a fine-tuned version of `jbochi/madlad400-3b-mt` for translation from Sissono to Tok Pisin.

## Model details

- **Developed by:** SIL Global
- **Finetuned from model:** jbochi/madlad400-3b-mt
- **Model type:** Translation
- **Source language:** Sissono (`sso`)
- **Target language:** Tok Pisin (`tpi`)
- **License:** closed/private

## Datasets

The model was trained on a parallel corpus of plain text files:

Sissono:
- Sissono Scriptures
- License: All rights reserved, Wycliffe Bible Translators. Used with permission.

Tok Pisin:
- Tok Pisin back-translation
- License: All rights reserved, Wycliffe Bible Translators. Used with permission.

## Usage

You can use this model with the `transformers` library like this:

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("sil-ai/madlad400-finetuned-sso-tpi")
model = AutoModelForSeq2SeqLM.from_pretrained("sil-ai/madlad400-finetuned-sso-tpi")

inputs = tokenizer("Your input text here", return_tensors="pt")
outputs = model.generate(**inputs)
print(tokenizer.decode(outputs[0]))
```

# madlad400-finetuned-sso-tpi

This model is a fine-tuned version of [jbochi/madlad400-3b-mt](https://huggingface.co/jbochi/madlad400-3b-mt) on the parallel corpus described above.
It achieves the following results on the evaluation set:
- Loss: 0.2259
- Chrf: 72.9830

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 4
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10.0

### Training results

| Training Loss | Epoch  | Step | Validation Loss | Chrf    |
|:-------------:|:------:|:----:|:---------------:|:-------:|
| 0.3649        | 7.7155 | 1600 | 0.2684          | 70.3240 |

### Framework versions

- PEFT 0.12.0
- Transformers 4.44.2
- Pytorch 2.4.1+cu124
- Datasets 2.21.0
- Tokenizers 0.19.1
opendiffusionai/t5-v1_1-xl-encoder-only
opendiffusionai
2025-06-16T03:53:29Z
21
0
null
[ "safetensors", "t5", "base_model:google/t5-v1_1-xl", "base_model:finetune:google/t5-v1_1-xl", "license:apache-2.0", "region:us" ]
null
2025-06-15T03:13:22Z
---
license: apache-2.0
base_model:
- google/t5-v1_1-xl
tags:
- t5
---

This is just the encoder weights from `google/t5-v1_1-xl`. It takes the checkpoint from 11GB down to 4GB.

The script to do the extraction is included here as [transform.py](transform.py).

Edit: Now that I have this in a convenient form... I got a chance to test t5-xxl projected down to 2048, vs this t5-xl.

Surprisingly, even with an untrained projection layer, trivial embedding-diversity scores rate the projected xxl version higher than native xl at 2048.

So, while this model will continue to exist as a convenient way to compare... and possibly as something to use if you are really, really REALLY tight on memory... you are probably best off using t5-xxl whenever you can.
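The actual extraction logic lives in the repo's `transform.py`, which is not reproduced here. A minimal sketch of the general approach using transformers' `T5EncoderModel` (an assumption about the method, not necessarily what `transform.py` does):

```python
from transformers import T5EncoderModel, AutoTokenizer

# T5EncoderModel instantiates only the encoder stack; the decoder weights
# in the checkpoint are simply not loaded.
encoder = T5EncoderModel.from_pretrained("google/t5-v1_1-xl")
tokenizer = AutoTokenizer.from_pretrained("google/t5-v1_1-xl")

# Saving writes an encoder-only checkpoint, roughly the size reduction noted above.
encoder.save_pretrained("./t5-v1_1-xl-encoder-only")
tokenizer.save_pretrained("./t5-v1_1-xl-encoder-only")
```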
sil-ai/madlad400-finetuned-tpi-uni
sil-ai
2025-06-16T03:53:25Z
61
0
peft
[ "peft", "safetensors", "generated_from_trainer", "translation", "tpi", "uni", "base_model:jbochi/madlad400-3b-mt", "base_model:adapter:jbochi/madlad400-3b-mt", "license:apache-2.0", "region:us" ]
translation
2025-06-05T03:56:30Z
---
base_model: jbochi/madlad400-3b-mt
library_name: peft
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: madlad400-finetuned-tpi-uni
  results: []
language:
- tpi
- uni
model_type: Translation
pipeline_tag: translation
---

# madlad400-finetuned-tpi-uni

This model is a fine-tuned version of `jbochi/madlad400-3b-mt` for translation from Tok Pisin to Ramo.

## Model details

- **Developed by:** SIL Global
- **Finetuned from model:** jbochi/madlad400-3b-mt
- **Model type:** Translation
- **Source language:** Tok Pisin (`tpi`)
- **Target language:** Ramo (`uni`)
- **License:** closed/private

## Datasets

The model was trained on a parallel corpus of plain text files:

Tok Pisin:
- Tok Pisin back-translation
- License: All rights reserved, Wycliffe Bible Translators. Used with permission.

Ramo:
- Ramo Scriptures
- License: All rights reserved, Wycliffe Bible Translators. Used with permission.

## Usage

You can use this model with the `transformers` library like this:

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("sil-ai/madlad400-finetuned-tpi-uni")
model = AutoModelForSeq2SeqLM.from_pretrained("sil-ai/madlad400-finetuned-tpi-uni")

inputs = tokenizer("Your input text here", return_tensors="pt")
outputs = model.generate(**inputs)
print(tokenizer.decode(outputs[0]))
```

# madlad400-finetuned-tpi-uni

This model is a fine-tuned version of [jbochi/madlad400-3b-mt](https://huggingface.co/jbochi/madlad400-3b-mt) on the parallel corpus described above.
It achieves the following results on the evaluation set:
- Loss: 0.2914
- Chrf: 70.9472

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 4
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10.0

### Training results

| Training Loss | Epoch  | Step | Validation Loss | Chrf    |
|:-------------:|:------:|:----:|:---------------:|:-------:|
| 0.4351        | 7.7482 | 1600 | 0.3454          | 67.9624 |

### Framework versions

- PEFT 0.12.0
- Transformers 4.44.2
- Pytorch 2.4.1+cu124
- Datasets 2.21.0
- Tokenizers 0.19.1
New-tutorial-Kayla-Nella-viral-video/FULL.VIDEO.Kayla.Nella.Viral.Video.Tutorial.Official
New-tutorial-Kayla-Nella-viral-video
2025-06-16T03:53:11Z
0
0
null
[ "region:us" ]
null
2025-06-16T03:52:55Z
<animated-image data-catalyst=""><a href="https://tinyurl.com/5ye5v3bc?dfhgKasbonStudiosdfg" rel="nofollow" data-target="animated-image.originalLink"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="Foo" data-canonical-src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" style="max-width: 100%; display: inline-block;" data-target="animated-image.originalImage"></a>
sil-ai/madlad400-finetuned-tpi-bpe_x_pou
sil-ai
2025-06-16T03:52:33Z
136
0
peft
[ "peft", "safetensors", "generated_from_trainer", "translation", "tpi", "bpe", "base_model:jbochi/madlad400-3b-mt", "base_model:adapter:jbochi/madlad400-3b-mt", "license:apache-2.0", "region:us" ]
translation
2025-06-06T21:34:34Z
---
base_model: jbochi/madlad400-3b-mt
library_name: peft
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: madlad400-finetuned-tpi-bpe_x_pou
  results: []
language:
- tpi
- bpe
model_type: Translation
pipeline_tag: translation
---

# madlad400-finetuned-tpi-bpe_x_pou

This model is a fine-tuned version of `jbochi/madlad400-3b-mt` for translation from Tok Pisin to Bauni, Pou.

## Model details

- **Developed by:** SIL Global
- **Finetuned from model:** jbochi/madlad400-3b-mt
- **Model type:** Translation
- **Source language:** Tok Pisin (`tpi`)
- **Target language:** Bauni, Pou (`bpe`)
- **License:** closed/private

## Datasets

The model was trained on a parallel corpus of plain text files:

Tok Pisin:
- Tok Pisin back-translation
- License: All rights reserved, Wycliffe Bible Translators. Used with permission.

Bauni, Pou:
- Bauni, Pou Scriptures
- License: All rights reserved, Wycliffe Bible Translators. Used with permission.

## Usage

You can use this model with the `transformers` library like this:

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("sil-ai/madlad400-finetuned-tpi-bpe_x_pou")
model = AutoModelForSeq2SeqLM.from_pretrained("sil-ai/madlad400-finetuned-tpi-bpe_x_pou")

inputs = tokenizer("Your input text here", return_tensors="pt")
outputs = model.generate(**inputs)
print(tokenizer.decode(outputs[0]))
```

# madlad400-finetuned-tpi-bpe_x_pou

This model is a fine-tuned version of [jbochi/madlad400-3b-mt](https://huggingface.co/jbochi/madlad400-3b-mt) on the parallel corpus described above.
It achieves the following results on the evaluation set:
- Loss: 0.2035
- Chrf: 79.0847

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 4
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10.0

### Training results

| Training Loss | Epoch  | Step | Validation Loss | Chrf    |
|:-------------:|:------:|:----:|:---------------:|:-------:|
| 0.3414        | 7.8144 | 1600 | 0.2413          | 76.9720 |

### Framework versions

- PEFT 0.12.0
- Transformers 4.44.2
- Pytorch 2.4.1+cu124
- Datasets 2.21.0
- Tokenizers 0.19.1
dgambettaphd/M_llm2_run2_gen8_WXS_doc1000_synt64_lr1e-04_acm_FRESH
dgambettaphd
2025-06-16T03:52:07Z
0
0
transformers
[ "transformers", "safetensors", "unsloth", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-06-16T03:51:55Z
--- library_name: transformers tags: - unsloth --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Mezzo-Fun-Viral-Video/VIDEO.mezzo.fun.Viral.Video.Tutorial.Official
Mezzo-Fun-Viral-Video
2025-06-16T03:50:52Z
0
0
null
[ "region:us" ]
null
2025-06-16T03:48:42Z
<a href="https://t.co/98E3uGhPfJ" rel="nofollow" data-target="animated-image.originalLink"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="WATCH Videos" data-canonical-src="https://i.imgur.com/dJHk4Zq.gif" style="max-width: 100%; display: inline-block;" data-target="animated-image.originalImage"></a>
huihui-ai/Huihui-MoE-1.2B-A0.6B
huihui-ai
2025-06-16T03:49:28Z
0
0
transformers
[ "transformers", "safetensors", "qwen3_moe", "text-generation", "moe", "conversational", "base_model:Qwen/Qwen3-0.6B", "base_model:finetune:Qwen/Qwen3-0.6B", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2025-06-16T02:31:56Z
---
license: apache-2.0
base_model:
- Qwen/Qwen3-0.6B
- suayptalha/Qwen3-0.6B-Code-Expert
- suayptalha/Qwen3-0.6B-Math-Expert
- suayptalha/Qwen3-0.6B-Medical-Expert
library_name: transformers
license_link: https://huggingface.co/Qwen/Qwen3-0.6B/blob/main/LICENSE
pipeline_tag: text-generation
tags:
- moe
---

# huihui-ai/Huihui-MoE-1.2B-A0.6B

## Model Overview

Huihui-MoE-1.2B-A0.6B is a **Mixture of Experts (MoE)** language model developed by **huihui.ai**, built upon the **[Qwen/Qwen3-0.6B](https://huggingface.co/Qwen/Qwen3-0.6B)** base model. It enhances the standard Transformer architecture by replacing MLP layers with MoE layers, each containing 3 experts, to achieve high performance with efficient inference. The model is designed for natural language processing tasks, including text generation, question answering, and conversational applications.

### Note

In [huihui-ai/Huihui-MoE-1B-A0.6B](https://huggingface.co/huihui-ai/Huihui-MoE-1B-A0.6B), `tie_word_embeddings=True` meant the lm_head parameters were not saved, which prevented ollama from using that model. This version saves them, so it supports ollama.

- **Architecture**: Qwen3MoeForCausalLM model with 3 experts per layer (num_experts=3), activating 1 expert per token (num_experts_per_tok=1)
- **Total Parameters**: ~1.2 billion (1.2B)
- **Activated Parameters**: ~0.62 billion (0.6B) during inference, comparable to Qwen3-0.6B
- **Developer**: huihui.ai
- **Release Date**: June 2025
- **License**: Inherits the license of the Qwen3 base model (apache-2.0)

## Expert Models

### Coding: [suayptalha/Qwen3-0.6B-Code-Expert](https://huggingface.co/suayptalha/Qwen3-0.6B-Code-Expert)

This model was fully fine-tuned with BF16 on the first 20k rows of the `nvidia/OpenCodeReasoning` dataset for 1 epoch.

### Math: [suayptalha/Qwen3-0.6B-Math-Expert](https://huggingface.co/suayptalha/Qwen3-0.6B-Math-Expert)

This model was fully fine-tuned with BF16 on the entire `unsloth/OpenMathReasoning-mini` dataset for 1 epoch.

### Medical: [suayptalha/Qwen3-0.6B-Medical-Expert](https://huggingface.co/suayptalha/Qwen3-0.6B-Medical-Expert)

This model was fully fine-tuned with BF16 on the first 20k rows of the `FreedomIntelligence/medical-o1-reasoning-SFT` dataset for 1 epoch.

### Instruction Following: [Qwen/Qwen3-0.6B](https://huggingface.co/Qwen/Qwen3-0.6B)

The `Qwen/Qwen3-0.6B` model was used directly for this expert; no fine-tuning was applied.

## Training

- **Base Model**: Qwen3-0.6B, pre-trained by the Qwen team; experts pre-trained by the Suayptalha team.
- **Conversion**: The model copies embeddings, self-attention, and normalization weights from Qwen3-0.6B, replacing MLP layers with MoE layers (3 experts). Gating weights are randomly initialized.
- **Fine-Tuning**: Not fine-tuned; users are recommended to fine-tune for specific tasks to optimize expert routing. A fine-tuned version is already available as [huihui-ai/Huihui-MoE-1.2B-A0.6B-SFT](https://huggingface.co/huihui-ai/Huihui-MoE-1.2B-A0.6B-SFT).
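The conversion step above is described only in prose. A rough sketch of the idea follows, assuming the transformers Qwen3-MoE module layout (experts under `layer.mlp.experts`, router under `layer.mlp.gate`, which the statistics hook in the usage script below also assumes). For brevity every expert is seeded here from the dense base; the released model presumably slots in the three expert models' MLPs instead. Config field names are assumptions about this repo's exact recipe.

```python
import torch
from transformers import AutoModelForCausalLM, Qwen3MoeConfig, Qwen3MoeForCausalLM

dense = AutoModelForCausalLM.from_pretrained("Qwen/Qwen3-0.6B", torch_dtype=torch.bfloat16)

# MoE config mirroring the dense model's shape.
cfg = Qwen3MoeConfig(
    vocab_size=dense.config.vocab_size,
    hidden_size=dense.config.hidden_size,
    intermediate_size=dense.config.intermediate_size,
    moe_intermediate_size=dense.config.intermediate_size,  # experts keep the dense MLP width
    num_hidden_layers=dense.config.num_hidden_layers,
    num_attention_heads=dense.config.num_attention_heads,
    num_key_value_heads=dense.config.num_key_value_heads,
    head_dim=dense.config.head_dim,
    max_position_embeddings=dense.config.max_position_embeddings,
    rope_theta=dense.config.rope_theta,
    rms_norm_eps=dense.config.rms_norm_eps,
    tie_word_embeddings=False,  # keep a separate lm_head so it gets saved (see the Note above)
    num_experts=3,
    num_experts_per_tok=1,
)
moe = Qwen3MoeForCausalLM(cfg).to(torch.bfloat16)

# Copy embeddings, attention and norms; dense MLP keys do not match MoE keys and are skipped.
moe.load_state_dict(dense.state_dict(), strict=False)

# Seed each expert with an MLP; router weights (layer.mlp.gate) stay randomly initialized.
for moe_layer, dense_layer in zip(moe.model.layers, dense.model.layers):
    for expert in moe_layer.mlp.experts:
        expert.gate_proj.weight.data.copy_(dense_layer.mlp.gate_proj.weight.data)
        expert.up_proj.weight.data.copy_(dense_layer.mlp.up_proj.weight.data)
        expert.down_proj.weight.data.copy_(dense_layer.mlp.down_proj.weight.data)

moe.save_pretrained("./Huihui-MoE-sketch")  # hypothetical output path
```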
## ollama

You can use [huihui_ai/huihui-moe:1.2b](https://ollama.com/huihui_ai/huihui-moe:1.2b) directly. Switch the thinking toggle using `/set think` and `/set nothink`.

```
ollama run huihui_ai/huihui-moe:1.2b
```

## Usage

```
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig, TextStreamer
import torch
import os
import signal
import random
import numpy as np
import time
from collections import Counter

cpu_count = os.cpu_count()
print(f"Number of CPU cores in the system: {cpu_count}")
half_cpu_count = cpu_count // 2
os.environ["MKL_NUM_THREADS"] = str(half_cpu_count)
os.environ["OMP_NUM_THREADS"] = str(half_cpu_count)
torch.set_num_threads(half_cpu_count)

print(f"PyTorch threads: {torch.get_num_threads()}")
print(f"MKL threads: {os.getenv('MKL_NUM_THREADS')}")
print(f"OMP threads: {os.getenv('OMP_NUM_THREADS')}")

# Load the model and tokenizer
NEW_MODEL_ID = "huihui-ai/Huihui-MoE-1.2B-A0.6B"
print(f"Load Model {NEW_MODEL_ID} ... ")
quant_config_32 = BitsAndBytesConfig(
    load_in_32bit=True,
    bnb_32bit_compute_dtype=torch.bfloat16,
    bnb_32bit_use_double_quant=True,
    llm_int32_enable_fp32_cpu_offload=True,
)

model = AutoModelForCausalLM.from_pretrained(
    NEW_MODEL_ID,
    device_map="auto",
    trust_remote_code=True,
    #quantization_config=quant_config_32,
    torch_dtype=torch.bfloat16
)
tokenizer = AutoTokenizer.from_pretrained(NEW_MODEL_ID, trust_remote_code=True)
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token
    tokenizer.pad_token_id = tokenizer.eos_token_id

messages = []
nothink = False
same_seed = False
skip_prompt = True
skip_special_tokens = True
do_sample = True

def set_random_seed(seed=None):
    """Set random seed for reproducibility. If seed is None, use int(time.time())."""
    if seed is None:
        seed = int(time.time())  # Convert float to int
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    torch.cuda.manual_seed_all(seed)  # If using CUDA
    torch.backends.cudnn.deterministic = True
    torch.backends.cudnn.benchmark = False
    return seed  # Return seed for logging if needed

class CustomTextStreamer(TextStreamer):
    def __init__(self, tokenizer, skip_prompt=True, skip_special_tokens=True):
        super().__init__(tokenizer, skip_prompt=skip_prompt, skip_special_tokens=skip_special_tokens)
        self.generated_text = ""
        self.stop_flag = False
        self.init_time = time.time()       # Record initialization time
        self.end_time = None               # To store end time
        self.first_token_time = None       # To store first token generation time
        self.token_count = 0               # To track total tokens

    def on_finalized_text(self, text: str, stream_end: bool = False):
        if self.first_token_time is None and text.strip():  # Set first token time on first non-empty text
            self.first_token_time = time.time()
        self.generated_text += text
        # Count tokens in the generated text
        tokens = self.tokenizer.encode(text, add_special_tokens=False)
        self.token_count += len(tokens)
        print(text, end="", flush=True)
        if stream_end:
            self.end_time = time.time()    # Record end time when streaming ends
        if self.stop_flag:
            raise StopIteration

    def stop_generation(self):
        self.stop_flag = True
        self.end_time = time.time()        # Record end time when generation is stopped

    def get_metrics(self):
        """Returns initialization time, first token time, first token latency, end time, total time, total tokens, and tokens per second."""
        if self.end_time is None:
            self.end_time = time.time()    # Set end time if not already set
        total_time = self.end_time - self.init_time  # Total time from init to end
        tokens_per_second = self.token_count / total_time if total_time > 0 else 0
        first_token_latency = (self.first_token_time - self.init_time) if self.first_token_time is not None else None
        metrics = {
            "init_time": self.init_time,
            "first_token_time": self.first_token_time,
            "first_token_latency": first_token_latency,
            "end_time": self.end_time,
            "total_time": total_time,      # Total time in seconds
            "total_tokens": self.token_count,
            "tokens_per_second": tokens_per_second
        }
        return metrics

def generate_stream(model, tokenizer, messages, nothink, skip_prompt, skip_special_tokens, do_sample, max_new_tokens):
    input_ids = tokenizer.apply_chat_template(
        messages,
        tokenize=True,
        enable_thinking=not nothink,
        add_generation_prompt=True,
        return_tensors="pt"
    )
    attention_mask = torch.ones_like(input_ids, dtype=torch.long)
    tokens = input_ids.to(model.device)
    attention_mask = attention_mask.to(model.device)

    streamer = CustomTextStreamer(tokenizer, skip_prompt=skip_prompt, skip_special_tokens=skip_special_tokens)

    def signal_handler(sig, frame):
        streamer.stop_generation()
        print("\n[Generation stopped by user with Ctrl+C]")

    signal.signal(signal.SIGINT, signal_handler)

    generate_kwargs = {}
    if do_sample:
        generate_kwargs = {
            "do_sample": do_sample,
            "max_length": max_new_tokens,
            "temperature": 0.6,
            "top_k": 20,
            "top_p": 0.95,
            "repetition_penalty": 1.2,
            "no_repeat_ngram_size": 2
        }
    else:
        generate_kwargs = {
            "do_sample": do_sample,
            "max_length": max_new_tokens,
            "repetition_penalty": 1.2,
            "no_repeat_ngram_size": 2
        }

    print("Response: ", end="", flush=True)
    try:
        generated_ids = model.generate(
            tokens,
            attention_mask=attention_mask,
            #use_cache=False,
            pad_token_id=tokenizer.pad_token_id,
            streamer=streamer,
            **generate_kwargs
        )
        del generated_ids
    except StopIteration:
        print("\n[Stopped by user]")

    del input_ids, attention_mask
    torch.cuda.empty_cache()
    signal.signal(signal.SIGINT, signal.SIG_DFL)

    return streamer.generated_text, streamer.stop_flag, streamer.get_metrics()

init_seed = set_random_seed()

# List to store activated expert indices
activated_experts = []

# Define hook function to capture gate_probs output
def hook_fn(module, input, output):
    # output is gate_probs, shape: [batch_size, sequence_length, num_experts]
    gate_probs = output
    # Compute top-1 expert indices (since only one expert is activated)
    _, topk_indices = gate_probs.topk(1, dim=-1)  # Take top-1
    # Flatten and store activated expert indices
    activated_experts.extend(topk_indices.squeeze(-1).view(-1).cpu().tolist())

hooks = []
for layer in model.model.layers:
    hooks.append(layer.mlp.gate.register_forward_hook(hook_fn))

while True:
    if same_seed:
        set_random_seed(init_seed)
    else:
        init_seed = set_random_seed()

    print(f"\nnothink: {nothink}")
    print(f"skip_prompt: {skip_prompt}")
    print(f"skip_special_tokens: {skip_special_tokens}")
    print(f"do_sample: {do_sample}")
    print(f"same_seed: {same_seed}, {init_seed}\n")

    user_input = input("User: ").strip()
    if user_input.lower() == "/exit":
        print("Exiting chat.")
        break
    if user_input.lower() == "/clear":
        messages = []
        print("Chat history cleared. Starting a new conversation.")
        continue
    if user_input.lower() == "/nothink":
        nothink = not nothink
        continue
    if user_input.lower() == "/skip_prompt":
        skip_prompt = not skip_prompt
        continue
    if user_input.lower() == "/skip_special_tokens":
        skip_special_tokens = not skip_special_tokens
        continue
    if user_input.lower().startswith("/same_seed"):
        parts = user_input.split()
        if len(parts) == 1:                 # /same_seed (no number)
            same_seed = not same_seed       # Toggle switch
        elif len(parts) == 2:               # /same_seed <number>
            try:
                init_seed = int(parts[1])   # Extract and convert number to int
                same_seed = True
            except ValueError:
                print("Error: Please provide a valid integer after /same_seed")
        continue
    if user_input.lower() == "/do_sample":
        do_sample = not do_sample
        continue
    if not user_input:
        print("Input cannot be empty. Please enter something.")
        continue

    messages.append({"role": "user", "content": user_input})
    activated_experts = []
    response, stop_flag, metrics = generate_stream(model, tokenizer, messages, nothink, skip_prompt, skip_special_tokens, do_sample, 320960)
    print("\n\nMetrics:")
    for key, value in metrics.items():
        print(f"  {key}: {value}")

    # Count the frequency of each activated expert
    expert_counts = Counter(activated_experts)

    # Print activation statistics
    print("\nActivated Expert Statistics:")
    for expert_idx, count in sorted(expert_counts.items()):
        print(f"Expert {expert_idx}: {count} times")
    print("", flush=True)

    if stop_flag:
        continue
    messages.append({"role": "assistant", "content": response})

# Remove all hooks after inference
for h in hooks:
    h.remove()
```

## Applications

- **Text Generation**: Articles, dialogues, and creative writing.
- **Question Answering**: Information retrieval and query resolution.
- **Conversational AI**: Multi-turn dialogues for chatbots.
- **Research**: Exploration of MoE architectures and efficient model scaling.

## Limitations

- **Fine-Tuning Required**: Randomly initialized gating weights may lead to suboptimal expert utilization without fine-tuning.
- **Compatibility**: Developed with transformers 4.52.4; ensure matching versions to avoid loading issues.
- **Inference Speed**: While efficient for an MoE model, performance depends on hardware (GPU recommended).
## Ethical Considerations

- **Bias**: Inherits potential biases from the Qwen3-0.6B base model; users should evaluate outputs for fairness.
- **Usage**: Intended for research and responsible applications; avoid generating harmful or misleading content.

## Contact

- **Developer**: huihui.ai
- **Repository**: huihui-ai/Huihui-MoE-1.2B-A0.6B (available locally or on Hugging Face)
- **Issues**: Report bugs or request features via the repository, or send an email to [email protected]

## Acknowledgments

- Built upon the Qwen3-0.6B model by the Qwen team.
- Built upon the Experts model by the Suayptalha team.
- Powered by the Hugging Face transformers library.
MinaMila/gemma_2b_unlearned_2nd_5e-7_1.0_0.05_0.05_0.25_epoch1
MinaMila
2025-06-16T03:48:39Z
0
0
transformers
[ "transformers", "safetensors", "gemma2", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-06-16T03:46:51Z
---
library_name: transformers
tags: []
---

# Model Card for Model ID

<!-- Provide a quick summary of what the model is/does. -->

## Model Details

### Model Description

<!-- Provide a longer summary of what this model is. -->

This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.

- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]

### Model Sources [optional]

<!-- Provide the basic links for the model. -->

- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]

## Uses

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

### Direct Use

<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->

[More Information Needed]

### Downstream Use [optional]

<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->

[More Information Needed]

### Out-of-Scope Use

<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->

[More Information Needed]

## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

[More Information Needed]

### Recommendations

<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.

## How to Get Started with the Model

Use the code below to get started with the model.

[More Information Needed]

## Training Details

### Training Data

<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->

[More Information Needed]

### Training Procedure

<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->

#### Preprocessing [optional]

[More Information Needed]

#### Training Hyperparameters

- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->

#### Speeds, Sizes, Times [optional]

<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->

[More Information Needed]

## Evaluation

<!-- This section describes the evaluation protocols and provides the results. -->

### Testing Data, Factors & Metrics

#### Testing Data

<!-- This should link to a Dataset Card if possible. -->

[More Information Needed]

#### Factors

<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->

[More Information Needed]

#### Metrics

<!-- These are the evaluation metrics being used, ideally with a description of why.
-->

[More Information Needed]

### Results

[More Information Needed]

#### Summary

## Model Examination [optional]

<!-- Relevant interpretability work for the model goes here -->

[More Information Needed]

## Environmental Impact

<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]

## Technical Specifications [optional]

### Model Architecture and Objective

[More Information Needed]

### Compute Infrastructure

[More Information Needed]

#### Hardware

[More Information Needed]

#### Software

[More Information Needed]

## Citation [optional]

<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->

**BibTeX:**

[More Information Needed]

**APA:**

[More Information Needed]

## Glossary [optional]

<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->

[More Information Needed]

## More Information [optional]

[More Information Needed]

## Model Card Authors [optional]

[More Information Needed]

## Model Card Contact

[More Information Needed]
sunblaze-ucb/OLMo-2-7B-SFT-GRPO-MATH-1EPOCH
sunblaze-ucb
2025-06-16T03:47:48Z
0
0
null
[ "safetensors", "olmo2", "text-generation", "conversational", "en", "dataset:math", "arxiv:2505.19590", "base_model:allenai/OLMo-2-1124-7B-SFT", "base_model:finetune:allenai/OLMo-2-1124-7B-SFT", "license:apache-2.0", "region:us" ]
text-generation
2025-06-16T03:38:17Z
---
base_model:
- allenai/OLMo-2-1124-7B-SFT
license: apache-2.0
datasets:
- math
metrics:
- accuracy
pipeline_tag: text-generation
language:
- en
---

# OLMo-2-7B-SFT-GRPO-MATH-1EPOCH

**Description:** A GRPO-fine-tuned version of allenai/OLMo-2-1124-7B-SFT trained on the MATH dataset.

---

## Citation

```bibtex
@article{zhao2025learning,
  title={Learning to Reason without External Rewards},
  author={Zhao, Xuandong and Kang, Zhewei and Feng, Aosong and Levine, Sergey and Song, Dawn},
  journal={arXiv preprint arXiv:2505.19590},
  year={2025}
}
```
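## Usage

The card ships no inference snippet; below is a minimal sketch (not part of the original card) assuming the checkpoint includes the standard OLMo-2 chat template and that a recent `transformers` release with OLMo-2 support is installed:

```python
# Hedged sketch: load the GRPO-tuned checkpoint and ask a MATH-style question.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "sunblaze-ucb/OLMo-2-7B-SFT-GRPO-MATH-1EPOCH"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [{"role": "user", "content": "Compute the derivative of x^3 + 2x."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=512)
# Decode only the newly generated tokens, skipping the prompt
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```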
Songyao86/blackjack-qlearning-agent
Songyao86
2025-06-16T03:46:03Z
0
0
reinforcement-learning
[ "reinforcement-learning", "Blackjack-v1", "q-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2025-06-16T03:46:02Z
---
tags:
- Blackjack-v1
- q-learning
- reinforcement-learning
- custom-implementation
library_name: reinforcement-learning
model-index:
- name: blackjack-qlearning-agent
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: Blackjack-v1
      type: Blackjack-v1
    metrics:
    - type: mean_reward
      value: -0.19 +/- 0.95
      name: mean_reward
      verified: false
---

# **Q-Learning** Agent playing **Blackjack-v1**

## Training Parameters

- **Environment ID**: `Blackjack-v1`
- **Training Episodes**: 10000
- **Max Steps per Episode**: 99
- **Learning Rate**: 0.7
- **Gamma (Discount Factor)**: 0.95

## Evaluation Results

- **Mean Reward**: -0.19 ± 0.95
- **Evaluation Episodes**: 100

## Usage

```python
from huggingface_hub import hf_hub_download
import pickle
import gymnasium as gym
import numpy as np

# Replace the placeholders below with your actual repository information
repo_id = "YOUR_USERNAME/YOUR_REPO_NAME"  # replace with your repository
filename = "q-learning.pkl"

# Load the model
model_path = hf_hub_download(repo_id=repo_id, filename=filename)
with open(model_path, "rb") as f:
    model = pickle.load(f)

# Rebuild the environment
env = gym.make(
    model["env_id"],
    render_mode="rgb_array",
    **model.get("env_config", {})
)

# Use the Q-table for inference
qtable = model["qtable"]

# Simple inference example (gymnasium's reset returns (observation, info))
state, info = env.reset()
terminated, truncated = False, False
while not (terminated or truncated):
    # Convert the state to an index
    if isinstance(state, tuple):
        state_idx = model.get("state_to_index", lambda s: s)(state)
    else:
        state_idx = state
    action = np.argmax(qtable[state_idx])
    state, reward, terminated, truncated, _ = env.step(action)
```
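The reported mean reward can be reproduced with a greedy rollout. The sketch below is not part of the original card; it continues from the usage snippet above (reusing its `env`, `model`, and `qtable`) and mirrors the 100-episode evaluation setting:

```python
# Hedged sketch: greedy evaluation of the Q-table, mirroring the
# mean_reward metric above. Assumes env, model, and qtable from the
# usage snippet are already defined.
n_eval_episodes = 100
episode_rewards = []
for _ in range(n_eval_episodes):
    state, info = env.reset()
    terminated, truncated = False, False
    total_reward = 0.0
    while not (terminated or truncated):
        if isinstance(state, tuple):
            state_idx = model.get("state_to_index", lambda s: s)(state)
        else:
            state_idx = state
        action = np.argmax(qtable[state_idx])  # greedy action
        state, reward, terminated, truncated, _ = env.step(action)
        total_reward += reward
    episode_rewards.append(total_reward)

print(f"Mean reward: {np.mean(episode_rewards):.2f} +/- {np.std(episode_rewards):.2f}")
```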
dimasandhk/multilabel-emotion-text-classification
dimasandhk
2025-06-16T03:43:43Z
0
0
transformers
[ "transformers", "safetensors", "deberta-v2", "text-classification", "en", "base_model:microsoft/deberta-v3-large", "base_model:finetune:microsoft/deberta-v3-large", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2025-06-15T21:14:38Z
---
license: mit
language:
- en
base_model:
- microsoft/deberta-v3-large
tags:
- text-classification
- transformers
pipeline_tag: text-classification
library_name: transformers
---
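The card ships no body or usage snippet; below is a minimal sketch with the `transformers` pipeline. The sigmoid multilabel scoring and the example text are assumptions; check the repo's `config.json` for the actual `problem_type` and emotion labels.

```python
# Hedged sketch: score a sentence against every emotion label.
# function_to_apply="sigmoid" assumes multilabel training; verify in config.json.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="dimasandhk/multilabel-emotion-text-classification",
    top_k=None,                   # return scores for all labels
    function_to_apply="sigmoid",  # independent per-label probabilities
)

print(classifier("I can't believe we won, this is amazing!"))
```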