Dataset schema (column: type, observed range):
- modelId: string, length 5–139
- author: string, length 2–42
- last_modified: timestamp[us, tz=UTC], 2020-02-15 11:33:14 – 2025-07-15 00:43:56
- downloads: int64, 0 – 223M
- likes: int64, 0 – 11.7k
- library_name: string, 521 classes
- tags: list, length 1 – 4.05k
- pipeline_tag: string, 55 classes
- createdAt: timestamp[us, tz=UTC], 2022-03-02 23:29:04 – 2025-07-15 00:40:56
- card: string, length 11 – 1.01M

modelId | author | last_modified | downloads | likes | library_name | tags | pipeline_tag | createdAt | card
---|---|---|---|---|---|---|---|---|---|
Devd2101/Llama-3.2-1B-Q4_K_M-GGUF | Devd2101 | 2025-04-25T04:31:40Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"facebook",
"meta",
"pytorch",
"llama",
"llama-3",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"en",
"de",
"fr",
"it",
"pt",
"hi",
"es",
"th",
"base_model:meta-llama/Llama-3.2-1B",
"base_model:quantized:meta-llama/Llama-3.2-1B",
"license:llama3.2",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-04-25T04:31:33Z | ---
base_model: meta-llama/Llama-3.2-1B
language:
- en
- de
- fr
- it
- pt
- hi
- es
- th
library_name: transformers
license: llama3.2
pipeline_tag: text-generation
tags:
- facebook
- meta
- pytorch
- llama
- llama-3
- llama-cpp
- gguf-my-repo
extra_gated_prompt: "### LLAMA 3.2 COMMUNITY LICENSE AGREEMENT\n\nLlama 3.2 Version\
\ Release Date: September 25, 2024\n\n“Agreement” means the terms and conditions\
\ for use, reproduction, distribution and modification of the Llama Materials set\
\ forth herein.\n\n“Documentation” means the specifications, manuals and documentation\
\ accompanying Llama 3.2 distributed by Meta at https://llama.meta.com/doc/overview.\n\
\n“Licensee” or “you” means you, or your employer or any other person or entity\
\ (if you are entering into this Agreement on such person or entity’s behalf),\
\ of the age required under applicable laws, rules or regulations to provide legal\
\ consent and that has legal authority to bind your employer or such other person\
\ or entity if you are entering in this Agreement on their behalf.\n\n“Llama 3.2”\
\ means the foundational large language models and software and algorithms, including\
\ machine-learning model code, trained model weights, inference-enabling code, training-enabling\
\ code, fine-tuning enabling code and other elements of the foregoing distributed\
\ by Meta at https://www.llama.com/llama-downloads.\n\n“Llama Materials” means,\
\ collectively, Meta’s proprietary Llama 3.2 and Documentation (and any portion\
\ thereof) made available under this Agreement.\n\n“Meta” or “we” means Meta Platforms\
\ Ireland Limited (if you are located in or, if you are an entity, your principal\
\ place of business is in the EEA or Switzerland) and Meta Platforms, Inc. (if\
\ you are located outside of the EEA or Switzerland). \n\nBy clicking “I Accept”\
\ below or by using or distributing any portion or element of the Llama Materials,\
\ you agree to be bound by this Agreement.\n\n1. License Rights and Redistribution.\n\
a. Grant of Rights. You are granted a non-exclusive, worldwide, non-transferable\
\ and royalty-free limited license under Meta’s intellectual property or other rights\
\ owned by Meta embodied in the Llama Materials to use, reproduce, distribute,\
\ copy, create derivative works of, and make modifications to the Llama Materials.\
\ \nb. Redistribution and Use. \ni. If you distribute or make available the Llama\
\ Materials (or any derivative works thereof), or a product or service (including\
\ another AI model) that contains any of them, you shall (A) provide a copy of this\
\ Agreement with any such Llama Materials; and (B) prominently display “Built with\
\ Llama” on a related website, user interface, blogpost, about page, or product\
\ documentation. If you use the Llama Materials or any outputs or results of the\
\ Llama Materials to create, train, fine tune, or otherwise improve an AI model,\
\ which is distributed or made available, you shall also include “Llama” at the\
\ beginning of any such AI model name.\nii. If you receive Llama Materials, or any\
\ derivative works thereof, from a Licensee as part of an integrated end user product,\
\ then Section 2 of this Agreement will not apply to you. \niii. You must retain\
\ in all copies of the Llama Materials that you distribute the following attribution\
\ notice within a “Notice” text file distributed as a part of such copies: “Llama\
\ 3.2 is licensed under the Llama 3.2 Community License, Copyright © Meta Platforms,\
\ Inc. All Rights Reserved.”\niv. Your use of the Llama Materials must comply with\
\ applicable laws and regulations (including trade compliance laws and regulations)\
\ and adhere to the Acceptable Use Policy for the Llama Materials (available at\
\ https://www.llama.com/llama3_2/use-policy), which is hereby incorporated by reference\
\ into this Agreement.\n \n2. Additional Commercial Terms. If, on the Llama 3.2\
\ version release date, the monthly active users of the products or services made\
\ available by or for Licensee, or Licensee’s affiliates, is greater than 700 million\
\ monthly active users in the preceding calendar month, you must request a license\
\ from Meta, which Meta may grant to you in its sole discretion, and you are not\
\ authorized to exercise any of the rights under this Agreement unless or until\
\ Meta otherwise expressly grants you such rights.\n3. Disclaimer of Warranty. UNLESS\
\ REQUIRED BY APPLICABLE LAW, THE LLAMA MATERIALS AND ANY OUTPUT AND RESULTS THEREFROM\
\ ARE PROVIDED ON AN “AS IS” BASIS, WITHOUT WARRANTIES OF ANY KIND, AND META DISCLAIMS\
\ ALL WARRANTIES OF ANY KIND, BOTH EXPRESS AND IMPLIED, INCLUDING, WITHOUT LIMITATION,\
\ ANY WARRANTIES OF TITLE, NON-INFRINGEMENT, MERCHANTABILITY, OR FITNESS FOR A PARTICULAR\
\ PURPOSE. YOU ARE SOLELY RESPONSIBLE FOR DETERMINING THE APPROPRIATENESS OF USING\
\ OR REDISTRIBUTING THE LLAMA MATERIALS AND ASSUME ANY RISKS ASSOCIATED WITH YOUR\
\ USE OF THE LLAMA MATERIALS AND ANY OUTPUT AND RESULTS.\n4. Limitation of Liability.\
\ IN NO EVENT WILL META OR ITS AFFILIATES BE LIABLE UNDER ANY THEORY OF LIABILITY,\
\ WHETHER IN CONTRACT, TORT, NEGLIGENCE, PRODUCTS LIABILITY, OR OTHERWISE, ARISING\
\ OUT OF THIS AGREEMENT, FOR ANY LOST PROFITS OR ANY INDIRECT, SPECIAL, CONSEQUENTIAL,\
\ INCIDENTAL, EXEMPLARY OR PUNITIVE DAMAGES, EVEN IF META OR ITS AFFILIATES HAVE\
\ BEEN ADVISED OF THE POSSIBILITY OF ANY OF THE FOREGOING.\n5. Intellectual Property.\n\
a. No trademark licenses are granted under this Agreement, and in connection with\
\ the Llama Materials, neither Meta nor Licensee may use any name or mark owned\
\ by or associated with the other or any of its affiliates, except as required\
\ for reasonable and customary use in describing and redistributing the Llama Materials\
\ or as set forth in this Section 5(a). Meta hereby grants you a license to use\
\ “Llama” (the “Mark”) solely as required to comply with the last sentence of Section\
\ 1.b.i. You will comply with Meta’s brand guidelines (currently accessible at\
\ https://about.meta.com/brand/resources/meta/company-brand/). All goodwill arising\
\ out of your use of the Mark will inure to the benefit of Meta.\nb. Subject to\
\ Meta’s ownership of Llama Materials and derivatives made by or for Meta, with\
\ respect to any derivative works and modifications of the Llama Materials that\
\ are made by you, as between you and Meta, you are and will be the owner of such\
\ derivative works and modifications.\nc. If you institute litigation or other proceedings\
\ against Meta or any entity (including a cross-claim or counterclaim in a lawsuit)\
\ alleging that the Llama Materials or Llama 3.2 outputs or results, or any portion\
\ of any of the foregoing, constitutes infringement of intellectual property or\
\ other rights owned or licensable by you, then any licenses granted to you under\
\ this Agreement shall terminate as of the date such litigation or claim is filed\
\ or instituted. You will indemnify and hold harmless Meta from and against any\
\ claim by any third party arising out of or related to your use or distribution\
\ of the Llama Materials.\n6. Term and Termination. The term of this Agreement will\
\ commence upon your acceptance of this Agreement or access to the Llama Materials\
\ and will continue in full force and effect until terminated in accordance with\
\ the terms and conditions herein. Meta may terminate this Agreement if you are\
\ in breach of any term or condition of this Agreement. Upon termination of this\
\ Agreement, you shall delete and cease use of the Llama Materials. Sections 3,\
\ 4 and 7 shall survive the termination of this Agreement. \n7. Governing Law and\
\ Jurisdiction. This Agreement will be governed and construed under the laws of\
\ the State of California without regard to choice of law principles, and the UN\
\ Convention on Contracts for the International Sale of Goods does not apply to\
\ this Agreement. The courts of California shall have exclusive jurisdiction of\
\ any dispute arising out of this Agreement. \n### Llama 3.2 Acceptable Use Policy\n\
Meta is committed to promoting safe and fair use of its tools and features, including\
\ Llama 3.2. If you access or use Llama 3.2, you agree to this Acceptable Use Policy\
\ (“**Policy**”). The most recent copy of this policy can be found at [https://www.llama.com/llama3_2/use-policy](https://www.llama.com/llama3_2/use-policy).\n\
#### Prohibited Uses\nWe want everyone to use Llama 3.2 safely and responsibly.\
\ You agree you will not use, or allow others to use, Llama 3.2 to:\n1. Violate\
\ the law or others’ rights, including to:\n 1. Engage in, promote, generate,\
\ contribute to, encourage, plan, incite, or further illegal or unlawful activity\
\ or content, such as:\n 1. Violence or terrorism\n 2. Exploitation\
\ or harm to children, including the solicitation, creation, acquisition, or dissemination\
\ of child exploitative content or failure to report Child Sexual Abuse Material\n\
\ 3. Human trafficking, exploitation, and sexual violence\n 4. The\
\ illegal distribution of information or materials to minors, including obscene\
\ materials, or failure to employ legally required age-gating in connection with\
\ such information or materials.\n 5. Sexual solicitation\n 6. Any\
\ other criminal activity\n 1. Engage in, promote, incite, or facilitate the\
\ harassment, abuse, threatening, or bullying of individuals or groups of individuals\n\
\ 2. Engage in, promote, incite, or facilitate discrimination or other unlawful\
\ or harmful conduct in the provision of employment, employment benefits, credit,\
\ housing, other economic benefits, or other essential goods and services\n 3.\
\ Engage in the unauthorized or unlicensed practice of any profession including,\
\ but not limited to, financial, legal, medical/health, or related professional\
\ practices\n 4. Collect, process, disclose, generate, or infer private or sensitive\
\ information about individuals, including information about individuals’ identity,\
\ health, or demographic information, unless you have obtained the right to do so\
\ in accordance with applicable law\n 5. Engage in or facilitate any action or\
\ generate any content that infringes, misappropriates, or otherwise violates any\
\ third-party rights, including the outputs or results of any products or services\
\ using the Llama Materials\n 6. Create, generate, or facilitate the creation\
\ of malicious code, malware, computer viruses or do anything else that could disable,\
\ overburden, interfere with or impair the proper working, integrity, operation\
\ or appearance of a website or computer system\n 7. Engage in any action, or\
\ facilitate any action, to intentionally circumvent or remove usage restrictions\
\ or other safety measures, or to enable functionality disabled by Meta \n2. Engage\
\ in, promote, incite, facilitate, or assist in the planning or development of activities\
\ that present a risk of death or bodily harm to individuals, including use of Llama\
\ 3.2 related to the following:\n 8. Military, warfare, nuclear industries or\
\ applications, espionage, use for materials or activities that are subject to the\
\ International Traffic Arms Regulations (ITAR) maintained by the United States\
\ Department of State or to the U.S. Biological Weapons Anti-Terrorism Act of 1989\
\ or the Chemical Weapons Convention Implementation Act of 1997\n 9. Guns and\
\ illegal weapons (including weapon development)\n 10. Illegal drugs and regulated/controlled\
\ substances\n 11. Operation of critical infrastructure, transportation technologies,\
\ or heavy machinery\n 12. Self-harm or harm to others, including suicide, cutting,\
\ and eating disorders\n 13. Any content intended to incite or promote violence,\
\ abuse, or any infliction of bodily harm to an individual\n3. Intentionally deceive\
\ or mislead others, including use of Llama 3.2 related to the following:\n 14.\
\ Generating, promoting, or furthering fraud or the creation or promotion of disinformation\n\
\ 15. Generating, promoting, or furthering defamatory content, including the\
\ creation of defamatory statements, images, or other content\n 16. Generating,\
\ promoting, or further distributing spam\n 17. Impersonating another individual\
\ without consent, authorization, or legal right\n 18. Representing that the\
\ use of Llama 3.2 or outputs are human-generated\n 19. Generating or facilitating\
\ false online engagement, including fake reviews and other means of fake online\
\ engagement \n4. Fail to appropriately disclose to end users any known dangers\
\ of your AI system 5. Interact with third party tools, models, or software designed\
\ to generate unlawful content or engage in unlawful or harmful conduct and/or represent\
\ that the outputs of such tools, models, or software are associated with Meta or\
\ Llama 3.2\n\nWith respect to any multimodal models included in Llama 3.2, the\
\ rights granted under Section 1(a) of the Llama 3.2 Community License Agreement\
\ are not being granted to you if you are an individual domiciled in, or a company\
\ with a principal place of business in, the European Union. This restriction does\
\ not apply to end users of a product or service that incorporates any such multimodal\
\ models.\n\nPlease report any violation of this Policy, software “bug,” or other\
\ problems that could lead to a violation of this Policy through one of the following\
\ means:\n\n* Reporting issues with the model: [https://github.com/meta-llama/llama-models/issues](https://l.workplace.com/l.php?u=https%3A%2F%2Fgithub.com%2Fmeta-llama%2Fllama-models%2Fissues&h=AT0qV8W9BFT6NwihiOHRuKYQM_UnkzN_NmHMy91OT55gkLpgi4kQupHUl0ssR4dQsIQ8n3tfd0vtkobvsEvt1l4Ic6GXI2EeuHV8N08OG2WnbAmm0FL4ObkazC6G_256vN0lN9DsykCvCqGZ)\n\
* Reporting risky content generated by the model: [developers.facebook.com/llama_output_feedback](http://developers.facebook.com/llama_output_feedback)\n\
* Reporting bugs and security concerns: [facebook.com/whitehat/info](http://facebook.com/whitehat/info)\n\
* Reporting violations of the Acceptable Use Policy or unlicensed uses of Llama\
\ 3.2: [email protected]"
extra_gated_fields:
First Name: text
Last Name: text
Date of birth: date_picker
Country: country
Affiliation: text
Job title:
type: select
options:
- Student
- Research Graduate
- AI researcher
- AI developer/engineer
- Reporter
- Other
geo: ip_location
? By clicking Submit below I accept the terms of the license and acknowledge that
the information I provide will be collected stored processed and shared in accordance
with the Meta Privacy Policy
: checkbox
extra_gated_description: The information you provide will be collected, stored, processed
and shared in accordance with the [Meta Privacy Policy](https://www.facebook.com/privacy/policy/).
extra_gated_button_content: Submit
---
# Devd2101/Llama-3.2-1B-Q4_K_M-GGUF
This model was converted to GGUF format from [`meta-llama/Llama-3.2-1B`](https://huggingface.co/meta-llama/Llama-3.2-1B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/meta-llama/Llama-3.2-1B) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Devd2101/Llama-3.2-1B-Q4_K_M-GGUF --hf-file llama-3.2-1b-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Devd2101/Llama-3.2-1B-Q4_K_M-GGUF --hf-file llama-3.2-1b-q4_k_m.gguf -c 2048
```
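Once the server is running, it can be queried over HTTP. A minimal sketch (not part of the original card), assuming the server above is listening on the default `127.0.0.1:8080` and that your llama.cpp build exposes the OpenAI-compatible chat endpoint:
```python
# Hedged example: send one chat request to a local llama-server instance.
import requests

resp = requests.post(
    "http://127.0.0.1:8080/v1/chat/completions",  # default llama-server address (assumed)
    json={
        "messages": [{"role": "user", "content": "The meaning to life and the universe is"}],
        "max_tokens": 64,
    },
    timeout=120,
)
print(resp.json()["choices"][0]["message"]["content"])
```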
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```bash
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (for example, `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```bash
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```bash
./llama-cli --hf-repo Devd2101/Llama-3.2-1B-Q4_K_M-GGUF --hf-file llama-3.2-1b-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```bash
./llama-server --hf-repo Devd2101/Llama-3.2-1B-Q4_K_M-GGUF --hf-file llama-3.2-1b-q4_k_m.gguf -c 2048
```
|
YG7777/law_pii_masking | YG7777 | 2025-04-25T04:29:16Z | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:MLP-KTLim/llama-3-Korean-Bllossom-8B",
"base_model:adapter:MLP-KTLim/llama-3-Korean-Bllossom-8B",
"region:us"
]
| null | 2025-04-25T04:29:11Z | ---
base_model: MLP-KTLim/llama-3-Korean-Bllossom-8B
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
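The original card leaves this section as a placeholder. As a hedged sketch (not from the model authors), an adapter with this card's metadata can typically be attached to its base model via PEFT:
```python
# Hypothetical usage sketch: load the base model named in the card metadata and attach this adapter.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "MLP-KTLim/llama-3-Korean-Bllossom-8B"  # base_model from the card metadata
adapter_id = "YG7777/law_pii_masking"

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id)
model = PeftModel.from_pretrained(model, adapter_id)  # attach the PEFT (e.g. LoRA) weights
```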
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.13.0 |
snsslss/Deepseek-R1-Statistician | snsslss | 2025-04-25T04:28:47Z | 0 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-04-25T03:39:53Z | ---
base_model: unsloth/deepseek-r1-distill-llama-8b-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** snsslss
- **License:** apache-2.0
- **Finetuned from model:** unsloth/deepseek-r1-distill-llama-8b-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
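A minimal, hedged usage sketch (not part of the original card), assuming the repository holds standard `transformers`-compatible weights as the `pytorch`/`safetensors` tags suggest:
```python
# Hypothetical example: run text generation with this checkpoint.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "snsslss/Deepseek-R1-Statistician"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id)

inputs = tokenizer("Explain the central limit theorem in one paragraph.", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```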
|
nathanialhunt2000/38dd70cf-9c8d-49f2-a282-31077f413b03 | nathanialhunt2000 | 2025-04-25T04:25:48Z | 0 | 0 | peft | [
"peft",
"generated_from_trainer",
"base_model:facebook/opt-350m",
"base_model:adapter:facebook/opt-350m",
"region:us"
]
| null | 2025-04-25T04:25:33Z | ---
library_name: peft
tags:
- generated_from_trainer
base_model: facebook/opt-350m
model-index:
- name: nathanialhunt2000/38dd70cf-9c8d-49f2-a282-31077f413b03
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# nathanialhunt2000/38dd70cf-9c8d-49f2-a282-31077f413b03
This model was trained from scratch on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.0880
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.3
- Pytorch 2.5.1+cu124
- Datasets 3.1.0
- Tokenizers 0.20.3 |
Anish13/Reinforce-cartpole_policy | Anish13 | 2025-04-25T04:25:23Z | 0 | 0 | null | [
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
]
| reinforcement-learning | 2025-04-24T06:37:24Z | ---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-cartpole_policy
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1**.
To learn to use this model and train your own, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
mlfoundations-dev/b2_science_fasttext_neg_wikipedia_3k | mlfoundations-dev | 2025-04-25T04:25:06Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"llama-factory",
"full",
"generated_from_trainer",
"conversational",
"base_model:Qwen/Qwen2.5-7B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-7B-Instruct",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-04-25T02:08:50Z | ---
library_name: transformers
license: apache-2.0
base_model: Qwen/Qwen2.5-7B-Instruct
tags:
- llama-factory
- full
- generated_from_trainer
model-index:
- name: b2_science_fasttext_neg_wikipedia_3k
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# b2_science_fasttext_neg_wikipedia_3k
This model is a fine-tuned version of [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct) on the mlfoundations-dev/b2_science_fasttext_neg_wikipedia_3k dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 24
- total_train_batch_size: 96
- total_eval_batch_size: 32
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 7.0
### Training results
### Framework versions
- Transformers 4.46.1
- Pytorch 2.6.0+cu124
- Datasets 3.1.0
- Tokenizers 0.20.3
|
Anish13/Reinforce-Pixelcopter-PLE-v0_1 | Anish13 | 2025-04-25T04:25:03Z | 0 | 0 | null | [
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
]
| reinforcement-learning | 2025-04-25T00:37:44Z | ---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-Pixelcopter-PLE-v0_1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 50.10 +/- 34.88
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0**.
To learn to use this model and train your own, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
striverweb/ppo-LunarLander-v2 | striverweb | 2025-04-25T04:21:52Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2025-04-25T03:45:06Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 270.79 +/- 16.54
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
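The snippet above is left as a TODO in the original card. A hedged sketch of the usual `huggingface_sb3` loading pattern (the checkpoint filename is an assumption, not confirmed by the card):
```python
# Hypothetical usage sketch: download the PPO checkpoint from the Hub and roll it out locally.
import gymnasium as gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

checkpoint = load_from_hub(
    repo_id="striverweb/ppo-LunarLander-v2",
    filename="ppo-LunarLander-v2.zip",  # assumed filename; check the repo's file listing
)
model = PPO.load(checkpoint)

env = gym.make("LunarLander-v2")
obs, _ = env.reset()
for _ in range(1000):
    action, _ = model.predict(obs, deterministic=True)
    obs, reward, terminated, truncated, _ = env.step(action)
    if terminated or truncated:
        obs, _ = env.reset()
```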
|
5525FP/Llama-3.2-1B-Lora-spigot-10K-0-1745554865.2001896 | 5525FP | 2025-04-25T04:21:06Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
]
| null | 2025-04-25T04:21:05Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Grogros/Llama-3.2-1B-Instruct-distillation-AlpacaGPT4-AlpacaRefuse-step1-SWISS | Grogros | 2025-04-25T04:17:05Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"conversational",
"base_model:meta-llama/Llama-3.2-1B-Instruct",
"base_model:finetune:meta-llama/Llama-3.2-1B-Instruct",
"license:llama3.2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-04-25T04:15:01Z | ---
library_name: transformers
license: llama3.2
base_model: meta-llama/Llama-3.2-1B-Instruct
tags:
- generated_from_trainer
model-index:
- name: Llama-3.2-1B-Instruct-distillation-AlpacaGPT4-AlpacaRefuse-step1-SWISS
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Llama-3.2-1B-Instruct-distillation-AlpacaGPT4-AlpacaRefuse-step1-SWISS
This model is a fine-tuned version of [meta-llama/Llama-3.2-1B-Instruct](https://huggingface.co/meta-llama/Llama-3.2-1B-Instruct) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- optimizer: Use adafactor with no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 2000
### Training results
### Framework versions
- Transformers 4.51.3
- Pytorch 2.2.0a0+81ea7a4
- Datasets 3.5.0
- Tokenizers 0.21.1
|
genki10/BERT_V8_sp10_lw40_ex100_lo00_k7_k7_fold2 | genki10 | 2025-04-25T04:16:11Z | 0 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2025-04-25T03:58:13Z | ---
library_name: transformers
license: apache-2.0
base_model: bert-base-uncased
tags:
- generated_from_trainer
model-index:
- name: BERT_V8_sp10_lw40_ex100_lo00_k7_k7_fold2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# BERT_V8_sp10_lw40_ex100_lo00_k7_k7_fold2
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0755
- Qwk: 0.2785
- Mse: 1.0754
- Rmse: 1.0370
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss | Qwk | Mse | Rmse |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|
| No log | 1.0 | 5 | 9.0995 | 0.0 | 9.0996 | 3.0166 |
| No log | 2.0 | 10 | 4.3939 | 0.0113 | 4.3944 | 2.0963 |
| No log | 3.0 | 15 | 1.7899 | 0.0837 | 1.7904 | 1.3381 |
| No log | 4.0 | 20 | 0.9306 | 0.0216 | 0.9311 | 0.9649 |
| No log | 5.0 | 25 | 1.7355 | 0.1621 | 1.7360 | 1.3176 |
| No log | 6.0 | 30 | 0.7245 | 0.3819 | 0.7248 | 0.8513 |
| No log | 7.0 | 35 | 1.4390 | 0.1623 | 1.4394 | 1.1998 |
| No log | 8.0 | 40 | 0.6972 | 0.3946 | 0.6974 | 0.8351 |
| No log | 9.0 | 45 | 0.5946 | 0.4454 | 0.5946 | 0.7711 |
| No log | 10.0 | 50 | 0.7240 | 0.4137 | 0.7241 | 0.8510 |
| No log | 11.0 | 55 | 0.6303 | 0.4750 | 0.6303 | 0.7939 |
| No log | 12.0 | 60 | 0.5752 | 0.5318 | 0.5751 | 0.7583 |
| No log | 13.0 | 65 | 1.1129 | 0.3678 | 1.1128 | 1.0549 |
| No log | 14.0 | 70 | 0.7007 | 0.4958 | 0.7005 | 0.8369 |
| No log | 15.0 | 75 | 0.6710 | 0.5088 | 0.6707 | 0.8190 |
| No log | 16.0 | 80 | 1.0031 | 0.3436 | 1.0030 | 1.0015 |
| No log | 17.0 | 85 | 0.8759 | 0.3582 | 0.8757 | 0.9358 |
| No log | 18.0 | 90 | 0.9769 | 0.3848 | 0.9769 | 0.9884 |
| No log | 19.0 | 95 | 0.7490 | 0.3664 | 0.7489 | 0.8654 |
| No log | 20.0 | 100 | 1.2251 | 0.2987 | 1.2250 | 1.1068 |
| No log | 21.0 | 105 | 0.7532 | 0.4234 | 0.7532 | 0.8679 |
| No log | 22.0 | 110 | 0.7112 | 0.3987 | 0.7111 | 0.8433 |
| No log | 23.0 | 115 | 0.8188 | 0.3887 | 0.8186 | 0.9048 |
| No log | 24.0 | 120 | 1.3646 | 0.2796 | 1.3646 | 1.1682 |
| No log | 25.0 | 125 | 1.2261 | 0.2912 | 1.2260 | 1.1072 |
| No log | 26.0 | 130 | 0.6700 | 0.4472 | 0.6699 | 0.8184 |
| No log | 27.0 | 135 | 0.9383 | 0.3559 | 0.9382 | 0.9686 |
| No log | 28.0 | 140 | 1.1699 | 0.3221 | 1.1698 | 1.0816 |
| No log | 29.0 | 145 | 0.7976 | 0.3927 | 0.7974 | 0.8930 |
| No log | 30.0 | 150 | 0.9647 | 0.3398 | 0.9646 | 0.9821 |
| No log | 31.0 | 155 | 0.9009 | 0.3584 | 0.9007 | 0.9491 |
| No log | 32.0 | 160 | 0.8061 | 0.3823 | 0.8060 | 0.8978 |
| No log | 33.0 | 165 | 1.0697 | 0.2887 | 1.0697 | 1.0343 |
| No log | 34.0 | 170 | 0.9338 | 0.3981 | 0.9337 | 0.9663 |
| No log | 35.0 | 175 | 0.9500 | 0.3800 | 0.9499 | 0.9746 |
| No log | 36.0 | 180 | 1.0301 | 0.3116 | 1.0301 | 1.0149 |
| No log | 37.0 | 185 | 0.8711 | 0.3178 | 0.8711 | 0.9333 |
| No log | 38.0 | 190 | 0.7334 | 0.4025 | 0.7333 | 0.8563 |
| No log | 39.0 | 195 | 1.3412 | 0.2385 | 1.3412 | 1.1581 |
| No log | 40.0 | 200 | 1.0301 | 0.3011 | 1.0300 | 1.0149 |
| No log | 41.0 | 205 | 0.8433 | 0.3538 | 0.8432 | 0.9183 |
| No log | 42.0 | 210 | 1.0755 | 0.2785 | 1.0754 | 1.0370 |
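For context (not stated in the original card): the Qwk column above is quadratic weighted kappa. A minimal sketch of computing Qwk, MSE, and RMSE with scikit-learn, assuming integer essay scores as labels and predictions:
```python
# Hypothetical illustration of the metrics reported in the table above.
from sklearn.metrics import cohen_kappa_score, mean_squared_error

y_true = [2, 3, 4, 3, 1]  # made-up gold scores
y_pred = [2, 3, 3, 4, 1]  # made-up model scores

qwk = cohen_kappa_score(y_true, y_pred, weights="quadratic")
mse = mean_squared_error(y_true, y_pred)
rmse = mse ** 0.5
print(qwk, mse, rmse)
```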
### Framework versions
- Transformers 4.51.1
- Pytorch 2.5.1+cu124
- Datasets 3.5.0
- Tokenizers 0.21.0
|
RichardErkhov/minhhien0811_-_deita-arena-nvidia-sft-rag-17500-gguf | RichardErkhov | 2025-04-25T04:14:18Z | 0 | 0 | null | [
"gguf",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us",
"conversational"
]
| null | 2025-04-25T02:38:29Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
deita-arena-nvidia-sft-rag-17500 - GGUF
- Model creator: https://huggingface.co/minhhien0811/
- Original model: https://huggingface.co/minhhien0811/deita-arena-nvidia-sft-rag-17500/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [deita-arena-nvidia-sft-rag-17500.Q2_K.gguf](https://huggingface.co/RichardErkhov/minhhien0811_-_deita-arena-nvidia-sft-rag-17500-gguf/blob/main/deita-arena-nvidia-sft-rag-17500.Q2_K.gguf) | Q2_K | 2.81GB |
| [deita-arena-nvidia-sft-rag-17500.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/minhhien0811_-_deita-arena-nvidia-sft-rag-17500-gguf/blob/main/deita-arena-nvidia-sft-rag-17500.IQ3_XS.gguf) | IQ3_XS | 3.12GB |
| [deita-arena-nvidia-sft-rag-17500.IQ3_S.gguf](https://huggingface.co/RichardErkhov/minhhien0811_-_deita-arena-nvidia-sft-rag-17500-gguf/blob/main/deita-arena-nvidia-sft-rag-17500.IQ3_S.gguf) | IQ3_S | 3.26GB |
| [deita-arena-nvidia-sft-rag-17500.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/minhhien0811_-_deita-arena-nvidia-sft-rag-17500-gguf/blob/main/deita-arena-nvidia-sft-rag-17500.Q3_K_S.gguf) | Q3_K_S | 3.25GB |
| [deita-arena-nvidia-sft-rag-17500.IQ3_M.gguf](https://huggingface.co/RichardErkhov/minhhien0811_-_deita-arena-nvidia-sft-rag-17500-gguf/blob/main/deita-arena-nvidia-sft-rag-17500.IQ3_M.gguf) | IQ3_M | 3.33GB |
| [deita-arena-nvidia-sft-rag-17500.Q3_K.gguf](https://huggingface.co/RichardErkhov/minhhien0811_-_deita-arena-nvidia-sft-rag-17500-gguf/blob/main/deita-arena-nvidia-sft-rag-17500.Q3_K.gguf) | Q3_K | 3.55GB |
| [deita-arena-nvidia-sft-rag-17500.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/minhhien0811_-_deita-arena-nvidia-sft-rag-17500-gguf/blob/main/deita-arena-nvidia-sft-rag-17500.Q3_K_M.gguf) | Q3_K_M | 3.55GB |
| [deita-arena-nvidia-sft-rag-17500.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/minhhien0811_-_deita-arena-nvidia-sft-rag-17500-gguf/blob/main/deita-arena-nvidia-sft-rag-17500.Q3_K_L.gguf) | Q3_K_L | 3.81GB |
| [deita-arena-nvidia-sft-rag-17500.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/minhhien0811_-_deita-arena-nvidia-sft-rag-17500-gguf/blob/main/deita-arena-nvidia-sft-rag-17500.IQ4_XS.gguf) | IQ4_XS | 3.96GB |
| [deita-arena-nvidia-sft-rag-17500.Q4_0.gguf](https://huggingface.co/RichardErkhov/minhhien0811_-_deita-arena-nvidia-sft-rag-17500-gguf/blob/main/deita-arena-nvidia-sft-rag-17500.Q4_0.gguf) | Q4_0 | 4.13GB |
| [deita-arena-nvidia-sft-rag-17500.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/minhhien0811_-_deita-arena-nvidia-sft-rag-17500-gguf/blob/main/deita-arena-nvidia-sft-rag-17500.IQ4_NL.gguf) | IQ4_NL | 4.16GB |
| [deita-arena-nvidia-sft-rag-17500.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/minhhien0811_-_deita-arena-nvidia-sft-rag-17500-gguf/blob/main/deita-arena-nvidia-sft-rag-17500.Q4_K_S.gguf) | Q4_K_S | 4.15GB |
| [deita-arena-nvidia-sft-rag-17500.Q4_K.gguf](https://huggingface.co/RichardErkhov/minhhien0811_-_deita-arena-nvidia-sft-rag-17500-gguf/blob/main/deita-arena-nvidia-sft-rag-17500.Q4_K.gguf) | Q4_K | 4.36GB |
| [deita-arena-nvidia-sft-rag-17500.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/minhhien0811_-_deita-arena-nvidia-sft-rag-17500-gguf/blob/main/deita-arena-nvidia-sft-rag-17500.Q4_K_M.gguf) | Q4_K_M | 4.36GB |
| [deita-arena-nvidia-sft-rag-17500.Q4_1.gguf](https://huggingface.co/RichardErkhov/minhhien0811_-_deita-arena-nvidia-sft-rag-17500-gguf/blob/main/deita-arena-nvidia-sft-rag-17500.Q4_1.gguf) | Q4_1 | 4.54GB |
| [deita-arena-nvidia-sft-rag-17500.Q5_0.gguf](https://huggingface.co/RichardErkhov/minhhien0811_-_deita-arena-nvidia-sft-rag-17500-gguf/blob/main/deita-arena-nvidia-sft-rag-17500.Q5_0.gguf) | Q5_0 | 4.95GB |
| [deita-arena-nvidia-sft-rag-17500.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/minhhien0811_-_deita-arena-nvidia-sft-rag-17500-gguf/blob/main/deita-arena-nvidia-sft-rag-17500.Q5_K_S.gguf) | Q5_K_S | 4.95GB |
| [deita-arena-nvidia-sft-rag-17500.Q5_K.gguf](https://huggingface.co/RichardErkhov/minhhien0811_-_deita-arena-nvidia-sft-rag-17500-gguf/blob/main/deita-arena-nvidia-sft-rag-17500.Q5_K.gguf) | Q5_K | 5.07GB |
| [deita-arena-nvidia-sft-rag-17500.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/minhhien0811_-_deita-arena-nvidia-sft-rag-17500-gguf/blob/main/deita-arena-nvidia-sft-rag-17500.Q5_K_M.gguf) | Q5_K_M | 5.07GB |
| [deita-arena-nvidia-sft-rag-17500.Q5_1.gguf](https://huggingface.co/RichardErkhov/minhhien0811_-_deita-arena-nvidia-sft-rag-17500-gguf/blob/main/deita-arena-nvidia-sft-rag-17500.Q5_1.gguf) | Q5_1 | 5.36GB |
| [deita-arena-nvidia-sft-rag-17500.Q6_K.gguf](https://huggingface.co/RichardErkhov/minhhien0811_-_deita-arena-nvidia-sft-rag-17500-gguf/blob/main/deita-arena-nvidia-sft-rag-17500.Q6_K.gguf) | Q6_K | 5.82GB |
| [deita-arena-nvidia-sft-rag-17500.Q8_0.gguf](https://huggingface.co/RichardErkhov/minhhien0811_-_deita-arena-nvidia-sft-rag-17500-gguf/blob/main/deita-arena-nvidia-sft-rag-17500.Q8_0.gguf) | Q8_0 | 7.54GB |
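One hedged way (not described in this card) to fetch a single quant from the table above is via `huggingface_hub`; the resulting local path can then be passed to a GGUF runtime such as llama.cpp:
```python
# Hypothetical sketch: download one GGUF file from this repository.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="RichardErkhov/minhhien0811_-_deita-arena-nvidia-sft-rag-17500-gguf",
    filename="deita-arena-nvidia-sft-rag-17500.Q4_K_M.gguf",  # any filename from the table above
)
print(path)  # pass this path to llama-cli / llama-server with -m
```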
Original model description:
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
5525FP/Llama-3.2-1B-Lora-spigot-10K-10-1745554397.080861 | 5525FP | 2025-04-25T04:13:22Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
]
| null | 2025-04-25T04:13:17Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
parvk11/intent_classification_model | parvk11 | 2025-04-25T04:12:12Z | 0 | 0 | transformers | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2025-04-25T04:08:27Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
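In the meantime, a minimal hedged sketch, assuming the repository ships standard `transformers` weights and tokenizer files for the DistilBERT classifier (the example sentence is a placeholder; the intent labels come from the model's config and are not documented here):
```python
from transformers import pipeline

# Minimal sketch; label names depend on the checkpoint's config.
classifier = pipeline("text-classification", model="parvk11/intent_classification_model")
print(classifier("I want to cancel my subscription."))
```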
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
shahpratik02/cs7643-llama2 | shahpratik02 | 2025-04-25T04:05:21Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"chat",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
]
| text-generation | 2025-04-25T03:35:19Z | ---
library_name: transformers
task_type: chat
pipeline_tag: conversational
tags:
- chat
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
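In the meantime, a minimal hedged sketch, assuming the tagged 4-bit bitsandbytes weights and a chat template are present in the repo (requires `bitsandbytes` and `accelerate` installed):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Sketch only; quantization settings are read from the repo's config if present.
repo = "shahpratik02/cs7643-llama2"
tok = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, device_map="auto")
inputs = tok.apply_chat_template(
    [{"role": "user", "content": "Hello!"}], add_generation_prompt=True, return_tensors="pt"
).to(model.device)
print(tok.decode(model.generate(inputs, max_new_tokens=64)[0], skip_special_tokens=True))
```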
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
shuimi/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-winged_timid_cat | shuimi | 2025-04-25T04:05:10Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am winged timid cat",
"trl",
"conversational",
"arxiv:2402.03300",
"base_model:Gensyn/Qwen2.5-0.5B-Instruct",
"base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-04-24T17:04:08Z | ---
base_model: Gensyn/Qwen2.5-0.5B-Instruct
library_name: transformers
model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-winged_timid_cat
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am winged timid cat
- trl
licence: license
---
# Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-winged_timid_cat
This model is a fine-tuned version of [Gensyn/Qwen2.5-0.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-0.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="shuimi/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-winged_timid_cat", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.15.2
- Transformers: 4.51.3
- Pytorch: 2.7.0
- Datasets: 3.5.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
Leokvng/Natasha | Leokvng | 2025-04-25T04:04:30Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
]
| null | 2025-04-25T04:04:26Z | ---
license: apache-2.0
---
|
mlx-community/Baichuan-M1-14B-Instruct-4bit | mlx-community | 2025-04-25T03:59:54Z | 0 | 0 | mlx | [
"mlx",
"safetensors",
"baichuan_m1",
"medical",
"text-generation",
"conversational",
"custom_code",
"en",
"zh",
"base_model:baichuan-inc/Baichuan-M1-14B-Instruct",
"base_model:quantized:baichuan-inc/Baichuan-M1-14B-Instruct",
"4-bit",
"region:us"
]
| text-generation | 2025-04-25T03:00:51Z | ---
language:
- en
- zh
tags:
- medical
- mlx
base_model: baichuan-inc/Baichuan-M1-14B-Instruct
library_name: mlx
pipeline_tag: text-generation
---
# mlx-community/Baichuan-M1-14B-Instruct-4bit
This model [mlx-community/Baichuan-M1-14B-Instruct-4bit](https://huggingface.co/mlx-community/Baichuan-M1-14B-Instruct-4bit) was
converted to MLX format from [baichuan-inc/Baichuan-M1-14B-Instruct](https://huggingface.co/baichuan-inc/Baichuan-M1-14B-Instruct)
using mlx-lm version **0.23.2**.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate
model, tokenizer = load("mlx-community/Baichuan-M1-14B-Instruct-4bit")
prompt = "hello"
if tokenizer.chat_template is not None:
messages = [{"role": "user", "content": prompt}]
prompt = tokenizer.apply_chat_template(
messages, add_generation_prompt=True
)
response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
|
ariankharazmi/Curiosity-14 | ariankharazmi | 2025-04-25T03:56:45Z | 0 | 0 | null | [
"gpt2",
"license:mit",
"region:us"
]
| null | 2025-04-25T03:43:28Z | ---
license: mit
---
Curiosity-14 is a low-level LLM.
Built over the seven weeks of the Summer 2024 UCinci EEP, Curiosity-14 is the culmination of that research, its coded deliverables, and painstaking patience, brought together as one final advanced deliverable. |
sandiumenge/twitter-bitcoin-sentiment-prediction | sandiumenge | 2025-04-25T03:56:12Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"roberta",
"text-classification",
"generated_from_trainer",
"base_model:vinai/bertweet-base",
"base_model:finetune:vinai/bertweet-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2025-04-24T07:27:43Z | ---
library_name: transformers
license: mit
base_model: vinai/bertweet-base
tags:
- generated_from_trainer
model-index:
- name: twitter-bitcoin-sentiment-prediction
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# twitter-bitcoin-sentiment-prediction
This model is a fine-tuned version of [vinai/bertweet-base](https://huggingface.co/vinai/bertweet-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3711
- Mse: 0.3711
- Pearson: 0.6243
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 32
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Mse | Pearson |
|:-------------:|:------:|:----:|:---------------:|:------:|:-------:|
| 0.4228 | 0.1046 | 500 | 0.3933 | 0.3933 | 0.5621 |
| 0.4193 | 0.2092 | 1000 | 0.3788 | 0.3788 | 0.5896 |
| 0.3785 | 0.3139 | 1500 | 0.3679 | 0.3679 | 0.5983 |
| 0.3919 | 0.4185 | 2000 | 0.3624 | 0.3624 | 0.6070 |
| 0.3726 | 0.5231 | 2500 | 0.3586 | 0.3586 | 0.6126 |
| 0.3869 | 0.6277 | 3000 | 0.3527 | 0.3527 | 0.6185 |
| 0.3686 | 0.7324 | 3500 | 0.3529 | 0.3529 | 0.6242 |
| 0.3569 | 0.8370 | 4000 | 0.3462 | 0.3462 | 0.6269 |
| 0.3648 | 0.9416 | 4500 | 0.3529 | 0.3529 | 0.6285 |
| 0.3034 | 1.0462 | 5000 | 0.3575 | 0.3575 | 0.6280 |
| 0.3075 | 1.1509 | 5500 | 0.3491 | 0.3491 | 0.6325 |
| 0.3126 | 1.2555 | 6000 | 0.3517 | 0.3517 | 0.6319 |
| 0.3153 | 1.3601 | 6500 | 0.3473 | 0.3473 | 0.6322 |
| 0.286 | 1.4647 | 7000 | 0.3711 | 0.3711 | 0.6243 |
### Framework versions
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 3.5.0
- Tokenizers 0.21.1
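The metrics above (MSE and Pearson correlation) suggest a single-output regression head; a hedged inference sketch under that assumption:
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Assumes a single-logit regression head (the card reports MSE / Pearson, not accuracy).
repo = "sandiumenge/twitter-bitcoin-sentiment-prediction"
tok = AutoTokenizer.from_pretrained(repo)
model = AutoModelForSequenceClassification.from_pretrained(repo)
with torch.no_grad():
    batch = tok("Bitcoin just hit a new all-time high!", return_tensors="pt")
    score = model(**batch).logits.squeeze().item()  # predicted sentiment score
print(score)
```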
|
ayushparwal2004/text-gen-v1-small | ayushparwal2004 | 2025-04-25T03:56:02Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"t5",
"text2text-generation",
"text",
"texual",
"en",
"arxiv:1910.09700",
"base_model:google/flan-t5-small",
"base_model:finetune:google/flan-t5-small",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2025-04-24T18:48:07Z | ---
library_name: transformers
tags:
- text
- texual
license: apache-2.0
language:
- en
base_model:
- google/flan-t5-small
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [Ayush Parwal]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [google-flan-t5-small]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [https://github.com/Ayushparwal/Hugging-face-repos/tree/main/google-flan-t5-small]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
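In the meantime, a minimal hedged sketch, assuming the fine-tuned flan-t5-small checkpoint keeps the standard seq2seq file layout (the prompt is illustrative only and may differ from the fine-tuning data):
```python
from transformers import pipeline

# Sketch only; prompt format follows the flan-t5 convention.
generator = pipeline("text2text-generation", model="ayushparwal2004/text-gen-v1-small")
print(generator("Summarize: The quick brown fox jumps over the lazy dog.", max_new_tokens=32))
```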
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
lemon-mint/HyperCLOVA-X-HyperClever-v1-20250426-thinking-preview-Q8_0-GGUF | lemon-mint | 2025-04-25T03:51:26Z | 0 | 0 | null | [
"gguf",
"llama-cpp",
"gguf-my-repo",
"base_model:lemon-mint/HyperCLOVA-X-HyperClever-v1-20250426-thinking-preview",
"base_model:quantized:lemon-mint/HyperCLOVA-X-HyperClever-v1-20250426-thinking-preview",
"license:other",
"endpoints_compatible",
"region:us",
"conversational"
]
| null | 2025-04-25T03:51:08Z | ---
base_model: lemon-mint/HyperCLOVA-X-HyperClever-v1-20250426-thinking-preview
license: other
license_name: hyperclovax-seed
license_link: https://huggingface.co/naver-hyperclovax/HyperCLOVAX-SEED-Text-Instruct-1.5B/raw/main/LICENSE
tags:
- llama-cpp
- gguf-my-repo
---
# lemon-mint/HyperCLOVA-X-HyperClever-v1-20250426-thinking-preview-Q8_0-GGUF
This model was converted to GGUF format from [`lemon-mint/HyperCLOVA-X-HyperClever-v1-20250426-thinking-preview`](https://huggingface.co/lemon-mint/HyperCLOVA-X-HyperClever-v1-20250426-thinking-preview) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/lemon-mint/HyperCLOVA-X-HyperClever-v1-20250426-thinking-preview) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo lemon-mint/HyperCLOVA-X-HyperClever-v1-20250426-thinking-preview-Q8_0-GGUF --hf-file hyperclova-x-hyperclever-v1-20250426-thinking-preview-q8_0.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo lemon-mint/HyperCLOVA-X-HyperClever-v1-20250426-thinking-preview-Q8_0-GGUF --hf-file hyperclova-x-hyperclever-v1-20250426-thinking-preview-q8_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo lemon-mint/HyperCLOVA-X-HyperClever-v1-20250426-thinking-preview-Q8_0-GGUF --hf-file hyperclova-x-hyperclever-v1-20250426-thinking-preview-q8_0.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo lemon-mint/HyperCLOVA-X-HyperClever-v1-20250426-thinking-preview-Q8_0-GGUF --hf-file hyperclova-x-hyperclever-v1-20250426-thinking-preview-q8_0.gguf -c 2048
```
|
Peccatum/wavlm-base-res-cross-att-v4-max | Peccatum | 2025-04-25T03:51:01Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"wavlm",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
]
| null | 2025-04-25T03:46:02Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
marialvsantiago/12acf128-0e92-47b1-b7e7-5679faa9ecc0 | marialvsantiago | 2025-04-25T03:50:46Z | 0 | 0 | peft | [
"peft",
"safetensors",
"opt",
"axolotl",
"generated_from_trainer",
"base_model:facebook/opt-350m",
"base_model:adapter:facebook/opt-350m",
"license:other",
"4-bit",
"bitsandbytes",
"region:us"
]
| null | 2025-04-25T03:48:39Z | ---
library_name: peft
license: other
base_model: facebook/opt-350m
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 12acf128-0e92-47b1-b7e7-5679faa9ecc0
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: facebook/opt-350m
bf16: true
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- f9116e10ce646201_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/f9116e10ce646201_train_data.json
type:
field_input: input
field_instruction: instruction
field_output: output
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 1
gradient_checkpointing: true
gradient_clipping: 0.5
group_by_length: false
hub_model_id: marialvsantiago/12acf128-0e92-47b1-b7e7-5679faa9ecc0
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-06
load_in_4bit: true
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 64
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 32
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 8
mixed_precision: bf16
mlflow_experiment_name: /tmp/f9116e10ce646201_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: d4ae739d-4696-4c1e-b958-84d77b908b5a
wandb_project: s56-33
wandb_run: your_name
wandb_runid: d4ae739d-4696-4c1e-b958-84d77b908b5a
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 12acf128-0e92-47b1-b7e7-5679faa9ecc0
This model is a fine-tuned version of [facebook/opt-350m](https://huggingface.co/facebook/opt-350m) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 3.5726
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 3.7996 | 0.0152 | 200 | 3.5726 |
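A hedged sketch for applying the LoRA adapter to the base model, assuming the adapter files follow the standard PEFT layout that axolotl saves:
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Sketch only; wraps the base model with the adapter weights, no merging.
base = AutoModelForCausalLM.from_pretrained("facebook/opt-350m")
model = PeftModel.from_pretrained(base, "marialvsantiago/12acf128-0e92-47b1-b7e7-5679faa9ecc0")
tok = AutoTokenizer.from_pretrained("facebook/opt-350m")
out = model.generate(**tok("Hello,", return_tensors="pt"), max_new_tokens=32)
print(tok.decode(out[0], skip_special_tokens=True))
```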
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
victornica/sorta_sftd_ais_3_d2dr | victornica | 2025-04-25T03:50:28Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-04-25T03:50:15Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
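In the meantime, a generic hedged sketch; nothing model-specific is documented in this card beyond the Mistral architecture:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Generic loading sketch; prompt format and intended use are undocumented.
repo = "victornica/sorta_sftd_ais_3_d2dr"
tok = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, device_map="auto")
```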
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
mabelendurance/mabelendurance | mabelendurance | 2025-04-25T03:46:11Z | 0 | 0 | null | [
"license:artistic-2.0",
"region:us"
]
| null | 2025-04-25T03:46:11Z | ---
license: artistic-2.0
---
|
sciarrilli/ppo-LunarLander-v2 | sciarrilli | 2025-04-25T03:46:10Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2025-04-25T03:45:43Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 252.83 +/- 18.72
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Checkpoint filename is an assumption; check the repo's file list for the actual name.
checkpoint = load_from_hub("sciarrilli/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
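To sanity-check the agent loaded above, a short hedged evaluation sketch (requires `gymnasium[box2d]`; on newer gymnasium releases the env id may be `LunarLander-v3`):
```python
import gymnasium as gym
from stable_baselines3.common.evaluation import evaluate_policy

# Reuses `model` from the snippet above; the episode count is an arbitrary choice.
env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```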
|
mugishak21/Thinkbot | mugishak21 | 2025-04-25T03:44:52Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
]
| null | 2025-04-25T03:44:52Z | ---
license: apache-2.0
---
|
A1anTm230/proof1 | A1anTm230 | 2025-04-25T03:44:21Z | 0 | 0 | transformers | [
"transformers",
"text-generation",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-04-25T03:24:45Z | ---
pipeline_tag: text-generation
library_name: transformers
---
|
aslinguist/nllb-lora-zh2Paiwan | aslinguist | 2025-04-25T03:44:06Z | 0 | 0 | peft | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:facebook/nllb-200-distilled-600M",
"base_model:adapter:facebook/nllb-200-distilled-600M",
"license:cc-by-nc-4.0",
"region:us"
]
| null | 2025-04-25T02:59:35Z | ---
library_name: peft
license: cc-by-nc-4.0
base_model: facebook/nllb-200-distilled-600M
tags:
- generated_from_trainer
model-index:
- name: nllb-lora-zh2Paiwan
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# nllb-lora-zh2Paiwan
This model is a fine-tuned version of [facebook/nllb-200-distilled-600M](https://huggingface.co/facebook/nllb-200-distilled-600M) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 7.3264
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 20
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 7.4959 | 1.0 | 201 | 7.4363 |
| 7.4622 | 2.0 | 402 | 7.4087 |
| 7.4338 | 3.0 | 603 | 7.3922 |
| 7.4289 | 4.0 | 804 | 7.3816 |
| 7.3944 | 5.0 | 1005 | 7.3703 |
| 7.3907 | 6.0 | 1206 | 7.3608 |
| 7.3872 | 7.0 | 1407 | 7.3555 |
| 7.3554 | 8.0 | 1608 | 7.3516 |
| 7.3666 | 9.0 | 1809 | 7.3456 |
| 7.3666 | 10.0 | 2010 | 7.3418 |
| 7.3431 | 11.0 | 2211 | 7.3382 |
| 7.353 | 12.0 | 2412 | 7.3357 |
| 7.3402 | 13.0 | 2613 | 7.3332 |
| 7.3323 | 14.0 | 2814 | 7.3315 |
| 7.3432 | 15.0 | 3015 | 7.3294 |
| 7.3315 | 16.0 | 3216 | 7.3274 |
| 7.3263 | 17.0 | 3417 | 7.3277 |
| 7.3086 | 18.0 | 3618 | 7.3268 |
| 7.3039 | 19.0 | 3819 | 7.3268 |
| 7.2977 | 20.0 | 4020 | 7.3264 |
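A hedged sketch for loading the adapter on top of the NLLB base model; the language codes used for the zh → Paiwan pairs are not documented here, so generation settings are left out:
```python
from peft import PeftModel
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

# Sketch only; wraps the distilled NLLB base with the LoRA adapter weights.
base = AutoModelForSeq2SeqLM.from_pretrained("facebook/nllb-200-distilled-600M")
model = PeftModel.from_pretrained(base, "aslinguist/nllb-lora-zh2Paiwan")
tok = AutoTokenizer.from_pretrained("facebook/nllb-200-distilled-600M")
```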
### Framework versions
- PEFT 0.15.0
- Transformers 4.51.2
- Pytorch 2.2.2+cu118
- Datasets 3.5.0
- Tokenizers 0.21.1 |
NexesMess/Llama_3.3_70b_DarkDonkey_v2 | NexesMess | 2025-04-25T03:43:27Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"conversational",
"arxiv:2403.19522",
"base_model:LatitudeGames/Wayfarer-Large-70B-Llama-3.3",
"base_model:merge:LatitudeGames/Wayfarer-Large-70B-Llama-3.3",
"base_model:SicariusSicariiStuff/Negative_LLAMA_70B",
"base_model:merge:SicariusSicariiStuff/Negative_LLAMA_70B",
"base_model:TheDrummer/Fallen-Llama-3.3-R1-70B-v1",
"base_model:merge:TheDrummer/Fallen-Llama-3.3-R1-70B-v1",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-04-25T02:59:41Z | ---
base_model:
- SicariusSicariiStuff/Negative_LLAMA_70B
- TheDrummer/Fallen-Llama-3.3-R1-70B-v1
- LatitudeGames/Wayfarer-Large-70B-Llama-3.3
library_name: transformers
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [Model Stock](https://arxiv.org/abs/2403.19522) merge method using [SicariusSicariiStuff/Negative_LLAMA_70B](https://huggingface.co/SicariusSicariiStuff/Negative_LLAMA_70B) as a base.
### Models Merged
The following models were included in the merge:
* [TheDrummer/Fallen-Llama-3.3-R1-70B-v1](https://huggingface.co/TheDrummer/Fallen-Llama-3.3-R1-70B-v1)
* [LatitudeGames/Wayfarer-Large-70B-Llama-3.3](https://huggingface.co/LatitudeGames/Wayfarer-Large-70B-Llama-3.3)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
merge_method: model_stock
models:
- model: LatitudeGames/Wayfarer-Large-70B-Llama-3.3
parameters:
weight: 1.0
- model: TheDrummer/Fallen-Llama-3.3-R1-70B-v1
parameters:
weight: 1.0
base_model: SicariusSicariiStuff/Negative_LLAMA_70B
dtype: float32
out_dtype: bfloat16
parameters:
int8_mask: true
normalize: true
rescale: false
filter_wise: false
smooth: false
allow_negative_weights: false
chat_template: auto
tokenizer:
source: union
```
|
jpark677/qwen2-vl-7b-instruct-realworldqa-lora-ep-3-waa-f | jpark677 | 2025-04-25T03:42:32Z | 0 | 0 | null | [
"safetensors",
"region:us"
]
| null | 2025-04-25T03:42:27Z | # qwen2-vl-7b-instruct-realworldqa-lora-ep-3-waa-f
This repository contains the model checkpoint from original iteration 36, saved as epoch 3. |
jpark677/qwen2-vl-7b-instruct-realworldqa-lora-ep-2-waa-f | jpark677 | 2025-04-25T03:42:24Z | 0 | 0 | null | [
"safetensors",
"region:us"
]
| null | 2025-04-25T03:42:19Z | # qwen2-vl-7b-instruct-realworldqa-lora-ep-2-waa-f
This repository contains the model checkpoint from original iteration 24, saved as epoch 2. |
jpark677/qwen2-vl-7b-instruct-realworldqa-lora-ep-1-waa-f | jpark677 | 2025-04-25T03:42:14Z | 0 | 0 | null | [
"safetensors",
"region:us"
]
| null | 2025-04-25T03:41:48Z | # qwen2-vl-7b-instruct-realworldqa-lora-ep-1-waa-f
This repository contains the model checkpoint from original iteration 12, saved as epoch 1. |
kjamesh/20250424_ppo_LLv2_T10_00M | kjamesh | 2025-04-25T03:41:30Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2025-04-25T02:33:29Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 290.75 +/- 12.91
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Checkpoint filename is a placeholder; check the repo's file list for the actual name.
checkpoint = load_from_hub("kjamesh/20250424_ppo_LLv2_T10_00M", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
DangMinh21/code-search-net-tokenizer | DangMinh21 | 2025-04-25T03:40:40Z | 0 | 0 | transformers | [
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
]
| null | 2025-04-25T03:40:38Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
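In the meantime, a minimal sketch, assuming the repository contains only a tokenizer trained on CodeSearchNet-style code (no model weights are assumed to exist):
```python
from transformers import AutoTokenizer

# Loads and applies the tokenizer to a small code snippet.
tok = AutoTokenizer.from_pretrained("DangMinh21/code-search-net-tokenizer")
print(tok.tokenize("def add(a, b):\n    return a + b"))
```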
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Kokoutou/sn9_pretc4_2504_3 | Kokoutou | 2025-04-25T03:39:41Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-04-25T03:28:10Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
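In the meantime, a generic hedged sketch; the card documents nothing beyond the Llama architecture and the text-generation pipeline tag:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Generic loading sketch; prompt format and intended use are undocumented.
repo = "Kokoutou/sn9_pretc4_2504_3"
tok = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, device_map="auto")
print(tok.decode(model.generate(**tok("Hello", return_tensors="pt").to(model.device), max_new_tokens=32)[0]))
```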
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
mlfoundations-dev/b2_science_fasttext_pos_expert_qa_3k | mlfoundations-dev | 2025-04-25T03:37:49Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"llama-factory",
"full",
"generated_from_trainer",
"conversational",
"base_model:Qwen/Qwen2.5-7B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-7B-Instruct",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-04-25T01:14:05Z | ---
library_name: transformers
license: apache-2.0
base_model: Qwen/Qwen2.5-7B-Instruct
tags:
- llama-factory
- full
- generated_from_trainer
model-index:
- name: b2_science_fasttext_pos_expert_qa_3k
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# b2_science_fasttext_pos_expert_qa_3k
This model is a fine-tuned version of [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct) on the mlfoundations-dev/b2_science_fasttext_pos_expert_qa_3k dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
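Pending further documentation, the following is a minimal inference sketch that assumes the model retains the chat template of its Qwen2.5-7B-Instruct base (the prompt and settings are illustrative):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mlfoundations-dev/b2_science_fasttext_pos_expert_qa_3k"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

# Assumption: the fine-tune keeps the base model's chat template.
messages = [{"role": "user", "content": "Explain photosynthesis in one sentence."}]
text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer([text], return_tensors="pt").to(model.device)

outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0][inputs.input_ids.shape[1]:], skip_special_tokens=True))
```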
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 24
- total_train_batch_size: 96
- total_eval_batch_size: 32
- optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08 (no additional optimizer arguments)
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 7.0
### Training results
### Framework versions
- Transformers 4.46.1
- Pytorch 2.6.0+cu124
- Datasets 3.1.0
- Tokenizers 0.20.3
|
facebook/PE-Core-L14-336-hf | facebook | 2025-04-25T03:36:06Z | 0 | 0 | perception-encoder | [
"perception-encoder",
"safetensors",
"arxiv:2504.13181",
"license:apache-2.0",
"region:us"
]
| null | 2025-04-24T23:09:14Z | ---
license: apache-2.0
library_name: perception-encoder
---
# Model Details
[\[📃 Tech Report\]](https://arxiv.org/abs/2504.13181)
[\[📂 Github\]](https://github.com/facebookresearch/perception_models/)
Perception Encoder (PE) is a state-of-the-art encoder for image and video understanding trained via simple vision-language learning. It was introduced in "[Perception Encoder: The best visual embeddings
are not at the output of the network](https://ai.meta.com/research/publications/perception-encoder-the-best-visual-embeddings-are-not-at-the-output-of-the-network/)".
**Model Developer**: Meta
**Model Overview**: Perception Encoder (PE) is a family of large-scale vision encoder models with state-of-the-art performance on a large variety of vision tasks. By using a robust contrastive pretraining recipe and finetuning on synthetically aligned videos, PE not only outperforms all existing models on classification and retrieval, but it also internally produces strong, general features that scale for downstream tasks. PE unlocks the ability for large-scale contrastive pretraining to transfer to downstream tasks with alignment tuning to capitalize on those general features.
<img src="https://huggingface.co/facebook/PE-Core-G14-448/resolve/main/docs/pe_image1.png" style="width: 100%; margin: 0 auto; display: block;" />
## Perception Encoder: Core
PE core is our base model trained with our robust image pretraining schedule and finetuned on the data generated by our synthetic video data engine.
#### Model Configurations
PE core currently comes in 3 sizes. PE core G is the main checkpoint, with L and B models distilled from it.
| Scale | Tower | Params | Width | Depth | MLP | Heads | CLIP Dim | Resolution / Context Len |
|:-----:|:------:|:------:|:-----:|:-----:|:----:|:-----:|:--------:|:-------------------------:|
| **B/16** | Vision | 0.09B | 768 | 12 | 3072 | 12 | 1024 | 224px |
| | Text | 0.31B | 1024 | 24 | 4096 | 16 | 1024 | 32 tokens |
| **L/14** | Vision | 0.32B | 1024 | 24 | 4096 | 16 | 1024 | 336px |
| | Text | 0.31B | 1024 | 24 | 4096 | 16 | 1024 | 32 tokens |
| **G/14** | Vision | 1.88B | 1536 | 50 | 8960 | 16 | 1280 | 448px |
| | Text | 0.47B | 1280 | 24 | 5120 | 20 | 1280 | 72 tokens |
All PE core models use an attention pooling block with 8 heads on top of the vision tower. The L and B models _additionally_ have a class token for global aggregation. See the paper for more details.
#### Model Performance
PE core obtains extremely strong results across the board on zero-shot image classification and retrieval _as well as_ zero-shot video classification and retrieval. We present a sample of its performance across those domains below.
| Model | Checkpoint | IN-1k | IN-v2 | IN-A | ObjectNet | COCO-T2I | Kinetics-400 | VTT-T2I |
|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| **B/16** 224px | [PE-Core-B16-224](https://huggingface.co/facebook/PE-Core-B16-224) | 78.4 | 71.7 | 62.4 | 71.9 | 50.9 | 65.6 | 47.6 |
| **L/14** 336px | [PE-Core-L14-336](https://huggingface.co/facebook/PE-Core-L14-336) | 83.5 | 77.9 | 89.0 | 84.7 | 57.1 | 73.4 | 50.3 |
| **G/14** 448px | [PE-Core-G14-448](https://huggingface.co/facebook/PE-Core-G14-448) | 85.4 | 80.2 | 92.6 | 88.2 | 58.1 | 76.9 | 51.2 |
PE core performs particularly well on the _hard_ benchmarks such as ObjectNet and ImageNet-A.
# How to use
## Model loading code
We provide the model loading code in https://github.com/facebookresearch/perception_models
```shell
git clone https://github.com/facebookresearch/perception_models.git
cd perception_models
conda create --name perception_models python=3.12
conda activate perception_models
# Install PyTorch
pip install torch==2.5.1 torchvision==0.20.1 torchaudio==2.5.1 xformers --index-url https://download.pytorch.org/whl/cu124
# We use torchcodec for decoding videos into PyTorch tensors
conda install ffmpeg -c conda-forge
pip install torchcodec==0.1 --index-url=https://download.pytorch.org/whl/cu124
pip install -e .
```
This will install an editable version of the repo, allowing you to make changes to the code without needing to reinstall the package every time.
## Image and Text Feature extraction with a Trained Model
```python
import torch
from PIL import Image
import core.vision_encoder.pe as pe
import core.vision_encoder.transforms as transforms
print("CLIP configs:", pe.CLIP.available_configs())
# CLIP configs: ['PE-Core-G14-448', 'PE-Core-L14-336', 'PE-Core-B16-224']
model = pe.CLIP.from_config("PE-Core-B16-224", pretrained=True) # Downloads from HF
model = model.cuda()
preprocess = transforms.get_image_transform(model.image_size)
tokenizer = transforms.get_text_tokenizer(model.context_length)
image = preprocess(Image.open("docs/assets/cat.png")).unsqueeze(0).cuda()
text = tokenizer(["a diagram", "a dog", "a cat"]).cuda()
with torch.no_grad(), torch.autocast("cuda"):
image_features, text_features, logit_scale = model(image, text)
text_probs = (logit_scale * image_features @ text_features.T).softmax(dim=-1)
print("Label probs:", text_probs) # prints: [[0.0, 0.0, 1.0]]
```
You can find more details in the GitHub repo.
# Citation
If you find our code useful for your research, please consider citing:
```
@article{bolya2025PerceptionEncoder,
title={Perception Encoder: The best visual embeddings are not at the output of the network},
author={Daniel Bolya and Po-Yao Huang and Peize Sun and Jang Hyun Cho and Andrea Madotto and Chen Wei and Tengyu Ma and Jiale Zhi and Jathushan Rajasegaran and Hanoona Rasheed and Junke Wang and Marco Monteiro and Hu Xu and Shiyu Dong and Nikhila Ravi and Daniel Li and Piotr Doll{\'a}r and Christoph Feichtenhofer},
journal={arXiv},
year={2025}
}
@article{cho2025PerceptionLM,
title={PerceptionLM: Open-Access Data and Models for Detailed Visual Understanding},
author={Jang Hyun Cho and Andrea Madotto and Effrosyni Mavroudi and Triantafyllos Afouras and Tushar Nagarajan and Muhammad Maaz and Yale Song and Tengyu Ma and Shuming Hu and Hanoona Rasheed and Peize Sun and Po-Yao Huang and Daniel Bolya and Suyog Jain and Miguel Martin and Huiyu Wang and Nikhila Ravi and Shashank Jain and Temmy Stark and Shane Moon and Babak Damavandi and Vivian Lee and Andrew Westbury and Salman Khan and Philipp Kr\"{a}henb\"{u}hl and Piotr Doll{\'a}r and Lorenzo Torresani and Kristen Grauman and Christoph Feichtenhofer},
journal={arXiv},
year={2025}
}
```
|
ddecentraptor/asf | ddecentraptor | 2025-04-25T03:33:28Z | 0 | 0 | diffusers | [
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:cc-by-2.0",
"region:us"
]
| text-to-image | 2025-04-25T03:33:23Z | ---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: '-'
output:
url: images/1745231866570.png_image.png
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: null
license: cc-by-2.0
---
# ASF
<Gallery />
## Download model
Weights for this model are available in Safetensors format.
[Download](/ddecentraptor/asf/tree/main) them in the Files & versions tab.
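Usage is not documented here; the sketch below assumes the adapter loads with the standard `diffusers` Flux LoRA workflow (the prompt and sampler settings are illustrative):

```python
import torch
from diffusers import FluxPipeline

# Load the FLUX.1-dev base pipeline, then attach this LoRA adapter.
pipe = FluxPipeline.from_pretrained("black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16)
pipe.load_lora_weights("ddecentraptor/asf")
pipe.to("cuda")

image = pipe("a sample prompt", num_inference_steps=28, guidance_scale=3.5).images[0]
image.save("asf_sample.png")
```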
|
dgambettaphd/M_llm3_gen3_run0_X_doc1000_synt64_tot128_FRESH | dgambettaphd | 2025-04-25T03:30:03Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
]
| null | 2025-04-25T03:29:46Z | ---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
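No usage code is provided, and the architecture is not documented in this card. As a generic placeholder, a minimal sketch assuming a standard causal LM checkpoint:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumption: the checkpoint loads through the generic Auto classes.
model_id = "dgambettaphd/M_llm3_gen3_run0_X_doc1000_synt64_tot128_FRESH"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer("Once upon a time", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=40)[0], skip_special_tokens=True))
```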
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Namuun123/mn_sentencepiece_tokenizer | Namuun123 | 2025-04-25T03:28:02Z | 0 | 0 | null | [
"mongolian",
"tokenizer",
"sentencepiece",
"mn",
"license:mit",
"region:us"
]
| null | 2025-04-25T03:23:04Z | ---
language: mn
license: mit
tags:
- mongolian
- tokenizer
- sentencepiece
---
# SentencePiece Tokenizer
This repository contains a SentencePiece tokenizer fine-tuned on Mongolian text.
## Files
- `tokenizer_config.json`: The tokenizer configuration file
- `mn_tokenizer.model`: The SentencePiece model file
- `mn_tokenizer.vocab`: The SentencePiece vocabulary file
## Usage
```python
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("Namuun123/mn_sentencepiece_tokenizer")
```
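As a quick sanity check, tokenization and round-trip decoding follow the usual tokenizer API (the sample sentence is illustrative):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Namuun123/mn_sentencepiece_tokenizer")

text = "Сайн байна уу?"  # illustrative Mongolian sample sentence
print(tokenizer.tokenize(text))                  # subword pieces
print(tokenizer.decode(tokenizer.encode(text)))  # round-trip decode
```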
|
stain195/claassification938104 | stain195 | 2025-04-25T03:27:47Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
]
| null | 2025-04-25T03:27:46Z | ---
license: apache-2.0
---
|
fax4ever/culturalitems-xlm-roberta-large | fax4ever | 2025-04-25T03:25:23Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
]
| null | 2025-04-25T03:25:23Z | ---
license: apache-2.0
---
|
GreenNode/GreenMind-Medium-14B-R1 | GreenNode | 2025-04-25T03:25:21Z | 20 | 2 | null | [
"safetensors",
"qwen2",
"text2text-generation",
"vi",
"en",
"zh",
"id",
"th",
"arxiv:2504.16832",
"base_model:Qwen/Qwen2.5-14B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-14B-Instruct",
"license:mit",
"region:us"
]
| text2text-generation | 2025-04-15T06:47:38Z | ---
license: mit
language:
- vi
- en
- zh
- id
- th
base_model:
- Qwen/Qwen2.5-14B-Instruct
pipeline_tag: text2text-generation
---
# GreenMind-Medium-14B-R1
We release **GreenMind-Medium-14B-R1**, a medium-sized Vietnamese language model capable of effectively addressing questions that require intermediate-level reasoning, such as general knowledge, mathematics, natural science and social science topics. By leveraging the Group Relative Policy Optimization strategy for fine-tuning, we guide the model to generate logically coherent responses.
## Model Description
- **Model Type:** Causal Language Models
- **Base Model:** [Qwen/Qwen2.5-14B-Instruct](https://huggingface.co/Qwen/Qwen2.5-14B-Instruct)
- **Parameters:** 14.7B
- **Context Length:** Full 131,072 tokens, with generation up to 8,192 tokens
- **Language:** Vietnamese
## Quickstart
Here is a code snippet showing how to load the tokenizer and model with `apply_chat_template`, and how to generate content.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "GreenNode/GreenMind-Medium-14B-R1"
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype="auto",
device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(
model_name,
revision='main',
trust_remote_code=False,
)
prompt = r"""Vừa gà vừa chó
Bó lại cho tròn
Ba mươi sáu con
Một trăm chân chẵn
Hỏi có bao nhiêu con gà, bao nhiêu con chó?"""
messages = [
{
"role": "system",
"content": "Bạn là một trợ lý ảo hữu ích trong việc trả lời câu hỏi. Hãy suy luận từng bước, và đưa ra đáp án trong thẻ <answer> </answer>."
},
{
"role": "user",
"content": f"{prompt} Hãy suy luận từng bước trong thẻ <think> </think>. Và trả về đáp án trong thẻ <answer> </answer>."
},
{
"role": "assistant",
"content": "Hãy để tôi giải quyết từng bước.\n<think>"
}
]
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
continue_final_message=True)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)
generated_ids = model.generate(
**model_inputs,
max_new_tokens=1024
)
generated_ids = [
output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
# Đầu tiên, chúng ta cần thiết lập hai phương trình dựa trên thông tin đề bài:
# 1. Tổng số con gà và chó là 36: x + y = 36
# 2. Tổng số chân là 100: 2x + 4y = 100
# Trong đó, x là số con gà và y là số con chó.
# Tiếp theo, chúng ta giải hệ phương trình này:
# Từ phương trình thứ nhất, ta có: x = 36 - y
# Thay vào phương trình thứ hai: 2(36 - y) + 4y = 100
# => 72 - 2y + 4y = 100
# => 2y = 28
# => y = 14 (số con chó)
# Thay y = 14 vào phương trình x + y = 36:
# => x = 36 - 14 = 22 (số con gà)
# Vậy, có 22 con gà và 14 con chó.
# </think>
# <answer>Có 22 con gà và 14 con chó.</answer>
```
## Evaluation
**Table 1. SeaExam Dataset.** GreenMind-Medium-14B-R1 compared to the base model and several larger models.
| **Model** | **SeaExam-ID** | **SeaExam-TH** | **SeaExam-VI** | **Avg** |
|----------------------------------|----------------|----------------|----------------|----------|
| Meta-Llama-3.1-70B-Instruct | 65.8 | **70.6** | 72.6 | 69.7 |
| gemma3-27b-it | 64.4 | 67.5 | 73.1 | 68.4 |
| Qwen2.5-14B-Instruct | 67.6 | 68.8 | 73.1 | 69.8 |
| **GreenMind-Medium-14B-R1** | **74.36** | 69.75 | **74.44** | **72.79** |
**Table 2. VLSP 2023 Challenge:** Our model outperforms most SOTA models.
| **Model** | **ComprehensionQA-vi ↑** | **Exams-vi ↑** | **LAMBADA-vi ↓** | **WikiQA-vi ↑** | **MMLU-vi ↑** |
|----------------------------------|---------------------------|----------------|------------------|-----------------|---------------|
| cpt-smartbot-13b | 0.6633 | 0.3473 | 21.9864 | 0.4455 | 0.414 |
| ura-llama-13b | 0.6556 | 0.342 | 17.5614 | 0.438 | 0.3973 |
| greennode-7b (prior work) | 0.6122 | 0.2892 | 189.7782 | 0.3335 | 0.387 |
| greennode-14b (prior work) | 0.6711 | 0.3672 | 29.5967 | 0.468 | 0.5281 |
| **GreenMind-Medium-14B-R1 (Ours)** | **0.8689** | **0.7796** | **10.7609** | **0.7915** | **0.7124** |
**Table 3. VMLU Dataset.** Performance compared to fine-tuned models.
| **Model** | **Access** | **STEM** | **Social Science** | **Humanities** | **Others** | **Avg** |
|----------------------------------|-----------|----------|---------------------|----------------|------------|----------|
| VNPTAI.IO-Medium-R1 | Private | 77.09 | 82.3 | 78.85 | 69.98 | 77.43 |
| MISA-Llama3-v1.1 | Private | 77.5 | 80.75 | 76.62 | 71.6 | 76.87 |
| BnK-AI-Medium-v2 | Private | 80.94 | 80.76 | 70.7 | 74.06 | 76.66 |
| VNPTAI.IO-Large-v4 | Private | 78.05 | 79.05 | 75.39 | 70.37 | 76.21 |
| GreenNode-xMedium-v1 | Private | 75.7 | 81.09 | 75.25 | 69.33 | 75.5 |
| **GreenMind-Medium-14B-R1 (Ours)** | Weight | 76.78 | 77.36 | 72.32 | 69.03 | 74.29 |
| CakebyVPBank-Large | Private | 77.75 | 78.11 | 70.38 | 67.82 | 73.99 |
| DeepSeek-R1-Distill-Llama-70B | Weight | 76.77 | 76.23 | 67.98 | 66.82 | 72.41 |
## Follow us
https://x.com/greennode23
## Support
https://discord.gg/B6MJFM3J3a
## License
This repository and the model weights are licensed under the [MIT License](LICENSE).
## Citation
If you find our work helpful, feel free to give us a cite.
```
@misc{tung2025greenmindnextgenerationvietnameselarge,
title={GreenMind: A Next-Generation Vietnamese Large Language Model for Structured and Logical Reasoning},
author={Luu Quy Tung and Hoang Quoc Viet and Vo Trong Thu},
year={2025},
eprint={2504.16832},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2504.16832},
}
```
## Contact Us
- General & Collaboration: [email protected], [email protected]
- Technical: [email protected] |
fax4ever/culturalitems-xlm-roberta-base | fax4ever | 2025-04-25T03:25:11Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
]
| null | 2025-04-25T03:25:11Z | ---
license: apache-2.0
---
|
aslinguist/llama-lora-Paiwan-summarization | aslinguist | 2025-04-25T03:22:54Z | 0 | 0 | peft | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:meta-llama/Llama-3.1-8B-Instruct",
"base_model:adapter:meta-llama/Llama-3.1-8B-Instruct",
"license:llama3.1",
"region:us"
]
| null | 2025-04-25T03:11:05Z | ---
library_name: peft
license: llama3.1
base_model: meta-llama/Llama-3.1-8B-Instruct
tags:
- generated_from_trainer
model-index:
- name: llama-lora-Paiwan-summarization
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# llama-lora-Paiwan-summarization
This model is a fine-tuned version of [meta-llama/Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Llama-3.1-8B-Instruct) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.8863
## Model description
More information needed
## Intended uses & limitations
More information needed
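Usage is not documented yet; the following is a minimal sketch that assumes the LoRA adapter is applied on top of the base model with `peft` (the prompt is illustrative):

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "meta-llama/Llama-3.1-8B-Instruct"
adapter_id = "aslinguist/llama-lora-Paiwan-summarization"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)  # attach the LoRA adapter

prompt = "Summarize the following Paiwan text: ..."  # illustrative prompt
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=128)[0], skip_special_tokens=True))
```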
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08 (no additional optimizer arguments)
- lr_scheduler_type: linear
- num_epochs: 20
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 9 | 3.1561 |
| 3.4715 | 2.0 | 18 | 2.9315 |
| 2.886 | 3.0 | 27 | 2.7974 |
| 2.5873 | 4.0 | 36 | 2.7141 |
| 2.3277 | 5.0 | 45 | 2.6736 |
| 2.1141 | 6.0 | 54 | 2.6551 |
| 1.8659 | 7.0 | 63 | 2.7040 |
| 1.6632 | 8.0 | 72 | 2.7634 |
| 1.4296 | 9.0 | 81 | 2.8863 |
### Framework versions
- PEFT 0.15.0
- Transformers 4.51.2
- Pytorch 2.2.2+cu118
- Datasets 3.5.0
- Tokenizers 0.21.1 |
aisingapore/Llama-SEA-LION-v3-70B-IT-GGUF | aisingapore | 2025-04-25T03:21:52Z | 1,205 | 0 | transformers | [
"transformers",
"gguf",
"text-generation",
"en",
"zh",
"vi",
"id",
"th",
"fil",
"ta",
"ms",
"km",
"lo",
"my",
"jv",
"su",
"arxiv:2504.05747",
"base_model:aisingapore/Llama-SEA-LION-v3-70B-IT",
"base_model:quantized:aisingapore/Llama-SEA-LION-v3-70B-IT",
"license:llama3.1",
"endpoints_compatible",
"region:us",
"conversational"
]
| text-generation | 2024-12-16T03:01:34Z | ---
library_name: transformers
pipeline_tag: text-generation
base_model:
- aisingapore/Llama-SEA-LION-v3-70B-IT
base_model_relation: quantized
language:
- en
- zh
- vi
- id
- th
- fil
- ta
- ms
- km
- lo
- my
- jv
- su
license: llama3.1
---
<div>
<img src="llama_3.1_70b_sea-lion_v3_gguf_banner.png"/>
</div>
# Llama-SEA-LION-v3-70B-IT
[SEA-LION](https://arxiv.org/abs/2504.05747) is a collection of Large Language Models (LLMs) which have been pretrained and instruct-tuned for the Southeast Asia (SEA) region.
Llama-SEA-LION-v3-70B-IT is a multilingual model that has been fine-tuned in two stages on approximately **12.3M English instruction-completion pairs** alongside a pool of **4.5M Southeast Asian instruction-completion pairs** from SEA languages such as Indonesian, Javanese, Sundanese, Tamil, Thai, and Vietnamese.
SEA-LION stands for _Southeast Asian Languages In One Network_.
- **Developed by:** Products Pillar, AI Singapore
- **Funded by:** Singapore NRF
- **Model type:** Decoder
- **Languages supported:** Burmese, Chinese, English, Filipino, Indonesia, Javanese, Khmer, Lao, Malay, Sundanese, Tamil, Thai, Vietnamese
- **License:** [Llama 3.1 Community License](https://huggingface.co/meta-llama/Llama-3.1-70B-Instruct/blob/main/LICENSE)
## Description
This repo contains `GGUF` format model files for [aisingapore/Llama-SEA-LION-v3-70B-IT](https://huggingface.co/aisingapore/Llama-SEA-LION-v3-70B-IT).
#### Model Weights Included in this repository:
- [Llama-SEA-LION-v3-70B-IT-F16](https://huggingface.co/aisingapore/Llama-SEA-LION-v3-70B-IT-GGUF/blob/main/Llama-SEA-LION-v3-70B-IT-F16-00001-of-00008.gguf)
- [Llama-SEA-LION-v3-70B-IT-Q2_K](https://huggingface.co/aisingapore/Llama-SEA-LION-v3-70B-IT-GGUF/blob/main/Llama-SEA-LION-v3-70B-IT-Q2_K.gguf)
- [Llama-SEA-LION-v3-70B-IT-Q3_K_M](https://huggingface.co/aisingapore/Llama-SEA-LION-v3-70B-IT-GGUF/blob/main/Llama-SEA-LION-v3-70B-IT-Q3_K_M.gguf)
- [Llama-SEA-LION-v3-70B-IT-Q4_0](https://huggingface.co/aisingapore/Llama-SEA-LION-v3-70B-IT-GGUF/blob/main/Llama-SEA-LION-v3-70B-IT-Q4_0.gguf)
- [Llama-SEA-LION-v3-70B-IT-Q4_K_M](https://huggingface.co/aisingapore/Llama-SEA-LION-v3-70B-IT-GGUF/blob/main/Llama-SEA-LION-v3-70B-IT-Q4_K_M.gguf)
- [Llama-SEA-LION-v3-70B-IT-Q5_0](https://huggingface.co/aisingapore/Llama-SEA-LION-v3-70B-IT-GGUF/blob/main/Llama-SEA-LION-v3-70B-IT-Q5_0.gguf)
- [Llama-SEA-LION-v3-70B-IT-Q5_K_M](https://huggingface.co/aisingapore/Llama-SEA-LION-v3-70B-IT-GGUF/blob/main/Llama-SEA-LION-v3-70B-IT-Q5_K_M.gguf)
- [Llama-SEA-LION-v3-70B-IT-Q6_K](https://huggingface.co/aisingapore/Llama-SEA-LION-v3-70B-IT-GGUF/blob/main/Llama-SEA-LION-v3-70B-IT-Q6_K-00001-of-00002.gguf)
- [Llama-SEA-LION-v3-70B-IT-Q8_0](https://huggingface.co/aisingapore/Llama-SEA-LION-v3-70B-IT-GGUF/blob/main/Llama-SEA-LION-v3-70B-IT-Q8_0-00001-of-00003.gguf)
> [!NOTE]
> Take note that some GGUFs are split into parts. Most tools such as [`llama.cpp`](https://github.com/ggerganov/llama.cpp) and those built on it support split GGUFs; pointing the platform to the first split is sufficient for it to function.
> In the event that a merge is necessary, it can be done using `llama.cpp`'s `gguf-split`: `./gguf-split --merge ./path/to/first-split ./path/to/output-gguf`
> More details: [gguf-split guide](https://github.com/ggerganov/llama.cpp/discussions/6404) & [README](https://github.com/ggerganov/llama.cpp/tree/master/examples/gguf-split)
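For local inference, one common option is the `llama-cpp-python` bindings; a minimal sketch, assuming one of the quantized GGUF files above has been downloaded locally (the file path, context size, and prompt are illustrative):

```python
from llama_cpp import Llama

# For split GGUFs, point at the first part; llama.cpp picks up the remaining splits automatically.
llm = Llama(model_path="Llama-SEA-LION-v3-70B-IT-Q4_K_M.gguf", n_ctx=4096)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Apa ibu kota Indonesia?"}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```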
### Caveats
It is important for users to be aware that our model exhibits certain limitations that warrant consideration. Like many LLMs, the model can hallucinate and occasionally generates irrelevant content, introducing fictional elements that are not grounded in the provided context. Users should also exercise caution in interpreting and validating the model's responses due to the potential inconsistencies in its reasoning.
## Limitations
### Safety
Current SEA-LION models, including this commercially permissive release, have not been aligned for safety. Developers and users should perform their own safety fine-tuning and related security measures. In no event shall the authors be held liable for any claim, damages, or other liability arising from the use of the released weights and codes.
## Technical Specifications
### Fine-Tuning Details
Llama-SEA-LION-v3-70B-IT was tuned using a combination of a full parameter fine-tune, on-policy alignment, and model merges of the best performing checkpoints. The training process for fine-tuning was approximately 3200 GPU hours, on a single node of 8x H100-80GB GPUs.
## Data
Llama-SEA-LION-v3-70B-IT was trained on a wide range of synthetic instructions, alongside publicly available instructions hand-curated by the team with the assistance of native speakers. In addition, special care was taken to ensure that the datasets used had commercially permissive licenses through verification with the original data source.
## Call for Contributions
We encourage researchers, developers, and language enthusiasts to actively contribute to the enhancement and expansion of SEA-LION. Contributions can involve identifying and reporting bugs, sharing pre-training, instruction, and preference data, improving documentation usability, proposing and implementing new model evaluation tasks and metrics, or training versions of the model in additional Southeast Asian languages. Join us in shaping the future of SEA-LION by sharing your expertise and insights to make these models more accessible, accurate, and versatile. Please check out our GitHub for further information on the call for contributions.
## The Team
Chan Adwin, Cheng Nicholas, Choa Esther, Huang Yuli, Lau Wayne, Lee Chwan Ren, Leong Wai Yi, Leong Wei Qi, Limkonchotiwat Peerat, Liu Bing Jie Darius, Montalan Jann Railey, Ng Boon Cheong Raymond, Ngui Jian Gang, Nguyen Thanh Ngan, Ong Brandon, Ong Tat-Wee David, Ong Zhi Hao, Rengarajan Hamsawardhini, Siow Bryan, Susanto Yosephine, Tai Ngee Chia, Tan Choon Meng, Teng Walter, Teo Eng Sipp Leslie, Teo Wei Yi, Tjhi William, Venkatadri Hulagadri Adithya, Yeo Yeow Tong, Yong Xianbin
## Acknowledgements
[AI Singapore](https://aisingapore.org/) is a national programme supported by the National Research Foundation, Singapore and hosted by the National University of Singapore. Any opinions, findings and conclusions or recommendations expressed in this material are those of the author(s) and do not reflect the views of the National Research Foundation or the National University of Singapore.
## Contact
For more info, please contact us using this [SEA-LION Inquiry Form](https://forms.gle/sLCUVb95wmGf43hi6)
[Link to SEA-LION's GitHub repository](https://github.com/aisingapore/sealion)
## Disclaimer
This is the repository for the commercial instruction-tuned model.
The model has _not_ been aligned for safety.
Developers and users should perform their own safety fine-tuning and related security measures.
In no event shall the authors be held liable for any claims, damages, or other liabilities arising from the use of the released weights and codes. |
Logikisto/Pulsar-50m | Logikisto | 2025-04-25T03:21:42Z | 0 | 0 | null | [
"mamba2",
"text2text-generation",
"fr",
"dataset:Logikisto/FR_2.4k_Q-R_5k",
"license:gpl-3.0",
"region:us"
]
| text2text-generation | 2025-04-21T18:59:20Z | ---
license: gpl-3.0
datasets:
- Logikisto/FR_2.4k_Q-R_5k
language:
- fr
pipeline_tag: text2text-generation
---
`Pulsar-50m` is a Mamba-2-based language model designed for conversational tasks in French, with a focus on learning and experimentation. This model is part of a series of three 50-million-parameter models developed to explore efficient training and inference on consumer-grade hardware.
It is intended for initial training and fine-tuning on French conversational datasets, with plans for further training on more generic French data in the future.
---
**Currently, research is being done on Mamba-2; training this model is not a priority.**
---
## Model Details
- **Architecture**: Mamba-2
- **Total Parameters**: 50,663,057
- **Core Model Parameters** (excluding embeddings and output head): 24,929,937
- Mamba-2 Blocks (48 blocks): 20,735,633 parameters
- Feed-Forward Networks (8 FFNs, every 6 blocks): 4,194,304 parameters
- **Embedding Parameters**: 12,866,560 (vocab_size × d_model)
- **Output Head Parameters**: Same as embedding, 12,866,560
- **Hyperparameters**:
- d_model : 256
- d_state : 64
- expand : 2
- Number of Blocks: 48
- Feed-Forward Networks: 8 (placed every 6 Mamba-2 blocks, 4x expansion) like Mamba-2.7b
- **Vocabulary**: [Custom GPT-2 tokenizer with 50,260 tokens](https://github.com/SyntaxError4Life/Structured_GPT-2_tokenizer)
- **Training Sequence Length**: 1,000 tokens
- **Inference Sequence Length**: Up to 2,000 tokens (no positional embedding, so context is theoretically unbounded)
- **Precision**: FP32 (no mixed precision)
## Hardware
The model is developed and trained on a personal server with the following specifications:
- **CPU**: AMD Ryzen 9 7950X (16 cores, 32 threads)
- **RAM**: 64 GB DDR5 (5600 MT/s)
- **GPU**: NVIDIA GeForce RTX 4080 (16 GB GDDR6X VRAM, CUDA 12.7, driver 565.77)
- **Libraries**:
- PyTorch 2.4
- `mamba_ssm` (for Mamba-2 blocks)
- `adam_mini` (optimizer with reduced gradient memory)
## Training Details
- **Objective**: Pre-training from scratch, followed by fine-tuning on conversational tasks
- **Training Strategy**: Multi-turn conversational learning, where each conversation is processed in cycles (`u1 -> a1`, then `u1 + a1 + u2 -> a2`, etc.; see the sketch below)
- **Constraints**:
- Maximum 8-10 GB VRAM usage for training
- **Optimizer**: Adam-mini, leveraging reduced memory for gradients
- **Future Plans**: After initial training on the FR_2.4k_Q-R_5k dataset, the model will be further trained on larger, more generic French datasets to improve generalization.
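To make the multi-turn cycling concrete, here is a small sketch of how one conversation expands into cumulative (context, target) training pairs (the function and variable names are hypothetical):

```python
def expand_conversation(turns):
    """Expand [(u1, a1), (u2, a2), ...] into cumulative pairs:
    u1 -> a1, then u1 + a1 + u2 -> a2, and so on."""
    pairs, context = [], ""
    for user_msg, assistant_msg in turns:
        context += user_msg
        pairs.append((context, assistant_msg))
        context += assistant_msg
    return pairs

# Example: a two-turn French conversation
print(expand_conversation([
    ("Bonjour ! ", "Salut, comment puis-je aider ? "),
    ("Quelle heure est-il ? ", "Il est midi."),
]))
```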
## Availability
The model weights and training code will be made publicly available on Hugging Face upon completion of the initial training phase. Stay tuned for updates!
## Contact
For questions or collaboration, reach out via the Hugging Face model page or the associated repository.
---
*Note*: This model is a work in progress, developed for educational and experimental purposes. Contributions and feedback are welcome! |
LUcowork/e5_stage2 | LUcowork | 2025-04-25T03:19:36Z | 0 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"xlm-roberta",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:128997",
"loss:MultipleNegativesRankingLoss",
"dataset:hobbang/stage2-dataset",
"arxiv:1908.10084",
"arxiv:1705.00652",
"base_model:suhwan3/e5-step1",
"base_model:finetune:suhwan3/e5-step1",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
]
| sentence-similarity | 2025-04-25T03:17:29Z | ---
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:128997
- loss:MultipleNegativesRankingLoss
base_model: suhwan3/e5-step1
widget:
- source_sentence: The Global X S&P 500 Risk Managed Income ETF seeks to track the
Cboe S&P 500 Risk Managed Income Index by investing at least 80% of its assets
in index securities. The index's strategy involves holding the underlying stocks
of the S&P 500 Index while applying an options collar, specifically selling at-the-money
covered call options and buying monthly 5% out-of-the-money put options corresponding
to the portfolio's value. This approach aims to generate income, ideally resulting
in a net credit from the options premiums, and provide risk management, though
selling at-the-money calls inherently caps the fund's potential for upside participation.
sentences:
- Nasdaq, Inc. operates as a technology company that serves capital markets and
other industries worldwide. The Market Technology segment includes anti financial
crime technology business, which offers Nasdaq Trade Surveillance, a SaaS solution
for brokers and other market participants to assist them in complying with market
rules, regulations, and internal market surveillance policies; Nasdaq Automated
Investigator, a cloud-deployed anti-money laundering tool; and Verafin, a SaaS
technology provider of anti-financial crime management solutions. This segment
also handles assets, such as cash equities, equity derivatives, currencies, interest-bearing
securities, commodities, energy products, and digital currencies. The Investment
Intelligence segment sells and distributes historical and real-time market data;
develops and licenses Nasdaq-branded indexes and financial products; and provides
investment insights and workflow solutions. The Corporate Platforms segment operates
listing platforms; and offers investor relations intelligence and governance solutions.
As of December 31, 2021, it had 4,178 companies listed securities on The Nasdaq
Stock Market, including 1,632 listings on The Nasdaq Global Select Market; 1,169
on The Nasdaq Global Market; and 1,377 on The Nasdaq Capital Market. The Market
Services segment includes equity derivative trading and clearing, cash equity
trading, fixed income and commodities trading and clearing, and trade management
service businesses. This segment operates various exchanges and other marketplace
facilities across various asset classes, which include derivatives, commodities,
cash equity, debt, structured products, and exchange traded products; and provides
broker, clearing, settlement, and central depository services. The company was
formerly known as The NASDAQ OMX Group, Inc. and changed its name to Nasdaq, Inc.
in September 2015. Nasdaq, Inc. was founded in 1971 and is headquartered in New
York, New York.
- Jabil Inc. provides manufacturing services and solutions worldwide. The company
operates in two segments, Electronics Manufacturing Services and Diversified Manufacturing
Services. It offers electronics design, production, and product management services.
The company provides electronic design services, such as application-specific
integrated circuit design, firmware development, and rapid prototyping services;
and designs plastic and metal enclosures that include the electro-mechanics, such
as the printed circuit board assemblies (PCBA). It also specializes in the three-dimensional
mechanical design comprising the analysis of electronic, electro-mechanical, and
optical assemblies, as well as offers various industrial design, mechanism development,
and tooling management services. In addition, the company provides computer-assisted
design services consisting of PCBA design, as well as PCBA design validation and
verification services; and other consulting services, such as the generation of
a bill of materials, approved vendor list, and assembly equipment configuration
for various PCBA designs. Further, it offers product and process validation services,
such as product system, product safety, regulatory compliance, and reliability
tests, as well as manufacturing test solution development services. Additionally,
the company provides systems assembly, test, direct-order fulfillment, and configure-to-order
services. It serves 5G, wireless and cloud, digital print and retail, industrial
and semi-cap, networking and storage, automotive and transportation, connected
devices, healthcare and packaging, and mobility industries. The company was formerly
known as Jabil Circuit, Inc. and changed its name to Jabil Inc. in June 2017.
Jabil Inc. was founded in 1966 and is headquartered in Saint Petersburg, Florida.
- 'Realty Income, The Monthly Dividend Company, is an S&P 500 company dedicated
to providing stockholders with dependable monthly income. The company is structured
as a REIT, and its monthly dividends are supported by the cash flow from over
6,500 real estate properties owned under long-term lease agreements with our commercial
clients. To date, the company has declared 608 consecutive common stock monthly
dividends throughout its 52-year operating history and increased the dividend
109 times since Realty Income''s public listing in 1994 (NYSE: O). The company
is a member of the S&P 500 Dividend Aristocrats index. Additional information
about the company can be obtained from the corporate website at www.realtyincome.com.'
- source_sentence: The iShares U.S. Telecommunications ETF (IYZ) seeks to track the
investment results of the Russell 1000 Telecommunications RIC 22.5/45 Capped Index,
which measures the performance of the U.S. telecommunications sector of the U.S.
equity market as defined by FTSE Russell. This market-cap-weighted index includes
large-cap companies involved in telecom equipment and service provision and is
subject to regulatory capping that limits single holdings to 22.5% and aggregate
large holdings to 45%. The fund generally invests at least 80% of its assets in
the component securities of its underlying index and is non-diversified; the underlying
index is rebalanced quarterly.
sentences:
- Kanzhun Limited operates an online recruitment platform, BOSS Zhipin in the People's
Republic of China. Its recruitment platform assists the recruitment process between
job seekers and employers for enterprises, and corporations. The company was founded
in 2013 and is headquartered in Beijing, the People's Republic of China.
- Frontier Communications Parent, Inc., together with its subsidiaries, provides
communications services for consumer and business customers in 25 states in the
United States. It offers data and Internet, voice, video, and other services.
The company was formerly known as Frontier Communications Corporation and changed
its name to Frontier Communications Parent, Inc. in April 2021. Frontier Communications
Parent, Inc. was incorporated in 1935 and is based in Norwalk, Connecticut.
- Broadcom Inc. designs, develops, and supplies various semiconductor devices with
a focus on complex digital and mixed signal complementary metal oxide semiconductor
based devices and analog III-V based products worldwide. The company operates
in two segments, Semiconductor Solutions and Infrastructure Software. It provides
set-top box system-on-chips (SoCs); cable, digital subscriber line, and passive
optical networking central office/consumer premise equipment SoCs; wireless local
area network access point SoCs; Ethernet switching and routing merchant silicon
products; embedded processors and controllers; serializer/deserializer application
specific integrated circuits; optical and copper, and physical layers; and fiber
optic transmitter and receiver components. The company also offers RF front end
modules, filters, and power amplifiers; Wi-Fi, Bluetooth, and global positioning
system/global navigation satellite system SoCs; custom touch controllers; serial
attached small computer system interface, and redundant array of independent disks
controllers and adapters; peripheral component interconnect express switches;
fiber channel host bus adapters; read channel based SoCs; custom flash controllers;
preamplifiers; and optocouplers, industrial fiber optics, and motion control encoders
and subsystems. Its products are used in various applications, including enterprise
and data center networking, home connectivity, set-top boxes, broadband access,
telecommunication equipment, smartphones and base stations, data center servers
and storage systems, factory automation, power generation and alternative energy
systems, and electronic displays. Broadcom Inc. was incorporated in 2018 and is
headquartered in San Jose, California.
- source_sentence: The Xtrackers MSCI Emerging Markets ESG Leaders Equity ETF tracks
an index of large- and mid-cap emerging market stocks that emphasize strong environmental,
social, and governance (ESG) characteristics. The index first excludes companies
involved in specific controversial industries. From the remaining universe, it
ranks stocks based on MSCI ESG scores, including a controversy component, to identify
and select the highest-ranking ESG leaders, effectively screening out ESG laggards.
To maintain market-like country and sector weights, the index selects the top
ESG-scoring stocks within each sector until a specified market capitalization
threshold is reached. Selected stocks are then weighted by market capitalization
within their respective sectors. The fund typically invests over 80% of its assets
in the securities of this underlying index.
sentences:
- Info Edge (India) Limited operates as an online classifieds company in the areas
of recruitment, matrimony, real estate, and education and related services in
India and internationally. It operates through Recruitment Solutions, 99acres,
and Other segments. The company offers recruitment services through naukri.com,
an online job website for job seekers and corporate customers, including hiring
consultants; firstnaukri.com, a job search network for college students and recent
graduates; naukrigulf.com, a website catering to Gulf markets; and quadranglesearch.com,
a site that provides off-line placement services to middle and senior management,
as well as Highorbit/iimjobs.com, zwayam.com, hirist.com, doselect.com, ambitionbox.com,
bigshyft.com, and jobhai.com. It also provides 99acres.com, which offers listing
of properties for sale, purchase, and rent; Jeevansathi.com, an online matrimonial
classifieds services; and shiksha.com, an education classified website that helps
students to decide their undergraduate and postgraduate options by providing useful
information on careers, exams, colleges, and courses, as well as operates multiple
dating platforms on the web through its mobile apps Aisle, Anbe, Arike and HeyDil.
In addition, the company provides internet, computer, and electronic and related
services; and software development, consultancy, technical support for consumer
companies, SAAS providers, and other services in the field of information technology
and product development, as well as brokerage services in the real estate sector.
Further, it acts as an investment adviser and manager, financial and management
consultant, and sponsor of alternative investment funds, as well as provides advertising
space for colleges and universities on www.shiksha.com. Info Edge (India) Limited
was incorporated in 1995 and is based in Noida, India.
- China Overseas Land & Investment Limited, an investment holding company, engages
in the property development and investment, and other operations in the People's
Republic of China and the United Kingdom. The company operates through Property
Development, Property Investment, and Other Operations segments. It is involved
in the investment, development, and rental of residential and commercial properties;
issuance of guaranteed notes and corporate bonds; and hotel operation activities.
The company also provides construction and building design consultancy services.
In addition, it engages in the investment and financing, land consolidation, regional
planning, engineering construction, industrial import, commercial operation, and
property management. Further, the company offers urban services, including office
buildings, flexible working space, shopping malls, star-rated hotels, long-term
rental apartments, logistics parks, and architectural design and construction.
The company was founded in 1979 and is based in Central, Hong Kong. China Overseas
Land & Investment Limited is a subsidiary of China Overseas Holdings Limited.
- Mastercard Incorporated, a technology company, provides transaction processing
and other payment-related products and services in the United States and internationally.
It facilitates the processing of payment transactions, including authorization,
clearing, and settlement, as well as delivers other payment-related products and
services. The company offers integrated products and value-added services for
account holders, merchants, financial institutions, businesses, governments, and
other organizations, such as programs that enable issuers to provide consumers
with credits to defer payments; prepaid programs and management services; commercial
credit and debit payment products and solutions; and payment products and solutions
that allow its customers to access funds in deposit and other accounts. It also
provides value-added products and services comprising cyber and intelligence solutions
for parties to transact, as well as proprietary insights, drawing on principled
use of consumer, and merchant data services. In addition, the company offers analytics,
test and learn, consulting, managed services, loyalty, processing, and payment
gateway solutions for e-commerce merchants. Further, it provides open banking
and digital identity platforms services. The company offers payment solutions
and services under the MasterCard, Maestro, and Cirrus. Mastercard Incorporated
was founded in 1966 and is headquartered in Purchase, New York.
- source_sentence: The Global X S&P 500 Risk Managed Income ETF seeks to track the
Cboe S&P 500 Risk Managed Income Index by investing at least 80% of its assets
in index securities. The index's strategy involves holding the underlying stocks
of the S&P 500 Index while applying an options collar, specifically selling at-the-money
covered call options and buying monthly 5% out-of-the-money put options corresponding
to the portfolio's value. This approach aims to generate income, ideally resulting
in a net credit from the options premiums, and provide risk management, though
selling at-the-money calls inherently caps the fund's potential for upside participation.
sentences:
- Incyte Corporation, a biopharmaceutical company, focuses on the discovery, development,
and commercialization of proprietary therapeutics in the United States and internationally.
The company offers JAKAFI, a drug for the treatment of myelofibrosis and polycythemia
vera; PEMAZYRE, a fibroblast growth factor receptor kinase inhibitor that act
as oncogenic drivers in various liquid and solid tumor types; and ICLUSIG, a kinase
inhibitor to treat chronic myeloid leukemia and philadelphia-chromosome positive
acute lymphoblastic leukemia. Its clinical stage products include ruxolitinib,
a steroid-refractory chronic graft-versus-host-diseases (GVHD); itacitinib, which
is in Phase II/III clinical trial to treat naive chronic GVHD; and pemigatinib
for treating bladder cancer, cholangiocarcinoma, myeloproliferative syndrome,
and tumor agnostic. In addition, the company engages in developing Parsaclisib,
which is in Phase II clinical trial for follicular lymphoma, marginal zone lymphoma,
and mantel cell lymphoma. Additionally, it develops Retifanlimab that is in Phase
II clinical trials for MSI-high endometrial cancer, merkel cell carcinoma, and
anal cancer, as well as in Phase II clinical trials for patients with non-small
cell lung cancer. It has collaboration agreements with Novartis International
Pharmaceutical Ltd.; Eli Lilly and Company; Agenus Inc.; Calithera Biosciences,
Inc; MacroGenics, Inc.; Merus N.V.; Syros Pharmaceuticals, Inc.; Innovent Biologics,
Inc.; Zai Lab Limited; Cellenkos, Inc.; and Nimble Therapeutics, as well as clinical
collaborations with MorphoSys AG and Xencor, Inc. to investigate the combination
of tafasitamab, plamotamab, and lenalidomide in patients with relapsed or refractory
diffuse large B-cell lymphoma, and relapsed or refractory follicular lymphoma.
The company was incorporated in 1991 and is headquartered in Wilmington, Delaware.
- Omnicom Group Inc., together with its subsidiaries, provides advertising, marketing,
and corporate communications services. It provides a range of services in the
areas of advertising, customer relationship management, public relations, and
healthcare. The company's services include advertising, branding, content marketing,
corporate social responsibility consulting, crisis communications, custom publishing,
data analytics, database management, digital/direct marketing, digital transformation,
entertainment marketing, experiential marketing, field marketing, financial/corporate
business-to-business advertising, graphic arts/digital imaging, healthcare marketing
and communications, and in-store design services. Its services also comprise interactive
marketing, investor relations, marketing research, media planning and buying,
merchandising and point of sale, mobile marketing, multi-cultural marketing, non-profit
marketing, organizational communications, package design, product placement, promotional
marketing, public affairs, retail marketing, sales support, search engine marketing,
shopper marketing, social media marketing, and sports and event marketing services.
It operates in the United States, Canada, Puerto Rico, South America, Mexico,
Europe, the Middle East, Africa, Australia, Greater China, India, Japan, Korea,
New Zealand, Singapore, and other Asian countries. The company was incorporated
in 1944 and is based in New York, New York.
- NetApp, Inc. provides cloud-led and data-centric services to manage and share
data on-premises, and private and public clouds worldwide. It operates in two
segments, Hybrid Cloud and Public Could. The company offers intelligent data management
software, such as NetApp ONTAP, NetApp Snapshot, NetApp SnapCenter Backup Management,
NetApp SnapMirror Data Replication, NetApp SnapLock Data Compliance, NetApp ElementOS
software, and NetApp SANtricity software; and storage infrastructure solutions,
including NetApp All-Flash FAS series, NetApp Fabric Attached Storage, NetApp
FlexPod, NetApp E/EF series, NetApp StorageGRID, and NetApp SolidFire. It also
provides cloud storage and data services comprising NetApp Cloud Volumes ONTAP,
Azure NetApp Files, Amazon FSx for NetApp ONTAP, NetApp Cloud Volumes Service
for Google Cloud, NetApp Cloud Sync, NetApp Cloud Tiering, NetApp Cloud Backup,
NetApp Cloud Data Sense, and NetApp Cloud Volumes Edge Cache; and cloud operations
services, such as NetApp Cloud Insights, Spot Ocean Kubernetes Suite, Spot Security,
Spot Eco, and Spot CloudCheckr. In addition, the company offers application-aware
data management service under the NetApp Astra name; and professional and support
services, such as strategic consulting, professional, managed, and support services.
Further, it provides assessment, design, implementation, and migration services.
The company serves the energy, financial service, government, technology, internet,
life science, healthcare service, manufacturing, media, entertainment, animation,
video postproduction, and telecommunication markets through a direct sales force
and an ecosystem of partners. NetApp, Inc. was incorporated in 1992 and is headquartered
in San Jose, California.
- source_sentence: The Global X S&P 500 Risk Managed Income ETF seeks to track the
Cboe S&P 500 Risk Managed Income Index by investing at least 80% of its assets
in index securities. The index's strategy involves holding the underlying stocks
of the S&P 500 Index while applying an options collar, specifically selling at-the-money
covered call options and buying monthly 5% out-of-the-money put options corresponding
to the portfolio's value. This approach aims to generate income, ideally resulting
in a net credit from the options premiums, and provide risk management, though
selling at-the-money calls inherently caps the fund's potential for upside participation.
sentences:
- Walgreens Boots Alliance, Inc. operates as a pharmacy-led health and beauty retail
company. It operates through two segments, the United States and International.
The United States segment sells prescription drugs and an assortment of retail
products, including health, wellness, beauty, personal care, consumable, and general
merchandise products through its retail drugstores. It also provides central specialty
pharmacy services and mail services. As of August 31, 2021, this segment operated
8,965 retail stores under the Walgreens and Duane Reade brands in the United States;
and five specialty pharmacies. The International segment sells prescription drugs;
and health and wellness, beauty, personal care, and other consumer products through
its pharmacy-led health and beauty retail stores and optical practices, as well
as through boots.com and an integrated mobile application. It also engages in
pharmaceutical wholesaling and distribution business in Germany. As of August
31, 2021, this segment operated 4,031 retail stores under the Boots, Benavides,
and Ahumada in the United Kingdom, Thailand, Norway, the Republic of Ireland,
the Netherlands, Mexico, and Chile; and 548 optical practices, including 160 on
a franchise basis. Walgreens Boots Alliance, Inc. was founded in 1901 and is based
in Deerfield, Illinois.
- Middlesex Water Company owns and operates regulated water utility and wastewater
systems. It operates in two segments, Regulated and Non-Regulated. The Regulated
segment collects, treats, and distributes water on a retail and wholesale basis
to residential, commercial, industrial, and fire protection customers, as well
as provides regulated wastewater systems in New Jersey and Delaware. The Non-Regulated
segment provides non-regulated contract services for the operation and maintenance
of municipal and private water and wastewater systems in New Jersey and Delaware.
The company was incorporated in 1896 and is headquartered in Iselin, New Jersey.
- Liberty Broadband Corporation engages in the communications businesses. It operates
through GCI Holdings and Charter segments. The GCI Holdings segment provides a
range of wireless, data, video, voice, and managed services to residential customers,
businesses, governmental entities, and educational and medical institutions primarily
in Alaska under the GCI brand. The Charter segment offers subscription-based video
services comprising video on demand, high-definition television, and digital video
recorder service; local and long-distance calling, voicemail, call waiting, caller
ID, call forwarding, and other voice services, as well as international calling
services; and Spectrum TV. It also provides internet services, including an in-home
Wi-Fi product that provides customers with high-performance wireless routers and
managed Wi-Fi services; advanced community Wi-Fi; mobile internet; and a security
suite that offers protection against computer viruses and spyware. In addition,
this segment offers internet access, data networking, fiber connectivity to cellular
towers and office buildings, video entertainment, and business telephone services;
advertising services on cable television networks and digital outlets; and operates
regional sports and news networks. Liberty Broadband Corporation was incorporated
in 2014 and is based in Englewood, Colorado.
datasets:
- hobbang/stage2-dataset
pipeline_tag: sentence-similarity
library_name: sentence-transformers
---
# SentenceTransformer based on suhwan3/e5-step1
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [suhwan3/e5-step1](https://huggingface.co/suhwan3/e5-step1) on the [stage2-dataset](https://huggingface.co/datasets/hobbang/stage2-dataset) dataset. It maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [suhwan3/e5-step1](https://huggingface.co/suhwan3/e5-step1) <!-- at revision 9208a43bc7f1394fe52e954e6a6661be1c113ebc -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 1024 dimensions
- **Similarity Function:** Cosine Similarity
- **Training Dataset:**
- [stage2-dataset](https://huggingface.co/datasets/hobbang/stage2-dataset)
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: XLMRobertaModel
(1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub (replace "sentence_transformers_model_id" with this model's repository id)
model = SentenceTransformer("sentence_transformers_model_id")
# Run inference
sentences = [
"The Global X S&P 500 Risk Managed Income ETF seeks to track the Cboe S&P 500 Risk Managed Income Index by investing at least 80% of its assets in index securities. The index's strategy involves holding the underlying stocks of the S&P 500 Index while applying an options collar, specifically selling at-the-money covered call options and buying monthly 5% out-of-the-money put options corresponding to the portfolio's value. This approach aims to generate income, ideally resulting in a net credit from the options premiums, and provide risk management, though selling at-the-money calls inherently caps the fund's potential for upside participation.",
'Walgreens Boots Alliance, Inc. operates as a pharmacy-led health and beauty retail company. It operates through two segments, the United States and International. The United States segment sells prescription drugs and an assortment of retail products, including health, wellness, beauty, personal care, consumable, and general merchandise products through its retail drugstores. It also provides central specialty pharmacy services and mail services. As of August 31, 2021, this segment operated 8,965 retail stores under the Walgreens and Duane Reade brands in the United States; and five specialty pharmacies. The International segment sells prescription drugs; and health and wellness, beauty, personal care, and other consumer products through its pharmacy-led health and beauty retail stores and optical practices, as well as through boots.com and an integrated mobile application. It also engages in pharmaceutical wholesaling and distribution business in Germany. As of August 31, 2021, this segment operated 4,031 retail stores under the Boots, Benavides, and Ahumada in the United Kingdom, Thailand, Norway, the Republic of Ireland, the Netherlands, Mexico, and Chile; and 548 optical practices, including 160 on a franchise basis. Walgreens Boots Alliance, Inc. was founded in 1901 and is based in Deerfield, Illinois.',
'Liberty Broadband Corporation engages in the communications businesses. It operates through GCI Holdings and Charter segments. The GCI Holdings segment provides a range of wireless, data, video, voice, and managed services to residential customers, businesses, governmental entities, and educational and medical institutions primarily in Alaska under the GCI brand. The Charter segment offers subscription-based video services comprising video on demand, high-definition television, and digital video recorder service; local and long-distance calling, voicemail, call waiting, caller ID, call forwarding, and other voice services, as well as international calling services; and Spectrum TV. It also provides internet services, including an in-home Wi-Fi product that provides customers with high-performance wireless routers and managed Wi-Fi services; advanced community Wi-Fi; mobile internet; and a security suite that offers protection against computer viruses and spyware. In addition, this segment offers internet access, data networking, fiber connectivity to cellular towers and office buildings, video entertainment, and business telephone services; advertising services on cable television networks and digital outlets; and operates regional sports and news networks. Liberty Broadband Corporation was incorporated in 2014 and is based in Englewood, Colorado.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 1024]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### stage2-dataset
* Dataset: [stage2-dataset](https://huggingface.co/datasets/hobbang/stage2-dataset) at [cd393c2](https://huggingface.co/datasets/hobbang/stage2-dataset/tree/cd393c24f4017971e95aa6f73736f2fcb45e30a0)
* Size: 128,997 training samples
* Columns: <code>anchor</code> and <code>positive</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive |
|:--------|:--------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 117 tokens</li><li>mean: 166.66 tokens</li><li>max: 210 tokens</li></ul> | <ul><li>min: 42 tokens</li><li>mean: 280.1 tokens</li><li>max: 512 tokens</li></ul> |
* Samples:
| anchor | positive |
|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| <code>The Invesco Financial Preferred ETF (PGF) seeks to track the ICE Exchange-Listed Fixed Rate Financial Preferred Securities Index, primarily by investing at least 90% of its total assets in the securities comprising the index. The underlying index is market capitalization weighted and designed to track the performance of exchange-listed, fixed rate, U.S. dollar denominated preferred securities, including functionally equivalent instruments, issued by U.S. financial companies. PGF provides a concentrated portfolio exclusively focused on financial-sector preferred securities and is considered non-diversified, holding both investment- and non-investment-grade securities within this focus.</code> | <code>JPMorgan Chase & Co. operates as a financial services company worldwide. It operates through four segments: Consumer & Community Banking (CCB), Corporate & Investment Bank (CIB), Commercial Banking (CB), and Asset & Wealth Management (AWM). The CCB segment offers s deposit, investment and lending products, payments, and services to consumers; lending, deposit, and cash management and payment solutions to small businesses; mortgage origination and servicing activities; residential mortgages and home equity loans; and credit card, auto loan, and leasing services. The CIB segment provides investment banking products and services, including corporate strategy and structure advisory, and equity and debt markets capital-raising services, as well as loan origination and syndication; payments and cross-border financing; and cash and derivative instruments, risk management solutions, prime brokerage, and research. This segment also offers securities services, including custody, fund accounting ...</code> |
| <code>The Invesco Financial Preferred ETF (PGF) seeks to track the ICE Exchange-Listed Fixed Rate Financial Preferred Securities Index, primarily by investing at least 90% of its total assets in the securities comprising the index. The underlying index is market capitalization weighted and designed to track the performance of exchange-listed, fixed rate, U.S. dollar denominated preferred securities, including functionally equivalent instruments, issued by U.S. financial companies. PGF provides a concentrated portfolio exclusively focused on financial-sector preferred securities and is considered non-diversified, holding both investment- and non-investment-grade securities within this focus.</code> | <code>JPMorgan Chase & Co. operates as a financial services company worldwide. It operates through four segments: Consumer & Community Banking (CCB), Corporate & Investment Bank (CIB), Commercial Banking (CB), and Asset & Wealth Management (AWM). The CCB segment offers s deposit, investment and lending products, payments, and services to consumers; lending, deposit, and cash management and payment solutions to small businesses; mortgage origination and servicing activities; residential mortgages and home equity loans; and credit card, auto loan, and leasing services. The CIB segment provides investment banking products and services, including corporate strategy and structure advisory, and equity and debt markets capital-raising services, as well as loan origination and syndication; payments and cross-border financing; and cash and derivative instruments, risk management solutions, prime brokerage, and research. This segment also offers securities services, including custody, fund accounting ...</code> |
| <code>The Invesco Financial Preferred ETF (PGF) seeks to track the ICE Exchange-Listed Fixed Rate Financial Preferred Securities Index, primarily by investing at least 90% of its total assets in the securities comprising the index. The underlying index is market capitalization weighted and designed to track the performance of exchange-listed, fixed rate, U.S. dollar denominated preferred securities, including functionally equivalent instruments, issued by U.S. financial companies. PGF provides a concentrated portfolio exclusively focused on financial-sector preferred securities and is considered non-diversified, holding both investment- and non-investment-grade securities within this focus.</code> | <code>The Allstate Corporation, together with its subsidiaries, provides property and casualty, and other insurance products in the United States and Canada. The company operates through Allstate Protection; Protection Services; Allstate Health and Benefits; and Run-off Property-Liability segments. The Allstate Protection segment offers private passenger auto and homeowners insurance; other personal lines products; and commercial lines products under the Allstate and Encompass brand names. The Protection Services segment provides consumer product protection plans and related technical support for mobile phones, consumer electronics, furniture, and appliances; finance and insurance products, including vehicle service contracts, guaranteed asset protection waivers, road hazard tire and wheel, and paint and fabric protection; towing, jump-start, lockout, fuel delivery, and tire change services; device and mobile data collection services; data and analytic solutions using automotive telematics i...</code> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
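For reference, a minimal sketch of how a loss with these parameters might be constructed in Sentence Transformers (the base model id follows this card; this is illustrative, not the exact training code):
```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.losses import MultipleNegativesRankingLoss
from sentence_transformers.util import cos_sim

# Base model as reported in this card.
model = SentenceTransformer("suhwan3/e5-step1")

# Each (anchor, positive) pair uses the other in-batch positives as negatives;
# scale=20.0 and cosine similarity match the parameters listed above.
loss = MultipleNegativesRankingLoss(model, scale=20.0, similarity_fct=cos_sim)
```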
### Evaluation Dataset
#### stage2-dataset
* Dataset: [stage2-dataset](https://huggingface.co/datasets/hobbang/stage2-dataset) at [cd393c2](https://huggingface.co/datasets/hobbang/stage2-dataset/tree/cd393c24f4017971e95aa6f73736f2fcb45e30a0)
* Size: 16,944 evaluation samples
* Columns: <code>anchor</code> and <code>positive</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive |
|:--------|:--------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 161 tokens</li><li>mean: 176.19 tokens</li><li>max: 249 tokens</li></ul> | <ul><li>min: 47 tokens</li><li>mean: 294.34 tokens</li><li>max: 512 tokens</li></ul> |
* Samples:
| anchor | positive |
|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| <code>The Global X S&P 500 Risk Managed Income ETF seeks to track the Cboe S&P 500 Risk Managed Income Index by investing at least 80% of its assets in index securities. The index's strategy involves holding the underlying stocks of the S&P 500 Index while applying an options collar, specifically selling at-the-money covered call options and buying monthly 5% out-of-the-money put options corresponding to the portfolio's value. This approach aims to generate income, ideally resulting in a net credit from the options premiums, and provide risk management, though selling at-the-money calls inherently caps the fund's potential for upside participation.</code> | <code>Apple Inc. designs, manufactures, and markets smartphones, personal computers, tablets, wearables, and accessories worldwide. The company offers iPhone, a line of smartphones; Mac, a line of personal computers; iPad, a line of multi-purpose tablets; and wearables, home, and accessories comprising AirPods, Apple TV, Apple Watch, Beats products, and HomePod. It also provides AppleCare support and cloud services; and operates various platforms, including the App Store that allow customers to discover and download applications and digital content, such as books, music, video, games, and podcasts, as well as advertising services include third-party licensing arrangements and its own advertising platforms. In addition, the company offers various subscription-based services, such as Apple Arcade, a game subscription service; Apple Fitness+, a personalized fitness service; Apple Music, which offers users a curated listening experience with on-demand radio stations; Apple News+, a subscription ...</code> |
| <code>The Global X S&P 500 Risk Managed Income ETF seeks to track the Cboe S&P 500 Risk Managed Income Index by investing at least 80% of its assets in index securities. The index's strategy involves holding the underlying stocks of the S&P 500 Index while applying an options collar, specifically selling at-the-money covered call options and buying monthly 5% out-of-the-money put options corresponding to the portfolio's value. This approach aims to generate income, ideally resulting in a net credit from the options premiums, and provide risk management, though selling at-the-money calls inherently caps the fund's potential for upside participation.</code> | <code>Microsoft Corporation develops, licenses, and supports software, services, devices, and solutions worldwide. The company operates in three segments: Productivity and Business Processes, Intelligent Cloud, and More Personal Computing. The Productivity and Business Processes segment offers Office, Exchange, SharePoint, Microsoft Teams, Office 365 Security and Compliance, Microsoft Viva, and Skype for Business; Skype, Outlook.com, OneDrive, and LinkedIn; and Dynamics 365, a set of cloud-based and on-premises business solutions for organizations and enterprise divisions. The Intelligent Cloud segment licenses SQL, Windows Servers, Visual Studio, System Center, and related Client Access Licenses; GitHub that provides a collaboration platform and code hosting service for developers; Nuance provides healthcare and enterprise AI solutions; and Azure, a cloud platform. It also offers enterprise support, Microsoft consulting, and nuance professional services to assist customers in developing, de...</code> |
| <code>The Global X S&P 500 Risk Managed Income ETF seeks to track the Cboe S&P 500 Risk Managed Income Index by investing at least 80% of its assets in index securities. The index's strategy involves holding the underlying stocks of the S&P 500 Index while applying an options collar, specifically selling at-the-money covered call options and buying monthly 5% out-of-the-money put options corresponding to the portfolio's value. This approach aims to generate income, ideally resulting in a net credit from the options premiums, and provide risk management, though selling at-the-money calls inherently caps the fund's potential for upside participation.</code> | <code>NVIDIA Corporation provides graphics, and compute and networking solutions in the United States, Taiwan, China, and internationally. The company's Graphics segment offers GeForce GPUs for gaming and PCs, the GeForce NOW game streaming service and related infrastructure, and solutions for gaming platforms; Quadro/NVIDIA RTX GPUs for enterprise workstation graphics; vGPU software for cloud-based visual and virtual computing; automotive platforms for infotainment systems; and Omniverse software for building 3D designs and virtual worlds. Its Compute & Networking segment provides Data Center platforms and systems for AI, HPC, and accelerated computing; Mellanox networking and interconnect solutions; automotive AI Cockpit, autonomous driving development agreements, and autonomous vehicle solutions; cryptocurrency mining processors; Jetson for robotics and other embedded platforms; and NVIDIA AI Enterprise and other software. The company's products are used in gaming, professional visualizat...</code> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: steps
- `per_device_train_batch_size`: 32
- `per_device_eval_batch_size`: 16
- `learning_rate`: 3e-05
- `num_train_epochs`: 1
- `warmup_ratio`: 0.1
- `bf16`: True
- `dataloader_drop_last`: True
- `load_best_model_at_end`: True
- `batch_sampler`: no_duplicates
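A rough sketch of how a run with these non-default hyperparameters might be assembled with the Sentence Transformers trainer is shown below; the output directory and evaluation split name are assumptions rather than the exact training script:
```python
from datasets import load_dataset
from sentence_transformers import (
    SentenceTransformer,
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
)
from sentence_transformers.losses import MultipleNegativesRankingLoss
from sentence_transformers.training_args import BatchSamplers

# Base model and dataset as reported in this card.
model = SentenceTransformer("suhwan3/e5-step1")
dataset = load_dataset("hobbang/stage2-dataset")
loss = MultipleNegativesRankingLoss(model, scale=20.0)

args = SentenceTransformerTrainingArguments(
    output_dir="e5-step2",                      # assumed output path
    num_train_epochs=1,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=16,
    learning_rate=3e-5,
    warmup_ratio=0.1,
    bf16=True,
    dataloader_drop_last=True,
    eval_strategy="steps",
    load_best_model_at_end=True,
    batch_sampler=BatchSamplers.NO_DUPLICATES,  # "no_duplicates" batch sampler
)

trainer = SentenceTransformerTrainer(
    model=model,
    args=args,
    train_dataset=dataset["train"],
    eval_dataset=dataset["test"],               # evaluation split name is an assumption
    loss=loss,
)
trainer.train()
```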
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 32
- `per_device_eval_batch_size`: 16
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 3e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 1
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: True
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: True
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: True
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `tp_size`: 0
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: proportional
</details>
### Training Logs
<details><summary>Click to expand</summary>
| Epoch | Step | Training Loss | Validation Loss |
|:----------:|:--------:|:-------------:|:---------------:|
| 0.0025 | 10 | 3.2434 | - |
| 0.0050 | 20 | 3.1529 | - |
| 0.0074 | 30 | 3.1541 | - |
| 0.0099 | 40 | 3.1721 | - |
| 0.0124 | 50 | 2.8615 | - |
| 0.0149 | 60 | 2.7943 | - |
| 0.0174 | 70 | 2.8572 | - |
| 0.0198 | 80 | 2.8025 | - |
| 0.0223 | 90 | 2.7688 | - |
| 0.0248 | 100 | 2.7029 | - |
| 0.0273 | 110 | 2.6609 | - |
| 0.0298 | 120 | 2.6807 | - |
| 0.0323 | 130 | 2.5567 | - |
| 0.0347 | 140 | 2.6335 | - |
| 0.0372 | 150 | 2.6509 | - |
| 0.0397 | 160 | 2.6173 | - |
| 0.0422 | 170 | 2.5776 | - |
| 0.0447 | 180 | 2.6556 | - |
| 0.0471 | 190 | 2.5436 | - |
| 0.0496 | 200 | 2.6695 | - |
| 0.0521 | 210 | 2.6238 | - |
| 0.0546 | 220 | 2.5281 | - |
| 0.0571 | 230 | 2.5471 | - |
| 0.0595 | 240 | 2.5133 | - |
| 0.0620 | 250 | 2.515 | - |
| 0.0645 | 260 | 2.549 | - |
| 0.0670 | 270 | 2.4789 | - |
| 0.0695 | 280 | 2.529 | - |
| 0.0719 | 290 | 2.4778 | - |
| 0.0744 | 300 | 2.6365 | - |
| 0.0769 | 310 | 2.4869 | - |
| 0.0794 | 320 | 2.4804 | - |
| 0.0819 | 330 | 2.6349 | - |
| 0.0843 | 340 | 2.5421 | - |
| 0.0868 | 350 | 2.6261 | - |
| 0.0893 | 360 | 2.4998 | - |
| 0.0918 | 370 | 2.4604 | - |
| 0.0943 | 380 | 2.4391 | - |
| 0.0968 | 390 | 2.4586 | - |
| 0.0992 | 400 | 2.363 | - |
| 0.1017 | 410 | 2.4781 | - |
| 0.1042 | 420 | 2.3992 | - |
| 0.1067 | 430 | 2.5011 | - |
| 0.1092 | 440 | 2.4925 | - |
| 0.1116 | 450 | 2.4634 | - |
| 0.1141 | 460 | 2.374 | - |
| 0.1166 | 470 | 2.47 | - |
| 0.1191 | 480 | 2.3879 | - |
| 0.1216 | 490 | 2.4724 | - |
| 0.1240 | 500 | 2.3785 | - |
| 0.1265 | 510 | 2.465 | - |
| 0.1290 | 520 | 2.4031 | - |
| 0.1315 | 530 | 2.479 | - |
| 0.1340 | 540 | 2.3908 | - |
| 0.1364 | 550 | 2.424 | - |
| 0.1389 | 560 | 2.5066 | - |
| 0.1414 | 570 | 2.4195 | - |
| 0.1439 | 580 | 2.3403 | - |
| 0.1464 | 590 | 2.4056 | - |
| 0.1488 | 600 | 2.5169 | - |
| 0.1513 | 610 | 2.3982 | - |
| 0.1538 | 620 | 2.3388 | - |
| 0.1563 | 630 | 2.3661 | - |
| 0.1588 | 640 | 2.3944 | - |
| 0.1613 | 650 | 2.4447 | - |
| 0.1637 | 660 | 2.3494 | - |
| 0.1662 | 670 | 2.4022 | - |
| 0.1687 | 680 | 2.4189 | - |
| 0.1712 | 690 | 2.5578 | - |
| 0.1737 | 700 | 2.3257 | - |
| 0.1761 | 710 | 2.3886 | - |
| 0.1786 | 720 | 2.4123 | - |
| 0.1811 | 730 | 2.3356 | - |
| 0.1836 | 740 | 2.3251 | - |
| 0.1861 | 750 | 2.3763 | - |
| 0.1885 | 760 | 2.3461 | - |
| 0.1910 | 770 | 2.3906 | - |
| 0.1935 | 780 | 2.3079 | - |
| 0.1960 | 790 | 2.3625 | - |
| 0.1985 | 800 | 2.407 | - |
| 0.2009 | 810 | 2.4349 | - |
| 0.2034 | 820 | 2.6694 | - |
| 0.2059 | 830 | 2.4116 | - |
| 0.2084 | 840 | 2.3552 | - |
| 0.2109 | 850 | 2.4232 | - |
| 0.2133 | 860 | 2.455 | - |
| 0.2158 | 870 | 2.331 | - |
| 0.2183 | 880 | 2.3231 | - |
| 0.2208 | 890 | 2.3441 | - |
| 0.2233 | 900 | 2.2612 | - |
| 0.2258 | 910 | 2.2744 | - |
| 0.2282 | 920 | 2.2202 | - |
| 0.2307 | 930 | 2.3144 | - |
| 0.2332 | 940 | 2.2821 | - |
| 0.2357 | 950 | 2.3194 | - |
| 0.2382 | 960 | 2.4394 | - |
| 0.2406 | 970 | 2.1918 | - |
| 0.2431 | 980 | 2.3256 | - |
| 0.2456 | 990 | 2.3285 | - |
| 0.2481 | 1000 | 2.3288 | 1.9891 |
| 0.2506 | 1010 | 2.3462 | - |
| 0.2530 | 1020 | 2.3088 | - |
| 0.2555 | 1030 | 2.215 | - |
| 0.2580 | 1040 | 2.3241 | - |
| 0.2605 | 1050 | 2.2073 | - |
| 0.2630 | 1060 | 2.1959 | - |
| 0.2654 | 1070 | 2.37 | - |
| 0.2679 | 1080 | 2.3663 | - |
| 0.2704 | 1090 | 2.2008 | - |
| 0.2729 | 1100 | 2.3766 | - |
| 0.2754 | 1110 | 2.3042 | - |
| 0.2778 | 1120 | 2.2124 | - |
| 0.2803 | 1130 | 2.1839 | - |
| 0.2828 | 1140 | 2.2635 | - |
| 0.2853 | 1150 | 2.2726 | - |
| 0.2878 | 1160 | 2.3131 | - |
| 0.2903 | 1170 | 2.2244 | - |
| 0.2927 | 1180 | 2.2071 | - |
| 0.2952 | 1190 | 2.2722 | - |
| 0.2977 | 1200 | 2.2883 | - |
| 0.3002 | 1210 | 2.2805 | - |
| 0.3027 | 1220 | 2.268 | - |
| 0.3051 | 1230 | 2.2111 | - |
| 0.3076 | 1240 | 2.2381 | - |
| 0.3101 | 1250 | 2.3316 | - |
| 0.3126 | 1260 | 2.2579 | - |
| 0.3151 | 1270 | 2.3303 | - |
| 0.3175 | 1280 | 2.1496 | - |
| 0.3200 | 1290 | 2.2816 | - |
| 0.3225 | 1300 | 2.2676 | - |
| 0.3250 | 1310 | 2.4031 | - |
| 0.3275 | 1320 | 2.2962 | - |
| 0.3299 | 1330 | 2.357 | - |
| 0.3324 | 1340 | 2.1618 | - |
| 0.3349 | 1350 | 2.2292 | - |
| 0.3374 | 1360 | 2.3064 | - |
| 0.3399 | 1370 | 2.2085 | - |
| 0.3423 | 1380 | 2.3681 | - |
| 0.3448 | 1390 | 2.185 | - |
| 0.3473 | 1400 | 2.2346 | - |
| 0.3498 | 1410 | 2.3735 | - |
| 0.3523 | 1420 | 2.3221 | - |
| 0.3548 | 1430 | 2.3357 | - |
| 0.3572 | 1440 | 2.2943 | - |
| 0.3597 | 1450 | 2.0894 | - |
| 0.3622 | 1460 | 2.2957 | - |
| 0.3647 | 1470 | 2.1793 | - |
| 0.3672 | 1480 | 2.2257 | - |
| 0.3696 | 1490 | 2.2414 | - |
| 0.3721 | 1500 | 2.1285 | - |
| 0.3746 | 1510 | 2.4221 | - |
| 0.3771 | 1520 | 2.2476 | - |
| 0.3796 | 1530 | 2.1072 | - |
| 0.3820 | 1540 | 2.2527 | - |
| 0.3845 | 1550 | 2.3188 | - |
| 0.3870 | 1560 | 2.2599 | - |
| 0.3895 | 1570 | 2.2309 | - |
| 0.3920 | 1580 | 2.2227 | - |
| 0.3944 | 1590 | 2.2546 | - |
| 0.3969 | 1600 | 2.1462 | - |
| 0.3994 | 1610 | 2.12 | - |
| 0.4019 | 1620 | 2.233 | - |
| 0.4044 | 1630 | 2.205 | - |
| 0.4068 | 1640 | 2.2024 | - |
| 0.4093 | 1650 | 2.2486 | - |
| 0.4118 | 1660 | 2.289 | - |
| 0.4143 | 1670 | 2.3016 | - |
| 0.4168 | 1680 | 2.063 | - |
| 0.4193 | 1690 | 2.1364 | - |
| 0.4217 | 1700 | 2.2191 | - |
| 0.4242 | 1710 | 2.1718 | - |
| 0.4267 | 1720 | 2.1524 | - |
| 0.4292 | 1730 | 2.2658 | - |
| 0.4317 | 1740 | 2.2978 | - |
| 0.4341 | 1750 | 2.1527 | - |
| 0.4366 | 1760 | 2.2312 | - |
| 0.4391 | 1770 | 2.2462 | - |
| 0.4416 | 1780 | 2.2673 | - |
| 0.4441 | 1790 | 2.2392 | - |
| 0.4465 | 1800 | 2.1426 | - |
| 0.4490 | 1810 | 2.3702 | - |
| 0.4515 | 1820 | 2.3869 | - |
| 0.4540 | 1830 | 2.2688 | - |
| 0.4565 | 1840 | 2.1012 | - |
| 0.4589 | 1850 | 2.1748 | - |
| 0.4614 | 1860 | 2.2232 | - |
| 0.4639 | 1870 | 2.1726 | - |
| 0.4664 | 1880 | 2.2097 | - |
| 0.4689 | 1890 | 2.2102 | - |
| 0.4713 | 1900 | 2.3145 | - |
| 0.4738 | 1910 | 2.1053 | - |
| 0.4763 | 1920 | 2.1154 | - |
| 0.4788 | 1930 | 2.1107 | - |
| 0.4813 | 1940 | 2.1472 | - |
| 0.4838 | 1950 | 2.1771 | - |
| 0.4862 | 1960 | 2.0639 | - |
| 0.4887 | 1970 | 2.0658 | - |
| 0.4912 | 1980 | 2.2208 | - |
| 0.4937 | 1990 | 2.21 | - |
| 0.4962 | 2000 | 2.2042 | 1.8790 |
| 0.4986 | 2010 | 2.1517 | - |
| 0.5011 | 2020 | 2.1699 | - |
| 0.5036 | 2030 | 2.1208 | - |
| 0.5061 | 2040 | 2.043 | - |
| 0.5086 | 2050 | 2.0806 | - |
| 0.5110 | 2060 | 2.1554 | - |
| 0.5135 | 2070 | 2.1162 | - |
| 0.5160 | 2080 | 2.0013 | - |
| 0.5185 | 2090 | 2.0849 | - |
| 0.5210 | 2100 | 2.2321 | - |
| 0.5234 | 2110 | 2.2313 | - |
| 0.5259 | 2120 | 2.0902 | - |
| 0.5284 | 2130 | 2.1391 | - |
| 0.5309 | 2140 | 2.0864 | - |
| 0.5334 | 2150 | 2.1168 | - |
| 0.5358 | 2160 | 2.1015 | - |
| 0.5383 | 2170 | 2.1222 | - |
| 0.5408 | 2180 | 2.2427 | - |
| 0.5433 | 2190 | 2.1443 | - |
| 0.5458 | 2200 | 2.1604 | - |
| 0.5483 | 2210 | 2.0717 | - |
| 0.5507 | 2220 | 2.2068 | - |
| 0.5532 | 2230 | 2.0467 | - |
| 0.5557 | 2240 | 2.121 | - |
| 0.5582 | 2250 | 2.1791 | - |
| 0.5607 | 2260 | 2.0827 | - |
| 0.5631 | 2270 | 2.1643 | - |
| 0.5656 | 2280 | 2.2075 | - |
| 0.5681 | 2290 | 2.1106 | - |
| 0.5706 | 2300 | 2.1194 | - |
| 0.5731 | 2310 | 2.2137 | - |
| 0.5755 | 2320 | 2.0811 | - |
| 0.5780 | 2330 | 2.1033 | - |
| 0.5805 | 2340 | 1.9524 | - |
| 0.5830 | 2350 | 2.1022 | - |
| 0.5855 | 2360 | 2.127 | - |
| 0.5879 | 2370 | 2.1746 | - |
| 0.5904 | 2380 | 2.1557 | - |
| 0.5929 | 2390 | 2.1646 | - |
| 0.5954 | 2400 | 2.0664 | - |
| 0.5979 | 2410 | 2.1212 | - |
| 0.6003 | 2420 | 2.173 | - |
| 0.6028 | 2430 | 2.102 | - |
| 0.6053 | 2440 | 2.0702 | - |
| 0.6078 | 2450 | 1.9177 | - |
| 0.6103 | 2460 | 2.163 | - |
| 0.6128 | 2470 | 2.0541 | - |
| 0.6152 | 2480 | 2.1842 | - |
| 0.6177 | 2490 | 2.1937 | - |
| 0.6202 | 2500 | 2.143 | - |
| 0.6227 | 2510 | 2.1004 | - |
| 0.6252 | 2520 | 2.1145 | - |
| 0.6276 | 2530 | 2.0726 | - |
| 0.6301 | 2540 | 2.065 | - |
| 0.6326 | 2550 | 2.1342 | - |
| 0.6351 | 2560 | 2.0643 | - |
| 0.6376 | 2570 | 2.0675 | - |
| 0.6400 | 2580 | 2.0014 | - |
| 0.6425 | 2590 | 2.1966 | - |
| 0.6450 | 2600 | 2.1159 | - |
| 0.6475 | 2610 | 2.0157 | - |
| 0.6500 | 2620 | 2.0803 | - |
| 0.6524 | 2630 | 2.0227 | - |
| 0.6549 | 2640 | 2.0492 | - |
| 0.6574 | 2650 | 2.1155 | - |
| 0.6599 | 2660 | 2.0301 | - |
| 0.6624 | 2670 | 2.1791 | - |
| 0.6648 | 2680 | 2.2047 | - |
| 0.6673 | 2690 | 1.995 | - |
| 0.6698 | 2700 | 1.9908 | - |
| 0.6723 | 2710 | 2.0663 | - |
| 0.6748 | 2720 | 2.1336 | - |
| 0.6773 | 2730 | 1.9984 | - |
| 0.6797 | 2740 | 2.0234 | - |
| 0.6822 | 2750 | 2.0607 | - |
| 0.6847 | 2760 | 2.0391 | - |
| 0.6872 | 2770 | 2.2076 | - |
| 0.6897 | 2780 | 2.0322 | - |
| 0.6921 | 2790 | 2.0302 | - |
| 0.6946 | 2800 | 1.9063 | - |
| 0.6971 | 2810 | 1.9939 | - |
| 0.6996 | 2820 | 2.2912 | - |
| 0.7021 | 2830 | 2.0652 | - |
| 0.7045 | 2840 | 2.1049 | - |
| 0.7070 | 2850 | 1.9113 | - |
| 0.7095 | 2860 | 2.0191 | - |
| 0.7120 | 2870 | 2.0719 | - |
| 0.7145 | 2880 | 1.9679 | - |
| 0.7169 | 2890 | 1.9377 | - |
| 0.7194 | 2900 | 2.0376 | - |
| 0.7219 | 2910 | 2.0183 | - |
| 0.7244 | 2920 | 2.0292 | - |
| 0.7269 | 2930 | 2.0002 | - |
| 0.7293 | 2940 | 1.9756 | - |
| 0.7318 | 2950 | 1.9684 | - |
| 0.7343 | 2960 | 2.0488 | - |
| 0.7368 | 2970 | 1.9472 | - |
| 0.7393 | 2980 | 2.0093 | - |
| 0.7418 | 2990 | 2.0945 | - |
| **0.7442** | **3000** | **2.06** | **1.8518** |
| 0.7467 | 3010 | 2.1229 | - |
| 0.7492 | 3020 | 2.0158 | - |
| 0.7517 | 3030 | 2.0899 | - |
| 0.7542 | 3040 | 2.0648 | - |
| 0.7566 | 3050 | 1.9429 | - |
| 0.7591 | 3060 | 2.1461 | - |
| 0.7616 | 3070 | 1.9435 | - |
| 0.7641 | 3080 | 2.0605 | - |
| 0.7666 | 3090 | 2.0657 | - |
| 0.7690 | 3100 | 2.1311 | - |
| 0.7715 | 3110 | 2.0691 | - |
| 0.7740 | 3120 | 1.9691 | - |
| 0.7765 | 3130 | 2.0362 | - |
| 0.7790 | 3140 | 2.0247 | - |
| 0.7814 | 3150 | 2.1573 | - |
| 0.7839 | 3160 | 2.0435 | - |
| 0.7864 | 3170 | 2.0407 | - |
| 0.7889 | 3180 | 2.0048 | - |
| 0.7914 | 3190 | 1.9889 | - |
| 0.7938 | 3200 | 2.1159 | - |
| 0.7963 | 3210 | 1.8981 | - |
| 0.7988 | 3220 | 1.8512 | - |
| 0.8013 | 3230 | 1.9925 | - |
| 0.8038 | 3240 | 2.0142 | - |
| 0.8063 | 3250 | 1.9632 | - |
| 0.8087 | 3260 | 2.0138 | - |
| 0.8112 | 3270 | 2.0144 | - |
| 0.8137 | 3280 | 2.097 | - |
| 0.8162 | 3290 | 2.0671 | - |
| 0.8187 | 3300 | 2.105 | - |
| 0.8211 | 3310 | 2.1392 | - |
| 0.8236 | 3320 | 2.0254 | - |
| 0.8261 | 3330 | 2.0963 | - |
| 0.8286 | 3340 | 2.0252 | - |
| 0.8311 | 3350 | 2.2256 | - |
| 0.8335 | 3360 | 1.9461 | - |
| 0.8360 | 3370 | 2.0253 | - |
| 0.8385 | 3380 | 1.9796 | - |
| 0.8410 | 3390 | 2.0018 | - |
| 0.8435 | 3400 | 2.0701 | - |
| 0.8459 | 3410 | 2.052 | - |
| 0.8484 | 3420 | 1.9837 | - |
| 0.8509 | 3430 | 1.9627 | - |
| 0.8534 | 3440 | 1.921 | - |
| 0.8559 | 3450 | 1.9698 | - |
| 0.8583 | 3460 | 2.0254 | - |
| 0.8608 | 3470 | 1.9404 | - |
| 0.8633 | 3480 | 1.9509 | - |
| 0.8658 | 3490 | 2.0727 | - |
| 0.8683 | 3500 | 1.844 | - |
| 0.8708 | 3510 | 1.9206 | - |
| 0.8732 | 3520 | 2.0281 | - |
| 0.8757 | 3530 | 1.9659 | - |
| 0.8782 | 3540 | 2.023 | - |
| 0.8807 | 3550 | 2.0457 | - |
| 0.8832 | 3560 | 2.0822 | - |
| 0.8856 | 3570 | 2.0736 | - |
| 0.8881 | 3580 | 2.0323 | - |
| 0.8906 | 3590 | 1.9307 | - |
| 0.8931 | 3600 | 2.0086 | - |
| 0.8956 | 3610 | 2.0197 | - |
| 0.8980 | 3620 | 1.8615 | - |
| 0.9005 | 3630 | 1.8747 | - |
| 0.9030 | 3640 | 2.0277 | - |
| 0.9055 | 3650 | 2.0774 | - |
| 0.9080 | 3660 | 1.9351 | - |
| 0.9104 | 3670 | 2.0159 | - |
| 0.9129 | 3680 | 2.0375 | - |
| 0.9154 | 3690 | 1.9994 | - |
| 0.9179 | 3700 | 1.9926 | - |
| 0.9204 | 3710 | 1.8202 | - |
| 0.9228 | 3720 | 1.9775 | - |
| 0.9253 | 3730 | 2.0521 | - |
| 0.9278 | 3740 | 1.9616 | - |
| 0.9303 | 3750 | 2.0131 | - |
| 0.9328 | 3760 | 2.0278 | - |
| 0.9353 | 3770 | 1.8954 | - |
| 0.9377 | 3780 | 2.0879 | - |
| 0.9402 | 3790 | 1.995 | - |
| 0.9427 | 3800 | 1.9958 | - |
| 0.9452 | 3810 | 1.9921 | - |
| 0.9477 | 3820 | 1.964 | - |
| 0.9501 | 3830 | 2.0655 | - |
| 0.9526 | 3840 | 2.0815 | - |
| 0.9551 | 3850 | 2.034 | - |
| 0.9576 | 3860 | 1.9623 | - |
| 0.9601 | 3870 | 1.9913 | - |
| 0.9625 | 3880 | 1.8262 | - |
| 0.9650 | 3890 | 2.0192 | - |
| 0.9675 | 3900 | 1.9874 | - |
| 0.9700 | 3910 | 2.0218 | - |
| 0.9725 | 3920 | 1.9251 | - |
| 0.9749 | 3930 | 1.9167 | - |
| 0.9774 | 3940 | 1.9559 | - |
* The bold row denotes the saved checkpoint.
</details>
### Framework Versions
- Python: 3.10.12
- Sentence Transformers: 4.1.0
- Transformers: 4.51.3
- PyTorch: 2.1.0+cu118
- Accelerate: 1.6.0
- Datasets: 3.5.0
- Tokenizers: 0.21.1
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
--> |
siradisi22/whisper-base-tr | siradisi22 | 2025-04-25T03:18:26Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"tr",
"dataset:mozilla-foundation/common_voice_17_0",
"base_model:openai/whisper-base",
"base_model:finetune:openai/whisper-base",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2025-04-24T21:07:24Z | ---
library_name: transformers
language:
- tr
license: apache-2.0
base_model: openai/whisper-base
tags:
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_17_0
metrics:
- wer
model-index:
- name: Whisper Base TR -Fast - Volkan ASLAN
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 17.0
type: mozilla-foundation/common_voice_17_0
config: tr
split: test
args: 'config: tr, split: test'
metrics:
- name: Wer
type: wer
value: 32.31201566890914
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Base TR -Fast - Volkan ASLAN
This model is a fine-tuned version of [openai/whisper-base](https://huggingface.co/openai/whisper-base) on the Common Voice 17.0 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3818
- Wer: 32.3120
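As a quick, illustrative way to run Turkish transcription with this checkpoint (the repository id is taken from this listing; the audio path and generation arguments are placeholders):
```python
from transformers import pipeline

# Repository id as listed for this model; replace the audio path with your own file.
asr = pipeline(
    "automatic-speech-recognition",
    model="siradisi22/whisper-base-tr",
    generate_kwargs={"language": "turkish", "task": "transcribe"},
)
print(asr("sample_tr.wav")["text"])
```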
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
- mixed_precision_training: Native AMP
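A rough mapping of these hyperparameters onto `Seq2SeqTrainingArguments` is sketched below; the output directory and evaluation/save cadence are assumptions, not the exact training script:
```python
from transformers import Seq2SeqTrainingArguments

args = Seq2SeqTrainingArguments(
    output_dir="whisper-base-tr",      # assumed output path
    learning_rate=1e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=500,
    max_steps=4000,
    fp16=True,                         # "Native AMP" mixed precision
    eval_strategy="steps",
    eval_steps=1000,                   # matches the evaluation cadence in the results table below
    predict_with_generate=True,        # needed to compute WER from generated text
)
```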
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:------:|:----:|:---------------:|:-------:|
| 0.3682 | 0.3447 | 1000 | 0.4452 | 36.6618 |
| 0.3152 | 0.6894 | 2000 | 0.4094 | 34.3439 |
| 0.2074 | 1.0341 | 3000 | 0.3889 | 32.5232 |
| 0.2044 | 1.3788 | 4000 | 0.3818 | 32.3120 |
### Framework versions
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 3.5.0
- Tokenizers 0.21.1
|
ArtemisTAO/lam16 | ArtemisTAO | 2025-04-25T03:18:08Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-04-25T03:17:33Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
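In the meantime, a minimal sketch of loading this checkpoint with the standard `transformers` text-generation API (the repository id is taken from this card; the prompt and generation settings are placeholders):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Repository id as listed for this card.
model_id = "ArtemisTAO/lam16"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Placeholder prompt; adjust generation parameters as needed.
inputs = tokenizer("Hello, my name is", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```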
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
0xshaf/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-arctic_snorting_grouse | 0xshaf | 2025-04-25T03:17:48Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am arctic snorting grouse",
"trl",
"conversational",
"arxiv:2402.03300",
"base_model:Gensyn/Qwen2.5-0.5B-Instruct",
"base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-04-24T19:34:36Z | ---
base_model: Gensyn/Qwen2.5-0.5B-Instruct
library_name: transformers
model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-arctic_snorting_grouse
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am arctic snorting grouse
- trl
licence: license
---
# Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-arctic_snorting_grouse
This model is a fine-tuned version of [Gensyn/Qwen2.5-0.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-0.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="0xshaf/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-arctic_snorting_grouse", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.15.2
- Transformers: 4.51.3
- Pytorch: 2.5.1
- Datasets: 3.5.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
3mily1u/fim-codegen-350m-mono-finetuned-attack-25 | 3mily1u | 2025-04-25T03:17:01Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"codegen",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-04-25T03:16:10Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
OpenGVLab/InternVL3-38B-Pretrained | OpenGVLab | 2025-04-25T03:16:25Z | 16 | 0 | transformers | [
"transformers",
"safetensors",
"internvl_chat",
"feature-extraction",
"internvl",
"custom_code",
"image-text-to-text",
"conversational",
"multilingual",
"arxiv:2312.14238",
"arxiv:2404.16821",
"arxiv:2412.05271",
"arxiv:2411.10442",
"arxiv:2504.10479",
"arxiv:2412.09616",
"base_model:OpenGVLab/InternViT-6B-448px-V2_5",
"base_model:merge:OpenGVLab/InternViT-6B-448px-V2_5",
"base_model:Qwen/Qwen2.5-32B",
"base_model:merge:Qwen/Qwen2.5-32B",
"license:apache-2.0",
"region:us"
]
| image-text-to-text | 2025-04-17T08:03:47Z | ---
license: apache-2.0
license_name: qwen
license_link: https://huggingface.co/Qwen/Qwen2.5-72B-Instruct/blob/main/LICENSE
pipeline_tag: image-text-to-text
library_name: transformers
base_model:
- OpenGVLab/InternViT-6B-448px-V2_5
- Qwen/Qwen2.5-32B
base_model_relation: merge
language:
- multilingual
tags:
- internvl
- custom_code
---
# InternVL3-38B-Pretrained
[\[📂 GitHub\]](https://github.com/OpenGVLab/InternVL) [\[📜 InternVL 1.0\]](https://huggingface.co/papers/2312.14238) [\[📜 InternVL 1.5\]](https://huggingface.co/papers/2404.16821) [\[📜 InternVL 2.5\]](https://huggingface.co/papers/2412.05271) [\[📜 InternVL2.5-MPO\]](https://huggingface.co/papers/2411.10442) [\[📜 InternVL3\]](https://huggingface.co/papers/2504.10479)
[\[🆕 Blog\]](https://internvl.github.io/blog/) [\[🗨️ Chat Demo\]](https://internvl.opengvlab.com/) [\[🤗 HF Demo\]](https://huggingface.co/spaces/OpenGVLab/InternVL) [\[🚀 Quick Start\]](#quick-start) [\[📖 Documents\]](https://internvl.readthedocs.io/en/latest/)
<div align="center">
<img width="500" alt="image" src="https://cdn-uploads.huggingface.co/production/uploads/64006c09330a45b03605bba3/zJsd2hqd3EevgXo6fNgC-.png">
</div>
## Introduction
***This is the pretrained version of InternVL3-38B, which has undergone native multimodal pre-training but has not undergone post-training (i.e., SFT and MPO). If you're unsure which version to use, please use the [InternVL3-38B](https://huggingface.co/OpenGVLab/InternVL3-38B) version.***
We introduce InternVL3, an advanced multimodal large language model (MLLM) series that demonstrates superior overall performance.
Compared to InternVL 2.5, InternVL3 exhibits superior multimodal perception and reasoning capabilities, while further extending its multimodal capabilities to encompass tool usage, GUI agents, industrial image analysis, 3D vision perception, and more.
Additionally, we compare InternVL3 with Qwen2.5 Chat models, whose corresponding pre-trained base models are employed as the initialization of the language component in InternVL3. Benefiting from Native Multimodal Pre-Training, the InternVL3 series achieves even better overall text performance than the Qwen2.5 series.

## InternVL3 Family
In the following table, we provide an overview of the InternVL3 series.
| Model Name | Vision Part | Language Part | HF Link |
| :-----------: | :-------------------------------------------------------------------------------------: | :----------------------------------------------------------------------------: | :------------------------------------------------------: |
| InternVL3-1B | [InternViT-300M-448px-V2_5](https://huggingface.co/OpenGVLab/InternViT-300M-448px-V2_5) | [Qwen2.5-0.5B](https://huggingface.co/Qwen/Qwen2.5-0.5B) | [🤗 link](https://huggingface.co/OpenGVLab/InternVL3-1B) |
| InternVL3-2B | [InternViT-300M-448px-V2_5](https://huggingface.co/OpenGVLab/InternViT-300M-448px-V2_5) | [Qwen2.5-1.5B](https://huggingface.co/Qwen/Qwen2.5-1.5B) | [🤗 link](https://huggingface.co/OpenGVLab/InternVL3-2B) |
| InternVL3-8B | [InternViT-300M-448px-V2_5](https://huggingface.co/OpenGVLab/InternViT-300M-448px-V2_5) | [Qwen2.5-7B](https://huggingface.co/Qwen/Qwen2.5-7B) | [🤗 link](https://huggingface.co/OpenGVLab/InternVL3-8B) |
| InternVL3-9B | [InternViT-300M-448px-V2_5](https://huggingface.co/OpenGVLab/InternViT-300M-448px-V2_5) | [internlm3-8b-instruct](https://huggingface.co/internlm/internlm3-8b-instruct) | [🤗 link](https://huggingface.co/OpenGVLab/InternVL3-9B) |
| InternVL3-14B | [InternViT-300M-448px-V2_5](https://huggingface.co/OpenGVLab/InternViT-300M-448px-V2_5) | [Qwen2.5-14B](https://huggingface.co/Qwen/Qwen2.5-14B) | [🤗 link](https://huggingface.co/OpenGVLab/InternVL3-14B) |
| InternVL3-38B | [InternViT-6B-448px-V2_5](https://huggingface.co/OpenGVLab/InternViT-6B-448px-V2_5) | [Qwen2.5-32B](https://huggingface.co/Qwen/Qwen2.5-32B) | [🤗 link](https://huggingface.co/OpenGVLab/InternVL3-38B) |
| InternVL3-78B | [InternViT-6B-448px-V2_5](https://huggingface.co/OpenGVLab/InternViT-6B-448px-V2_5) | [Qwen2.5-72B](https://huggingface.co/Qwen/Qwen2.5-72B) | [🤗 link](https://huggingface.co/OpenGVLab/InternVL3-78B) |

## Model Architecture
As shown in the following figure, [InternVL3](https://internvl.github.io/blog/2025-04-11-InternVL-3/) retains the same model architecture as [InternVL 2.5](https://internvl.github.io/blog/2024-12-05-InternVL-2.5/) and its predecessors, InternVL 1.5 and 2.0, following the "ViT-MLP-LLM" paradigm. In this new version, we integrate a newly incrementally pre-trained InternViT with various pre-trained LLMs, including InternLM 3 and Qwen 2.5, using a randomly initialized MLP projector.

As in the previous version, we applied a pixel unshuffle operation, reducing the number of visual tokens to one-quarter of the original. Besides, we adopted a similar dynamic resolution strategy as InternVL 1.5, dividing images into tiles of 448×448 pixels. The key difference, starting from InternVL 2.0, is that we additionally introduced support for multi-image and video data.
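As a rough illustration of the token-count effect (shapes assume a 448×448 tile split into 14×14 patches; this is a sketch, not the actual InternVL implementation):

```python
import torch
import torch.nn.functional as F

# A 448x448 tile with 14x14 patches gives a 32x32 grid of ViT tokens.
vit_features = torch.randn(1, 1024, 32, 32)  # (batch, channels, height, width)

# Pixel unshuffle with factor 2 folds every 2x2 spatial block into channels:
# 32*32 = 1024 positions become 16*16 = 256 (one quarter), channels grow 4x.
folded = F.pixel_unshuffle(vit_features, downscale_factor=2)
print(folded.shape)  # torch.Size([1, 4096, 16, 16])

# Flattened back into a sequence, each tile contributes 256 visual tokens.
visual_tokens = folded.flatten(2).transpose(1, 2)  # (1, 256, 4096)
```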
Notably, in InternVL3, we integrate the [Variable Visual Position Encoding (V2PE)](https://arxiv.org/abs/2412.09616), which utilizes smaller, more flexible position increments for visual tokens. Benefiting from V2PE, InternVL3 exhibits better long context understanding capabilities compared to its predecessors.
## Training Strategy
### Native Multimodal Pre-Training
We propose a [Native Multimodal Pre-Training](https://huggingface.co/papers/2504.10479) approach that consolidates language and vision learning into a single pre-training stage.
In contrast to standard paradigms that first train a language-only model and subsequently adapt it to handle additional modalities, our method interleaves multimodal data (e.g., image-text, video-text, or image-text interleaved sequences) with large-scale textual corpora. This unified training scheme allows the model to learn both linguistic and multimodal representations simultaneously, ultimately enhancing its capability to handle vision-language tasks without the need for separate alignment or bridging modules.
Please see [our paper](https://huggingface.co/papers/2504.10479) for more details.
### Supervised Fine-Tuning
In this phase, the techniques of random JPEG compression, square loss re-weighting, and multimodal data packing proposed in [InternVL2.5](https://arxiv.org/abs/2412.05271) are also employed in the InternVL3 series.
The main advancement of the SFT phase in InternVL3 compared to InternVL2.5 lies in the use of higher-quality and more diverse training data.
Specifically, we further extend training samples for tool use, 3D scene understanding, GUI operations, long context tasks, video understanding, scientific diagrams, creative writing, and multimodal reasoning.
### Mixed Preference Optimization
During Pre-training and SFT, the model is trained to predict the next token conditioned on previous ground-truth tokens.
However, during inference, the model predicts each token based on its own prior outputs.
This discrepancy between ground-truth tokens and model-predicted tokens introduces a distribution shift, which can impair the model’s Chain-of-Thought (CoT) reasoning capabilities.
To mitigate this issue, we employ [MPO](https://arxiv.org/abs/2411.10442), which introduces additional supervision from both positive and negative samples to align the model response distribution with the ground-truth distribution, thereby improving reasoning performance.
Specifically, the training objective of MPO is a combination of
preference loss \\(\mathcal{L}_{\text{p}}\\),
quality loss \\(\mathcal{L}_{\text{q}}\\),
and generation loss \\(\mathcal{L}_{\text{g}}\\),
which can be formulated as follows:
$$
\mathcal{L}=w_{p}\cdot\mathcal{L}_{\text{p}} + w_{q}\cdot\mathcal{L}_{\text{q}} + w_{g}\cdot\mathcal{L}_{\text{g}},
$$
where \\(w_{*}\\) represents the weight assigned to each loss component. Please see [our paper](https://arxiv.org/abs/2411.10442) for more details about MPO.
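To make the weighting concrete, the sketch below combines simplified stand-ins for the three terms; the exact formulations and weights follow the MPO paper and are not reproduced here.

```python
import torch
import torch.nn.functional as F

def mpo_total_loss(chosen_logratio, rejected_logratio, nll_chosen,
                   w_p=1.0, w_q=1.0, w_g=1.0, beta=0.1):
    # Preference loss (DPO-style stand-in): rank chosen above rejected responses.
    loss_p = -F.logsigmoid(beta * (chosen_logratio - rejected_logratio)).mean()
    # Quality loss (stand-in): judge each response's absolute quality on its own.
    loss_q = 0.5 * (-F.logsigmoid(beta * chosen_logratio)
                    - F.logsigmoid(-beta * rejected_logratio)).mean()
    # Generation loss: next-token negative log-likelihood on the chosen response.
    loss_g = nll_chosen.mean()
    return w_p * loss_p + w_q * loss_q + w_g * loss_g

# Dummy per-sample values, only to show how the terms are combined.
total = mpo_total_loss(torch.tensor([0.7]), torch.tensor([-0.2]), torch.tensor([1.3]))
print(total)
```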
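To make the weighting concrete, the sketch below combines simplified stand-ins for the three terms; the exact formulations and weights follow the MPO paper and are not reproduced here.

```python
import torch
import torch.nn.functional as F

def mpo_total_loss(chosen_logratio, rejected_logratio, nll_chosen,
                   w_p=1.0, w_q=1.0, w_g=1.0, beta=0.1):
    # Preference loss (DPO-style stand-in): rank chosen above rejected responses.
    loss_p = -F.logsigmoid(beta * (chosen_logratio - rejected_logratio)).mean()
    # Quality loss (stand-in): judge each response's absolute quality on its own.
    loss_q = 0.5 * (-F.logsigmoid(beta * chosen_logratio)
                    - F.logsigmoid(-beta * rejected_logratio)).mean()
    # Generation loss: next-token negative log-likelihood on the chosen response.
    loss_g = nll_chosen.mean()
    return w_p * loss_p + w_q * loss_q + w_g * loss_g

# Dummy per-sample values, only to show how the terms are combined.
total = mpo_total_loss(torch.tensor([0.7]), torch.tensor([-0.2]), torch.tensor([1.3]))
print(total)
```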
### Test-Time Scaling
Test-Time Scaling has been shown to be an effective method to enhance the reasoning abilities of LLMs and MLLMs.
In this work, we use the Best-of-N evaluation strategy and employ [VisualPRM-8B](https://huggingface.co/OpenGVLab/VisualPRM-8B) as the critic model to select the best response for reasoning and mathematics evaluation.
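Conceptually, Best-of-N selection with an external critic looks like the sketch below; `generate_response` and `critic_score` are placeholders for the policy model and VisualPRM-8B, not the actual evaluation code.

```python
def best_of_n(question, image, generate_response, critic_score, n=8):
    # Sample N candidate answers from the policy model, score each with the
    # critic, and keep the highest-scoring one.
    candidates = [generate_response(question, image) for _ in range(n)]
    scores = [critic_score(question, image, c) for c in candidates]
    best_idx = max(range(n), key=lambda i: scores[i])
    return candidates[best_idx]
```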
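Conceptually, Best-of-N selection with an external critic looks like the sketch below; `generate_response` and `critic_score` are placeholders for the policy model and VisualPRM-8B, not the actual evaluation code.

```python
def best_of_n(question, image, generate_response, critic_score, n=8):
    # Sample N candidate answers from the policy model, score each with the
    # critic, and keep the highest-scoring one.
    candidates = [generate_response(question, image) for _ in range(n)]
    scores = [critic_score(question, image, c) for c in candidates]
    best_idx = max(range(n), key=lambda i: scores[i])
    return candidates[best_idx]
```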
## Evaluation on Multimodal Capability
### Multimodal Reasoning and Mathematics

### OCR, Chart, and Document Understanding

### Multi-Image & Real-World Comprehension

### Comprehensive Multimodal & Hallucination Evaluation

### Visual Grounding

### Multimodal Multilingual Understanding

### Video Understanding

### GUI Grounding

### Spatial Reasoning

## Evaluation on Language Capability
We compare InternVL3 with Qwen2.5 Chat models, whose corresponding pre-trained base models are employed as the initialization of the language component in InternVL3.
Benefiting from Native Multimodal Pre-Training, the InternVL3 series achieves even better overall text performance than the Qwen2.5 series.
Please note that the evaluation scores of Qwen2.5 series may differ from those officially reported, as we have adopted the prompt versions provided in the table across all datasets for OpenCompass evaluation.

## Ablation Study
### Native Multimodal Pre-Training
We conduct experiments on the InternVL2-8B model while keeping its architecture, initialization parameters, and training data entirely unchanged. Traditionally, InternVL2-8B employs a training pipeline that begins with an MLP warmup phase for feature alignment followed by an Instruction Tuning stage. In our experiments, we substitute the conventional MLP warmup phase with a native multimodal pre-training process. This modification isolates the contribution of native multimodal pre-training to the overall multimodal capability of the model.
The evaluation results in the figure below show that the model with native multimodal pre-training exhibits performance on most benchmarks that is comparable to the fully multi-stage-trained InternVL2-8B baseline. Furthermore, when followed by instruction tuning on higher-quality data, the model demonstrates further performance gains across evaluated multimodal tasks. These findings underscore the efficiency of native multimodal pre-training in imparting powerful multimodal capabilities to MLLMs.

### Mixed Preference Optimization
As shown in the table below, models fine-tuned with MPO demonstrate superior reasoning performance across seven multimodal reasoning benchmarks compared to their counterparts without MPO. Specifically, InternVL3-78B and InternVL3-38B outperform their counterparts by 4.1 and 4.5 points, respectively. Notably, the training data used for MPO is a subset of that used for SFT, indicating that the performance improvements primarily stem from the training algorithm rather than the training data.

### Variable Visual Position Encoding
As reported in the table below, the introduction of V2PE leads to significant performance gains across most evaluation metrics. In addition, our ablation studies—by varying the positional increment \\( \delta \\)—reveal that even for tasks primarily involving conventional contexts, relatively small \\( \delta \\) values can achieve optimal performance. These findings provide important insights for future efforts aimed at refining position encoding strategies for visual tokens in MLLMs.

## Quick Start
We provide an example code to run `InternVL3-38B` using `transformers`.
> Please use transformers>=4.37.2 to ensure the model works normally.
### Model Loading
#### 16-bit (bf16 / fp16)
```python
import torch
from transformers import AutoTokenizer, AutoModel
path = "OpenGVLab/InternVL3-38B"
model = AutoModel.from_pretrained(
path,
torch_dtype=torch.bfloat16,
low_cpu_mem_usage=True,
use_flash_attn=True,
trust_remote_code=True).eval().cuda()
```
#### BNB 8-bit Quantization
```python
import torch
from transformers import AutoTokenizer, AutoModel
path = "OpenGVLab/InternVL3-38B"
model = AutoModel.from_pretrained(
path,
torch_dtype=torch.bfloat16,
load_in_8bit=True,
low_cpu_mem_usage=True,
use_flash_attn=True,
trust_remote_code=True).eval()
```
#### Multiple GPUs
The reason for writing the code this way is to avoid errors that occur during multi-GPU inference due to tensors not being on the same device. By ensuring that the first and last layers of the large language model (LLM) are on the same device, we prevent such errors.
```python
import math
import torch
from transformers import AutoConfig, AutoModel, AutoTokenizer
def split_model(model_name):
device_map = {}
world_size = torch.cuda.device_count()
    config = AutoConfig.from_pretrained(model_name, trust_remote_code=True)
num_layers = config.llm_config.num_hidden_layers
# Since the first GPU will be used for ViT, treat it as half a GPU.
num_layers_per_gpu = math.ceil(num_layers / (world_size - 0.5))
num_layers_per_gpu = [num_layers_per_gpu] * world_size
num_layers_per_gpu[0] = math.ceil(num_layers_per_gpu[0] * 0.5)
layer_cnt = 0
for i, num_layer in enumerate(num_layers_per_gpu):
for j in range(num_layer):
device_map[f'language_model.model.layers.{layer_cnt}'] = i
layer_cnt += 1
device_map['vision_model'] = 0
device_map['mlp1'] = 0
device_map['language_model.model.tok_embeddings'] = 0
device_map['language_model.model.embed_tokens'] = 0
device_map['language_model.output'] = 0
device_map['language_model.model.norm'] = 0
device_map['language_model.model.rotary_emb'] = 0
device_map['language_model.lm_head'] = 0
device_map[f'language_model.model.layers.{num_layers - 1}'] = 0
return device_map
path = "OpenGVLab/InternVL3-38B"
device_map = split_model('InternVL3-38B')
model = AutoModel.from_pretrained(
path,
torch_dtype=torch.bfloat16,
low_cpu_mem_usage=True,
use_flash_attn=True,
trust_remote_code=True,
device_map=device_map).eval()
```
### Inference with Transformers
```python
import math
import numpy as np
import torch
import torchvision.transforms as T
from decord import VideoReader, cpu
from PIL import Image
from torchvision.transforms.functional import InterpolationMode
from transformers import AutoConfig, AutoModel, AutoTokenizer
IMAGENET_MEAN = (0.485, 0.456, 0.406)
IMAGENET_STD = (0.229, 0.224, 0.225)
def build_transform(input_size):
MEAN, STD = IMAGENET_MEAN, IMAGENET_STD
transform = T.Compose([
T.Lambda(lambda img: img.convert('RGB') if img.mode != 'RGB' else img),
T.Resize((input_size, input_size), interpolation=InterpolationMode.BICUBIC),
T.ToTensor(),
T.Normalize(mean=MEAN, std=STD)
])
return transform
def find_closest_aspect_ratio(aspect_ratio, target_ratios, width, height, image_size):
best_ratio_diff = float('inf')
best_ratio = (1, 1)
area = width * height
for ratio in target_ratios:
target_aspect_ratio = ratio[0] / ratio[1]
ratio_diff = abs(aspect_ratio - target_aspect_ratio)
if ratio_diff < best_ratio_diff:
best_ratio_diff = ratio_diff
best_ratio = ratio
elif ratio_diff == best_ratio_diff:
if area > 0.5 * image_size * image_size * ratio[0] * ratio[1]:
best_ratio = ratio
return best_ratio
def dynamic_preprocess(image, min_num=1, max_num=12, image_size=448, use_thumbnail=False):
orig_width, orig_height = image.size
aspect_ratio = orig_width / orig_height
# calculate the existing image aspect ratio
target_ratios = set(
(i, j) for n in range(min_num, max_num + 1) for i in range(1, n + 1) for j in range(1, n + 1) if
i * j <= max_num and i * j >= min_num)
target_ratios = sorted(target_ratios, key=lambda x: x[0] * x[1])
# find the closest aspect ratio to the target
target_aspect_ratio = find_closest_aspect_ratio(
aspect_ratio, target_ratios, orig_width, orig_height, image_size)
# calculate the target width and height
target_width = image_size * target_aspect_ratio[0]
target_height = image_size * target_aspect_ratio[1]
blocks = target_aspect_ratio[0] * target_aspect_ratio[1]
# resize the image
resized_img = image.resize((target_width, target_height))
processed_images = []
for i in range(blocks):
box = (
(i % (target_width // image_size)) * image_size,
(i // (target_width // image_size)) * image_size,
((i % (target_width // image_size)) + 1) * image_size,
((i // (target_width // image_size)) + 1) * image_size
)
# split the image
split_img = resized_img.crop(box)
processed_images.append(split_img)
assert len(processed_images) == blocks
if use_thumbnail and len(processed_images) != 1:
thumbnail_img = image.resize((image_size, image_size))
processed_images.append(thumbnail_img)
return processed_images
def load_image(image_file, input_size=448, max_num=12):
image = Image.open(image_file).convert('RGB')
transform = build_transform(input_size=input_size)
images = dynamic_preprocess(image, image_size=input_size, use_thumbnail=True, max_num=max_num)
pixel_values = [transform(image) for image in images]
pixel_values = torch.stack(pixel_values)
return pixel_values
def split_model(model_name):
device_map = {}
world_size = torch.cuda.device_count()
    config = AutoConfig.from_pretrained(model_name, trust_remote_code=True)
num_layers = config.llm_config.num_hidden_layers
# Since the first GPU will be used for ViT, treat it as half a GPU.
num_layers_per_gpu = math.ceil(num_layers / (world_size - 0.5))
num_layers_per_gpu = [num_layers_per_gpu] * world_size
num_layers_per_gpu[0] = math.ceil(num_layers_per_gpu[0] * 0.5)
layer_cnt = 0
for i, num_layer in enumerate(num_layers_per_gpu):
for j in range(num_layer):
device_map[f'language_model.model.layers.{layer_cnt}'] = i
layer_cnt += 1
device_map['vision_model'] = 0
device_map['mlp1'] = 0
device_map['language_model.model.tok_embeddings'] = 0
device_map['language_model.model.embed_tokens'] = 0
device_map['language_model.output'] = 0
device_map['language_model.model.norm'] = 0
device_map['language_model.model.rotary_emb'] = 0
device_map['language_model.lm_head'] = 0
device_map[f'language_model.model.layers.{num_layers - 1}'] = 0
return device_map
# If you set `load_in_8bit=True`, you will need two 80GB GPUs.
# If you set `load_in_8bit=False`, you will need at least three 80GB GPUs.
path = 'OpenGVLab/InternVL3-38B'
device_map = split_model('InternVL3-38B')
model = AutoModel.from_pretrained(
path,
torch_dtype=torch.bfloat16,
load_in_8bit=False,
low_cpu_mem_usage=True,
use_flash_attn=True,
trust_remote_code=True,
device_map=device_map).eval()
tokenizer = AutoTokenizer.from_pretrained(path, trust_remote_code=True, use_fast=False)
# set the max number of tiles in `max_num`
pixel_values = load_image('./examples/image1.jpg', max_num=12).to(torch.bfloat16).cuda()
generation_config = dict(max_new_tokens=1024, do_sample=True)
# pure-text conversation (纯文本对话)
question = 'Hello, who are you?'
response, history = model.chat(tokenizer, None, question, generation_config, history=None, return_history=True)
print(f'User: {question}\nAssistant: {response}')
question = 'Can you tell me a story?'
response, history = model.chat(tokenizer, None, question, generation_config, history=history, return_history=True)
print(f'User: {question}\nAssistant: {response}')
# single-image single-round conversation (单图单轮对话)
question = '<image>\nPlease describe the image shortly.'
response = model.chat(tokenizer, pixel_values, question, generation_config)
print(f'User: {question}\nAssistant: {response}')
# single-image multi-round conversation (单图多轮对话)
question = '<image>\nPlease describe the image in detail.'
response, history = model.chat(tokenizer, pixel_values, question, generation_config, history=None, return_history=True)
print(f'User: {question}\nAssistant: {response}')
question = 'Please write a poem according to the image.'
response, history = model.chat(tokenizer, pixel_values, question, generation_config, history=history, return_history=True)
print(f'User: {question}\nAssistant: {response}')
# multi-image multi-round conversation, combined images (多图多轮对话,拼接图像)
pixel_values1 = load_image('./examples/image1.jpg', max_num=12).to(torch.bfloat16).cuda()
pixel_values2 = load_image('./examples/image2.jpg', max_num=12).to(torch.bfloat16).cuda()
pixel_values = torch.cat((pixel_values1, pixel_values2), dim=0)
question = '<image>\nDescribe the two images in detail.'
response, history = model.chat(tokenizer, pixel_values, question, generation_config,
history=None, return_history=True)
print(f'User: {question}\nAssistant: {response}')
question = 'What are the similarities and differences between these two images.'
response, history = model.chat(tokenizer, pixel_values, question, generation_config,
history=history, return_history=True)
print(f'User: {question}\nAssistant: {response}')
# multi-image multi-round conversation, separate images (多图多轮对话,独立图像)
pixel_values1 = load_image('./examples/image1.jpg', max_num=12).to(torch.bfloat16).cuda()
pixel_values2 = load_image('./examples/image2.jpg', max_num=12).to(torch.bfloat16).cuda()
pixel_values = torch.cat((pixel_values1, pixel_values2), dim=0)
num_patches_list = [pixel_values1.size(0), pixel_values2.size(0)]
question = 'Image-1: <image>\nImage-2: <image>\nDescribe the two images in detail.'
response, history = model.chat(tokenizer, pixel_values, question, generation_config,
num_patches_list=num_patches_list,
history=None, return_history=True)
print(f'User: {question}\nAssistant: {response}')
question = 'What are the similarities and differences between these two images.'
response, history = model.chat(tokenizer, pixel_values, question, generation_config,
num_patches_list=num_patches_list,
history=history, return_history=True)
print(f'User: {question}\nAssistant: {response}')
# batch inference, single image per sample (单图批处理)
pixel_values1 = load_image('./examples/image1.jpg', max_num=12).to(torch.bfloat16).cuda()
pixel_values2 = load_image('./examples/image2.jpg', max_num=12).to(torch.bfloat16).cuda()
num_patches_list = [pixel_values1.size(0), pixel_values2.size(0)]
pixel_values = torch.cat((pixel_values1, pixel_values2), dim=0)
questions = ['<image>\nDescribe the image in detail.'] * len(num_patches_list)
responses = model.batch_chat(tokenizer, pixel_values,
num_patches_list=num_patches_list,
questions=questions,
generation_config=generation_config)
for question, response in zip(questions, responses):
print(f'User: {question}\nAssistant: {response}')
# video multi-round conversation (视频多轮对话)
def get_index(bound, fps, max_frame, first_idx=0, num_segments=32):
if bound:
start, end = bound[0], bound[1]
else:
start, end = -100000, 100000
start_idx = max(first_idx, round(start * fps))
end_idx = min(round(end * fps), max_frame)
seg_size = float(end_idx - start_idx) / num_segments
frame_indices = np.array([
int(start_idx + (seg_size / 2) + np.round(seg_size * idx))
for idx in range(num_segments)
])
return frame_indices
def load_video(video_path, bound=None, input_size=448, max_num=1, num_segments=32):
vr = VideoReader(video_path, ctx=cpu(0), num_threads=1)
max_frame = len(vr) - 1
fps = float(vr.get_avg_fps())
pixel_values_list, num_patches_list = [], []
transform = build_transform(input_size=input_size)
frame_indices = get_index(bound, fps, max_frame, first_idx=0, num_segments=num_segments)
for frame_index in frame_indices:
img = Image.fromarray(vr[frame_index].asnumpy()).convert('RGB')
img = dynamic_preprocess(img, image_size=input_size, use_thumbnail=True, max_num=max_num)
pixel_values = [transform(tile) for tile in img]
pixel_values = torch.stack(pixel_values)
num_patches_list.append(pixel_values.shape[0])
pixel_values_list.append(pixel_values)
pixel_values = torch.cat(pixel_values_list)
return pixel_values, num_patches_list
video_path = './examples/red-panda.mp4'
pixel_values, num_patches_list = load_video(video_path, num_segments=8, max_num=1)
pixel_values = pixel_values.to(torch.bfloat16).cuda()
video_prefix = ''.join([f'Frame{i+1}: <image>\n' for i in range(len(num_patches_list))])
question = video_prefix + 'What is the red panda doing?'
# Frame1: <image>\nFrame2: <image>\n...\nFrame8: <image>\n{question}
response, history = model.chat(tokenizer, pixel_values, question, generation_config,
num_patches_list=num_patches_list, history=None, return_history=True)
print(f'User: {question}\nAssistant: {response}')
question = 'Describe this video in detail.'
response, history = model.chat(tokenizer, pixel_values, question, generation_config,
num_patches_list=num_patches_list, history=history, return_history=True)
print(f'User: {question}\nAssistant: {response}')
```
#### Streaming Output
Besides this method, you can also use the following code to get streamed output.
```python
from transformers import TextIteratorStreamer
from threading import Thread
# Initialize the streamer
streamer = TextIteratorStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True, timeout=10)
# Define the generation configuration
generation_config = dict(max_new_tokens=1024, do_sample=False, streamer=streamer)
# Start the model chat in a separate thread
thread = Thread(target=model.chat, kwargs=dict(
tokenizer=tokenizer, pixel_values=pixel_values, question=question,
history=None, return_history=False, generation_config=generation_config,
))
thread.start()
# Initialize an empty string to store the generated text
generated_text = ''
# Loop through the streamer to get the new text as it is generated
for new_text in streamer:
if new_text == model.conv_template.sep:
break
generated_text += new_text
print(new_text, end='', flush=True) # Print each new chunk of generated text on the same line
```
## Finetune
Many repositories now support fine-tuning of the InternVL series models, including [InternVL](https://github.com/OpenGVLab/InternVL), [SWIFT](https://github.com/modelscope/ms-swift), [XTuner](https://github.com/InternLM/xtuner), and others. Please refer to their documentation for more details on fine-tuning.
## Deployment
### LMDeploy
LMDeploy is a toolkit for compressing, deploying, and serving LLMs & VLMs.
```sh
# if lmdeploy<0.7.3, you need to explicitly set chat_template_config=ChatTemplateConfig(model_name='internvl2_5')
pip install "lmdeploy>=0.7.3"
```
LMDeploy abstracts the complex inference process of multi-modal Vision-Language Models (VLM) into an easy-to-use pipeline, similar to the Large Language Model (LLM) inference pipeline.
#### A 'Hello, world' Example
```python
from lmdeploy import pipeline, TurbomindEngineConfig, ChatTemplateConfig
from lmdeploy.vl import load_image
model = 'OpenGVLab/InternVL3-38B'
image = load_image('https://raw.githubusercontent.com/open-mmlab/mmdeploy/main/tests/data/tiger.jpeg')
pipe = pipeline(model, backend_config=TurbomindEngineConfig(session_len=16384, tp=2), chat_template_config=ChatTemplateConfig(model_name='internvl2_5'))
response = pipe(('describe this image', image))
print(response.text)
```
If an `ImportError` occurs while running this example, please install the required dependency packages as prompted.
#### Multi-images Inference
When dealing with multiple images, you can put them all in one list. Keep in mind that multiple images will lead to a higher number of input tokens, and as a result, the size of the context window typically needs to be increased.
```python
from lmdeploy import pipeline, TurbomindEngineConfig, ChatTemplateConfig
from lmdeploy.vl import load_image
from lmdeploy.vl.constants import IMAGE_TOKEN
model = 'OpenGVLab/InternVL3-38B'
pipe = pipeline(model, backend_config=TurbomindEngineConfig(session_len=16384, tp=2), chat_template_config=ChatTemplateConfig(model_name='internvl2_5'))
image_urls=[
'https://raw.githubusercontent.com/open-mmlab/mmdeploy/main/demo/resources/human-pose.jpg',
'https://raw.githubusercontent.com/open-mmlab/mmdeploy/main/demo/resources/det.jpg'
]
images = [load_image(img_url) for img_url in image_urls]
# Numbering images improves multi-image conversations
response = pipe((f'Image-1: {IMAGE_TOKEN}\nImage-2: {IMAGE_TOKEN}\ndescribe these two images', images))
print(response.text)
```
#### Batch Prompts Inference
Conducting inference with batch prompts is quite straightforward; just place them within a list structure:
```python
from lmdeploy import pipeline, TurbomindEngineConfig, ChatTemplateConfig
from lmdeploy.vl import load_image
model = 'OpenGVLab/InternVL3-38B'
pipe = pipeline(model, backend_config=TurbomindEngineConfig(session_len=16384, tp=2), chat_template_config=ChatTemplateConfig(model_name='internvl2_5'))
image_urls=[
"https://raw.githubusercontent.com/open-mmlab/mmdeploy/main/demo/resources/human-pose.jpg",
"https://raw.githubusercontent.com/open-mmlab/mmdeploy/main/demo/resources/det.jpg"
]
prompts = [('describe this image', load_image(img_url)) for img_url in image_urls]
response = pipe(prompts)
print(response)
```
#### Multi-turn Conversation
There are two ways to run multi-turn conversations with the pipeline. One is to construct messages in the OpenAI format and use the method introduced above; the other is to use the `pipeline.chat` interface.
```python
from lmdeploy import pipeline, TurbomindEngineConfig, GenerationConfig, ChatTemplateConfig
from lmdeploy.vl import load_image
model = 'OpenGVLab/InternVL3-38B'
pipe = pipeline(model, backend_config=TurbomindEngineConfig(session_len=16384, tp=2), chat_template_config=ChatTemplateConfig(model_name='internvl2_5'))
image = load_image('https://raw.githubusercontent.com/open-mmlab/mmdeploy/main/demo/resources/human-pose.jpg')
gen_config = GenerationConfig(top_k=40, top_p=0.8, temperature=0.8)
sess = pipe.chat(('describe this image', image), gen_config=gen_config)
print(sess.response.text)
sess = pipe.chat('What is the woman doing?', session=sess, gen_config=gen_config)
print(sess.response.text)
```
#### Service
LMDeploy's `api_server` enables models to be easily packed into services with a single command. The provided RESTful APIs are compatible with OpenAI's interfaces. Below is an example of starting the service:
```shell
lmdeploy serve api_server OpenGVLab/InternVL3-38B --chat-template internvl2_5 --server-port 23333 --tp 2
```
To use the OpenAI-style interface, you need to install OpenAI:
```shell
pip install openai
```
Then, use the code below to make the API call:
```python
from openai import OpenAI
client = OpenAI(api_key='YOUR_API_KEY', base_url='http://0.0.0.0:23333/v1')
model_name = client.models.list().data[0].id
response = client.chat.completions.create(
model=model_name,
messages=[{
'role':
'user',
'content': [{
'type': 'text',
'text': 'describe this image',
}, {
'type': 'image_url',
'image_url': {
'url':
'https://modelscope.oss-cn-beijing.aliyuncs.com/resource/tiger.jpeg',
},
}],
}],
temperature=0.8,
top_p=0.8)
print(response)
```
## License
This project is released under the MIT License. This project uses the pre-trained Qwen2.5 as a component, which is licensed under the Apache-2.0 License.
## Citation
If you find this project useful in your research, please consider citing:
```BibTeX
@article{chen2024expanding,
title={Expanding Performance Boundaries of Open-Source Multimodal Models with Model, Data, and Test-Time Scaling},
author={Chen, Zhe and Wang, Weiyun and Cao, Yue and Liu, Yangzhou and Gao, Zhangwei and Cui, Erfei and Zhu, Jinguo and Ye, Shenglong and Tian, Hao and Liu, Zhaoyang and others},
journal={arXiv preprint arXiv:2412.05271},
year={2024}
}
@article{wang2024mpo,
title={Enhancing the Reasoning Ability of Multimodal Large Language Models via Mixed Preference Optimization},
author={Wang, Weiyun and Chen, Zhe and Wang, Wenhai and Cao, Yue and Liu, Yangzhou and Gao, Zhangwei and Zhu, Jinguo and Zhu, Xizhou and Lu, Lewei and Qiao, Yu and Dai, Jifeng},
journal={arXiv preprint arXiv:2411.10442},
year={2024}
}
@article{chen2024far,
title={How Far Are We to GPT-4V? Closing the Gap to Commercial Multimodal Models with Open-Source Suites},
author={Chen, Zhe and Wang, Weiyun and Tian, Hao and Ye, Shenglong and Gao, Zhangwei and Cui, Erfei and Tong, Wenwen and Hu, Kongzhi and Luo, Jiapeng and Ma, Zheng and others},
journal={arXiv preprint arXiv:2404.16821},
year={2024}
}
@inproceedings{chen2024internvl,
title={Internvl: Scaling up vision foundation models and aligning for generic visual-linguistic tasks},
author={Chen, Zhe and Wu, Jiannan and Wang, Wenhai and Su, Weijie and Chen, Guo and Xing, Sen and Zhong, Muyan and Zhang, Qinglong and Zhu, Xizhou and Lu, Lewei and others},
booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
pages={24185--24198},
year={2024}
}
``` |
OpenGVLab/InternVL3-9B-Pretrained | OpenGVLab | 2025-04-25T03:16:00Z | 24 | 0 | transformers | [
"transformers",
"safetensors",
"internvl_chat",
"feature-extraction",
"internvl",
"custom_code",
"image-text-to-text",
"conversational",
"multilingual",
"arxiv:2312.14238",
"arxiv:2404.16821",
"arxiv:2412.05271",
"arxiv:2411.10442",
"arxiv:2504.10479",
"arxiv:2412.09616",
"base_model:OpenGVLab/InternViT-300M-448px-V2_5",
"base_model:merge:OpenGVLab/InternViT-300M-448px-V2_5",
"base_model:internlm/internlm3-8b-instruct",
"base_model:merge:internlm/internlm3-8b-instruct",
"license:mit",
"region:us"
]
| image-text-to-text | 2025-04-17T07:46:04Z | ---
license: mit
pipeline_tag: image-text-to-text
library_name: transformers
base_model:
- OpenGVLab/InternViT-300M-448px-V2_5
- internlm/internlm3-8b-instruct
base_model_relation: merge
language:
- multilingual
tags:
- internvl
- custom_code
---
# InternVL3-9B-Pretrained
[\[📂 GitHub\]](https://github.com/OpenGVLab/InternVL) [\[📜 InternVL 1.0\]](https://huggingface.co/papers/2312.14238) [\[📜 InternVL 1.5\]](https://huggingface.co/papers/2404.16821) [\[📜 InternVL 2.5\]](https://huggingface.co/papers/2412.05271) [\[📜 InternVL2.5-MPO\]](https://huggingface.co/papers/2411.10442) [\[📜 InternVL3\]](https://huggingface.co/papers/2504.10479)
[\[🆕 Blog\]](https://internvl.github.io/blog/) [\[🗨️ Chat Demo\]](https://internvl.opengvlab.com/) [\[🤗 HF Demo\]](https://huggingface.co/spaces/OpenGVLab/InternVL) [\[🚀 Quick Start\]](#quick-start) [\[📖 Documents\]](https://internvl.readthedocs.io/en/latest/)
<div align="center">
<img width="500" alt="image" src="https://cdn-uploads.huggingface.co/production/uploads/64006c09330a45b03605bba3/zJsd2hqd3EevgXo6fNgC-.png">
</div>
## Introduction
***This is the pretrained version of InternVL3-9B, which has undergone native multimodal pre-training but has not undergone post-training (i.e., SFT and MPO). If you're unsure which version to use, please use the [InternVL3-9B](https://huggingface.co/OpenGVLab/InternVL3-9B) version.***
We introduce InternVL3, an advanced multimodal large language model (MLLM) series that demonstrates superior overall performance.
Compared to InternVL 2.5, InternVL3 exhibits superior multimodal perception and reasoning capabilities, while further extending its multimodal capabilities to encompass tool usage, GUI agents, industrial image analysis, 3D vision perception, and more.
Additionally, we compare InternVL3 with Qwen2.5 Chat models, whose corresponding pre-trained base models are employed as the initialization of the language component in InternVL3. Benefiting from Native Multimodal Pre-Training, the InternVL3 series achieves even better overall text performance than the Qwen2.5 series.

## InternVL3 Family
In the following table, we provide an overview of the InternVL3 series.
| Model Name | Vision Part | Language Part | HF Link |
| :-----------: | :-------------------------------------------------------------------------------------: | :----------------------------------------------------------------------------: | :------------------------------------------------------: |
| InternVL3-1B | [InternViT-300M-448px-V2_5](https://huggingface.co/OpenGVLab/InternViT-300M-448px-V2_5) | [Qwen2.5-0.5B](https://huggingface.co/Qwen/Qwen2.5-0.5B) | [🤗 link](https://huggingface.co/OpenGVLab/InternVL3-1B) |
| InternVL3-2B | [InternViT-300M-448px-V2_5](https://huggingface.co/OpenGVLab/InternViT-300M-448px-V2_5) | [Qwen2.5-1.5B](https://huggingface.co/Qwen/Qwen2.5-1.5B) | [🤗 link](https://huggingface.co/OpenGVLab/InternVL3-2B) |
| InternVL3-8B | [InternViT-300M-448px-V2_5](https://huggingface.co/OpenGVLab/InternViT-300M-448px-V2_5) | [Qwen2.5-7B](https://huggingface.co/Qwen/Qwen2.5-7B) | [🤗 link](https://huggingface.co/OpenGVLab/InternVL3-8B) |
| InternVL3-9B | [InternViT-300M-448px-V2_5](https://huggingface.co/OpenGVLab/InternViT-300M-448px-V2_5) | [internlm3-8b-instruct](https://huggingface.co/internlm/internlm3-8b-instruct) | [🤗 link](https://huggingface.co/OpenGVLab/InternVL3-9B) |
| InternVL3-14B | [InternViT-300M-448px-V2_5](https://huggingface.co/OpenGVLab/InternViT-300M-448px-V2_5) | [Qwen2.5-14B](https://huggingface.co/Qwen/Qwen2.5-14B) | [🤗 link](https://huggingface.co/OpenGVLab/InternVL3-14B) |
| InternVL3-38B | [InternViT-6B-448px-V2_5](https://huggingface.co/OpenGVLab/InternViT-6B-448px-V2_5) | [Qwen2.5-32B](https://huggingface.co/Qwen/Qwen2.5-32B) | [🤗 link](https://huggingface.co/OpenGVLab/InternVL3-38B) |
| InternVL3-78B | [InternViT-6B-448px-V2_5](https://huggingface.co/OpenGVLab/InternViT-6B-448px-V2_5) | [Qwen2.5-72B](https://huggingface.co/Qwen/Qwen2.5-72B) | [🤗 link](https://huggingface.co/OpenGVLab/InternVL3-78B) |

## Model Architecture
As shown in the following figure, [InternVL3](https://internvl.github.io/blog/2025-04-11-InternVL-3/) retains the same model architecture as [InternVL 2.5](https://internvl.github.io/blog/2024-12-05-InternVL-2.5/) and its predecessors, InternVL 1.5 and 2.0, following the "ViT-MLP-LLM" paradigm. In this new version, we integrate a newly incrementally pre-trained InternViT with various pre-trained LLMs, including InternLM 3 and Qwen 2.5, using a randomly initialized MLP projector.

As in the previous version, we applied a pixel unshuffle operation, reducing the number of visual tokens to one-quarter of the original. Besides, we adopted a similar dynamic resolution strategy as InternVL 1.5, dividing images into tiles of 448×448 pixels. The key difference, starting from InternVL 2.0, is that we additionally introduced support for multi-image and video data.
Notably, in InternVL3, we integrate the [Variable Visual Position Encoding (V2PE)](https://arxiv.org/abs/2412.09616), which utilizes smaller, more flexible position increments for visual tokens. Benefiting from V2PE, InternVL3 exhibits better long context understanding capabilities compared to its predecessors.
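As a rough illustration of the idea (not the actual V2PE implementation), visual tokens can advance the position index by a fractional increment so that image-heavy sequences consume less of the positional range:

```python
def v2pe_like_positions(token_types, delta=0.25):
    # token_types: a list of "text" / "image" markers, one per token.
    # Text tokens advance the position by 1; visual tokens by a smaller delta.
    positions, pos = [], 0.0
    for t in token_types:
        positions.append(pos)
        pos += 1.0 if t == "text" else delta
    return positions

print(v2pe_like_positions(["text", "image", "image", "image", "image", "text"]))
# [0.0, 1.0, 1.25, 1.5, 1.75, 2.0]
```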
## Training Strategy
### Native Multimodal Pre-Training
We propose a [Native Multimodal Pre-Training](https://huggingface.co/papers/2504.10479) approach that consolidates language and vision learning into a single pre-training stage.
In contrast to standard paradigms that first train a language-only model and subsequently adapt it to handle additional modalities, our method interleaves multimodal data (e.g., image-text, video-text, or image-text interleaved sequences) with large-scale textual corpora. This unified training scheme allows the model to learn both linguistic and multimodal representations simultaneously, ultimately enhancing its capability to handle vision-language tasks without the need for separate alignment or bridging modules.
Please see [our paper](https://huggingface.co/papers/2504.10479) for more details.
### Supervised Fine-Tuning
In this phase, the techniques of random JPEG compression, square loss re-weighting, and multimodal data packing proposed in [InternVL2.5](https://arxiv.org/abs/2412.05271) are also employed in the InternVL3 series.
The main advancement of the SFT phase in InternVL3 compared to InternVL2.5 lies in the use of higher-quality and more diverse training data.
Specifically, we further extend training samples for tool use, 3D scene understanding, GUI operations, long context tasks, video understanding, scientific diagrams, creative writing, and multimodal reasoning.
### Mixed Preference Optimization
During Pre-training and SFT, the model is trained to predict the next token conditioned on previous ground-truth tokens.
However, during inference, the model predicts each token based on its own prior outputs.
This discrepancy between ground-truth tokens and model-predicted tokens introduces a distribution shift, which can impair the model’s Chain-of-Thought (CoT) reasoning capabilities.
To mitigate this issue, we employ [MPO](https://arxiv.org/abs/2411.10442), which introduces additional supervision from both positive and negative samples to align the model response distribution with the ground-truth distribution, thereby improving reasoning performance.
Specifically, the training objective of MPO is a combination of
preference loss \\(\mathcal{L}_{\text{p}}\\),
quality loss \\(\mathcal{L}_{\text{q}}\\),
and generation loss \\(\mathcal{L}_{\text{g}}\\),
which can be formulated as follows:
$$
\mathcal{L}=w_{p}\cdot\mathcal{L}_{\text{p}} + w_{q}\cdot\mathcal{L}_{\text{q}} + w_{g}\cdot\mathcal{L}_{\text{g}},
$$
where \\(w_{*}\\) represents the weight assigned to each loss component. Please see [our paper](https://arxiv.org/abs/2411.10442) for more details about MPO.
### Test-Time Scaling
Test-Time Scaling has been shown to be an effective method to enhance the reasoning abilities of LLMs and MLLMs.
In this work, we use the Best-of-N evaluation strategy and employ [VisualPRM-8B](https://huggingface.co/OpenGVLab/VisualPRM-8B) as the critic model to select the best response for reasoning and mathematics evaluation.
## Evaluation on Multimodal Capability
### Multimodal Reasoning and Mathematics

### OCR, Chart, and Document Understanding

### Multi-Image & Real-World Comprehension

### Comprehensive Multimodal & Hallucination Evaluation

### Visual Grounding

### Multimodal Multilingual Understanding

### Video Understanding

### GUI Grounding

### Spatial Reasoning

## Evaluation on Language Capability
We compare InternVL3 with Qwen2.5 Chat models, whose corresponding pre-trained base models are employed as the initialization of the language component in InternVL3.
Benefiting from Native Multimodal Pre-Training, the InternVL3 series achieves even better overall text performance than the Qwen2.5 series.
Please note that the evaluation scores of Qwen2.5 series may differ from those officially reported, as we have adopted the prompt versions provided in the table across all datasets for OpenCompass evaluation.

## Ablation Study
### Native Multimodal Pre-Training
We conduct experiments on the InternVL2-8B model while keeping its architecture, initialization parameters, and training data entirely unchanged. Traditionally, InternVL2-8B employs a training pipeline that begins with an MLP warmup phase for feature alignment followed by an Instruction Tuning stage. In our experiments, we substitute the conventional MLP warmup phase with a native multimodal pre-training process. This modification isolates the contribution of native multimodal pre-training to the overall multimodal capability of the model.
The evaluation results in the figure below show that the model with native multimodal pre-training exhibits performance on most benchmarks that is comparable to the fully multi-stage-trained InternVL2-8B baseline. Furthermore, when followed by instruction tuning on higher-quality data, the model demonstrates further performance gains across evaluated multimodal tasks. These findings underscore the efficiency of native multimodal pre-training in imparting powerful multimodal capabilities to MLLMs.

### Mixed Preference Optimization
As shown in the table below, models fine-tuned with MPO demonstrate superior reasoning performance across seven multimodal reasoning benchmarks compared to their counterparts without MPO. Specifically, InternVL3-78B and InternVL3-38B outperform their counterparts by 4.1 and 4.5 points, respectively. Notably, the training data used for MPO is a subset of that used for SFT, indicating that the performance improvements primarily stem from the training algorithm rather than the training data.

### Variable Visual Position Encoding
As reported in the table below, the introduction of V2PE leads to significant performance gains across most evaluation metrics. In addition, our ablation studies—by varying the positional increment \\( \delta \\)—reveal that even for tasks primarily involving conventional contexts, relatively small \\( \delta \\) values can achieve optimal performance. These findings provide important insights for future efforts aimed at refining position encoding strategies for visual tokens in MLLMs.

## Quick Start
We provide an example code to run `InternVL3-9B` using `transformers`.
> Please use transformers>=4.37.2 to ensure the model works normally.
### Model Loading
#### 16-bit (bf16 / fp16)
```python
import torch
from transformers import AutoTokenizer, AutoModel
path = "OpenGVLab/InternVL3-9B"
model = AutoModel.from_pretrained(
path,
torch_dtype=torch.bfloat16,
low_cpu_mem_usage=True,
use_flash_attn=True,
trust_remote_code=True).eval().cuda()
```
#### BNB 8-bit Quantization
```python
import torch
from transformers import AutoTokenizer, AutoModel
path = "OpenGVLab/InternVL3-9B"
model = AutoModel.from_pretrained(
path,
torch_dtype=torch.bfloat16,
load_in_8bit=True,
low_cpu_mem_usage=True,
use_flash_attn=True,
trust_remote_code=True).eval()
```
#### Multiple GPUs
The reason for writing the code this way is to avoid errors that occur during multi-GPU inference due to tensors not being on the same device. By ensuring that the first and last layers of the large language model (LLM) are on the same device, we prevent such errors.
```python
import math
import torch
from transformers import AutoConfig, AutoModel, AutoTokenizer
def split_model(model_name):
device_map = {}
world_size = torch.cuda.device_count()
    config = AutoConfig.from_pretrained(model_name, trust_remote_code=True)
num_layers = config.llm_config.num_hidden_layers
# Since the first GPU will be used for ViT, treat it as half a GPU.
num_layers_per_gpu = math.ceil(num_layers / (world_size - 0.5))
num_layers_per_gpu = [num_layers_per_gpu] * world_size
num_layers_per_gpu[0] = math.ceil(num_layers_per_gpu[0] * 0.5)
layer_cnt = 0
for i, num_layer in enumerate(num_layers_per_gpu):
for j in range(num_layer):
device_map[f'language_model.model.layers.{layer_cnt}'] = i
layer_cnt += 1
device_map['vision_model'] = 0
device_map['mlp1'] = 0
device_map['language_model.model.tok_embeddings'] = 0
device_map['language_model.model.embed_tokens'] = 0
device_map['language_model.output'] = 0
device_map['language_model.model.norm'] = 0
device_map['language_model.model.rotary_emb'] = 0
device_map['language_model.lm_head'] = 0
device_map[f'language_model.model.layers.{num_layers - 1}'] = 0
return device_map
path = "OpenGVLab/InternVL3-9B"
device_map = split_model('InternVL3-9B')
model = AutoModel.from_pretrained(
path,
torch_dtype=torch.bfloat16,
low_cpu_mem_usage=True,
use_flash_attn=True,
trust_remote_code=True,
device_map=device_map).eval()
```
### Inference with Transformers
```python
import math
import numpy as np
import torch
import torchvision.transforms as T
from decord import VideoReader, cpu
from PIL import Image
from torchvision.transforms.functional import InterpolationMode
from transformers import AutoConfig, AutoModel, AutoTokenizer
IMAGENET_MEAN = (0.485, 0.456, 0.406)
IMAGENET_STD = (0.229, 0.224, 0.225)
def build_transform(input_size):
MEAN, STD = IMAGENET_MEAN, IMAGENET_STD
transform = T.Compose([
T.Lambda(lambda img: img.convert('RGB') if img.mode != 'RGB' else img),
T.Resize((input_size, input_size), interpolation=InterpolationMode.BICUBIC),
T.ToTensor(),
T.Normalize(mean=MEAN, std=STD)
])
return transform
def find_closest_aspect_ratio(aspect_ratio, target_ratios, width, height, image_size):
best_ratio_diff = float('inf')
best_ratio = (1, 1)
area = width * height
for ratio in target_ratios:
target_aspect_ratio = ratio[0] / ratio[1]
ratio_diff = abs(aspect_ratio - target_aspect_ratio)
if ratio_diff < best_ratio_diff:
best_ratio_diff = ratio_diff
best_ratio = ratio
elif ratio_diff == best_ratio_diff:
if area > 0.5 * image_size * image_size * ratio[0] * ratio[1]:
best_ratio = ratio
return best_ratio
def dynamic_preprocess(image, min_num=1, max_num=12, image_size=448, use_thumbnail=False):
orig_width, orig_height = image.size
aspect_ratio = orig_width / orig_height
# calculate the existing image aspect ratio
target_ratios = set(
(i, j) for n in range(min_num, max_num + 1) for i in range(1, n + 1) for j in range(1, n + 1) if
i * j <= max_num and i * j >= min_num)
target_ratios = sorted(target_ratios, key=lambda x: x[0] * x[1])
# find the closest aspect ratio to the target
target_aspect_ratio = find_closest_aspect_ratio(
aspect_ratio, target_ratios, orig_width, orig_height, image_size)
# calculate the target width and height
target_width = image_size * target_aspect_ratio[0]
target_height = image_size * target_aspect_ratio[1]
blocks = target_aspect_ratio[0] * target_aspect_ratio[1]
# resize the image
resized_img = image.resize((target_width, target_height))
processed_images = []
for i in range(blocks):
box = (
(i % (target_width // image_size)) * image_size,
(i // (target_width // image_size)) * image_size,
((i % (target_width // image_size)) + 1) * image_size,
((i // (target_width // image_size)) + 1) * image_size
)
# split the image
split_img = resized_img.crop(box)
processed_images.append(split_img)
assert len(processed_images) == blocks
if use_thumbnail and len(processed_images) != 1:
thumbnail_img = image.resize((image_size, image_size))
processed_images.append(thumbnail_img)
return processed_images
def load_image(image_file, input_size=448, max_num=12):
image = Image.open(image_file).convert('RGB')
transform = build_transform(input_size=input_size)
images = dynamic_preprocess(image, image_size=input_size, use_thumbnail=True, max_num=max_num)
pixel_values = [transform(image) for image in images]
pixel_values = torch.stack(pixel_values)
return pixel_values
def split_model(model_name):
device_map = {}
world_size = torch.cuda.device_count()
    config = AutoConfig.from_pretrained(model_name, trust_remote_code=True)
num_layers = config.llm_config.num_hidden_layers
# Since the first GPU will be used for ViT, treat it as half a GPU.
num_layers_per_gpu = math.ceil(num_layers / (world_size - 0.5))
num_layers_per_gpu = [num_layers_per_gpu] * world_size
num_layers_per_gpu[0] = math.ceil(num_layers_per_gpu[0] * 0.5)
layer_cnt = 0
for i, num_layer in enumerate(num_layers_per_gpu):
for j in range(num_layer):
device_map[f'language_model.model.layers.{layer_cnt}'] = i
layer_cnt += 1
device_map['vision_model'] = 0
device_map['mlp1'] = 0
device_map['language_model.model.tok_embeddings'] = 0
device_map['language_model.model.embed_tokens'] = 0
device_map['language_model.output'] = 0
device_map['language_model.model.norm'] = 0
device_map['language_model.model.rotary_emb'] = 0
device_map['language_model.lm_head'] = 0
device_map[f'language_model.model.layers.{num_layers - 1}'] = 0
return device_map
# If you set `load_in_8bit=True`, you will need two 80GB GPUs.
# If you set `load_in_8bit=False`, you will need at least three 80GB GPUs.
path = 'OpenGVLab/InternVL3-9B'
device_map = split_model('InternVL3-9B')
model = AutoModel.from_pretrained(
path,
torch_dtype=torch.bfloat16,
load_in_8bit=False,
low_cpu_mem_usage=True,
use_flash_attn=True,
trust_remote_code=True,
device_map=device_map).eval()
tokenizer = AutoTokenizer.from_pretrained(path, trust_remote_code=True, use_fast=False)
# set the max number of tiles in `max_num`
pixel_values = load_image('./examples/image1.jpg', max_num=12).to(torch.bfloat16).cuda()
generation_config = dict(max_new_tokens=1024, do_sample=True)
# pure-text conversation (纯文本对话)
question = 'Hello, who are you?'
response, history = model.chat(tokenizer, None, question, generation_config, history=None, return_history=True)
print(f'User: {question}\nAssistant: {response}')
question = 'Can you tell me a story?'
response, history = model.chat(tokenizer, None, question, generation_config, history=history, return_history=True)
print(f'User: {question}\nAssistant: {response}')
# single-image single-round conversation (单图单轮对话)
question = '<image>\nPlease describe the image shortly.'
response = model.chat(tokenizer, pixel_values, question, generation_config)
print(f'User: {question}\nAssistant: {response}')
# single-image multi-round conversation (单图多轮对话)
question = '<image>\nPlease describe the image in detail.'
response, history = model.chat(tokenizer, pixel_values, question, generation_config, history=None, return_history=True)
print(f'User: {question}\nAssistant: {response}')
question = 'Please write a poem according to the image.'
response, history = model.chat(tokenizer, pixel_values, question, generation_config, history=history, return_history=True)
print(f'User: {question}\nAssistant: {response}')
# multi-image multi-round conversation, combined images (多图多轮对话,拼接图像)
pixel_values1 = load_image('./examples/image1.jpg', max_num=12).to(torch.bfloat16).cuda()
pixel_values2 = load_image('./examples/image2.jpg', max_num=12).to(torch.bfloat16).cuda()
pixel_values = torch.cat((pixel_values1, pixel_values2), dim=0)
question = '<image>\nDescribe the two images in detail.'
response, history = model.chat(tokenizer, pixel_values, question, generation_config,
history=None, return_history=True)
print(f'User: {question}\nAssistant: {response}')
question = 'What are the similarities and differences between these two images.'
response, history = model.chat(tokenizer, pixel_values, question, generation_config,
history=history, return_history=True)
print(f'User: {question}\nAssistant: {response}')
# multi-image multi-round conversation, separate images (多图多轮对话,独立图像)
pixel_values1 = load_image('./examples/image1.jpg', max_num=12).to(torch.bfloat16).cuda()
pixel_values2 = load_image('./examples/image2.jpg', max_num=12).to(torch.bfloat16).cuda()
pixel_values = torch.cat((pixel_values1, pixel_values2), dim=0)
num_patches_list = [pixel_values1.size(0), pixel_values2.size(0)]
question = 'Image-1: <image>\nImage-2: <image>\nDescribe the two images in detail.'
response, history = model.chat(tokenizer, pixel_values, question, generation_config,
num_patches_list=num_patches_list,
history=None, return_history=True)
print(f'User: {question}\nAssistant: {response}')
question = 'What are the similarities and differences between these two images.'
response, history = model.chat(tokenizer, pixel_values, question, generation_config,
num_patches_list=num_patches_list,
history=history, return_history=True)
print(f'User: {question}\nAssistant: {response}')
# batch inference, single image per sample (单图批处理)
pixel_values1 = load_image('./examples/image1.jpg', max_num=12).to(torch.bfloat16).cuda()
pixel_values2 = load_image('./examples/image2.jpg', max_num=12).to(torch.bfloat16).cuda()
num_patches_list = [pixel_values1.size(0), pixel_values2.size(0)]
pixel_values = torch.cat((pixel_values1, pixel_values2), dim=0)
questions = ['<image>\nDescribe the image in detail.'] * len(num_patches_list)
responses = model.batch_chat(tokenizer, pixel_values,
num_patches_list=num_patches_list,
questions=questions,
generation_config=generation_config)
for question, response in zip(questions, responses):
print(f'User: {question}\nAssistant: {response}')
# video multi-round conversation (视频多轮对话)
def get_index(bound, fps, max_frame, first_idx=0, num_segments=32):
if bound:
start, end = bound[0], bound[1]
else:
start, end = -100000, 100000
start_idx = max(first_idx, round(start * fps))
end_idx = min(round(end * fps), max_frame)
seg_size = float(end_idx - start_idx) / num_segments
frame_indices = np.array([
int(start_idx + (seg_size / 2) + np.round(seg_size * idx))
for idx in range(num_segments)
])
return frame_indices
def load_video(video_path, bound=None, input_size=448, max_num=1, num_segments=32):
vr = VideoReader(video_path, ctx=cpu(0), num_threads=1)
max_frame = len(vr) - 1
fps = float(vr.get_avg_fps())
pixel_values_list, num_patches_list = [], []
transform = build_transform(input_size=input_size)
frame_indices = get_index(bound, fps, max_frame, first_idx=0, num_segments=num_segments)
for frame_index in frame_indices:
img = Image.fromarray(vr[frame_index].asnumpy()).convert('RGB')
img = dynamic_preprocess(img, image_size=input_size, use_thumbnail=True, max_num=max_num)
pixel_values = [transform(tile) for tile in img]
pixel_values = torch.stack(pixel_values)
num_patches_list.append(pixel_values.shape[0])
pixel_values_list.append(pixel_values)
pixel_values = torch.cat(pixel_values_list)
return pixel_values, num_patches_list
video_path = './examples/red-panda.mp4'
pixel_values, num_patches_list = load_video(video_path, num_segments=8, max_num=1)
pixel_values = pixel_values.to(torch.bfloat16).cuda()
video_prefix = ''.join([f'Frame{i+1}: <image>\n' for i in range(len(num_patches_list))])
question = video_prefix + 'What is the red panda doing?'
# Frame1: <image>\nFrame2: <image>\n...\nFrame8: <image>\n{question}
response, history = model.chat(tokenizer, pixel_values, question, generation_config,
num_patches_list=num_patches_list, history=None, return_history=True)
print(f'User: {question}\nAssistant: {response}')
question = 'Describe this video in detail.'
response, history = model.chat(tokenizer, pixel_values, question, generation_config,
num_patches_list=num_patches_list, history=history, return_history=True)
print(f'User: {question}\nAssistant: {response}')
```
#### Streaming Output
Besides this method, you can also use the following code to get streamed output.
```python
from transformers import TextIteratorStreamer
from threading import Thread
# Initialize the streamer
streamer = TextIteratorStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True, timeout=10)
# Define the generation configuration
generation_config = dict(max_new_tokens=1024, do_sample=False, streamer=streamer)
# Start the model chat in a separate thread
thread = Thread(target=model.chat, kwargs=dict(
tokenizer=tokenizer, pixel_values=pixel_values, question=question,
history=None, return_history=False, generation_config=generation_config,
))
thread.start()
# Initialize an empty string to store the generated text
generated_text = ''
# Loop through the streamer to get the new text as it is generated
for new_text in streamer:
if new_text == model.conv_template.sep:
break
generated_text += new_text
print(new_text, end='', flush=True) # Print each new chunk of generated text on the same line
```
## Finetune
Many repositories now support fine-tuning of the InternVL series models, including [InternVL](https://github.com/OpenGVLab/InternVL), [SWIFT](https://github.com/modelscope/ms-swift), [XTuner](https://github.com/InternLM/xtuner), and others. Please refer to their documentation for more details on fine-tuning.
## Deployment
### LMDeploy
LMDeploy is a toolkit for compressing, deploying, and serving LLMs & VLMs.
```sh
# if lmdeploy<0.7.3, you need to explicitly set chat_template_config=ChatTemplateConfig(model_name='internvl2_5')
pip install "lmdeploy>=0.7.3"
```
LMDeploy abstracts the complex inference process of multi-modal Vision-Language Models (VLM) into an easy-to-use pipeline, similar to the Large Language Model (LLM) inference pipeline.
#### A 'Hello, world' Example
```python
from lmdeploy import pipeline, TurbomindEngineConfig, ChatTemplateConfig
from lmdeploy.vl import load_image
model = 'OpenGVLab/InternVL3-9B'
image = load_image('https://raw.githubusercontent.com/open-mmlab/mmdeploy/main/tests/data/tiger.jpeg')
pipe = pipeline(model, backend_config=TurbomindEngineConfig(session_len=16384, tp=1), chat_template_config=ChatTemplateConfig(model_name='internvl2_5'))
response = pipe(('describe this image', image))
print(response.text)
```
If an `ImportError` occurs while running this example, please install the required dependencies as prompted.
#### Multi-images Inference
When dealing with multiple images, you can put them all in one list. Keep in mind that multiple images will lead to a higher number of input tokens, and as a result, the size of the context window typically needs to be increased.
```python
from lmdeploy import pipeline, TurbomindEngineConfig, ChatTemplateConfig
from lmdeploy.vl import load_image
from lmdeploy.vl.constants import IMAGE_TOKEN
model = 'OpenGVLab/InternVL3-9B'
pipe = pipeline(model, backend_config=TurbomindEngineConfig(session_len=16384, tp=1), chat_template_config=ChatTemplateConfig(model_name='internvl2_5'))
image_urls=[
'https://raw.githubusercontent.com/open-mmlab/mmdeploy/main/demo/resources/human-pose.jpg',
'https://raw.githubusercontent.com/open-mmlab/mmdeploy/main/demo/resources/det.jpg'
]
images = [load_image(img_url) for img_url in image_urls]
# Numbering images improves multi-image conversations
response = pipe((f'Image-1: {IMAGE_TOKEN}\nImage-2: {IMAGE_TOKEN}\ndescribe these two images', images))
print(response.text)
```
#### Batch Prompts Inference
Conducting inference with batch prompts is quite straightforward; just place them within a list structure:
```python
from lmdeploy import pipeline, TurbomindEngineConfig, ChatTemplateConfig
from lmdeploy.vl import load_image
model = 'OpenGVLab/InternVL3-9B'
pipe = pipeline(model, backend_config=TurbomindEngineConfig(session_len=16384, tp=1), chat_template_config=ChatTemplateConfig(model_name='internvl2_5'))
image_urls=[
"https://raw.githubusercontent.com/open-mmlab/mmdeploy/main/demo/resources/human-pose.jpg",
"https://raw.githubusercontent.com/open-mmlab/mmdeploy/main/demo/resources/det.jpg"
]
prompts = [('describe this image', load_image(img_url)) for img_url in image_urls]
response = pipe(prompts)
print(response)
```
#### Multi-turn Conversation
There are two ways to run multi-turn conversations with the pipeline. One is to construct messages in the OpenAI format and use the method introduced above; the other is to use the `pipeline.chat` interface. The `pipeline.chat` example comes first, followed by a sketch of the message-based approach.
```python
from lmdeploy import pipeline, TurbomindEngineConfig, GenerationConfig, ChatTemplateConfig
from lmdeploy.vl import load_image
model = 'OpenGVLab/InternVL3-9B'
pipe = pipeline(model, backend_config=TurbomindEngineConfig(session_len=16384, tp=1), chat_template_config=ChatTemplateConfig(model_name='internvl2_5'))
image = load_image('https://raw.githubusercontent.com/open-mmlab/mmdeploy/main/demo/resources/human-pose.jpg')
gen_config = GenerationConfig(top_k=40, top_p=0.8, temperature=0.8)
sess = pipe.chat(('describe this image', image), gen_config=gen_config)
print(sess.response.text)
sess = pipe.chat('What is the woman doing?', session=sess, gen_config=gen_config)
print(sess.response.text)
```
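For the first approach, the sketch below keeps the dialogue as OpenAI-style messages and re-sends the full history on every turn. The message fields follow LMDeploy's GPT-4V-style interface; treat the exact schema as an assumption and check the LMDeploy documentation if it differs in your version.
```python
from lmdeploy import pipeline, TurbomindEngineConfig, GenerationConfig, ChatTemplateConfig
model = 'OpenGVLab/InternVL3-9B'
pipe = pipeline(model, backend_config=TurbomindEngineConfig(session_len=16384, tp=1), chat_template_config=ChatTemplateConfig(model_name='internvl2_5'))
gen_config = GenerationConfig(top_k=40, top_p=0.8, temperature=0.8)
image_url = 'https://raw.githubusercontent.com/open-mmlab/mmdeploy/main/demo/resources/human-pose.jpg'
# first turn: a user message carrying both the text prompt and the image
messages = [dict(role='user', content=[
    dict(type='text', text='describe this image'),
    dict(type='image_url', image_url=dict(url=image_url))
])]
out = pipe(messages, gen_config=gen_config)
print(out.text)
# second turn: append the assistant reply and the follow-up question, then call the pipeline again
messages.append(dict(role='assistant', content=out.text))
messages.append(dict(role='user', content='What is the woman doing?'))
out = pipe(messages, gen_config=gen_config)
print(out.text)
```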
#### Service
LMDeploy's `api_server` enables models to be easily packed into services with a single command. The provided RESTful APIs are compatible with OpenAI's interfaces. Below is an example of service startup:
```shell
lmdeploy serve api_server OpenGVLab/InternVL3-9B --chat-template internvl2_5 --server-port 23333 --tp 1
```
To use the OpenAI-style interface, you need to install OpenAI:
```shell
pip install openai
```
Then, use the code below to make the API call:
```python
from openai import OpenAI
client = OpenAI(api_key='YOUR_API_KEY', base_url='http://0.0.0.0:23333/v1')
model_name = client.models.list().data[0].id
response = client.chat.completions.create(
model=model_name,
messages=[{
'role':
'user',
'content': [{
'type': 'text',
'text': 'describe this image',
}, {
'type': 'image_url',
'image_url': {
'url':
'https://modelscope.oss-cn-beijing.aliyuncs.com/resource/tiger.jpeg',
},
}],
}],
temperature=0.8,
top_p=0.8)
print(response)
```
## License
This project is released under the MIT License.
## Citation
If you find this project useful in your research, please consider citing:
```BibTeX
@article{chen2024expanding,
title={Expanding Performance Boundaries of Open-Source Multimodal Models with Model, Data, and Test-Time Scaling},
author={Chen, Zhe and Wang, Weiyun and Cao, Yue and Liu, Yangzhou and Gao, Zhangwei and Cui, Erfei and Zhu, Jinguo and Ye, Shenglong and Tian, Hao and Liu, Zhaoyang and others},
journal={arXiv preprint arXiv:2412.05271},
year={2024}
}
@article{wang2024mpo,
title={Enhancing the Reasoning Ability of Multimodal Large Language Models via Mixed Preference Optimization},
author={Wang, Weiyun and Chen, Zhe and Wang, Wenhai and Cao, Yue and Liu, Yangzhou and Gao, Zhangwei and Zhu, Jinguo and Zhu, Xizhou and Lu, Lewei and Qiao, Yu and Dai, Jifeng},
journal={arXiv preprint arXiv:2411.10442},
year={2024}
}
@article{chen2024far,
title={How Far Are We to GPT-4V? Closing the Gap to Commercial Multimodal Models with Open-Source Suites},
author={Chen, Zhe and Wang, Weiyun and Tian, Hao and Ye, Shenglong and Gao, Zhangwei and Cui, Erfei and Tong, Wenwen and Hu, Kongzhi and Luo, Jiapeng and Ma, Zheng and others},
journal={arXiv preprint arXiv:2404.16821},
year={2024}
}
@inproceedings{chen2024internvl,
title={Internvl: Scaling up vision foundation models and aligning for generic visual-linguistic tasks},
author={Chen, Zhe and Wu, Jiannan and Wang, Wenhai and Su, Weijie and Chen, Guo and Xing, Sen and Zhong, Muyan and Zhang, Qinglong and Zhu, Xizhou and Lu, Lewei and others},
booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
pages={24185--24198},
year={2024}
}
``` |
OpenGVLab/InternVL3-8B-Pretrained | OpenGVLab | 2025-04-25T03:15:47Z | 22 | 0 | transformers | [
"transformers",
"safetensors",
"internvl_chat",
"feature-extraction",
"internvl",
"custom_code",
"image-text-to-text",
"conversational",
"multilingual",
"arxiv:2312.14238",
"arxiv:2404.16821",
"arxiv:2412.05271",
"arxiv:2411.10442",
"arxiv:2504.10479",
"arxiv:2412.09616",
"base_model:OpenGVLab/InternViT-300M-448px-V2_5",
"base_model:merge:OpenGVLab/InternViT-300M-448px-V2_5",
"base_model:Qwen/Qwen2.5-7B",
"base_model:merge:Qwen/Qwen2.5-7B",
"license:apache-2.0",
"region:us"
]
| image-text-to-text | 2025-04-17T07:45:52Z | ---
license: apache-2.0
license_name: qwen
license_link: https://huggingface.co/Qwen/Qwen2.5-72B-Instruct/blob/main/LICENSE
pipeline_tag: image-text-to-text
library_name: transformers
base_model:
- OpenGVLab/InternViT-300M-448px-V2_5
- Qwen/Qwen2.5-7B
base_model_relation: merge
language:
- multilingual
tags:
- internvl
- custom_code
---
# InternVL3-8B-Pretrained
[\[📂 GitHub\]](https://github.com/OpenGVLab/InternVL) [\[📜 InternVL 1.0\]](https://huggingface.co/papers/2312.14238) [\[📜 InternVL 1.5\]](https://huggingface.co/papers/2404.16821) [\[📜 InternVL 2.5\]](https://huggingface.co/papers/2412.05271) [\[📜 InternVL2.5-MPO\]](https://huggingface.co/papers/2411.10442) [\[📜 InternVL3\]](https://huggingface.co/papers/2504.10479)
[\[🆕 Blog\]](https://internvl.github.io/blog/) [\[🗨️ Chat Demo\]](https://internvl.opengvlab.com/) [\[🤗 HF Demo\]](https://huggingface.co/spaces/OpenGVLab/InternVL) [\[🚀 Quick Start\]](#quick-start) [\[📖 Documents\]](https://internvl.readthedocs.io/en/latest/)
<div align="center">
<img width="500" alt="image" src="https://cdn-uploads.huggingface.co/production/uploads/64006c09330a45b03605bba3/zJsd2hqd3EevgXo6fNgC-.png">
</div>
## Introduction
***This is the pretrained version of InternVL3-8B, which has undergone native multimodal pre-training but has not undergone post-training (i.e., SFT and MPO). If you're unsure which version to use, please use the [InternVL3-8B](https://huggingface.co/OpenGVLab/InternVL3-8B) version.***
We introduce InternVL3, an advanced multimodal large language model (MLLM) series that demonstrates superior overall performance.
Compared to InternVL 2.5, InternVL3 exhibits superior multimodal perception and reasoning capabilities, while further extending its multimodal capabilities to encompass tool usage, GUI agents, industrial image analysis, 3D vision perception, and more.
Additionally, we compare InternVL3 with Qwen2.5 Chat models, whose corresponding pre-trained base models are employed as the initialization of the language component in InternVL3. Benefiting from Native Multimodal Pre-Training, the InternVL3 series achieves even better overall text performance than the Qwen2.5 series.

## InternVL3 Family
In the following table, we provide an overview of the InternVL3 series.
| Model Name | Vision Part | Language Part | HF Link |
| :-----------: | :-------------------------------------------------------------------------------------: | :----------------------------------------------------------------------------: | :------------------------------------------------------: |
| InternVL3-1B | [InternViT-300M-448px-V2_5](https://huggingface.co/OpenGVLab/InternViT-300M-448px-V2_5) | [Qwen2.5-0.5B](https://huggingface.co/Qwen/Qwen2.5-0.5B) | [🤗 link](https://huggingface.co/OpenGVLab/InternVL3-1B) |
| InternVL3-2B | [InternViT-300M-448px-V2_5](https://huggingface.co/OpenGVLab/InternViT-300M-448px-V2_5) | [Qwen2.5-1.5B](https://huggingface.co/Qwen/Qwen2.5-1.5B) | [🤗 link](https://huggingface.co/OpenGVLab/InternVL3-2B) |
| InternVL3-8B | [InternViT-300M-448px-V2_5](https://huggingface.co/OpenGVLab/InternViT-300M-448px-V2_5) | [Qwen2.5-7B](https://huggingface.co/Qwen/Qwen2.5-7B) | [🤗 link](https://huggingface.co/OpenGVLab/InternVL3-8B) |
| InternVL3-9B | [InternViT-300M-448px-V2_5](https://huggingface.co/OpenGVLab/InternViT-300M-448px-V2_5) | [internlm3-8b-instruct](https://huggingface.co/internlm/internlm3-8b-instruct) | [🤗 link](https://huggingface.co/OpenGVLab/InternVL3-9B) |
| InternVL3-14B | [InternViT-300M-448px-V2_5](https://huggingface.co/OpenGVLab/InternViT-300M-448px-V2_5) | [Qwen2.5-14B](https://huggingface.co/Qwen/Qwen2.5-14B) | [🤗 link](https://huggingface.co/OpenGVLab/InternVL3-14B) |
| InternVL3-38B | [InternViT-6B-448px-V2_5](https://huggingface.co/OpenGVLab/InternViT-6B-448px-V2_5) | [Qwen2.5-32B](https://huggingface.co/Qwen/Qwen2.5-32B) | [🤗 link](https://huggingface.co/OpenGVLab/InternVL3-38B) |
| InternVL3-78B | [InternViT-6B-448px-V2_5](https://huggingface.co/OpenGVLab/InternViT-6B-448px-V2_5) | [Qwen2.5-72B](https://huggingface.co/Qwen/Qwen2.5-72B) | [🤗 link](https://huggingface.co/OpenGVLab/InternVL3-78B) |

## Model Architecture
As shown in the following figure, [InternVL3](https://internvl.github.io/blog/2025-04-11-InternVL-3/) retains the same model architecture as [InternVL 2.5](https://internvl.github.io/blog/2024-12-05-InternVL-2.5/) and its predecessors, InternVL 1.5 and 2.0, following the "ViT-MLP-LLM" paradigm. In this new version, we integrate a newly incrementally pre-trained InternViT with various pre-trained LLMs, including InternLM 3 and Qwen 2.5, using a randomly initialized MLP projector.

As in the previous version, we applied a pixel unshuffle operation, reducing the number of visual tokens to one-quarter of the original. Besides, we adopted a similar dynamic resolution strategy as InternVL 1.5, dividing images into tiles of 448×448 pixels. The key difference, starting from InternVL 2.0, is that we additionally introduced support for multi-image and video data.
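To make the pixel unshuffle step concrete, the following is a minimal, self-contained sketch of the operation: each 2×2 spatial neighborhood of ViT features is folded into the channel dimension, so the number of visual tokens per tile drops to one quarter. The tensor shapes are illustrative and not taken from the actual InternVL implementation.
```python
import torch
# dummy ViT feature map for one 448x448 tile: a 32x32 grid of patch features with 1024 channels (illustrative numbers)
vit_features = torch.randn(1, 32, 32, 1024)
def pixel_unshuffle(x, scale=2):
    # fold each (scale x scale) spatial block into the channel dimension,
    # reducing the number of spatial positions (visual tokens) by scale**2
    n, h, w, c = x.shape
    x = x.view(n, h // scale, scale, w // scale, scale, c)
    x = x.permute(0, 1, 3, 2, 4, 5).contiguous()
    return x.view(n, h // scale, w // scale, c * scale * scale)
out = pixel_unshuffle(vit_features)
print(vit_features.shape, '->', out.shape)  # 1024 tokens -> 256 tokens, channels 1024 -> 4096
```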
Notably, in InternVL3, we integrate the [Variable Visual Position Encoding (V2PE)](https://arxiv.org/abs/2412.09616), which utilizes smaller, more flexible position increments for visual tokens. Benefiting from V2PE, InternVL3 exhibits better long context understanding capabilities compared to its predecessors.
## Training Strategy
### Native Multimodal Pre-Training
We propose a [Native Multimodal Pre-Training](https://huggingface.co/papers/2504.10479) approach that consolidates language and vision learning into a single pre-training stage.
In contrast to standard paradigms that first train a language-only model and subsequently adapt it to handle additional modalities, our method interleaves multimodal data (e.g., image-text, video-text, or image-text interleaved sequences) with large-scale textual corpora. This unified training scheme allows the model to learn both linguistic and multimodal representations simultaneously, ultimately enhancing its capability to handle vision-language tasks without the need for separate alignment or bridging modules.
Please see [our paper](https://huggingface.co/papers/2504.10479) for more details.
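The data-side idea can be pictured as a single stream in which text-only and multimodal samples are drawn into the same batches. The sketch below is only a schematic of such interleaving; the sampling ratio, batch size, and sample formats are placeholders rather than the actual training recipe.
```python
import random
def interleaved_batches(text_corpus, multimodal_corpus, multimodal_ratio=0.5, batch_size=4, seed=0):
    # yield batches that mix text-only and multimodal samples in one training stream;
    # the ratio and batch size are illustrative placeholders
    rng = random.Random(seed)
    while True:
        batch = []
        for _ in range(batch_size):
            if rng.random() < multimodal_ratio:
                batch.append(rng.choice(multimodal_corpus))  # e.g. {'image': ..., 'text': ...}
            else:
                batch.append(rng.choice(text_corpus))        # e.g. {'text': ...}
        yield batch
text_corpus = [{'text': 'A passage of web text ...'}]
multimodal_corpus = [{'image': '<image tensor>', 'text': 'A caption describing the image ...'}]
print(next(interleaved_batches(text_corpus, multimodal_corpus)))
```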
### Supervised Fine-Tuning
In this phase, the techniques of random JPEG compression, square loss re-weighting, and multimodal data packing proposed in [InternVL2.5](https://arxiv.org/abs/2412.05271) are also employed in the InternVL3 series.
The main advancement of the SFT phase in InternVL3 compared to InternVL2.5 lies in the use of higher-quality and more diverse training data.
Specifically, we further extend training samples for tool use, 3D scene understanding, GUI operations, long context tasks, video understanding, scientific diagrams, creative writing, and multimodal reasoning.
### Mixed Preference Optimization
During Pre-training and SFT, the model is trained to predict the next token conditioned on previous ground-truth tokens.
However, during inference, the model predicts each token based on its own prior outputs.
This discrepancy between ground-truth tokens and model-predicted tokens introduces a distribution shift, which can impair the model’s Chain-of-Thought (CoT) reasoning capabilities.
To mitigate this issue, we employ [MPO](https://arxiv.org/abs/2411.10442), which introduces additional supervision from both positive and negative samples to align the model response distribution with the ground-truth distribution, thereby improving reasoning performance.
Specifically, the training objective of MPO is a combination of
preference loss \\(\mathcal{L}_{\text{p}}\\),
quality loss \\(\mathcal{L}_{\text{q}}\\),
and generation loss \\(\mathcal{L}_{\text{g}}\\),
which can be formulated as follows:
$$
\mathcal{L}=w_{p}\cdot\mathcal{L}_{\text{p}} + w_{q}\cdot\mathcal{L}_{\text{q}} + w_{g}\cdot\mathcal{L}_{\text{g}},
$$
where \\(w_{*}\\) represents the weight assigned to each loss component. Please see [our paper](https://arxiv.org/abs/2411.10442) for more details about MPO.
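As a minimal PyTorch sketch of how the three terms are combined into a single objective (the individual losses and the weights below are placeholders; see the MPO paper for the actual definitions and values):
```python
import torch
def mpo_loss(preference_loss, quality_loss, generation_loss, w_p=1.0, w_q=1.0, w_g=1.0):
    # weighted sum of the three MPO terms; the weights here are placeholders
    return w_p * preference_loss + w_q * quality_loss + w_g * generation_loss
# dummy scalars standing in for L_p (preference), L_q (quality), and L_g (generation, i.e. the usual LM loss)
l_p, l_q, l_g = torch.tensor(0.42), torch.tensor(0.31), torch.tensor(1.87)
print(mpo_loss(l_p, l_q, l_g))
```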
### Test-Time Scaling
Test-Time Scaling has been shown to be an effective method to enhance the reasoning abilities of LLMs and MLLMs.
In this work, we use the Best-of-N evaluation strategy and employ [VisualPRM-8B](https://huggingface.co/OpenGVLab/VisualPRM-8B) as the critic model to select the best response for reasoning and mathematics evaluation.
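Schematically, Best-of-N sampling generates several candidate responses and keeps the one the critic scores highest. The `generate` and `score` callables below are hypothetical stand-ins for the policy model and the critic (e.g. VisualPRM-8B), not their actual APIs.
```python
def best_of_n(question, pixel_values, generate, score, n=8):
    # sample n candidate responses and return the one with the highest critic score
    candidates = [generate(question, pixel_values) for _ in range(n)]
    scores = [score(question, c) for c in candidates]
    best_idx = max(range(n), key=lambda i: scores[i])
    return candidates[best_idx], scores[best_idx]
# toy usage with dummy stand-ins for the policy and the critic
best, best_score = best_of_n('What is 2 + 2?', None,
                             generate=lambda q, pv: '4',
                             score=lambda q, r: len(r))
print(best, best_score)
```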
## Evaluation on Multimodal Capability
### Multimodal Reasoning and Mathematics

### OCR, Chart, and Document Understanding

### Multi-Image & Real-World Comprehension

### Comprehensive Multimodal & Hallucination Evaluation

### Visual Grounding

### Multimodal Multilingual Understanding

### Video Understanding

### GUI Grounding

### Spatial Reasoning

## Evaluation on Language Capability
We compare InternVL3 with Qwen2.5 Chat models, whose corresponding pre-trained base models are employed as the initialization of the language component in InternVL3.
Benefiting from Native Multimodal Pre-Training, the InternVL3 series achieves even better overall text performance than the Qwen2.5 series.
Please note that the evaluation scores of Qwen2.5 series may differ from those officially reported, as we have adopted the prompt versions provided in the table across all datasets for OpenCompass evaluation.

## Ablation Study
### Native Multimodal Pre-Training
We conduct experiments on the InternVL2-8B model while keeping its architecture, initialization parameters, and training data entirely unchanged. Traditionally, InternVL2-8B employs a training pipeline that begins with an MLP warmup phase for feature alignment followed by an Instruction Tuning stage. In our experiments, we substitute the conventional MLP warmup phase with a native multimodal pre-training process. This modification isolates the contribution of native multimodal pre-training to the overall multimodal capability of the model.
The evaluation results in the figure below show that the model with native multimodal pre-training exhibits performance on most benchmarks that is comparable to the fully multi-stage-trained InternVL2-8B baseline. Furthermore, when followed by instruction tuning on higher-quality data, the model demonstrates further performance gains across the evaluated multimodal tasks. These findings underscore the efficiency of native multimodal pre-training in imparting powerful multimodal capabilities to MLLMs.

### Mixed Preference Optimization
As shown in the table below, models fine-tuned with MPO demonstrate superior reasoning performance across seven multimodal reasoning benchmarks compared to their counterparts without MPO. Specifically, InternVL3-78B and InternVL3-38B outperform their counterparts by 4.1 and 4.5 points, respectively. Notably, the training data used for MPO is a subset of that used for SFT, indicating that the performance improvements primarily stem from the training algorithm rather than the training data.

### Variable Visual Position Encoding
As reported in the table below, the introduction of V2PE leads to significant performance gains across most evaluation metrics. In addition, our ablation studies—by varying the positional increment \\( \delta \\)—reveal that even for tasks primarily involving conventional contexts, relatively small \\( \delta \\) values can achieve optimal performance. These findings provide important insights for future efforts aimed at refining position encoding strategies for visual tokens in MLLMs.

## Quick Start
We provide example code to run `InternVL3-8B` using `transformers`.
> Please use transformers>=4.37.2 to ensure the model works normally.
### Model Loading
#### 16-bit (bf16 / fp16)
```python
import torch
from transformers import AutoTokenizer, AutoModel
path = "OpenGVLab/InternVL3-8B"
model = AutoModel.from_pretrained(
path,
torch_dtype=torch.bfloat16,
low_cpu_mem_usage=True,
use_flash_attn=True,
trust_remote_code=True).eval().cuda()
```
#### BNB 8-bit Quantization
```python
import torch
from transformers import AutoTokenizer, AutoModel
path = "OpenGVLab/InternVL3-8B"
model = AutoModel.from_pretrained(
path,
torch_dtype=torch.bfloat16,
load_in_8bit=True,
low_cpu_mem_usage=True,
use_flash_attn=True,
trust_remote_code=True).eval()
```
#### Multiple GPUs
The reason for writing the code this way is to avoid errors that occur during multi-GPU inference due to tensors not being on the same device. By ensuring that the first and last layers of the large language model (LLM) are on the same device, we prevent such errors.
```python
import math
import torch
from transformers import AutoConfig, AutoTokenizer, AutoModel
def split_model(model_path):
device_map = {}
world_size = torch.cuda.device_count()
config = AutoConfig.from_pretrained(model_path, trust_remote_code=True)
num_layers = config.llm_config.num_hidden_layers
# Since the first GPU will be used for ViT, treat it as half a GPU.
num_layers_per_gpu = math.ceil(num_layers / (world_size - 0.5))
num_layers_per_gpu = [num_layers_per_gpu] * world_size
num_layers_per_gpu[0] = math.ceil(num_layers_per_gpu[0] * 0.5)
layer_cnt = 0
for i, num_layer in enumerate(num_layers_per_gpu):
for j in range(num_layer):
device_map[f'language_model.model.layers.{layer_cnt}'] = i
layer_cnt += 1
device_map['vision_model'] = 0
device_map['mlp1'] = 0
device_map['language_model.model.tok_embeddings'] = 0
device_map['language_model.model.embed_tokens'] = 0
device_map['language_model.output'] = 0
device_map['language_model.model.norm'] = 0
device_map['language_model.model.rotary_emb'] = 0
device_map['language_model.lm_head'] = 0
device_map[f'language_model.model.layers.{num_layers - 1}'] = 0
return device_map
path = "OpenGVLab/InternVL3-8B"
device_map = split_model(path)
model = AutoModel.from_pretrained(
path,
torch_dtype=torch.bfloat16,
low_cpu_mem_usage=True,
use_flash_attn=True,
trust_remote_code=True,
device_map=device_map).eval()
```
### Inference with Transformers
```python
import math
import numpy as np
import torch
import torchvision.transforms as T
from decord import VideoReader, cpu
from PIL import Image
from torchvision.transforms.functional import InterpolationMode
from transformers import AutoConfig, AutoModel, AutoTokenizer
IMAGENET_MEAN = (0.485, 0.456, 0.406)
IMAGENET_STD = (0.229, 0.224, 0.225)
def build_transform(input_size):
MEAN, STD = IMAGENET_MEAN, IMAGENET_STD
transform = T.Compose([
T.Lambda(lambda img: img.convert('RGB') if img.mode != 'RGB' else img),
T.Resize((input_size, input_size), interpolation=InterpolationMode.BICUBIC),
T.ToTensor(),
T.Normalize(mean=MEAN, std=STD)
])
return transform
def find_closest_aspect_ratio(aspect_ratio, target_ratios, width, height, image_size):
best_ratio_diff = float('inf')
best_ratio = (1, 1)
area = width * height
for ratio in target_ratios:
target_aspect_ratio = ratio[0] / ratio[1]
ratio_diff = abs(aspect_ratio - target_aspect_ratio)
if ratio_diff < best_ratio_diff:
best_ratio_diff = ratio_diff
best_ratio = ratio
elif ratio_diff == best_ratio_diff:
if area > 0.5 * image_size * image_size * ratio[0] * ratio[1]:
best_ratio = ratio
return best_ratio
def dynamic_preprocess(image, min_num=1, max_num=12, image_size=448, use_thumbnail=False):
orig_width, orig_height = image.size
aspect_ratio = orig_width / orig_height
# calculate the existing image aspect ratio
target_ratios = set(
(i, j) for n in range(min_num, max_num + 1) for i in range(1, n + 1) for j in range(1, n + 1) if
i * j <= max_num and i * j >= min_num)
target_ratios = sorted(target_ratios, key=lambda x: x[0] * x[1])
# find the closest aspect ratio to the target
target_aspect_ratio = find_closest_aspect_ratio(
aspect_ratio, target_ratios, orig_width, orig_height, image_size)
# calculate the target width and height
target_width = image_size * target_aspect_ratio[0]
target_height = image_size * target_aspect_ratio[1]
blocks = target_aspect_ratio[0] * target_aspect_ratio[1]
# resize the image
resized_img = image.resize((target_width, target_height))
processed_images = []
for i in range(blocks):
box = (
(i % (target_width // image_size)) * image_size,
(i // (target_width // image_size)) * image_size,
((i % (target_width // image_size)) + 1) * image_size,
((i // (target_width // image_size)) + 1) * image_size
)
# split the image
split_img = resized_img.crop(box)
processed_images.append(split_img)
assert len(processed_images) == blocks
if use_thumbnail and len(processed_images) != 1:
thumbnail_img = image.resize((image_size, image_size))
processed_images.append(thumbnail_img)
return processed_images
def load_image(image_file, input_size=448, max_num=12):
image = Image.open(image_file).convert('RGB')
transform = build_transform(input_size=input_size)
images = dynamic_preprocess(image, image_size=input_size, use_thumbnail=True, max_num=max_num)
pixel_values = [transform(image) for image in images]
pixel_values = torch.stack(pixel_values)
return pixel_values
def split_model(model_path):
device_map = {}
world_size = torch.cuda.device_count()
config = AutoConfig.from_pretrained(model_path, trust_remote_code=True)
num_layers = config.llm_config.num_hidden_layers
# Since the first GPU will be used for ViT, treat it as half a GPU.
num_layers_per_gpu = math.ceil(num_layers / (world_size - 0.5))
num_layers_per_gpu = [num_layers_per_gpu] * world_size
num_layers_per_gpu[0] = math.ceil(num_layers_per_gpu[0] * 0.5)
layer_cnt = 0
for i, num_layer in enumerate(num_layers_per_gpu):
for j in range(num_layer):
device_map[f'language_model.model.layers.{layer_cnt}'] = i
layer_cnt += 1
device_map['vision_model'] = 0
device_map['mlp1'] = 0
device_map['language_model.model.tok_embeddings'] = 0
device_map['language_model.model.embed_tokens'] = 0
device_map['language_model.output'] = 0
device_map['language_model.model.norm'] = 0
device_map['language_model.model.rotary_emb'] = 0
device_map['language_model.lm_head'] = 0
device_map[f'language_model.model.layers.{num_layers - 1}'] = 0
return device_map
# If you set `load_in_8bit=True`, you will need two 80GB GPUs.
# If you set `load_in_8bit=False`, you will need at least three 80GB GPUs.
path = 'OpenGVLab/InternVL3-8B'
device_map = split_model(path)
model = AutoModel.from_pretrained(
path,
torch_dtype=torch.bfloat16,
load_in_8bit=False,
low_cpu_mem_usage=True,
use_flash_attn=True,
trust_remote_code=True,
device_map=device_map).eval()
tokenizer = AutoTokenizer.from_pretrained(path, trust_remote_code=True, use_fast=False)
# set the max number of tiles in `max_num`
pixel_values = load_image('./examples/image1.jpg', max_num=12).to(torch.bfloat16).cuda()
generation_config = dict(max_new_tokens=1024, do_sample=True)
# pure-text conversation (纯文本对话)
question = 'Hello, who are you?'
response, history = model.chat(tokenizer, None, question, generation_config, history=None, return_history=True)
print(f'User: {question}\nAssistant: {response}')
question = 'Can you tell me a story?'
response, history = model.chat(tokenizer, None, question, generation_config, history=history, return_history=True)
print(f'User: {question}\nAssistant: {response}')
# single-image single-round conversation (单图单轮对话)
question = '<image>\nPlease describe the image shortly.'
response = model.chat(tokenizer, pixel_values, question, generation_config)
print(f'User: {question}\nAssistant: {response}')
# single-image multi-round conversation (单图多轮对话)
question = '<image>\nPlease describe the image in detail.'
response, history = model.chat(tokenizer, pixel_values, question, generation_config, history=None, return_history=True)
print(f'User: {question}\nAssistant: {response}')
question = 'Please write a poem according to the image.'
response, history = model.chat(tokenizer, pixel_values, question, generation_config, history=history, return_history=True)
print(f'User: {question}\nAssistant: {response}')
# multi-image multi-round conversation, combined images (多图多轮对话,拼接图像)
pixel_values1 = load_image('./examples/image1.jpg', max_num=12).to(torch.bfloat16).cuda()
pixel_values2 = load_image('./examples/image2.jpg', max_num=12).to(torch.bfloat16).cuda()
pixel_values = torch.cat((pixel_values1, pixel_values2), dim=0)
question = '<image>\nDescribe the two images in detail.'
response, history = model.chat(tokenizer, pixel_values, question, generation_config,
history=None, return_history=True)
print(f'User: {question}\nAssistant: {response}')
question = 'What are the similarities and differences between these two images.'
response, history = model.chat(tokenizer, pixel_values, question, generation_config,
history=history, return_history=True)
print(f'User: {question}\nAssistant: {response}')
# multi-image multi-round conversation, separate images (多图多轮对话,独立图像)
pixel_values1 = load_image('./examples/image1.jpg', max_num=12).to(torch.bfloat16).cuda()
pixel_values2 = load_image('./examples/image2.jpg', max_num=12).to(torch.bfloat16).cuda()
pixel_values = torch.cat((pixel_values1, pixel_values2), dim=0)
num_patches_list = [pixel_values1.size(0), pixel_values2.size(0)]
question = 'Image-1: <image>\nImage-2: <image>\nDescribe the two images in detail.'
response, history = model.chat(tokenizer, pixel_values, question, generation_config,
num_patches_list=num_patches_list,
history=None, return_history=True)
print(f'User: {question}\nAssistant: {response}')
question = 'What are the similarities and differences between these two images.'
response, history = model.chat(tokenizer, pixel_values, question, generation_config,
num_patches_list=num_patches_list,
history=history, return_history=True)
print(f'User: {question}\nAssistant: {response}')
# batch inference, single image per sample (单图批处理)
pixel_values1 = load_image('./examples/image1.jpg', max_num=12).to(torch.bfloat16).cuda()
pixel_values2 = load_image('./examples/image2.jpg', max_num=12).to(torch.bfloat16).cuda()
num_patches_list = [pixel_values1.size(0), pixel_values2.size(0)]
pixel_values = torch.cat((pixel_values1, pixel_values2), dim=0)
questions = ['<image>\nDescribe the image in detail.'] * len(num_patches_list)
responses = model.batch_chat(tokenizer, pixel_values,
num_patches_list=num_patches_list,
questions=questions,
generation_config=generation_config)
for question, response in zip(questions, responses):
print(f'User: {question}\nAssistant: {response}')
# video multi-round conversation (视频多轮对话)
def get_index(bound, fps, max_frame, first_idx=0, num_segments=32):
if bound:
start, end = bound[0], bound[1]
else:
start, end = -100000, 100000
start_idx = max(first_idx, round(start * fps))
end_idx = min(round(end * fps), max_frame)
seg_size = float(end_idx - start_idx) / num_segments
frame_indices = np.array([
int(start_idx + (seg_size / 2) + np.round(seg_size * idx))
for idx in range(num_segments)
])
return frame_indices
def load_video(video_path, bound=None, input_size=448, max_num=1, num_segments=32):
vr = VideoReader(video_path, ctx=cpu(0), num_threads=1)
max_frame = len(vr) - 1
fps = float(vr.get_avg_fps())
pixel_values_list, num_patches_list = [], []
transform = build_transform(input_size=input_size)
frame_indices = get_index(bound, fps, max_frame, first_idx=0, num_segments=num_segments)
for frame_index in frame_indices:
img = Image.fromarray(vr[frame_index].asnumpy()).convert('RGB')
img = dynamic_preprocess(img, image_size=input_size, use_thumbnail=True, max_num=max_num)
pixel_values = [transform(tile) for tile in img]
pixel_values = torch.stack(pixel_values)
num_patches_list.append(pixel_values.shape[0])
pixel_values_list.append(pixel_values)
pixel_values = torch.cat(pixel_values_list)
return pixel_values, num_patches_list
video_path = './examples/red-panda.mp4'
pixel_values, num_patches_list = load_video(video_path, num_segments=8, max_num=1)
pixel_values = pixel_values.to(torch.bfloat16).cuda()
video_prefix = ''.join([f'Frame{i+1}: <image>\n' for i in range(len(num_patches_list))])
question = video_prefix + 'What is the red panda doing?'
# Frame1: <image>\nFrame2: <image>\n...\nFrame8: <image>\n{question}
response, history = model.chat(tokenizer, pixel_values, question, generation_config,
num_patches_list=num_patches_list, history=None, return_history=True)
print(f'User: {question}\nAssistant: {response}')
question = 'Describe this video in detail.'
response, history = model.chat(tokenizer, pixel_values, question, generation_config,
num_patches_list=num_patches_list, history=history, return_history=True)
print(f'User: {question}\nAssistant: {response}')
```
#### Streaming Output
Besides this method, you can also use the following code to get streamed output.
```python
from transformers import TextIteratorStreamer
from threading import Thread
# Initialize the streamer
streamer = TextIteratorStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True, timeout=10)
# Define the generation configuration
generation_config = dict(max_new_tokens=1024, do_sample=False, streamer=streamer)
# Start the model chat in a separate thread
thread = Thread(target=model.chat, kwargs=dict(
tokenizer=tokenizer, pixel_values=pixel_values, question=question,
history=None, return_history=False, generation_config=generation_config,
))
thread.start()
# Initialize an empty string to store the generated text
generated_text = ''
# Loop through the streamer to get the new text as it is generated
for new_text in streamer:
if new_text == model.conv_template.sep:
break
generated_text += new_text
print(new_text, end='', flush=True) # Print each new chunk of generated text on the same line
```
## Finetune
Many repositories now support fine-tuning of the InternVL series models, including [InternVL](https://github.com/OpenGVLab/InternVL), [SWIFT](https://github.com/modelscope/ms-swift), [XTuner](https://github.com/InternLM/xtuner), and others. Please refer to their documentation for more details on fine-tuning.
## Deployment
### LMDeploy
LMDeploy is a toolkit for compressing, deploying, and serving LLMs & VLMs.
```sh
# if lmdeploy<0.7.3, you need to explicitly set chat_template_config=ChatTemplateConfig(model_name='internvl2_5')
pip install "lmdeploy>=0.7.3"
```
LMDeploy abstracts the complex inference process of multi-modal Vision-Language Models (VLM) into an easy-to-use pipeline, similar to the Large Language Model (LLM) inference pipeline.
#### A 'Hello, world' Example
```python
from lmdeploy import pipeline, TurbomindEngineConfig, ChatTemplateConfig
from lmdeploy.vl import load_image
model = 'OpenGVLab/InternVL3-8B'
image = load_image('https://raw.githubusercontent.com/open-mmlab/mmdeploy/main/tests/data/tiger.jpeg')
pipe = pipeline(model, backend_config=TurbomindEngineConfig(session_len=16384, tp=1), chat_template_config=ChatTemplateConfig(model_name='internvl2_5'))
response = pipe(('describe this image', image))
print(response.text)
```
If an `ImportError` occurs while running this example, please install the required dependencies as prompted.
#### Multi-images Inference
When dealing with multiple images, you can put them all in one list. Keep in mind that multiple images will lead to a higher number of input tokens, and as a result, the size of the context window typically needs to be increased.
```python
from lmdeploy import pipeline, TurbomindEngineConfig, ChatTemplateConfig
from lmdeploy.vl import load_image
from lmdeploy.vl.constants import IMAGE_TOKEN
model = 'OpenGVLab/InternVL3-8B'
pipe = pipeline(model, backend_config=TurbomindEngineConfig(session_len=16384, tp=1), chat_template_config=ChatTemplateConfig(model_name='internvl2_5'))
image_urls=[
'https://raw.githubusercontent.com/open-mmlab/mmdeploy/main/demo/resources/human-pose.jpg',
'https://raw.githubusercontent.com/open-mmlab/mmdeploy/main/demo/resources/det.jpg'
]
images = [load_image(img_url) for img_url in image_urls]
# Numbering images improves multi-image conversations
response = pipe((f'Image-1: {IMAGE_TOKEN}\nImage-2: {IMAGE_TOKEN}\ndescribe these two images', images))
print(response.text)
```
#### Batch Prompts Inference
Conducting inference with batch prompts is quite straightforward; just place them within a list structure:
```python
from lmdeploy import pipeline, TurbomindEngineConfig, ChatTemplateConfig
from lmdeploy.vl import load_image
model = 'OpenGVLab/InternVL3-8B'
pipe = pipeline(model, backend_config=TurbomindEngineConfig(session_len=16384, tp=1), chat_template_config=ChatTemplateConfig(model_name='internvl2_5'))
image_urls=[
"https://raw.githubusercontent.com/open-mmlab/mmdeploy/main/demo/resources/human-pose.jpg",
"https://raw.githubusercontent.com/open-mmlab/mmdeploy/main/demo/resources/det.jpg"
]
prompts = [('describe this image', load_image(img_url)) for img_url in image_urls]
response = pipe(prompts)
print(response)
```
#### Multi-turn Conversation
There are two ways to run multi-turn conversations with the pipeline. One is to construct messages in the OpenAI format and use the method introduced above; the other is to use the `pipeline.chat` interface.
```python
from lmdeploy import pipeline, TurbomindEngineConfig, GenerationConfig, ChatTemplateConfig
from lmdeploy.vl import load_image
model = 'OpenGVLab/InternVL3-8B'
pipe = pipeline(model, backend_config=TurbomindEngineConfig(session_len=16384, tp=1), chat_template_config=ChatTemplateConfig(model_name='internvl2_5'))
image = load_image('https://raw.githubusercontent.com/open-mmlab/mmdeploy/main/demo/resources/human-pose.jpg')
gen_config = GenerationConfig(top_k=40, top_p=0.8, temperature=0.8)
sess = pipe.chat(('describe this image', image), gen_config=gen_config)
print(sess.response.text)
sess = pipe.chat('What is the woman doing?', session=sess, gen_config=gen_config)
print(sess.response.text)
```
#### Service
LMDeploy's `api_server` enables models to be easily packed into services with a single command. The provided RESTful APIs are compatible with OpenAI's interfaces. Below is an example of service startup:
```shell
lmdeploy serve api_server OpenGVLab/InternVL3-8B --chat-template internvl2_5 --server-port 23333 --tp 1
```
To use the OpenAI-style interface, you need to install OpenAI:
```shell
pip install openai
```
Then, use the code below to make the API call:
```python
from openai import OpenAI
client = OpenAI(api_key='YOUR_API_KEY', base_url='http://0.0.0.0:23333/v1')
model_name = client.models.list().data[0].id
response = client.chat.completions.create(
model=model_name,
messages=[{
'role':
'user',
'content': [{
'type': 'text',
'text': 'describe this image',
}, {
'type': 'image_url',
'image_url': {
'url':
'https://modelscope.oss-cn-beijing.aliyuncs.com/resource/tiger.jpeg',
},
}],
}],
temperature=0.8,
top_p=0.8)
print(response)
```
## License
This project is released under the MIT License. This project uses the pre-trained Qwen2.5 as a component, which is licensed under the Apache-2.0 License.
## Citation
If you find this project useful in your research, please consider citing:
```BibTeX
@article{chen2024expanding,
title={Expanding Performance Boundaries of Open-Source Multimodal Models with Model, Data, and Test-Time Scaling},
author={Chen, Zhe and Wang, Weiyun and Cao, Yue and Liu, Yangzhou and Gao, Zhangwei and Cui, Erfei and Zhu, Jinguo and Ye, Shenglong and Tian, Hao and Liu, Zhaoyang and others},
journal={arXiv preprint arXiv:2412.05271},
year={2024}
}
@article{wang2024mpo,
title={Enhancing the Reasoning Ability of Multimodal Large Language Models via Mixed Preference Optimization},
author={Wang, Weiyun and Chen, Zhe and Wang, Wenhai and Cao, Yue and Liu, Yangzhou and Gao, Zhangwei and Zhu, Jinguo and Zhu, Xizhou and Lu, Lewei and Qiao, Yu and Dai, Jifeng},
journal={arXiv preprint arXiv:2411.10442},
year={2024}
}
@article{chen2024far,
title={How Far Are We to GPT-4V? Closing the Gap to Commercial Multimodal Models with Open-Source Suites},
author={Chen, Zhe and Wang, Weiyun and Tian, Hao and Ye, Shenglong and Gao, Zhangwei and Cui, Erfei and Tong, Wenwen and Hu, Kongzhi and Luo, Jiapeng and Ma, Zheng and others},
journal={arXiv preprint arXiv:2404.16821},
year={2024}
}
@inproceedings{chen2024internvl,
title={Internvl: Scaling up vision foundation models and aligning for generic visual-linguistic tasks},
author={Chen, Zhe and Wu, Jiannan and Wang, Wenhai and Su, Weijie and Chen, Guo and Xing, Sen and Zhong, Muyan and Zhang, Qinglong and Zhu, Xizhou and Lu, Lewei and others},
booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
pages={24185--24198},
year={2024}
}
``` |
OpenGVLab/InternVL3-2B-Pretrained | OpenGVLab | 2025-04-25T03:15:25Z | 32 | 0 | transformers | [
"transformers",
"safetensors",
"internvl_chat",
"feature-extraction",
"internvl",
"custom_code",
"image-text-to-text",
"conversational",
"multilingual",
"arxiv:2312.14238",
"arxiv:2404.16821",
"arxiv:2412.05271",
"arxiv:2411.10442",
"arxiv:2504.10479",
"arxiv:2412.09616",
"base_model:OpenGVLab/InternViT-300M-448px-V2_5",
"base_model:merge:OpenGVLab/InternViT-300M-448px-V2_5",
"base_model:Qwen/Qwen2.5-1.5B",
"base_model:merge:Qwen/Qwen2.5-1.5B",
"license:apache-2.0",
"region:us"
]
| image-text-to-text | 2025-04-17T07:45:53Z | ---
license: apache-2.0
license_name: qwen
license_link: https://huggingface.co/Qwen/Qwen2.5-72B-Instruct/blob/main/LICENSE
pipeline_tag: image-text-to-text
library_name: transformers
base_model:
- OpenGVLab/InternViT-300M-448px-V2_5
- Qwen/Qwen2.5-1.5B
base_model_relation: merge
language:
- multilingual
tags:
- internvl
- custom_code
---
# InternVL3-2B-Pretrained
[\[📂 GitHub\]](https://github.com/OpenGVLab/InternVL) [\[📜 InternVL 1.0\]](https://huggingface.co/papers/2312.14238) [\[📜 InternVL 1.5\]](https://huggingface.co/papers/2404.16821) [\[📜 InternVL 2.5\]](https://huggingface.co/papers/2412.05271) [\[📜 InternVL2.5-MPO\]](https://huggingface.co/papers/2411.10442) [\[📜 InternVL3\]](https://huggingface.co/papers/2504.10479)
[\[🆕 Blog\]](https://internvl.github.io/blog/) [\[🗨️ Chat Demo\]](https://internvl.opengvlab.com/) [\[🤗 HF Demo\]](https://huggingface.co/spaces/OpenGVLab/InternVL) [\[🚀 Quick Start\]](#quick-start) [\[📖 Documents\]](https://internvl.readthedocs.io/en/latest/)
<div align="center">
<img width="500" alt="image" src="https://cdn-uploads.huggingface.co/production/uploads/64006c09330a45b03605bba3/zJsd2hqd3EevgXo6fNgC-.png">
</div>
## Introduction
***This is the pretrained version of InternVL3-2B, which has undergone native multimodal pre-training but has not undergone post-training (i.e., SFT and MPO). If you're unsure which version to use, please use the [InternVL3-2B](https://huggingface.co/OpenGVLab/InternVL3-2B) version.***
We introduce InternVL3, an advanced multimodal large language model (MLLM) series that demonstrates superior overall performance.
Compared to InternVL 2.5, InternVL3 exhibits superior multimodal perception and reasoning capabilities, while further extending its multimodal capabilities to encompass tool usage, GUI agents, industrial image analysis, 3D vision perception, and more.
Additionally, we compare InternVL3 with Qwen2.5 Chat models, whose corresponding pre-trained base models are employed as the initialization of the language component in InternVL3. Benefiting from Native Multimodal Pre-Training, the InternVL3 series achieves even better overall text performance than the Qwen2.5 series.

## InternVL3 Family
In the following table, we provide an overview of the InternVL3 series.
| Model Name | Vision Part | Language Part | HF Link |
| :-----------: | :-------------------------------------------------------------------------------------: | :----------------------------------------------------------------------------: | :------------------------------------------------------: |
| InternVL3-1B | [InternViT-300M-448px-V2_5](https://huggingface.co/OpenGVLab/InternViT-300M-448px-V2_5) | [Qwen2.5-0.5B](https://huggingface.co/Qwen/Qwen2.5-0.5B) | [🤗 link](https://huggingface.co/OpenGVLab/InternVL3-1B) |
| InternVL3-2B | [InternViT-300M-448px-V2_5](https://huggingface.co/OpenGVLab/InternViT-300M-448px-V2_5) | [Qwen2.5-1.5B](https://huggingface.co/Qwen/Qwen2.5-1.5B) | [🤗 link](https://huggingface.co/OpenGVLab/InternVL3-2B) |
| InternVL3-8B | [InternViT-300M-448px-V2_5](https://huggingface.co/OpenGVLab/InternViT-300M-448px-V2_5) | [Qwen2.5-7B](https://huggingface.co/Qwen/Qwen2.5-7B) | [🤗 link](https://huggingface.co/OpenGVLab/InternVL3-8B) |
| InternVL3-9B | [InternViT-300M-448px-V2_5](https://huggingface.co/OpenGVLab/InternViT-300M-448px-V2_5) | [internlm3-8b-instruct](https://huggingface.co/internlm/internlm3-8b-instruct) | [🤗 link](https://huggingface.co/OpenGVLab/InternVL3-9B) |
| InternVL3-14B | [InternViT-300M-448px-V2_5](https://huggingface.co/OpenGVLab/InternViT-300M-448px-V2_5) | [Qwen2.5-14B](https://huggingface.co/Qwen/Qwen2.5-14B) | [🤗 link](https://huggingface.co/OpenGVLab/InternVL3-14B) |
| InternVL3-38B | [InternViT-6B-448px-V2_5](https://huggingface.co/OpenGVLab/InternViT-6B-448px-V2_5) | [Qwen2.5-32B](https://huggingface.co/Qwen/Qwen2.5-32B) | [🤗 link](https://huggingface.co/OpenGVLab/InternVL3-38B) |
| InternVL3-78B | [InternViT-6B-448px-V2_5](https://huggingface.co/OpenGVLab/InternViT-6B-448px-V2_5) | [Qwen2.5-72B](https://huggingface.co/Qwen/Qwen2.5-72B) | [🤗 link](https://huggingface.co/OpenGVLab/InternVL3-78B) |

## Model Architecture
As shown in the following figure, [InternVL3](https://internvl.github.io/blog/2025-04-11-InternVL-3/) retains the same model architecture as [InternVL 2.5](https://internvl.github.io/blog/2024-12-05-InternVL-2.5/) and its predecessors, InternVL 1.5 and 2.0, following the "ViT-MLP-LLM" paradigm. In this new version, we integrate a newly incrementally pre-trained InternViT with various pre-trained LLMs, including InternLM 3 and Qwen 2.5, using a randomly initialized MLP projector.

As in the previous version, we applied a pixel unshuffle operation, reducing the number of visual tokens to one-quarter of the original. Besides, we adopted a similar dynamic resolution strategy as InternVL 1.5, dividing images into tiles of 448×448 pixels. The key difference, starting from InternVL 2.0, is that we additionally introduced support for multi-image and video data.
Notably, in InternVL3, we integrate the [Variable Visual Position Encoding (V2PE)](https://arxiv.org/abs/2412.09616), which utilizes smaller, more flexible position increments for visual tokens. Benefiting from V2PE, InternVL3 exhibits better long context understanding capabilities compared to its predecessors.
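As a toy illustration of the idea behind V2PE, the sketch below assigns position indices in which text tokens advance the index by 1 while visual tokens advance it by a smaller increment δ; the increment value and token layout are made up for illustration and do not reflect the actual implementation.
```python
def assign_positions(token_types, delta=0.25):
    # text tokens advance the position index by 1, visual tokens by delta (< 1)
    positions, pos = [], 0.0
    for t in token_types:
        positions.append(pos)
        pos += 1.0 if t == 'text' else delta
    return positions
tokens = ['text', 'text'] + ['image'] * 8 + ['text', 'text']
print(assign_positions(tokens))
# with delta < 1, a long run of visual tokens consumes far less of the position range
```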
## Training Strategy
### Native Multimodal Pre-Training
We propose a [Native Multimodal Pre-Training](https://huggingface.co/papers/2504.10479) approach that consolidates language and vision learning into a single pre-training stage.
In contrast to standard paradigms that first train a language-only model and subsequently adapt it to handle additional modalities, our method interleaves multimodal data (e.g., image-text, video-text, or image-text interleaved sequences) with large-scale textual corpora. This unified training scheme allows the model to learn both linguistic and multimodal representations simultaneously, ultimately enhancing its capability to handle vision-language tasks without the need for separate alignment or bridging modules.
Please see [our paper](https://huggingface.co/papers/2504.10479) for more details.
### Supervised Fine-Tuning
In this phase, the techniques of random JPEG compression, square loss re-weighting, and multimodal data packing proposed in [InternVL2.5](https://arxiv.org/abs/2412.05271) are also employed in the InternVL3 series.
The main advancement of the SFT phase in InternVL3 compared to InternVL2.5 lies in the use of higher-quality and more diverse training data.
Specifically, we further extend training samples for tool use, 3D scene understanding, GUI operations, long context tasks, video understanding, scientific diagrams, creative writing, and multimodal reasoning.
### Mixed Preference Optimization
During Pre-training and SFT, the model is trained to predict the next token conditioned on previous ground-truth tokens.
However, during inference, the model predicts each token based on its own prior outputs.
This discrepancy between ground-truth tokens and model-predicted tokens introduces a distribution shift, which can impair the model’s Chain-of-Thought (CoT) reasoning capabilities.
To mitigate this issue, we employ [MPO](https://arxiv.org/abs/2411.10442), which introduces additional supervision from both positive and negative samples to align the model response distribution with the ground-truth distribution, thereby improving reasoning performance.
Specifically, the training objective of MPO is a combination of
preference loss \\(\mathcal{L}_{\text{p}}\\),
quality loss \\(\mathcal{L}_{\text{q}}\\),
and generation loss \\(\mathcal{L}_{\text{g}}\\),
which can be formulated as follows:
$$
\mathcal{L}=w_{p}\cdot\mathcal{L}_{\text{p}} + w_{q}\cdot\mathcal{L}_{\text{q}} + w_{g}\cdot\mathcal{L}_{\text{g}},
$$
where \\(w_{*}\\) represents the weight assigned to each loss component. Please see [our paper](https://arxiv.org/abs/2411.10442) for more details about MPO.
### Test-Time Scaling
Test-Time Scaling has been shown to be an effective method to enhance the reasoning abilities of LLMs and MLLMs.
In this work, we use the Best-of-N evaluation strategy and employ [VisualPRM-8B](https://huggingface.co/OpenGVLab/VisualPRM-8B) as the critic model to select the best response for reasoning and mathematics evaluation.
## Evaluation on Multimodal Capability
### Multimodal Reasoning and Mathematics

### OCR, Chart, and Document Understanding

### Multi-Image & Real-World Comprehension

### Comprehensive Multimodal & Hallucination Evaluation

### Visual Grounding

### Multimodal Multilingual Understanding

### Video Understanding

### GUI Grounding

### Spatial Reasoning

## Evaluation on Language Capability
We compare InternVL3 with Qwen2.5 Chat models, whose corresponding pre-trained base models are employed as the initialization of the language component in InternVL3.
Benefiting from Native Multimodal Pre-Training, the InternVL3 series achieves even better overall text performance than the Qwen2.5 series.
Please note that the evaluation scores of Qwen2.5 series may differ from those officially reported, as we have adopted the prompt versions provided in the table across all datasets for OpenCompass evaluation.

## Ablation Study
### Native Multimodal Pre-Training
We conduct experiments on the InternVL2-8B model while keeping its architecture, initialization parameters, and training data entirely unchanged. Traditionally, InternVL2-8B employs a training pipeline that begins with an MLP warmup phase for feature alignment followed by an Instruction Tuning stage. In our experiments, we substitute the conventional MLP warmup phase with a native multimodal pre-training process. This modification isolates the contribution of native multimodal pre-training to the overall multimodal capability of the model.
The evaluation results in the figure below show that the model with native multimodal pre-training exhibits performance on most benchmarks that is comparable to the fully multi-stage-trained InternVL2-8B baseline. Furthermore, when followed by instruction tuning on higher-quality data, the model demonstrates further performance gains across the evaluated multimodal tasks. These findings underscore the efficiency of native multimodal pre-training in imparting powerful multimodal capabilities to MLLMs.

### Mixed Preference Optimization
As shown in the table below, models fine-tuned with MPO demonstrate superior reasoning performance across seven multimodal reasoning benchmarks compared to their counterparts without MPO. Specifically, InternVL3-78B and InternVL3-38B outperform their counterparts by 4.1 and 4.5 points, respectively. Notably, the training data used for MPO is a subset of that used for SFT, indicating that the performance improvements primarily stem from the training algorithm rather than the training data.

### Variable Visual Position Encoding
As reported in the table below, the introduction of V2PE leads to significant performance gains across most evaluation metrics. In addition, our ablation studies—by varying the positional increment \\( \delta \\)—reveal that even for tasks primarily involving conventional contexts, relatively small \\( \delta \\) values can achieve optimal performance. These findings provide important insights for future efforts aimed at refining position encoding strategies for visual tokens in MLLMs.

## Quick Start
We provide example code to run `InternVL3-2B` using `transformers`.
> Please use transformers>=4.37.2 to ensure the model works normally.
### Model Loading
#### 16-bit (bf16 / fp16)
```python
import torch
from transformers import AutoTokenizer, AutoModel
path = "OpenGVLab/InternVL3-2B"
model = AutoModel.from_pretrained(
path,
torch_dtype=torch.bfloat16,
low_cpu_mem_usage=True,
use_flash_attn=True,
trust_remote_code=True).eval().cuda()
```
#### BNB 8-bit Quantization
```python
import torch
from transformers import AutoTokenizer, AutoModel
path = "OpenGVLab/InternVL3-2B"
model = AutoModel.from_pretrained(
path,
torch_dtype=torch.bfloat16,
load_in_8bit=True,
low_cpu_mem_usage=True,
use_flash_attn=True,
trust_remote_code=True).eval()
```
#### Multiple GPUs
The code below is written this way to avoid errors during multi-GPU inference caused by tensors not residing on the same device. By ensuring that the first and last layers of the large language model (LLM) are placed on the same device, such errors are prevented.
```python
import math
import torch
from transformers import AutoConfig, AutoTokenizer, AutoModel
def split_model(model_path):
device_map = {}
world_size = torch.cuda.device_count()
config = AutoConfig.from_pretrained(model_path, trust_remote_code=True)
num_layers = config.llm_config.num_hidden_layers
# Since the first GPU will be used for ViT, treat it as half a GPU.
num_layers_per_gpu = math.ceil(num_layers / (world_size - 0.5))
num_layers_per_gpu = [num_layers_per_gpu] * world_size
num_layers_per_gpu[0] = math.ceil(num_layers_per_gpu[0] * 0.5)
layer_cnt = 0
for i, num_layer in enumerate(num_layers_per_gpu):
for j in range(num_layer):
device_map[f'language_model.model.layers.{layer_cnt}'] = i
layer_cnt += 1
device_map['vision_model'] = 0
device_map['mlp1'] = 0
device_map['language_model.model.tok_embeddings'] = 0
device_map['language_model.model.embed_tokens'] = 0
device_map['language_model.output'] = 0
device_map['language_model.model.norm'] = 0
device_map['language_model.model.rotary_emb'] = 0
device_map['language_model.lm_head'] = 0
device_map[f'language_model.model.layers.{num_layers - 1}'] = 0
return device_map
path = "OpenGVLab/InternVL3-2B"
device_map = split_model(path)
model = AutoModel.from_pretrained(
path,
torch_dtype=torch.bfloat16,
low_cpu_mem_usage=True,
use_flash_attn=True,
trust_remote_code=True,
device_map=device_map).eval()
```
### Inference with Transformers
```python
import math
import numpy as np
import torch
import torchvision.transforms as T
from decord import VideoReader, cpu
from PIL import Image
from torchvision.transforms.functional import InterpolationMode
from transformers import AutoConfig, AutoModel, AutoTokenizer
IMAGENET_MEAN = (0.485, 0.456, 0.406)
IMAGENET_STD = (0.229, 0.224, 0.225)
def build_transform(input_size):
MEAN, STD = IMAGENET_MEAN, IMAGENET_STD
transform = T.Compose([
T.Lambda(lambda img: img.convert('RGB') if img.mode != 'RGB' else img),
T.Resize((input_size, input_size), interpolation=InterpolationMode.BICUBIC),
T.ToTensor(),
T.Normalize(mean=MEAN, std=STD)
])
return transform
def find_closest_aspect_ratio(aspect_ratio, target_ratios, width, height, image_size):
best_ratio_diff = float('inf')
best_ratio = (1, 1)
area = width * height
for ratio in target_ratios:
target_aspect_ratio = ratio[0] / ratio[1]
ratio_diff = abs(aspect_ratio - target_aspect_ratio)
if ratio_diff < best_ratio_diff:
best_ratio_diff = ratio_diff
best_ratio = ratio
elif ratio_diff == best_ratio_diff:
if area > 0.5 * image_size * image_size * ratio[0] * ratio[1]:
best_ratio = ratio
return best_ratio
def dynamic_preprocess(image, min_num=1, max_num=12, image_size=448, use_thumbnail=False):
orig_width, orig_height = image.size
aspect_ratio = orig_width / orig_height
# calculate the existing image aspect ratio
target_ratios = set(
(i, j) for n in range(min_num, max_num + 1) for i in range(1, n + 1) for j in range(1, n + 1) if
i * j <= max_num and i * j >= min_num)
target_ratios = sorted(target_ratios, key=lambda x: x[0] * x[1])
# find the closest aspect ratio to the target
target_aspect_ratio = find_closest_aspect_ratio(
aspect_ratio, target_ratios, orig_width, orig_height, image_size)
# calculate the target width and height
target_width = image_size * target_aspect_ratio[0]
target_height = image_size * target_aspect_ratio[1]
blocks = target_aspect_ratio[0] * target_aspect_ratio[1]
# resize the image
resized_img = image.resize((target_width, target_height))
processed_images = []
for i in range(blocks):
box = (
(i % (target_width // image_size)) * image_size,
(i // (target_width // image_size)) * image_size,
((i % (target_width // image_size)) + 1) * image_size,
((i // (target_width // image_size)) + 1) * image_size
)
# split the image
split_img = resized_img.crop(box)
processed_images.append(split_img)
assert len(processed_images) == blocks
if use_thumbnail and len(processed_images) != 1:
thumbnail_img = image.resize((image_size, image_size))
processed_images.append(thumbnail_img)
return processed_images
def load_image(image_file, input_size=448, max_num=12):
image = Image.open(image_file).convert('RGB')
transform = build_transform(input_size=input_size)
images = dynamic_preprocess(image, image_size=input_size, use_thumbnail=True, max_num=max_num)
pixel_values = [transform(image) for image in images]
pixel_values = torch.stack(pixel_values)
return pixel_values
def split_model(model_path):
device_map = {}
world_size = torch.cuda.device_count()
config = AutoConfig.from_pretrained(model_path, trust_remote_code=True)
num_layers = config.llm_config.num_hidden_layers
# Since the first GPU will be used for ViT, treat it as half a GPU.
num_layers_per_gpu = math.ceil(num_layers / (world_size - 0.5))
num_layers_per_gpu = [num_layers_per_gpu] * world_size
num_layers_per_gpu[0] = math.ceil(num_layers_per_gpu[0] * 0.5)
layer_cnt = 0
for i, num_layer in enumerate(num_layers_per_gpu):
for j in range(num_layer):
device_map[f'language_model.model.layers.{layer_cnt}'] = i
layer_cnt += 1
device_map['vision_model'] = 0
device_map['mlp1'] = 0
device_map['language_model.model.tok_embeddings'] = 0
device_map['language_model.model.embed_tokens'] = 0
device_map['language_model.output'] = 0
device_map['language_model.model.norm'] = 0
device_map['language_model.model.rotary_emb'] = 0
device_map['language_model.lm_head'] = 0
device_map[f'language_model.model.layers.{num_layers - 1}'] = 0
return device_map
# If you set `load_in_8bit=True`, you will need two 80GB GPUs.
# If you set `load_in_8bit=False`, you will need at least three 80GB GPUs.
path = 'OpenGVLab/InternVL3-2B'
device_map = split_model(path)
model = AutoModel.from_pretrained(
path,
torch_dtype=torch.bfloat16,
load_in_8bit=False,
low_cpu_mem_usage=True,
use_flash_attn=True,
trust_remote_code=True,
device_map=device_map).eval()
tokenizer = AutoTokenizer.from_pretrained(path, trust_remote_code=True, use_fast=False)
# set the max number of tiles in `max_num`
pixel_values = load_image('./examples/image1.jpg', max_num=12).to(torch.bfloat16).cuda()
generation_config = dict(max_new_tokens=1024, do_sample=True)
# pure-text conversation (纯文本对话)
question = 'Hello, who are you?'
response, history = model.chat(tokenizer, None, question, generation_config, history=None, return_history=True)
print(f'User: {question}\nAssistant: {response}')
question = 'Can you tell me a story?'
response, history = model.chat(tokenizer, None, question, generation_config, history=history, return_history=True)
print(f'User: {question}\nAssistant: {response}')
# single-image single-round conversation (单图单轮对话)
question = '<image>\nPlease describe the image shortly.'
response = model.chat(tokenizer, pixel_values, question, generation_config)
print(f'User: {question}\nAssistant: {response}')
# single-image multi-round conversation (单图多轮对话)
question = '<image>\nPlease describe the image in detail.'
response, history = model.chat(tokenizer, pixel_values, question, generation_config, history=None, return_history=True)
print(f'User: {question}\nAssistant: {response}')
question = 'Please write a poem according to the image.'
response, history = model.chat(tokenizer, pixel_values, question, generation_config, history=history, return_history=True)
print(f'User: {question}\nAssistant: {response}')
# multi-image multi-round conversation, combined images (多图多轮对话,拼接图像)
pixel_values1 = load_image('./examples/image1.jpg', max_num=12).to(torch.bfloat16).cuda()
pixel_values2 = load_image('./examples/image2.jpg', max_num=12).to(torch.bfloat16).cuda()
pixel_values = torch.cat((pixel_values1, pixel_values2), dim=0)
question = '<image>\nDescribe the two images in detail.'
response, history = model.chat(tokenizer, pixel_values, question, generation_config,
history=None, return_history=True)
print(f'User: {question}\nAssistant: {response}')
question = 'What are the similarities and differences between these two images.'
response, history = model.chat(tokenizer, pixel_values, question, generation_config,
history=history, return_history=True)
print(f'User: {question}\nAssistant: {response}')
# multi-image multi-round conversation, separate images (多图多轮对话,独立图像)
pixel_values1 = load_image('./examples/image1.jpg', max_num=12).to(torch.bfloat16).cuda()
pixel_values2 = load_image('./examples/image2.jpg', max_num=12).to(torch.bfloat16).cuda()
pixel_values = torch.cat((pixel_values1, pixel_values2), dim=0)
num_patches_list = [pixel_values1.size(0), pixel_values2.size(0)]
question = 'Image-1: <image>\nImage-2: <image>\nDescribe the two images in detail.'
response, history = model.chat(tokenizer, pixel_values, question, generation_config,
num_patches_list=num_patches_list,
history=None, return_history=True)
print(f'User: {question}\nAssistant: {response}')
question = 'What are the similarities and differences between these two images.'
response, history = model.chat(tokenizer, pixel_values, question, generation_config,
num_patches_list=num_patches_list,
history=history, return_history=True)
print(f'User: {question}\nAssistant: {response}')
# batch inference, single image per sample (单图批处理)
pixel_values1 = load_image('./examples/image1.jpg', max_num=12).to(torch.bfloat16).cuda()
pixel_values2 = load_image('./examples/image2.jpg', max_num=12).to(torch.bfloat16).cuda()
num_patches_list = [pixel_values1.size(0), pixel_values2.size(0)]
pixel_values = torch.cat((pixel_values1, pixel_values2), dim=0)
questions = ['<image>\nDescribe the image in detail.'] * len(num_patches_list)
responses = model.batch_chat(tokenizer, pixel_values,
num_patches_list=num_patches_list,
questions=questions,
generation_config=generation_config)
for question, response in zip(questions, responses):
print(f'User: {question}\nAssistant: {response}')
# video multi-round conversation (视频多轮对话)
def get_index(bound, fps, max_frame, first_idx=0, num_segments=32):
if bound:
start, end = bound[0], bound[1]
else:
start, end = -100000, 100000
start_idx = max(first_idx, round(start * fps))
end_idx = min(round(end * fps), max_frame)
seg_size = float(end_idx - start_idx) / num_segments
frame_indices = np.array([
int(start_idx + (seg_size / 2) + np.round(seg_size * idx))
for idx in range(num_segments)
])
return frame_indices
def load_video(video_path, bound=None, input_size=448, max_num=1, num_segments=32):
vr = VideoReader(video_path, ctx=cpu(0), num_threads=1)
max_frame = len(vr) - 1
fps = float(vr.get_avg_fps())
pixel_values_list, num_patches_list = [], []
transform = build_transform(input_size=input_size)
frame_indices = get_index(bound, fps, max_frame, first_idx=0, num_segments=num_segments)
for frame_index in frame_indices:
img = Image.fromarray(vr[frame_index].asnumpy()).convert('RGB')
img = dynamic_preprocess(img, image_size=input_size, use_thumbnail=True, max_num=max_num)
pixel_values = [transform(tile) for tile in img]
pixel_values = torch.stack(pixel_values)
num_patches_list.append(pixel_values.shape[0])
pixel_values_list.append(pixel_values)
pixel_values = torch.cat(pixel_values_list)
return pixel_values, num_patches_list
video_path = './examples/red-panda.mp4'
pixel_values, num_patches_list = load_video(video_path, num_segments=8, max_num=1)
pixel_values = pixel_values.to(torch.bfloat16).cuda()
video_prefix = ''.join([f'Frame{i+1}: <image>\n' for i in range(len(num_patches_list))])
question = video_prefix + 'What is the red panda doing?'
# Frame1: <image>\nFrame2: <image>\n...\nFrame8: <image>\n{question}
response, history = model.chat(tokenizer, pixel_values, question, generation_config,
num_patches_list=num_patches_list, history=None, return_history=True)
print(f'User: {question}\nAssistant: {response}')
question = 'Describe this video in detail.'
response, history = model.chat(tokenizer, pixel_values, question, generation_config,
num_patches_list=num_patches_list, history=history, return_history=True)
print(f'User: {question}\nAssistant: {response}')
```
#### Streaming Output
Besides this method, you can also use the following code to obtain streaming output.
```python
from transformers import TextIteratorStreamer
from threading import Thread
# Initialize the streamer
streamer = TextIteratorStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True, timeout=10)
# Define the generation configuration
generation_config = dict(max_new_tokens=1024, do_sample=False, streamer=streamer)
# Start the model chat in a separate thread
thread = Thread(target=model.chat, kwargs=dict(
tokenizer=tokenizer, pixel_values=pixel_values, question=question,
history=None, return_history=False, generation_config=generation_config,
))
thread.start()
# Initialize an empty string to store the generated text
generated_text = ''
# Loop through the streamer to get the new text as it is generated
for new_text in streamer:
if new_text == model.conv_template.sep:
break
generated_text += new_text
print(new_text, end='', flush=True) # Print each new chunk of generated text on the same line
```
## Finetune
Many repositories now support fine-tuning of the InternVL series models, including [InternVL](https://github.com/OpenGVLab/InternVL), [SWIFT](https://github.com/modelscope/ms-swift), [XTuner](https://github.com/InternLM/xtuner), and others. Please refer to their documentation for more details on fine-tuning.
## Deployment
### LMDeploy
LMDeploy is a toolkit for compressing, deploying, and serving LLMs & VLMs.
```sh
# if lmdeploy<0.7.3, you need to explicitly set chat_template_config=ChatTemplateConfig(model_name='internvl2_5')
pip install "lmdeploy>=0.7.3"
```
LMDeploy abstracts the complex inference process of multi-modal Vision-Language Models (VLM) into an easy-to-use pipeline, similar to the Large Language Model (LLM) inference pipeline.
#### A 'Hello, world' Example
```python
from lmdeploy import pipeline, TurbomindEngineConfig, ChatTemplateConfig
from lmdeploy.vl import load_image
model = 'OpenGVLab/InternVL3-2B'
image = load_image('https://raw.githubusercontent.com/open-mmlab/mmdeploy/main/tests/data/tiger.jpeg')
pipe = pipeline(model, backend_config=TurbomindEngineConfig(session_len=16384, tp=1), chat_template_config=ChatTemplateConfig(model_name='internvl2_5'))
response = pipe(('describe this image', image))
print(response.text)
```
If an `ImportError` occurs while running this example, please install the required dependency packages as prompted.
#### Multi-images Inference
When dealing with multiple images, you can put them all in one list. Keep in mind that multiple images will lead to a higher number of input tokens, and as a result, the size of the context window typically needs to be increased.
```python
from lmdeploy import pipeline, TurbomindEngineConfig, ChatTemplateConfig
from lmdeploy.vl import load_image
from lmdeploy.vl.constants import IMAGE_TOKEN
model = 'OpenGVLab/InternVL3-2B'
pipe = pipeline(model, backend_config=TurbomindEngineConfig(session_len=16384, tp=1), chat_template_config=ChatTemplateConfig(model_name='internvl2_5'))
image_urls=[
'https://raw.githubusercontent.com/open-mmlab/mmdeploy/main/demo/resources/human-pose.jpg',
'https://raw.githubusercontent.com/open-mmlab/mmdeploy/main/demo/resources/det.jpg'
]
images = [load_image(img_url) for img_url in image_urls]
# Numbering images improves multi-image conversations
response = pipe((f'Image-1: {IMAGE_TOKEN}\nImage-2: {IMAGE_TOKEN}\ndescribe these two images', images))
print(response.text)
```
#### Batch Prompts Inference
Conducting inference with batch prompts is quite straightforward; just place them within a list structure:
```python
from lmdeploy import pipeline, TurbomindEngineConfig, ChatTemplateConfig
from lmdeploy.vl import load_image
model = 'OpenGVLab/InternVL3-2B'
pipe = pipeline(model, backend_config=TurbomindEngineConfig(session_len=16384, tp=1), chat_template_config=ChatTemplateConfig(model_name='internvl2_5'))
image_urls=[
"https://raw.githubusercontent.com/open-mmlab/mmdeploy/main/demo/resources/human-pose.jpg",
"https://raw.githubusercontent.com/open-mmlab/mmdeploy/main/demo/resources/det.jpg"
]
prompts = [('describe this image', load_image(img_url)) for img_url in image_urls]
response = pipe(prompts)
print(response)
```
#### Multi-turn Conversation
There are two ways to conduct multi-turn conversations with the pipeline. One is to construct messages in the OpenAI format and pass them to the pipeline as introduced above (a sketch of this approach follows the example below); the other is to use the `pipeline.chat` interface.
```python
from lmdeploy import pipeline, TurbomindEngineConfig, GenerationConfig, ChatTemplateConfig
from lmdeploy.vl import load_image
model = 'OpenGVLab/InternVL3-2B'
pipe = pipeline(model, backend_config=TurbomindEngineConfig(session_len=16384, tp=1), chat_template_config=ChatTemplateConfig(model_name='internvl2_5'))
image = load_image('https://raw.githubusercontent.com/open-mmlab/mmdeploy/main/demo/resources/human-pose.jpg')
gen_config = GenerationConfig(top_k=40, top_p=0.8, temperature=0.8)
sess = pipe.chat(('describe this image', image), gen_config=gen_config)
print(sess.response.text)
sess = pipe.chat('What is the woman doing?', session=sess, gen_config=gen_config)
print(sess.response.text)
```
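For the first approach, the conversation can be expressed as an OpenAI-style message list and passed to the pipeline directly. The sketch below assumes the pipeline accepts this message format (as described in the LMDeploy documentation), with the image referenced by URL.
```python
from lmdeploy import pipeline, TurbomindEngineConfig, ChatTemplateConfig
model = 'OpenGVLab/InternVL3-2B'
pipe = pipeline(model, backend_config=TurbomindEngineConfig(session_len=16384, tp=1), chat_template_config=ChatTemplateConfig(model_name='internvl2_5'))
image_url = 'https://raw.githubusercontent.com/open-mmlab/mmdeploy/main/demo/resources/human-pose.jpg'
messages = [dict(role='user', content=[
    dict(type='text', text='describe this image'),
    dict(type='image_url', image_url=dict(url=image_url))
])]
response = pipe(messages)
print(response.text)
# Append the assistant reply and a follow-up question for the next turn.
messages.append(dict(role='assistant', content=response.text))
messages.append(dict(role='user', content='What is the woman doing?'))
response = pipe(messages)
print(response.text)
```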
#### Service
LMDeploy's `api_server` enables models to be easily packed into services with a single command. The provided RESTful APIs are compatible with OpenAI's interfaces. Below is an example of service startup:
```shell
lmdeploy serve api_server OpenGVLab/InternVL3-2B --chat-template internvl2_5 --server-port 23333 --tp 1
```
To use the OpenAI-style interface, you need to install OpenAI:
```shell
pip install openai
```
Then, use the code below to make the API call:
```python
from openai import OpenAI
client = OpenAI(api_key='YOUR_API_KEY', base_url='http://0.0.0.0:23333/v1')
model_name = client.models.list().data[0].id
response = client.chat.completions.create(
model=model_name,
messages=[{
'role':
'user',
'content': [{
'type': 'text',
'text': 'describe this image',
}, {
'type': 'image_url',
'image_url': {
'url':
'https://modelscope.oss-cn-beijing.aliyuncs.com/resource/tiger.jpeg',
},
}],
}],
temperature=0.8,
top_p=0.8)
print(response)
```
## License
This project is released under the MIT License. This project uses the pre-trained Qwen2.5 model as a component, which is licensed under the Apache-2.0 License.
## Citation
If you find this project useful in your research, please consider citing:
```BibTeX
@article{chen2024expanding,
title={Expanding Performance Boundaries of Open-Source Multimodal Models with Model, Data, and Test-Time Scaling},
author={Chen, Zhe and Wang, Weiyun and Cao, Yue and Liu, Yangzhou and Gao, Zhangwei and Cui, Erfei and Zhu, Jinguo and Ye, Shenglong and Tian, Hao and Liu, Zhaoyang and others},
journal={arXiv preprint arXiv:2412.05271},
year={2024}
}
@article{wang2024mpo,
title={Enhancing the Reasoning Ability of Multimodal Large Language Models via Mixed Preference Optimization},
author={Wang, Weiyun and Chen, Zhe and Wang, Wenhai and Cao, Yue and Liu, Yangzhou and Gao, Zhangwei and Zhu, Jinguo and Zhu, Xizhou and Lu, Lewei and Qiao, Yu and Dai, Jifeng},
journal={arXiv preprint arXiv:2411.10442},
year={2024}
}
@article{chen2024far,
title={How Far Are We to GPT-4V? Closing the Gap to Commercial Multimodal Models with Open-Source Suites},
author={Chen, Zhe and Wang, Weiyun and Tian, Hao and Ye, Shenglong and Gao, Zhangwei and Cui, Erfei and Tong, Wenwen and Hu, Kongzhi and Luo, Jiapeng and Ma, Zheng and others},
journal={arXiv preprint arXiv:2404.16821},
year={2024}
}
@inproceedings{chen2024internvl,
title={Internvl: Scaling up vision foundation models and aligning for generic visual-linguistic tasks},
author={Chen, Zhe and Wu, Jiannan and Wang, Wenhai and Su, Weijie and Chen, Guo and Xing, Sen and Zhong, Muyan and Zhang, Qinglong and Zhu, Xizhou and Lu, Lewei and others},
booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
pages={24185--24198},
year={2024}
}
``` |
sergioalves/d0f62694-a9a9-41ee-92fa-739328b8e778 | sergioalves | 2025-04-25T03:15:18Z | 0 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:Orenguteng/Llama-3-8B-Lexi-Uncensored",
"base_model:adapter:Orenguteng/Llama-3-8B-Lexi-Uncensored",
"license:llama3",
"region:us"
]
| null | 2025-04-25T03:04:05Z | ---
library_name: peft
license: llama3
base_model: Orenguteng/Llama-3-8B-Lexi-Uncensored
tags:
- axolotl
- generated_from_trainer
model-index:
- name: d0f62694-a9a9-41ee-92fa-739328b8e778
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
absolute_data_files: true
adapter: lora
base_model: Orenguteng/Llama-3-8B-Lexi-Uncensored
bf16: true
chat_template: llama3
dataset_prepared_path: /workspace/axolotl
datasets:
- data_files:
- 7d91f3d76c4fac08_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/7d91f3d76c4fac08_train_data.json
type:
field_instruction: user_status
field_output: user_persona
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 1
gradient_checkpointing: true
gradient_clipping: 0.5
group_by_length: false
hub_model_id: sergioalves/d0f62694-a9a9-41ee-92fa-739328b8e778
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-06
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 64
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 32
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 8
mixed_precision: bf16
mlflow_experiment_name: /tmp/7d91f3d76c4fac08_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: fac4d056-b459-4431-8129-825726a73dfd
wandb_project: s56-8
wandb_run: your_name
wandb_runid: fac4d056-b459-4431-8129-825726a73dfd
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# d0f62694-a9a9-41ee-92fa-739328b8e778
This model is a fine-tuned version of [Orenguteng/Llama-3-8B-Lexi-Uncensored](https://huggingface.co/Orenguteng/Llama-3-8B-Lexi-Uncensored) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8397
## Model description
More information needed
## Intended uses & limitations
More information needed
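As a starting point, the adapter can presumably be loaded on top of its base model with `peft`; the snippet below is an illustrative sketch rather than an officially validated usage example.
```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer
base_model_id = "Orenguteng/Llama-3-8B-Lexi-Uncensored"
adapter_id = "sergioalves/d0f62694-a9a9-41ee-92fa-739328b8e778"
tokenizer = AutoTokenizer.from_pretrained(base_model_id)
base_model = AutoModelForCausalLM.from_pretrained(base_model_id, torch_dtype=torch.bfloat16, device_map="auto")
model = PeftModel.from_pretrained(base_model, adapter_id)
inputs = tokenizer("Hello, who are you?", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```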
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: AdamW (8-bit, bitsandbytes) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.8417 | 0.1411 | 200 | 0.8397 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
LUcowork/e5_stage1 | LUcowork | 2025-04-25T03:15:08Z | 0 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"xlm-roberta",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:23003",
"loss:TripletLoss",
"dataset:hobbang/stage1-triplet-dataset-selected",
"arxiv:1908.10084",
"arxiv:1703.07737",
"base_model:intfloat/multilingual-e5-large-instruct",
"base_model:finetune:intfloat/multilingual-e5-large-instruct",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
]
| sentence-similarity | 2025-04-25T03:12:52Z | ---
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:23003
- loss:TripletLoss
base_model: intfloat/multilingual-e5-large-instruct
widget:
- source_sentence: The Merlyn.AI SectorSurfer Momentum ETF is designed to dynamically
shift its investment strategy based on market conditions, tracking an index that
utilizes an algorithmic Bull/Bear indicator assessing U.S. equity markets for
advancing trends or elevated decline risk using factors like price-trend, momentum,
value sentiment, and volatility. In Bull markets, it targets approximately a 70/30
domestic/foreign aggressive equity allocation by selecting six thematic ETFs (four
sectors, two geopolitical), while in Bear markets, it seeks safety by choosing
at least four momentum-leading bond, treasury, and gold safe-harbor ETFs, explicitly
avoiding inverse and leveraged funds. The index is typically evaluated monthly,
though the indicator can trigger strategy changes anytime during excessive market
volatility. Under normal circumstances, at least 80% of the fund's assets are
invested in the index's component securities; the fund is non-diversified. Please
be aware this fund is closing, with its last day of trading scheduled for November
10, 2023.
sentences:
- The BlackRock Future Climate and Sustainable Economy ETF (BECO) is an actively
managed equity fund focused on the transition to a lower carbon economy and future
climate themes. It seeks a relatively concentrated, non-diversified portfolio
of globally-listed companies of any market capitalization, investing across multiple
subthemes such as sustainable energy, resource efficiency, future transport, sustainable
nutrition, and biodiversity. The fund utilizes proprietary environmental criteria,
including carbon metrics, and aims to align with the Paris Climate Agreement goals
for net-zero emissions by 2050, while excluding certain high-emission industries
and companies violating the UN Global Compact. It also attempts to achieve a better
aggregate environmental and ESG score than its benchmark, the MSCI ACWI Multiple
Industries Select Index. Note that BECO is being delisted, with its last day of
trading on an exchange scheduled for August 12, 2024.
- The Direxion Daily Semiconductor Bull 3X Shares (SOXL) seeks daily investment
results, before fees and expenses, of 300% of the daily performance of the ICE
Semiconductor Index. To achieve this bullish, leveraged exposure, the fund invests
at least 80% of its net assets in financial instruments, such as swap agreements,
securities of the index, and ETFs that track the index. The underlying ICE Semiconductor
Index is a rules-based, modified float-adjusted market capitalization-weighted
index that tracks the performance of the thirty largest U.S. listed semiconductor
companies. As a daily leveraged fund, SOXL rebalances daily, meaning results over
periods longer than one day can differ significantly from 300% of the index's
performance due to the effects of compounding; the fund is also non-diversified.
- The KraneShares Trust ETF seeks investment results corresponding generally to
the price and yield performance of the Solactive Global Luxury Index. Under normal
circumstances, the fund invests at least 80% of its net assets in instruments
in the underlying index or those with similar economic characteristics. This index
is a modified, free float adjusted market capitalization weighted index designed
to measure the equity performance of companies from global luxury-related sectors,
such as travel & leisure, premium ware, and apparel, located in developed markets.
The index selects the top 25 companies based on criteria including size, trading
volume, and country of listing, applying a modified weighting approach where the
top 5 securities receive higher allocations (with the largest capped at 10%) while
others are capped at 4.5%. The index is rebalanced semi-annually. The fund is
non-diversified and while targeting US investments, it maintains at least 40%
of its assets in foreign entities or those with significant business activities
outside the United States.
- source_sentence: The Xtrackers MSCI Emerging Markets Climate Selection ETF seeks
to track an emerging markets index focused on companies meeting specific climate
criteria. Derived from the MSCI ACWI Select Climate 500 methodology, the underlying
index selects eligible emerging market stocks using an optimization process designed
to reduce greenhouse gas emission intensity (targeting 10% revenue-related and
7% financing-related reductions) and increase exposure to companies with SBTi-approved
targets. The strategy also excludes controversial companies and evaluates companies
based on broader ESG considerations. The fund is non-diversified and invests at
least 80% of its assets in the component securities of this climate-focused emerging
markets index.
sentences:
- The First Trust Indxx NextG UCITS ETF seeks investment results that generally
correspond to the price and yield of the Indxx 5G & NextG Thematic Index. This
tiered-weighted index of global mid- and large-cap equities tracks companies dedicating
significant resources to the research, development, and application of fifth generation
(5G) and emerging next generation digital cellular technologies. The fund normally
invests at least 90% of its net assets in the index's securities, which are primarily
drawn from themes including 5G infrastructure and hardware (such as data/cell
tower REITs and equipment manufacturers) and telecommunication service providers
operating relevant cellular and wireless networks.
- The iPath S&P MLP ETN tracks an S&P Dow Jones index designed to provide exposure
to leading partnerships listed on major U.S. exchanges. Comprising master limited
partnerships (MLPs) and similar publicly traded limited liability companies, these
constituents are primarily classified within the GICS Energy Sector and GICS Gas
Utilities Industry.
- The First Trust NASDAQ ABA Community Bank Index Fund (QABA) seeks investment results
corresponding generally to the NASDAQ OMX® ABA Community Bank TM Index, normally
investing at least 90% of its net assets in the index's securities. The index
tracks NASDAQ-listed US banks and thrifts of small, mid, and large capitalization,
designed to capture the community banking industry. Uniquely, it deliberately
excludes the 50 largest banks by asset size, banks with significant international
operations, and those specializing in credit cards, specifically targeting true
community banks and avoiding larger "mega-money centers." The index is market-cap-weighted
and undergoes regular rebalancing and reconstitution, subject to certain issuer
weight caps.
- source_sentence: The VanEck Morningstar Wide Moat ETF (MOAT) seeks to replicate
the performance of the Morningstar® Wide Moat Focus IndexSM by investing at least
80% of its assets in the index's securities. The fund targets US companies that
Morningstar identifies as having sustainable competitive advantages ("wide moat
companies") based on a proprietary methodology considering quantitative and qualitative
factors. Specifically, the index focuses on companies determined to have the highest
fair value among these wide moat firms. MOAT holds a concentrated, equal-weighted
portfolio, which typically involves around 40 names but can hold more, featuring
a staggered rebalance schedule and potential sector biases. The fund is non-diversified
and employs caps on turnover and sector exposure, resulting in a strategy that
can significantly diverge from broader market coverage despite its focus on established
companies with competitive advantages.
sentences:
- The Fidelity MSCI Industrials Index ETF (FIDU) aims to match the performance of
the MSCI USA IMI Industrials 25/25 Index, which represents the broad U.S. industrial
sector using a market-cap-weighted approach with a 25/25 capping methodology.
The fund, launched in October 2013, provides plain-vanilla exposure and invests
at least 80% of its assets in securities found within this index. It uses a representative
sampling strategy rather than replicating the entire index, and the underlying
index is rebalanced quarterly.
- The KraneShares Electric Vehicles and Future Mobility Index ETF (KARS) seeks to
track the price and yield performance of the Bloomberg Electric Vehicles Index
by investing at least 80% of its net assets in corresponding instruments or those
with similar economic characteristics. The underlying index is designed to measure
the equity market performance of globally-listed companies significantly involved
in the production of electric vehicles, components, or other initiatives enhancing
future mobility, including areas like energy storage, autonomous navigation technology,
lithium and copper mining, and hydrogen fuel cells. KARS holds a concentrated
portfolio, typically around 32 companies, weighted by market capitalization subject
to specific position caps, and is reconstituted and rebalanced quarterly.
- The iPath S&P MLP ETN tracks an S&P Dow Jones index designed to provide exposure
to leading partnerships listed on major U.S. exchanges. Comprising master limited
partnerships (MLPs) and similar publicly traded limited liability companies, these
constituents are primarily classified within the GICS Energy Sector and GICS Gas
Utilities Industry.
- source_sentence: The Global X Clean Water ETF (AQWA) seeks to provide exposure to
the global water industry by tracking the Solactive Global Clean Water Industry
Index. The fund invests at least 80% of its assets in securities of this index,
which targets companies deriving a significant portion (at least 50%) of their
revenue from water infrastructure, equipment, and services, including treatment,
purification, conservation, and management. The index selection process uses proprietary
technology like NLP to identify eligible firms, incorporates minimum ESG standards
based on UN Global Compact principles, and includes the 40 highest-ranking companies,
weighted by market capitalization with specific caps. Reconstituted and rebalanced
semi-annually, the fund is considered non-diversified.
sentences:
- The First Trust Nasdaq Transportation ETF aims to track the Nasdaq US Smart Transportation
TM Index, investing at least 90% of its net assets in the index's securities.
This non-diversified fund provides exposure to a concentrated portfolio of approximately
30 highly liquid U.S. transportation companies across various segments such as
delivery, shipping, marine, railroads, trucking, airports, airlines, bridges,
tunnels, and automobiles. The index selects companies based on liquidity and then
ranks and weights them according to factors reflecting growth (price returns),
value (cash flow-to-price), and low volatility, ensuring no single constituent
exceeds 8%. The index undergoes annual reconstitution and quarterly rebalancing.
- The Direxion Daily Healthcare Bull 3X Shares (CURE) is an ETF that seeks daily
investment results, before fees and expenses, of 300% (3X) of the daily performance
of the Health Care Select Sector Index. It invests at least 80% of its net assets
in financial instruments designed to provide this 3X daily leveraged exposure.
The underlying index tracks US listed healthcare companies, including pharmaceuticals,
health care equipment and supplies, providers and services, biotechnology, life
sciences tools, and health care technology, covering major large-cap names. CURE
is non-diversified and intended strictly as a short-term tactical instrument,
as it delivers its stated 3X exposure only for a single day, and returns over
longer periods can significantly differ from three times the index's performance.
- The BlackRock Future Climate and Sustainable Economy ETF (BECO) is an actively
managed equity fund focused on the transition to a lower carbon economy and future
climate themes. It seeks a relatively concentrated, non-diversified portfolio
of globally-listed companies of any market capitalization, investing across multiple
subthemes such as sustainable energy, resource efficiency, future transport, sustainable
nutrition, and biodiversity. The fund utilizes proprietary environmental criteria,
including carbon metrics, and aims to align with the Paris Climate Agreement goals
for net-zero emissions by 2050, while excluding certain high-emission industries
and companies violating the UN Global Compact. It also attempts to achieve a better
aggregate environmental and ESG score than its benchmark, the MSCI ACWI Multiple
Industries Select Index. Note that BECO is being delisted, with its last day of
trading on an exchange scheduled for August 12, 2024.
- source_sentence: The Horizon Kinetics Medical ETF (MEDX) is an actively-managed,
non-diversified fund aiming for long-term capital growth by investing primarily
in global companies (U.S. and foreign) within the medical research, pharmaceuticals,
medical technology, and related industries. The fund typically focuses on companies
generating at least 50% of their revenue from these areas and may include companies
of any market capitalization, with an emphasis on those involved in cancer research
and treatment. Under normal circumstances, at least 80% of assets are invested
in equity securities, convertibles, and warrants of such companies. Portfolio
selection and weighting are based on the adviser's evaluation and discretion.
The fund may also temporarily invest up to 100% in US short-term debt or invest
in non-convertible high-yield bonds.
sentences:
- The Fidelity MSCI Health Care Index ETF (FHLC) seeks to track the performance
of the MSCI USA IMI Health Care 25/50 Index, which represents the broad U.S. health
care sector. The ETF invests at least 80% of its assets in securities included
in this market-cap-weighted index, which captures large, mid, and small-cap companies
across over 10 subsectors. Employing a representative sampling strategy, the fund
aims to correspond to the index's performance. The index incorporates a 25/50
capping methodology, is rebalanced quarterly, and its broad reach offers diversification
across cap sizes and subsectors, potentially reducing concentration in dominant
large pharma names and increasing exposure to areas like drug retailers and insurance.
The fund is classified as non-diversified.
- The SPDR S&P Oil & Gas Equipment & Services ETF (XES) seeks investment results
corresponding generally to the total return performance of the S&P Oil & Gas Equipment
& Services Select Industry Index. This index represents companies in the oil and
gas equipment and services segment of the broad U.S. S&P Total Market Index (S&P
TMI), including those involved in activities like wildcatting, drilling hardware,
and related services. The index utilizes an equal-weighting methodology for its
constituent companies, which are selected based on market capitalization and liquidity
requirements and undergo quarterly rebalancing. The fund itself employs a sampling
strategy, aiming to invest at least 80% of its total assets in the securities
that comprise its benchmark index.
- The VanEck Biotech ETF (BBH) seeks to replicate the performance of the MVIS® US
Listed Biotech 25 Index, which provides exposure to approximately 25 of the largest
or leading U.S.-listed companies in the biotechnology industry. The fund normally
invests at least 80% of its assets in securities comprising this market-cap-weighted
index. The underlying index includes common stocks and depositary receipts of
firms involved in the research, development, production, marketing, and sale of
drugs based on genetic analysis and diagnostic equipment. While focusing on U.S.-listed
companies, it may include foreign firms listed domestically, and medium-capitalization
companies can be included. Reflecting the index's concentration, the fund is non-diversified
and may have a top-heavy portfolio. The index is reviewed semi-annually.
datasets:
- hobbang/stage1-triplet-dataset-selected
pipeline_tag: sentence-similarity
library_name: sentence-transformers
---
# SentenceTransformer based on intfloat/multilingual-e5-large-instruct
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [intfloat/multilingual-e5-large-instruct](https://huggingface.co/intfloat/multilingual-e5-large-instruct) on the [stage1-triplet-dataset-selected](https://huggingface.co/datasets/hobbang/stage1-triplet-dataset-selected) dataset. It maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [intfloat/multilingual-e5-large-instruct](https://huggingface.co/intfloat/multilingual-e5-large-instruct) <!-- at revision 84344a23ee1820ac951bc365f1e91d094a911763 -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 1024 dimensions
- **Similarity Function:** Cosine Similarity
- **Training Dataset:**
- [stage1-triplet-dataset-selected](https://huggingface.co/datasets/hobbang/stage1-triplet-dataset-selected)
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: XLMRobertaModel
(1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("LUcowork/e5_stage1")
# Run inference
sentences = [
"The Horizon Kinetics Medical ETF (MEDX) is an actively-managed, non-diversified fund aiming for long-term capital growth by investing primarily in global companies (U.S. and foreign) within the medical research, pharmaceuticals, medical technology, and related industries. The fund typically focuses on companies generating at least 50% of their revenue from these areas and may include companies of any market capitalization, with an emphasis on those involved in cancer research and treatment. Under normal circumstances, at least 80% of assets are invested in equity securities, convertibles, and warrants of such companies. Portfolio selection and weighting are based on the adviser's evaluation and discretion. The fund may also temporarily invest up to 100% in US short-term debt or invest in non-convertible high-yield bonds.",
"The VanEck Biotech ETF (BBH) seeks to replicate the performance of the MVIS® US Listed Biotech 25 Index, which provides exposure to approximately 25 of the largest or leading U.S.-listed companies in the biotechnology industry. The fund normally invests at least 80% of its assets in securities comprising this market-cap-weighted index. The underlying index includes common stocks and depositary receipts of firms involved in the research, development, production, marketing, and sale of drugs based on genetic analysis and diagnostic equipment. While focusing on U.S.-listed companies, it may include foreign firms listed domestically, and medium-capitalization companies can be included. Reflecting the index's concentration, the fund is non-diversified and may have a top-heavy portfolio. The index is reviewed semi-annually.",
'The SPDR S&P Oil & Gas Equipment & Services ETF (XES) seeks investment results corresponding generally to the total return performance of the S&P Oil & Gas Equipment & Services Select Industry Index. This index represents companies in the oil and gas equipment and services segment of the broad U.S. S&P Total Market Index (S&P TMI), including those involved in activities like wildcatting, drilling hardware, and related services. The index utilizes an equal-weighting methodology for its constituent companies, which are selected based on market capitalization and liquidity requirements and undergo quarterly rebalancing. The fund itself employs a sampling strategy, aiming to invest at least 80% of its total assets in the securities that comprise its benchmark index.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 1024]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### stage1-triplet-dataset-selected
* Dataset: [stage1-triplet-dataset-selected](https://huggingface.co/datasets/hobbang/stage1-triplet-dataset-selected) at [18e0423](https://huggingface.co/datasets/hobbang/stage1-triplet-dataset-selected/tree/18e0423399bc6678e814264ca8c8acdf02dfce97)
* Size: 23,003 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 94 tokens</li><li>mean: 170.87 tokens</li><li>max: 224 tokens</li></ul> | <ul><li>min: 29 tokens</li><li>mean: 174.15 tokens</li><li>max: 261 tokens</li></ul> | <ul><li>min: 72 tokens</li><li>mean: 174.89 tokens</li><li>max: 261 tokens</li></ul> |
* Samples:
| anchor | positive | negative |
|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| <code>The Invesco Financial Preferred ETF (PGF) seeks to track the ICE Exchange-Listed Fixed Rate Financial Preferred Securities Index, primarily by investing at least 90% of its total assets in the securities comprising the index. The underlying index is market capitalization weighted and designed to track the performance of exchange-listed, fixed rate, U.S. dollar denominated preferred securities, including functionally equivalent instruments, issued by U.S. financial companies. PGF provides a concentrated portfolio exclusively focused on financial-sector preferred securities and is considered non-diversified, holding both investment- and non-investment-grade securities within this focus.</code> | <code>The FlexShares ESG & Climate Investment Grade Corporate Core Index Fund (FEIG) is a passively managed ETF designed to provide broad-market, core exposure to USD-denominated investment-grade corporate bonds. It seeks to track the performance of the Northern Trust ESG & Climate Investment Grade U.S. Corporate Core IndexSM, which selects bonds from a universe of USD-denominated, investment-grade corporate debt with maturities of at least one year. The index employs an optimization process to increase the aggregate ESG score and reduce aggregate climate-related risk among constituent companies, involving ranking firms on material ESG metrics, governance, and carbon risks, while excluding controversial companies and international initiative violators. Weights are also optimized to minimize systematic risk, and the index is rebalanced monthly. Under normal circumstances, the fund invests at least 80% of its assets in the index's securities.</code> | <code>The Viridi Bitcoin Miners ETF primarily invests in companies engaged in Bitcoin mining, aiming to allocate at least 80% of its net assets, plus borrowings for investment purposes, to securities of such companies under normal circumstances. The fund focuses on U.S. and non-U.S. equity securities in developed markets, which may include investments via depositary receipts. It also specifically targets common stock from newly listed IPOs, shares derived from SPAC IPOs, and securities resulting from reverse mergers. This ETF is non-diversified.</code> |
| <code>The Invesco Financial Preferred ETF (PGF) seeks to track the ICE Exchange-Listed Fixed Rate Financial Preferred Securities Index, primarily by investing at least 90% of its total assets in the securities comprising the index. The underlying index is market capitalization weighted and designed to track the performance of exchange-listed, fixed rate, U.S. dollar denominated preferred securities, including functionally equivalent instruments, issued by U.S. financial companies. PGF provides a concentrated portfolio exclusively focused on financial-sector preferred securities and is considered non-diversified, holding both investment- and non-investment-grade securities within this focus.</code> | <code>The Fidelity Sustainable High Yield ETF (FSYD) is an actively managed fund primarily seeking high income, and potentially capital growth, by investing at least 80% of its assets in global high-yield (below investment grade) debt securities. The fund focuses on issuers demonstrating proven or improving sustainability practices based on an evaluation of their individual environmental, social, and governance (ESG) profiles using a proprietary rating process. Its comprehensive selection approach also incorporates a multi-factor quantitative screening model and fundamental analysis of issuers, aiming to identify value and quality within the high-yield universe.</code> | <code>The ETFMG Prime Mobile Payments ETF seeks to track the performance of the Nasdaq CTA Global Digital Payments Index, which identifies companies engaged in the global digital payments industry across categories like card networks, infrastructure, software, processors, and solutions. Under normal circumstances, the fund invests at least 80% of its net assets in common stocks (including ADRs and GDRs) of these Mobile Payments Companies. It typically holds a narrow portfolio expected to contain up to 50 companies, weighted using a theme-adjusted market capitalization scheme, and is considered non-diversified.</code> |
| <code>The Invesco Financial Preferred ETF (PGF) seeks to track the ICE Exchange-Listed Fixed Rate Financial Preferred Securities Index, primarily by investing at least 90% of its total assets in the securities comprising the index. The underlying index is market capitalization weighted and designed to track the performance of exchange-listed, fixed rate, U.S. dollar denominated preferred securities, including functionally equivalent instruments, issued by U.S. financial companies. PGF provides a concentrated portfolio exclusively focused on financial-sector preferred securities and is considered non-diversified, holding both investment- and non-investment-grade securities within this focus.</code> | <code>The First Trust TCW Securitized Plus ETF (DEED) is an actively-managed fund focused on U.S. securitized debt securities, aiming to maximize long-term total return and outperform the Bloomberg US Mortgage-Backed Securities Index. Under normal market conditions, the fund allocates at least 80% of its net assets to securitized debt, including asset-backed securities, residential and commercial mortgage-backed securities, and collateralized loan obligations (CLOs). At least 50% of total assets are invested in securities issued or guaranteed by the U.S. government, its agencies, or government-sponsored entities, while the balance may include non-government and privately-issued securitized debt. The fund invests across various maturities and credit qualities (junk and investment-grade), using proprietary research to identify undervalued securities, and may utilize OTC derivatives for up to 25% of the portfolio.</code> | <code>The First Trust Growth Strength UCITS ETF aims to track the price and yield of The Growth Strength Index. Passively managed, the fund normally invests at least 80% of its assets in the index's common stocks and REIT components. The index selects 50 equal-weighted, well-capitalized, large-cap US companies from the top 500 US securities by market capitalization based on fundamental criteria such as return on equity, long-term debt levels, liquidity, positive shareholder equity, and a composite ranking based on 3-year revenue and cash flow growth. The resulting portfolio is non-diversified and rebalanced quarterly.</code> |
* Loss: [<code>TripletLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#tripletloss) with these parameters:
```json
{
"distance_metric": "TripletDistanceMetric.COSINE",
"triplet_margin": 0.05
}
```
### Evaluation Dataset
#### stage1-triplet-dataset-selected
* Dataset: [stage1-triplet-dataset-selected](https://huggingface.co/datasets/hobbang/stage1-triplet-dataset-selected) at [18e0423](https://huggingface.co/datasets/hobbang/stage1-triplet-dataset-selected/tree/18e0423399bc6678e814264ca8c8acdf02dfce97)
* Size: 388 evaluation samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 388 samples:
| | anchor | positive | negative |
|:--------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 85 tokens</li><li>mean: 176.98 tokens</li><li>max: 271 tokens</li></ul> | <ul><li>min: 85 tokens</li><li>mean: 176.83 tokens</li><li>max: 271 tokens</li></ul> | <ul><li>min: 85 tokens</li><li>mean: 175.41 tokens</li><li>max: 271 tokens</li></ul> |
* Samples:
| anchor | positive | negative |
|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| <code>The Global X S&P 500 Risk Managed Income ETF seeks to track the Cboe S&P 500 Risk Managed Income Index by investing at least 80% of its assets in index securities. The index's strategy involves holding the underlying stocks of the S&P 500 Index while applying an options collar, specifically selling at-the-money covered call options and buying monthly 5% out-of-the-money put options corresponding to the portfolio's value. This approach aims to generate income, ideally resulting in a net credit from the options premiums, and provide risk management, though selling at-the-money calls inherently caps the fund's potential for upside participation.</code> | <code>The U.S. Global Technology and Aerospace & Defense ETF is an actively managed ETF seeking capital appreciation by investing in equity securities of companies expected to benefit from national defense efforts. These efforts include technological innovations and the development of products and services related to aerospace, physical, and cybersecurity defense, often in preparation for or in response to domestic, regional, or global conflicts. The fund is non-diversified.</code> | <code>The KraneShares Global Carbon Offset Strategy ETF (KSET) was the first US-listed ETF providing exposure to the global voluntary carbon market. It achieved this by investing primarily in liquid carbon offset credit futures, including CME-traded Global Emissions Offsets (GEOs) and Nature-Based Global Emission Offsets (N-GEOs), which are designed to help businesses meet greenhouse gas reduction goals. Tracking an index that weighted eligible futures based on liquidity, the fund sought exposure to the same carbon offset credit futures, typically those maturing within two years. The ETF was considered non-diversified and utilized a Cayman Island subsidiary. However, the fund was delisted, with its last day of trading on an exchange being March 14, 2024.</code> |
| <code>The Global X S&P 500 Risk Managed Income ETF seeks to track the Cboe S&P 500 Risk Managed Income Index by investing at least 80% of its assets in index securities. The index's strategy involves holding the underlying stocks of the S&P 500 Index while applying an options collar, specifically selling at-the-money covered call options and buying monthly 5% out-of-the-money put options corresponding to the portfolio's value. This approach aims to generate income, ideally resulting in a net credit from the options premiums, and provide risk management, though selling at-the-money calls inherently caps the fund's potential for upside participation.</code> | <code>The JPMorgan Social Advancement ETF (UPWD) is an actively managed, non-diversified fund that seeks to invest globally in companies facilitating social and economic advancements and empowerment across the socioeconomic spectrum. Primarily holding common stocks, depositary receipts, and REITs, the fund targets themes including essential amenities, affordable housing, healthcare, education, attainable financing, and the digital ecosystem, potentially investing in companies of various sizes, including small-caps, across U.S., foreign, and emerging markets with possible concentration in specific sectors. Security selection follows a proprietary three-step process involving exclusions, thematic ranking using a ThemeBot, and a sustainable investment inclusion process combined with fundamental research. Please note that this security is being delisted, with its last day of trading scheduled for December 15, 2023.</code> | <code>The Direxion Daily Gold Miners Index Bull 2X Shares (NUGT) is designed to provide 200% of the daily performance of the NYSE Arca Gold Miners Index, before fees and expenses. This market-cap-weighted index comprises publicly traded global companies, primarily involved in gold mining and to a lesser extent silver mining, operating in both developed and emerging markets. NUGT achieves its objective by investing at least 80% of its net assets in financial instruments providing 2X daily leveraged exposure to the index. As a leveraged fund intended for daily results, NUGT is designed for short-term trading, typically held for only one trading day, and holding it for longer periods can lead to performance results that differ significantly from the stated daily target due to the effects of compounding. The fund is also non-diversified.</code> |
| <code>The Global X S&P 500 Risk Managed Income ETF seeks to track the Cboe S&P 500 Risk Managed Income Index by investing at least 80% of its assets in index securities. The index's strategy involves holding the underlying stocks of the S&P 500 Index while applying an options collar, specifically selling at-the-money covered call options and buying monthly 5% out-of-the-money put options corresponding to the portfolio's value. This approach aims to generate income, ideally resulting in a net credit from the options premiums, and provide risk management, though selling at-the-money calls inherently caps the fund's potential for upside participation.</code> | <code>The Xtrackers MSCI Emerging Markets ESG Leaders Equity ETF tracks an index of large- and mid-cap emerging market stocks that emphasize strong environmental, social, and governance (ESG) characteristics. The index first excludes companies involved in specific controversial industries. From the remaining universe, it ranks stocks based on MSCI ESG scores, including a controversy component, to identify and select the highest-ranking ESG leaders, effectively screening out ESG laggards. To maintain market-like country and sector weights, the index selects the top ESG-scoring stocks within each sector until a specified market capitalization threshold is reached. Selected stocks are then weighted by market capitalization within their respective sectors. The fund typically invests over 80% of its assets in the securities of this underlying index.</code> | <code>The BlackRock Future Climate and Sustainable Economy ETF (BECO) is an actively managed equity fund focused on the transition to a lower carbon economy and future climate themes. It seeks a relatively concentrated, non-diversified portfolio of globally-listed companies of any market capitalization, investing across multiple subthemes such as sustainable energy, resource efficiency, future transport, sustainable nutrition, and biodiversity. The fund utilizes proprietary environmental criteria, including carbon metrics, and aims to align with the Paris Climate Agreement goals for net-zero emissions by 2050, while excluding certain high-emission industries and companies violating the UN Global Compact. It also attempts to achieve a better aggregate environmental and ESG score than its benchmark, the MSCI ACWI Multiple Industries Select Index. Note that BECO is being delisted, with its last day of trading on an exchange scheduled for August 12, 2024.</code> |
* Loss: [<code>TripletLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#tripletloss) with these parameters:
```json
{
"distance_metric": "TripletDistanceMetric.COSINE",
"triplet_margin": 0.05
}
```
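For readers reproducing this setup, the parameters above map onto the `TripletLoss` constructor in Sentence Transformers roughly as follows (a minimal sketch; the model id shown is a placeholder for the base model this card fine-tunes):

```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.losses import TripletLoss, TripletDistanceMetric

# Placeholder id: substitute the base model referenced by this card.
model = SentenceTransformer("sentence-transformers/all-mpnet-base-v2")

# Matches the parameters listed above: cosine distance, margin 0.05.
loss = TripletLoss(
    model=model,
    distance_metric=TripletDistanceMetric.COSINE,
    triplet_margin=0.05,
)
```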
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: steps
- `per_device_train_batch_size`: 32
- `per_device_eval_batch_size`: 16
- `learning_rate`: 2e-06
- `num_train_epochs`: 1
- `warmup_ratio`: 0.1
- `bf16`: True
- `dataloader_drop_last`: True
- `load_best_model_at_end`: True
- `batch_sampler`: no_duplicates
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 32
- `per_device_eval_batch_size`: 16
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 2e-06
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 1
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: True
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: True
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: True
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `tp_size`: 0
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: proportional
</details>
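The non-default hyperparameters listed above correspond to a `SentenceTransformerTrainingArguments` configuration along these lines (a sketch only; `output_dir` is a hypothetical path, and all other arguments keep their library defaults):

```python
from sentence_transformers import SentenceTransformerTrainingArguments
from sentence_transformers.training_args import BatchSamplers

# Sketch of the non-default hyperparameters from this card; output_dir is hypothetical.
args = SentenceTransformerTrainingArguments(
    output_dir="outputs/stage1-triplet",  # hypothetical path
    eval_strategy="steps",
    per_device_train_batch_size=32,
    per_device_eval_batch_size=16,
    learning_rate=2e-6,
    num_train_epochs=1,
    warmup_ratio=0.1,
    bf16=True,
    dataloader_drop_last=True,
    load_best_model_at_end=True,
    batch_sampler=BatchSamplers.NO_DUPLICATES,
)
```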
### Training Logs
| Epoch | Step | Training Loss | Validation Loss |
|:----------:|:-------:|:-------------:|:---------------:|
| 0.0139 | 10 | 0.0367 | - |
| 0.0279 | 20 | 0.0378 | - |
| 0.0418 | 30 | 0.0346 | - |
| 0.0557 | 40 | 0.0337 | - |
| 0.0696 | 50 | 0.0328 | - |
| 0.0836 | 60 | 0.0291 | - |
| 0.0975 | 70 | 0.0257 | - |
| 0.1114 | 80 | 0.0206 | - |
| 0.1253 | 90 | 0.0201 | - |
| 0.1393 | 100 | 0.0208 | 0.0132 |
| 0.1532 | 110 | 0.0167 | - |
| 0.1671 | 120 | 0.0167 | - |
| 0.1811 | 130 | 0.0156 | - |
| 0.1950 | 140 | 0.0153 | - |
| 0.2089 | 150 | 0.0125 | - |
| 0.2228 | 160 | 0.0141 | - |
| 0.2368 | 170 | 0.0153 | - |
| 0.2507 | 180 | 0.0142 | - |
| 0.2646 | 190 | 0.0095 | - |
| 0.2786 | 200 | 0.0144 | 0.0111 |
| 0.2925 | 210 | 0.0132 | - |
| 0.3064 | 220 | 0.0107 | - |
| 0.3203 | 230 | 0.0116 | - |
| 0.3343 | 240 | 0.0134 | - |
| 0.3482 | 250 | 0.0112 | - |
| 0.3621 | 260 | 0.0115 | - |
| 0.3760 | 270 | 0.0124 | - |
| 0.3900 | 280 | 0.0126 | - |
| 0.4039 | 290 | 0.0105 | - |
| 0.4178 | 300 | 0.0111 | 0.0109 |
| 0.4318 | 310 | 0.0136 | - |
| 0.4457 | 320 | 0.0123 | - |
| 0.4596 | 330 | 0.0113 | - |
| 0.4735 | 340 | 0.0125 | - |
| 0.4875 | 350 | 0.0082 | - |
| 0.5014 | 360 | 0.0102 | - |
| 0.5153 | 370 | 0.0081 | - |
| 0.5292 | 380 | 0.0115 | - |
| 0.5432 | 390 | 0.0107 | - |
| 0.5571 | 400 | 0.012 | 0.0106 |
| 0.5710 | 410 | 0.0094 | - |
| 0.5850 | 420 | 0.0099 | - |
| 0.5989 | 430 | 0.0105 | - |
| 0.6128 | 440 | 0.0101 | - |
| 0.6267 | 450 | 0.0099 | - |
| 0.6407 | 460 | 0.0106 | - |
| 0.6546 | 470 | 0.0099 | - |
| 0.6685 | 480 | 0.0108 | - |
| 0.6825 | 490 | 0.01 | - |
| **0.6964** | **500** | **0.0084** | **0.0102** |
| 0.7103 | 510 | 0.0092 | - |
| 0.7242 | 520 | 0.0084 | - |
| 0.7382 | 530 | 0.0077 | - |
| 0.7521 | 540 | 0.0096 | - |
| 0.7660 | 550 | 0.0099 | - |
| 0.7799 | 560 | 0.0103 | - |
| 0.7939 | 570 | 0.0082 | - |
| 0.8078 | 580 | 0.009 | - |
| 0.8217 | 590 | 0.0078 | - |
| 0.8357 | 600 | 0.0091 | 0.0104 |
| 0.8496 | 610 | 0.0088 | - |
| 0.8635 | 620 | 0.0103 | - |
| 0.8774 | 630 | 0.0109 | - |
| 0.8914 | 640 | 0.0072 | - |
| 0.9053 | 650 | 0.0084 | - |
| 0.9192 | 660 | 0.0099 | - |
| 0.9331 | 670 | 0.008 | - |
| 0.9471 | 680 | 0.0081 | - |
| 0.9610 | 690 | 0.0075 | - |
| 0.9749 | 700 | 0.0096 | 0.0103 |
| 0.9889 | 710 | 0.0089 | - |
* The bold row denotes the saved checkpoint.
### Framework Versions
- Python: 3.10.12
- Sentence Transformers: 4.1.0
- Transformers: 4.51.3
- PyTorch: 2.1.0+cu118
- Accelerate: 1.6.0
- Datasets: 3.5.0
- Tokenizers: 0.21.1
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### TripletLoss
```bibtex
@misc{hermans2017defense,
title={In Defense of the Triplet Loss for Person Re-Identification},
author={Alexander Hermans and Lucas Beyer and Bastian Leibe},
year={2017},
eprint={1703.07737},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
--> |
ArtemisTAO/lam14 | ArtemisTAO | 2025-04-25T03:13:59Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-04-25T03:13:34Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
7-Sophie-Rain-Spiderman-Viral-leaks-Videos/Viral-Link.Sophie.Rain.Spiderman.Viral.Video.Leaks.official.HD | 7-Sophie-Rain-Spiderman-Viral-leaks-Videos | 2025-04-25T03:12:58Z | 0 | 0 | null | [
"region:us"
]
| null | 2025-04-25T03:08:32Z | <animated-image data-catalyst=""><a href="https://tinyurl.com/5n6bjbnr?news-viral-video" rel="nofollow" data-target="animated-image.originalLink"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="Foo" data-canonical-src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" style="max-width: 100%; display: inline-block;" data-target="animated-image.originalImage"></a> |
cmlformer/xlm-roberta-L3Cube-HingCorpus10K | cmlformer | 2025-04-25T03:11:48Z | 0 | 0 | transformers | [
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
]
| null | 2025-04-25T03:11:42Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
atansor/c01efd53-77c1-454b-801c-c110f4cd2060 | atansor | 2025-04-25T03:11:34Z | 0 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"custom_code",
"base_model:NousResearch/Yarn-Solar-10b-32k",
"base_model:adapter:NousResearch/Yarn-Solar-10b-32k",
"license:apache-2.0",
"region:us"
]
| null | 2025-04-25T02:17:26Z | ---
library_name: peft
license: apache-2.0
base_model: NousResearch/Yarn-Solar-10b-32k
tags:
- axolotl
- generated_from_trainer
model-index:
- name: c01efd53-77c1-454b-801c-c110f4cd2060
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: NousResearch/Yarn-Solar-10b-32k
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- ec53ac9495e83668_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/ec53ac9495e83668_train_data.json
type:
field_input: system_prompt
field_instruction: question
field_output: response
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: atansor/c01efd53-77c1-454b-801c-c110f4cd2060
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 10
micro_batch_size: 2
mlflow_experiment_name: /tmp/ec53ac9495e83668_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
special_tokens:
pad_token: </s>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: b9dc1efa-2c8b-4ecb-b111-c48b3e69fb85
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: b9dc1efa-2c8b-4ecb-b111-c48b3e69fb85
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# c01efd53-77c1-454b-801c-c110f4cd2060
This model is a fine-tuned version of [NousResearch/Yarn-Solar-10b-32k](https://huggingface.co/NousResearch/Yarn-Solar-10b-32k) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.0 | 0.0001 | 1 | nan |
| 0.0 | 0.0003 | 3 | nan |
| 0.0 | 0.0005 | 6 | nan |
| 0.0 | 0.0008 | 9 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
Ans0nWr0ng/llama3.1-8b-cantonese_gguf_v3 | Ans0nWr0ng | 2025-04-25T03:11:23Z | 94 | 1 | null | [
"gguf",
"text-generation",
"dataset:stvlynn/Cantonese-Dialogue",
"dataset:hon9kon9ize/yue-alpaca",
"dataset:cantonesesra/Cantonese_AllAspectQA_11K",
"base_model:meta-llama/Llama-3.1-8B-Instruct",
"base_model:quantized:meta-llama/Llama-3.1-8B-Instruct",
"license:llama3.1",
"endpoints_compatible",
"region:us",
"conversational"
]
| text-generation | 2025-04-09T01:57:45Z | ---
license: llama3.1
datasets:
- stvlynn/Cantonese-Dialogue
- hon9kon9ize/yue-alpaca
- cantonesesra/Cantonese_AllAspectQA_11K
base_model:
- meta-llama/Llama-3.1-8B-Instruct
pipeline_tag: text-generation
--- |
KingEmpire/sn9_pretc4_2504_1 | KingEmpire | 2025-04-25T03:10:24Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-04-25T03:02:50Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
mlfoundations-dev/b2_science_fasttext_neg_all_1k | mlfoundations-dev | 2025-04-25T03:10:02Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"llama-factory",
"full",
"generated_from_trainer",
"conversational",
"base_model:Qwen/Qwen2.5-7B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-7B-Instruct",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-04-25T02:18:36Z | ---
library_name: transformers
license: apache-2.0
base_model: Qwen/Qwen2.5-7B-Instruct
tags:
- llama-factory
- full
- generated_from_trainer
model-index:
- name: b2_science_fasttext_neg_all_1k
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# b2_science_fasttext_neg_all_1k
This model is a fine-tuned version of [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct) on the mlfoundations-dev/b2_science_fasttext_neg_all_1k dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 24
- total_train_batch_size: 96
- total_eval_batch_size: 32
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 7.0
### Training results
### Framework versions
- Transformers 4.46.1
- Pytorch 2.6.0+cu124
- Datasets 3.1.0
- Tokenizers 0.20.3
|
kokovova/30227d13-f5a0-44bd-980b-4a818385f65b | kokovova | 2025-04-25T03:09:14Z | 0 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:Orenguteng/Llama-3-8B-Lexi-Uncensored",
"base_model:adapter:Orenguteng/Llama-3-8B-Lexi-Uncensored",
"license:llama3",
"4-bit",
"bitsandbytes",
"region:us"
]
| null | 2025-04-25T03:04:25Z | ---
library_name: peft
license: llama3
base_model: Orenguteng/Llama-3-8B-Lexi-Uncensored
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 30227d13-f5a0-44bd-980b-4a818385f65b
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: Orenguteng/Llama-3-8B-Lexi-Uncensored
bf16: true
chat_template: llama3
dataset_prepared_path: /workspace/axolotl
datasets:
- data_files:
- 7d91f3d76c4fac08_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/7d91f3d76c4fac08_train_data.json
type:
field_instruction: user_status
field_output: user_persona
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 1
gradient_checkpointing: true
gradient_clipping: 0.5
group_by_length: false
hub_model_id: kokovova/30227d13-f5a0-44bd-980b-4a818385f65b
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-06
load_in_4bit: true
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 64
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 32
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 8
mixed_precision: bf16
mlflow_experiment_name: /tmp/7d91f3d76c4fac08_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: fac4d056-b459-4431-8129-825726a73dfd
wandb_project: s56-4
wandb_run: your_name
wandb_runid: fac4d056-b459-4431-8129-825726a73dfd
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 30227d13-f5a0-44bd-980b-4a818385f65b
This model is a fine-tuned version of [Orenguteng/Llama-3-8B-Lexi-Uncensored](https://huggingface.co/Orenguteng/Llama-3-8B-Lexi-Uncensored) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8620
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.8647 | 0.1411 | 200 | 0.8620 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
callgg/framepack | callgg | 2025-04-25T03:08:27Z | 56 | 1 | diffusers | [
"diffusers",
"safetensors",
"region:us"
]
| null | 2025-04-23T21:43:08Z | ---
library_name: diffusers
---
## framepack
- repackage of FramePackI2V_HY from [lllyasviel](https://huggingface.co/lllyasviel/FramePackI2V_HY) |
7-Sophie-Rain-Spiderman-Viral-leaks-Videos/Original-Viral-Link.Sophie.Rain.Spiderman.Viral.Video.Leaks.official.HD | 7-Sophie-Rain-Spiderman-Viral-leaks-Videos | 2025-04-25T03:07:47Z | 0 | 0 | null | [
"region:us"
]
| null | 2025-04-25T03:06:38Z | <animated-image data-catalyst=""><a href="https://tinyurl.com/5n6bjbnr?news-viral-video" rel="nofollow" data-target="animated-image.originalLink"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="Foo" data-canonical-src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" style="max-width: 100%; display: inline-block;" data-target="animated-image.originalImage"></a>
|
Zack-Z/llama31_8bi_CoTsft_rs0_1_5cut_gem3all_e2 | Zack-Z | 2025-04-25T03:07:31Z | 5 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"base_model:unsloth/Meta-Llama-3.1-8B-Instruct",
"base_model:finetune:unsloth/Meta-Llama-3.1-8B-Instruct",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-04-01T12:36:33Z | ---
base_model: unsloth/Meta-Llama-3.1-8B-Instruct
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** Zack-Z
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Meta-Llama-3.1-8B-Instruct
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
fax4ever/culturalitems-roberta-base-5 | fax4ever | 2025-04-25T03:05:52Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"roberta",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2025-04-25T03:05:26Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
LUcowork/mpnet_stage1 | LUcowork | 2025-04-25T03:04:11Z | 0 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"mpnet",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:23175",
"loss:TripletLoss",
"dataset:hobbang/stage1-triplet-dataset",
"arxiv:1908.10084",
"arxiv:1703.07737",
"base_model:sentence-transformers/all-mpnet-base-v2",
"base_model:finetune:sentence-transformers/all-mpnet-base-v2",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| sentence-similarity | 2025-04-25T02:55:16Z | ---
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:23175
- loss:TripletLoss
base_model: sentence-transformers/all-mpnet-base-v2
widget:
- source_sentence: The First Trust Nasdaq Bank ETF (FTXO) seeks to replicate the performance
of the Nasdaq US Smart Banks TM Index by investing at least 90% of its assets
in the index's securities. This fund provides exposure to U.S. banking companies,
selecting the most liquid stocks and ranking/weighting them based on factors including
trailing volatility, value (cash flow to price), and growth (price returns). The
index typically holds around 30 liquid U.S. banking companies across retail banking,
loans, and financial services, with an 8% cap on any single holding. The fund
is non-diversified, and the index undergoes annual reconstitution and quarterly
rebalancing.
sentences:
- The iShares Evolved U.S. Media and Entertainment ETF seeks to invest in U.S. listed
common stocks of large-, mid-, and small-capitalization companies within the media
and entertainment sector. Following an "Evolved" approach, the fund selects companies
belonging to the Media and Entertainment Evolved Sector based on economic characteristics
historically correlated with traditional sector definitions. Under normal circumstances,
it allocates at least 80% of its net assets to these stocks, and the fund is non-diversified.
- The Direxion Daily Healthcare Bull 3X Shares (CURE) is an ETF that seeks daily
investment results, before fees and expenses, of 300% (3X) of the daily performance
of the Health Care Select Sector Index. It invests at least 80% of its net assets
in financial instruments designed to provide this 3X daily leveraged exposure.
The underlying index tracks US listed healthcare companies, including pharmaceuticals,
health care equipment and supplies, providers and services, biotechnology, life
sciences tools, and health care technology, covering major large-cap names. CURE
is non-diversified and intended strictly as a short-term tactical instrument,
as it delivers its stated 3X exposure only for a single day, and returns over
longer periods can significantly differ from three times the index's performance.
- The Xtrackers MSCI Emerging Markets Climate Selection ETF seeks to track an emerging
markets index focused on companies meeting specific climate criteria. Derived
from the MSCI ACWI Select Climate 500 methodology, the underlying index selects
eligible emerging market stocks using an optimization process designed to reduce
greenhouse gas emission intensity (targeting 10% revenue-related and 7% financing-related
reductions) and increase exposure to companies with SBTi-approved targets. The
strategy also excludes controversial companies and evaluates companies based on
broader ESG considerations. The fund is non-diversified and invests at least 80%
of its assets in the component securities of this climate-focused emerging markets
index.
- source_sentence: The iShares S&P Small-Cap 600 Value ETF (IJS) seeks to track the
investment results of the S&P SmallCap 600 Value Index, which consists of U.S.
small-capitalization equities exhibiting value characteristics. This index selects
value stocks from the S&P SmallCap 600 using factors such as book value to price,
earnings to price, and sales to price ratios. The fund generally invests at least
80% of its assets in the component securities of its underlying index and may
invest up to 20% in certain futures, options, swap contracts, cash, and cash equivalents.
The underlying index undergoes annual rebalancing in December.
sentences:
- The Global X S&P 500 Risk Managed Income ETF seeks to track the Cboe S&P 500 Risk
Managed Income Index by investing at least 80% of its assets in index securities.
The index's strategy involves holding the underlying stocks of the S&P 500 Index
while applying an options collar, specifically selling at-the-money covered call
options and buying monthly 5% out-of-the-money put options corresponding to the
portfolio's value. This approach aims to generate income, ideally resulting in
a net credit from the options premiums, and provide risk management, though selling
at-the-money calls inherently caps the fund's potential for upside participation.
- The Amplify International Enhanced Dividend Income ETF (IDVO), an actively managed
fund recently updated to include CWP in its name, seeks to provide current income
primarily and capital appreciation secondarily. The fund invests at least 80%
of its assets in dividend-paying U.S. exchange-traded American depositary receipt
(ADR) securities representing companies located outside the U.S., focusing on
high-quality, large-cap constituents from the MSCI ACWI ex USA Index to offer
international equity exposure in a domestic wrapper. It enhances income generation
by opportunistically utilizing a tactical strategy of writing (selling) short-term,
U.S. exchange-traded covered call option contracts on some or all of its individual
holdings, targeting income from both dividends and option premiums. While aiming
for country and sector diversification by selecting approximately 30-50 stocks,
the fund is classified as non-diversified.
- The Strive Emerging Markets Ex-China ETF seeks to track the total return performance
of the Bloomberg Emerging Markets ex China Large & Mid Cap Index. This index comprises
large and mid-capitalization equity securities from 24 emerging market economies,
specifically excluding China. The index is market cap-weighted, includes common
stocks and real estate investment trusts, and is rebalanced quarterly and reconstituted
semi-annually. Under normal circumstances, the fund invests at least 80% of its
assets in these emerging market securities, which may include depositary receipts
representing securities included in the index.
- source_sentence: The Fidelity MSCI Health Care Index ETF (FHLC) seeks to track the
performance of the MSCI USA IMI Health Care 25/50 Index, which represents the
broad U.S. health care sector. The ETF invests at least 80% of its assets in securities
included in this market-cap-weighted index, which captures large, mid, and small-cap
companies across over 10 subsectors. Employing a representative sampling strategy,
the fund aims to correspond to the index's performance. The index incorporates
a 25/50 capping methodology, is rebalanced quarterly, and its broad reach offers
diversification across cap sizes and subsectors, potentially reducing concentration
in dominant large pharma names and increasing exposure to areas like drug retailers
and insurance. The fund is classified as non-diversified.
sentences:
- The SPDR S&P Health Care Equipment ETF (XHE) tracks the equal-weighted S&P Health
Care Equipment Select Industry Index, which is derived from the U.S. total market
and provides exposure to U.S. health care equipment and supplies companies. Employing
a sampling strategy, the fund invests at least 80% of its assets in the index's
securities, which are rebalanced quarterly. While encompassing companies of all
cap sizes, the equal-weight methodology gives XHE a significant small-cap tilt,
offering focused access to this narrow segment as an alternative for investors
seeking to avoid the concentration found in broader, market-cap-weighted healthcare
funds dominated by large pharmaceuticals or service providers.
- The Global X Silver Miners ETF (SIL) seeks to provide investment results that
correspond generally to the price and yield performance of the Solactive Global
Silver Miners Total Return Index. This index is designed to measure the broad-based
equity market performance of global companies primarily involved in the silver
mining industry, including related activities like exploration and refining. The
fund invests at least 80% of its total assets in the securities of this underlying
index and related American and Global Depositary Receipts. The index is market-cap-weighted
and typically comprises 20-40 stocks, while the fund itself is considered non-diversified.
- The Invesco S&P 500 Equal Weight Energy ETF (RSPG) is a large-cap sector fund
tracking an equal-weighted index comprising U.S. energy companies within the S&P
500 Index, classified according to the Global Industry Classification Standard
(GICS). The ETF aims to invest at least 90% of its total assets in securities
from this underlying index, which applies an equal-weighting methodology and rebalances
quarterly. The index also includes a rule to ensure a minimum of 22 constituents,
incorporating the largest energy companies from the S&P MidCap 400 Index if necessary
to meet this count.
- source_sentence: The VictoryShares Top Veteran Employers ETF (VTRN) was designed
to track the Veterans Select Index, focusing on US-listed companies of any market
capitalization that demonstrated support for US military veterans, service members,
and their families primarily through employment opportunities and related policies.
These companies were identified based on various sources like rankings and surveys
and were typically weighted equally in the index. However, this fund is liquidating,
and its last day of trading was October 11, 2021.
sentences:
- The Invesco S&P 500 Equal Weight Industrials ETF (RSPN) tracks an equal-weighted
index of U.S. industrial stocks drawn from the S&P 500 Index, specifically focusing
on companies classified within the industrials sector according to the Global
Industry Classification Standard (GICS). The fund generally invests at least 90%
of its assets in these securities. This equal-weighting scheme offers a non-traditional
approach compared to market-cap weighting, reducing the dominance of large-cap
industrial conglomerates and lowering the portfolio's weighted average market
capitalization. The underlying index is rebalanced on a quarterly basis.
- The SP Funds Dow Jones Global Sukuk ETF (SPSK) is a passively managed fund designed
to track the performance, before fees and expenses, of the Dow Jones Sukuk Total
Return (ex-Reinvestment) Index. This index focuses on U.S. dollar-denominated,
investment-grade sukuk, which are financial certificates similar to bonds, issued
in global markets and structured to comply with Islamic religious law (Sharia)
and its investment principles. Sharia compliance involves screening securities
to exclude businesses such as tobacco, pornography, gambling, and interest-based
finance, and issuers may include international financial institutions and foreign
governments or agencies, including from emerging markets. Under normal circumstances,
the fund attempts to invest substantially all (at least 80%) of its assets in
the index's component securities, which are reconstituted and rebalanced monthly.
The ETF is considered non-diversified.
- The ALUM ETF, part of the USCF ETF Trust, is an actively managed fund utilizing
a proprietary methodology to seek exposure to the price of aluminum through aluminum-based
derivative investments. It primarily invests in aluminum futures but may also
use cash-settled options, forward contracts, options on futures, and other options
traded on US and non-US exchanges. The fund operates through a wholly owned Cayman
Islands subsidiary to avoid issuing K-1 forms and may hold cash, cash equivalents,
or investment grade fixed-income securities as collateral. This non-diversified
fund is currently being delisted, with its last day of trading on an exchange
scheduled for October 11, 2024.
- source_sentence: 'The Sprott Gold Miners ETF (SGDM) seeks to track the performance
of the Solactive Gold Miners Custom Factors Total Return Index. This index focuses
on gold mining companies based in the U.S. and Canada whose shares trade on the
Toronto Stock Exchange, New York Stock Exchange, or NASDAQ. The index employs
a weighting methodology that begins with market capitalization and then adjusts
based on three fundamental factors: higher revenue growth, lower debt-to-equity,
and higher free cash flow yield. The fund is non-diversified and normally invests
at least 90% of its net assets in securities included in this index.'
sentences:
- 'The Sprott Gold Miners ETF (SGDM) seeks to track the performance of the Solactive
Gold Miners Custom Factors Total Return Index. This index focuses on gold mining
companies based in the U.S. and Canada whose shares trade on the Toronto Stock
Exchange, New York Stock Exchange, or NASDAQ. The index employs a weighting methodology
that begins with market capitalization and then adjusts based on three fundamental
factors: higher revenue growth, lower debt-to-equity, and higher free cash flow
yield. The fund is non-diversified and normally invests at least 90% of its net
assets in securities included in this index.'
- The VanEck Biotech ETF (BBH) seeks to replicate the performance of the MVIS® US
Listed Biotech 25 Index, which provides exposure to approximately 25 of the largest
or leading U.S.-listed companies in the biotechnology industry. The fund normally
invests at least 80% of its assets in securities comprising this market-cap-weighted
index. The underlying index includes common stocks and depositary receipts of
firms involved in the research, development, production, marketing, and sale of
drugs based on genetic analysis and diagnostic equipment. While focusing on U.S.-listed
companies, it may include foreign firms listed domestically, and medium-capitalization
companies can be included. Reflecting the index's concentration, the fund is non-diversified
and may have a top-heavy portfolio. The index is reviewed semi-annually.
- The KraneShares Global Carbon Offset Strategy ETF (KSET) was the first US-listed
ETF providing exposure to the global voluntary carbon market. It achieved this
by investing primarily in liquid carbon offset credit futures, including CME-traded
Global Emissions Offsets (GEOs) and Nature-Based Global Emission Offsets (N-GEOs),
which are designed to help businesses meet greenhouse gas reduction goals. Tracking
an index that weighted eligible futures based on liquidity, the fund sought exposure
to the same carbon offset credit futures, typically those maturing within two
years. The ETF was considered non-diversified and utilized a Cayman Island subsidiary.
However, the fund was delisted, with its last day of trading on an exchange being
March 14, 2024.
datasets:
- hobbang/stage1-triplet-dataset
pipeline_tag: sentence-similarity
library_name: sentence-transformers
---
# SentenceTransformer based on sentence-transformers/all-mpnet-base-v2
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [sentence-transformers/all-mpnet-base-v2](https://huggingface.co/sentence-transformers/all-mpnet-base-v2) on the [stage1-triplet-dataset](https://huggingface.co/datasets/hobbang/stage1-triplet-dataset) dataset. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [sentence-transformers/all-mpnet-base-v2](https://huggingface.co/sentence-transformers/all-mpnet-base-v2) <!-- at revision 12e86a3c702fc3c50205a8db88f0ec7c0b6b94a0 -->
- **Maximum Sequence Length:** 384 tokens
- **Output Dimensionality:** 768 dimensions
- **Similarity Function:** Cosine Similarity
- **Training Dataset:**
- [stage1-triplet-dataset](https://huggingface.co/datasets/hobbang/stage1-triplet-dataset)
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 384, 'do_lower_case': False}) with Transformer model: MPNetModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("sentence_transformers_model_id")
# Run inference
sentences = [
'The Sprott Gold Miners ETF (SGDM) seeks to track the performance of the Solactive Gold Miners Custom Factors Total Return Index. This index focuses on gold mining companies based in the U.S. and Canada whose shares trade on the Toronto Stock Exchange, New York Stock Exchange, or NASDAQ. The index employs a weighting methodology that begins with market capitalization and then adjusts based on three fundamental factors: higher revenue growth, lower debt-to-equity, and higher free cash flow yield. The fund is non-diversified and normally invests at least 90% of its net assets in securities included in this index.',
'The KraneShares Global Carbon Offset Strategy ETF (KSET) was the first US-listed ETF providing exposure to the global voluntary carbon market. It achieved this by investing primarily in liquid carbon offset credit futures, including CME-traded Global Emissions Offsets (GEOs) and Nature-Based Global Emission Offsets (N-GEOs), which are designed to help businesses meet greenhouse gas reduction goals. Tracking an index that weighted eligible futures based on liquidity, the fund sought exposure to the same carbon offset credit futures, typically those maturing within two years. The ETF was considered non-diversified and utilized a Cayman Island subsidiary. However, the fund was delisted, with its last day of trading on an exchange being March 14, 2024.',
"The VanEck Biotech ETF (BBH) seeks to replicate the performance of the MVIS® US Listed Biotech 25 Index, which provides exposure to approximately 25 of the largest or leading U.S.-listed companies in the biotechnology industry. The fund normally invests at least 80% of its assets in securities comprising this market-cap-weighted index. The underlying index includes common stocks and depositary receipts of firms involved in the research, development, production, marketing, and sale of drugs based on genetic analysis and diagnostic equipment. While focusing on U.S.-listed companies, it may include foreign firms listed domestically, and medium-capitalization companies can be included. Reflecting the index's concentration, the fund is non-diversified and may have a top-heavy portfolio. The index is reviewed semi-annually.",
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### stage1-triplet-dataset
* Dataset: [stage1-triplet-dataset](https://huggingface.co/datasets/hobbang/stage1-triplet-dataset) at [a0fb998](https://huggingface.co/datasets/hobbang/stage1-triplet-dataset/tree/a0fb998d4fb2fabe62e38a295f6bbf4a66b70b38)
* Size: 23,175 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 80 tokens</li><li>mean: 148.35 tokens</li><li>max: 211 tokens</li></ul> | <ul><li>min: 80 tokens</li><li>mean: 153.81 tokens</li><li>max: 238 tokens</li></ul> | <ul><li>min: 82 tokens</li><li>mean: 150.74 tokens</li><li>max: 208 tokens</li></ul> |
* Samples:
| anchor | positive | negative |
|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| <code>The Invesco Financial Preferred ETF (PGF) seeks to track the ICE Exchange-Listed Fixed Rate Financial Preferred Securities Index, primarily by investing at least 90% of its total assets in the securities comprising the index. The underlying index is market capitalization weighted and designed to track the performance of exchange-listed, fixed rate, U.S. dollar denominated preferred securities, including functionally equivalent instruments, issued by U.S. financial companies. PGF provides a concentrated portfolio exclusively focused on financial-sector preferred securities and is considered non-diversified, holding both investment- and non-investment-grade securities within this focus.</code> | <code>The FlexShares ESG & Climate Investment Grade Corporate Core Index Fund (FEIG) is a passively managed ETF designed to provide broad-market, core exposure to USD-denominated investment-grade corporate bonds. It seeks to track the performance of the Northern Trust ESG & Climate Investment Grade U.S. Corporate Core IndexSM, which selects bonds from a universe of USD-denominated, investment-grade corporate debt with maturities of at least one year. The index employs an optimization process to increase the aggregate ESG score and reduce aggregate climate-related risk among constituent companies, involving ranking firms on material ESG metrics, governance, and carbon risks, while excluding controversial companies and international initiative violators. Weights are also optimized to minimize systematic risk, and the index is rebalanced monthly. Under normal circumstances, the fund invests at least 80% of its assets in the index's securities.</code> | <code>The Pacer Nasdaq-100 Top 50 Cash Cows Growth Leaders ETF (QQQG) seeks to track the Pacer Nasdaq 100 Top 50 Cash Cows Growth Leaders Index, which draws its universe from the Nasdaq-100 Index. Following a rules-based strategy, the fund screens these companies based on average projected free cash flows and earnings over the next two fiscal years, excluding financials, real estate, and those with negative projections. It then ranks identified stocks by their trailing twelve-month free cash flow margins and selects the top 50 names, weighted by price momentum. The portfolio is reconstituted and rebalanced quarterly. Aiming to identify quality growth leaders with strong cash flow generation, the fund seeks to invest at least 80% of assets in growth securities and is non-diversified.</code> |
| <code>The Invesco Financial Preferred ETF (PGF) seeks to track the ICE Exchange-Listed Fixed Rate Financial Preferred Securities Index, primarily by investing at least 90% of its total assets in the securities comprising the index. The underlying index is market capitalization weighted and designed to track the performance of exchange-listed, fixed rate, U.S. dollar denominated preferred securities, including functionally equivalent instruments, issued by U.S. financial companies. PGF provides a concentrated portfolio exclusively focused on financial-sector preferred securities and is considered non-diversified, holding both investment- and non-investment-grade securities within this focus.</code> | <code>The FlexShares ESG & Climate Investment Grade Corporate Core Index Fund (FEIG) is a passively managed ETF designed to provide broad-market, core exposure to USD-denominated investment-grade corporate bonds. It seeks to track the performance of the Northern Trust ESG & Climate Investment Grade U.S. Corporate Core IndexSM, which selects bonds from a universe of USD-denominated, investment-grade corporate debt with maturities of at least one year. The index employs an optimization process to increase the aggregate ESG score and reduce aggregate climate-related risk among constituent companies, involving ranking firms on material ESG metrics, governance, and carbon risks, while excluding controversial companies and international initiative violators. Weights are also optimized to minimize systematic risk, and the index is rebalanced monthly. Under normal circumstances, the fund invests at least 80% of its assets in the index's securities.</code> | <code>The Nuveen Global Net Zero Transition ETF (NTZG) was an actively managed fund that sought capital appreciation by investing in global equity securities. The fund focused on companies positioned to contribute to the transition to a net zero carbon economy through their current or planned efforts to reduce global greenhouse gas emissions. Utilizing bottom-up, fundamental analysis, NTZG invested in a range of companies, including climate leaders, firms with disruptive climate mitigation technologies, and high carbon emitters working towards real-world emissions decline. The fund aimed to align with the Paris Climate Agreement by seeking to lower portfolio carbon intensity annually towards a 2050 net zero goal and engaging with portfolio companies, while excluding companies involved in weapons and firearms and investing globally across market capitalizations with allocations to non-US and emerging markets. **Please note: The security has been delisted, and the last day of trading on an exc...</code> |
| <code>The Invesco Financial Preferred ETF (PGF) seeks to track the ICE Exchange-Listed Fixed Rate Financial Preferred Securities Index, primarily by investing at least 90% of its total assets in the securities comprising the index. The underlying index is market capitalization weighted and designed to track the performance of exchange-listed, fixed rate, U.S. dollar denominated preferred securities, including functionally equivalent instruments, issued by U.S. financial companies. PGF provides a concentrated portfolio exclusively focused on financial-sector preferred securities and is considered non-diversified, holding both investment- and non-investment-grade securities within this focus.</code> | <code>The FlexShares ESG & Climate Investment Grade Corporate Core Index Fund (FEIG) is a passively managed ETF designed to provide broad-market, core exposure to USD-denominated investment-grade corporate bonds. It seeks to track the performance of the Northern Trust ESG & Climate Investment Grade U.S. Corporate Core IndexSM, which selects bonds from a universe of USD-denominated, investment-grade corporate debt with maturities of at least one year. The index employs an optimization process to increase the aggregate ESG score and reduce aggregate climate-related risk among constituent companies, involving ranking firms on material ESG metrics, governance, and carbon risks, while excluding controversial companies and international initiative violators. Weights are also optimized to minimize systematic risk, and the index is rebalanced monthly. Under normal circumstances, the fund invests at least 80% of its assets in the index's securities.</code> | <code>The First Trust Expanded Technology ETF (XPND) is an actively managed fund seeking long-term capital appreciation by investing primarily in US stocks identified as "Expanded Technology Companies." Defined as companies whose operations are principally derived from or dependent upon technology, these include traditional information technology firms as well as tech-dependent companies in other sectors, such as communication services and consumer discretionary (like internet and direct marketing retail). The fund invests at least 80% of its net assets in common stocks of these companies. While concentrated in the information technology sector and considered non-diversified, XPND aims for expanded exposure through a portfolio of around 50 companies selected using a quantitative model based on factors like return on equity, momentum, and free cash flow growth. Portfolio weights are generally market-cap-based within set ranges, and the fund is reconstituted and rebalanced quarterly.</code> |
* Loss: [<code>TripletLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#tripletloss) with these parameters:
```json
{
"distance_metric": "TripletDistanceMetric.COSINE",
"triplet_margin": 0.05
}
```
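As a reference, the loss above can be instantiated with the Sentence Transformers API roughly as follows (a minimal sketch assuming the base model named in this card; the variable names are illustrative):
```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.losses import TripletLoss, TripletDistanceMetric
# Start from the base checkpoint named in this card.
model = SentenceTransformer("sentence-transformers/all-mpnet-base-v2")
# TripletLoss configured as reported above: cosine distance, margin 0.05.
loss = TripletLoss(
    model=model,
    distance_metric=TripletDistanceMetric.COSINE,
    triplet_margin=0.05,
)
```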
### Evaluation Dataset
#### stage1-triplet-dataset
* Dataset: [stage1-triplet-dataset](https://huggingface.co/datasets/hobbang/stage1-triplet-dataset) at [a0fb998](https://huggingface.co/datasets/hobbang/stage1-triplet-dataset/tree/a0fb998d4fb2fabe62e38a295f6bbf4a66b70b38)
* Size: 3,010 evaluation samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 84 tokens</li><li>mean: 152.57 tokens</li><li>max: 214 tokens</li></ul> | <ul><li>min: 70 tokens</li><li>mean: 154.43 tokens</li><li>max: 224 tokens</li></ul> | <ul><li>min: 70 tokens</li><li>mean: 150.04 tokens</li><li>max: 204 tokens</li></ul> |
* Samples:
| anchor | positive | negative |
|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| <code>The Global X S&P 500 Risk Managed Income ETF seeks to track the Cboe S&P 500 Risk Managed Income Index by investing at least 80% of its assets in index securities. The index's strategy involves holding the underlying stocks of the S&P 500 Index while applying an options collar, specifically selling at-the-money covered call options and buying monthly 5% out-of-the-money put options corresponding to the portfolio's value. This approach aims to generate income, ideally resulting in a net credit from the options premiums, and provide risk management, though selling at-the-money calls inherently caps the fund's potential for upside participation.</code> | <code>The U.S. Global Technology and Aerospace & Defense ETF is an actively managed ETF seeking capital appreciation by investing in equity securities of companies expected to benefit from national defense efforts. These efforts include technological innovations and the development of products and services related to aerospace, physical, and cybersecurity defense, often in preparation for or in response to domestic, regional, or global conflicts. The fund is non-diversified.</code> | <code>The BlackRock Future Climate and Sustainable Economy ETF (BECO) is an actively managed equity fund focused on the transition to a lower carbon economy and future climate themes. It seeks a relatively concentrated, non-diversified portfolio of globally-listed companies of any market capitalization, investing across multiple subthemes such as sustainable energy, resource efficiency, future transport, sustainable nutrition, and biodiversity. The fund utilizes proprietary environmental criteria, including carbon metrics, and aims to align with the Paris Climate Agreement goals for net-zero emissions by 2050, while excluding certain high-emission industries and companies violating the UN Global Compact. It also attempts to achieve a better aggregate environmental and ESG score than its benchmark, the MSCI ACWI Multiple Industries Select Index. Note that BECO is being delisted, with its last day of trading on an exchange scheduled for August 12, 2024.</code> |
| <code>The Global X S&P 500 Risk Managed Income ETF seeks to track the Cboe S&P 500 Risk Managed Income Index by investing at least 80% of its assets in index securities. The index's strategy involves holding the underlying stocks of the S&P 500 Index while applying an options collar, specifically selling at-the-money covered call options and buying monthly 5% out-of-the-money put options corresponding to the portfolio's value. This approach aims to generate income, ideally resulting in a net credit from the options premiums, and provide risk management, though selling at-the-money calls inherently caps the fund's potential for upside participation.</code> | <code>The U.S. Global Technology and Aerospace & Defense ETF is an actively managed ETF seeking capital appreciation by investing in equity securities of companies expected to benefit from national defense efforts. These efforts include technological innovations and the development of products and services related to aerospace, physical, and cybersecurity defense, often in preparation for or in response to domestic, regional, or global conflicts. The fund is non-diversified.</code> | <code>The iShares Energy Storage & Materials ETF (IBAT) seeks to track the STOXX Global Energy Storage and Materials Index, which measures the performance of equity securities of global companies involved in energy storage solutions, including hydrogen, fuel cells, and batteries, aiming to support the transition to a low carbon economy. Determined by STOXX Ltd., the index selects companies based on their exposure to the theme through revenue analysis and patent assessment, while also applying exclusionary ESG screens. The index is price-weighted, based on market capitalization with capping rules. The fund generally invests at least 90% of its assets in the component securities of its underlying index or substantially identical investments and is considered non-diversified.</code> |
| <code>The Global X S&P 500 Risk Managed Income ETF seeks to track the Cboe S&P 500 Risk Managed Income Index by investing at least 80% of its assets in index securities. The index's strategy involves holding the underlying stocks of the S&P 500 Index while applying an options collar, specifically selling at-the-money covered call options and buying monthly 5% out-of-the-money put options corresponding to the portfolio's value. This approach aims to generate income, ideally resulting in a net credit from the options premiums, and provide risk management, though selling at-the-money calls inherently caps the fund's potential for upside participation.</code> | <code>The U.S. Global Technology and Aerospace & Defense ETF is an actively managed ETF seeking capital appreciation by investing in equity securities of companies expected to benefit from national defense efforts. These efforts include technological innovations and the development of products and services related to aerospace, physical, and cybersecurity defense, often in preparation for or in response to domestic, regional, or global conflicts. The fund is non-diversified.</code> | <code>The Sprott Gold Miners ETF (SGDM) seeks to track the performance of the Solactive Gold Miners Custom Factors Total Return Index. This index focuses on gold mining companies based in the U.S. and Canada whose shares trade on the Toronto Stock Exchange, New York Stock Exchange, or NASDAQ. The index employs a weighting methodology that begins with market capitalization and then adjusts based on three fundamental factors: higher revenue growth, lower debt-to-equity, and higher free cash flow yield. The fund is non-diversified and normally invests at least 90% of its net assets in securities included in this index.</code> |
* Loss: [<code>TripletLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#tripletloss) with these parameters:
```json
{
"distance_metric": "TripletDistanceMetric.COSINE",
"triplet_margin": 0.05
}
```
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: steps
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `learning_rate`: 3e-05
- `num_train_epochs`: 1
- `warmup_ratio`: 0.1
- `bf16`: True
- `dataloader_drop_last`: True
- `load_best_model_at_end`: True
- `batch_sampler`: no_duplicates
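For convenience, these non-default values map onto `SentenceTransformerTrainingArguments` roughly as sketched below (the `output_dir` is an assumption; every other setting follows the list above, and unlisted options keep their defaults):
```python
from sentence_transformers import SentenceTransformerTrainingArguments
from sentence_transformers.training_args import BatchSamplers
# The non-default values listed above, expressed as trainer arguments.
args = SentenceTransformerTrainingArguments(
    output_dir="mpnet_stage1",  # assumed output directory
    eval_strategy="steps",
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    learning_rate=3e-5,
    num_train_epochs=1,
    warmup_ratio=0.1,
    bf16=True,
    dataloader_drop_last=True,
    load_best_model_at_end=True,
    batch_sampler=BatchSamplers.NO_DUPLICATES,
)
```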
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 3e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 1
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: True
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: True
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: True
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `tp_size`: 0
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: proportional
</details>
### Training Logs
<details><summary>Click to expand</summary>
| Epoch | Step | Training Loss | Validation Loss |
|:----------:|:-------:|:-------------:|:---------------:|
| 0.0069 | 10 | 0.0448 | - |
| 0.0138 | 20 | 0.0354 | - |
| 0.0207 | 30 | 0.0293 | - |
| 0.0276 | 40 | 0.0381 | - |
| 0.0345 | 50 | 0.0228 | - |
| 0.0414 | 60 | 0.0238 | - |
| 0.0483 | 70 | 0.0229 | - |
| 0.0552 | 80 | 0.0148 | - |
| 0.0622 | 90 | 0.0175 | - |
| 0.0691 | 100 | 0.0161 | - |
| 0.0760 | 110 | 0.0124 | - |
| 0.0829 | 120 | 0.0111 | - |
| 0.0898 | 130 | 0.0165 | - |
| 0.0967 | 140 | 0.0162 | - |
| 0.1036 | 150 | 0.0141 | - |
| 0.1105 | 160 | 0.0116 | - |
| 0.1174 | 170 | 0.01 | - |
| 0.1243 | 180 | 0.0134 | - |
| 0.1312 | 190 | 0.0117 | - |
| 0.1381 | 200 | 0.0127 | 0.0131 |
| 0.1450 | 210 | 0.0083 | - |
| 0.1519 | 220 | 0.0116 | - |
| 0.1588 | 230 | 0.0099 | - |
| 0.1657 | 240 | 0.0086 | - |
| 0.1727 | 250 | 0.0099 | - |
| 0.1796 | 260 | 0.0047 | - |
| 0.1865 | 270 | 0.0052 | - |
| 0.1934 | 280 | 0.0086 | - |
| 0.2003 | 290 | 0.0084 | - |
| 0.2072 | 300 | 0.0068 | - |
| 0.2141 | 310 | 0.005 | - |
| 0.2210 | 320 | 0.0077 | - |
| 0.2279 | 330 | 0.0044 | - |
| 0.2348 | 340 | 0.0039 | - |
| 0.2417 | 350 | 0.0058 | - |
| 0.2486 | 360 | 0.0045 | - |
| 0.2555 | 370 | 0.0045 | - |
| 0.2624 | 380 | 0.0064 | - |
| 0.2693 | 390 | 0.0037 | - |
| **0.2762** | **400** | **0.0083** | **0.013** |
| 0.2831 | 410 | 0.0057 | - |
| 0.2901 | 420 | 0.0043 | - |
| 0.2970 | 430 | 0.0028 | - |
| 0.3039 | 440 | 0.0036 | - |
| 0.3108 | 450 | 0.0031 | - |
| 0.3177 | 460 | 0.0072 | - |
| 0.3246 | 470 | 0.0025 | - |
| 0.3315 | 480 | 0.0041 | - |
| 0.3384 | 490 | 0.0049 | - |
| 0.3453 | 500 | 0.0035 | - |
| 0.3522 | 510 | 0.0023 | - |
| 0.3591 | 520 | 0.0043 | - |
| 0.3660 | 530 | 0.0032 | - |
| 0.3729 | 540 | 0.0031 | - |
| 0.3798 | 550 | 0.0039 | - |
| 0.3867 | 560 | 0.0042 | - |
| 0.3936 | 570 | 0.0055 | - |
| 0.4006 | 580 | 0.0041 | - |
| 0.4075 | 590 | 0.0026 | - |
| 0.4144 | 600 | 0.002 | 0.0133 |
| 0.4213 | 610 | 0.0027 | - |
| 0.4282 | 620 | 0.0032 | - |
| 0.4351 | 630 | 0.0025 | - |
| 0.4420 | 640 | 0.0042 | - |
| 0.4489 | 650 | 0.0046 | - |
| 0.4558 | 660 | 0.0011 | - |
| 0.4627 | 670 | 0.0004 | - |
| 0.4696 | 680 | 0.0019 | - |
| 0.4765 | 690 | 0.0034 | - |
| 0.4834 | 700 | 0.0032 | - |
| 0.4903 | 710 | 0.0029 | - |
| 0.4972 | 720 | 0.0038 | - |
| 0.5041 | 730 | 0.0021 | - |
| 0.5110 | 740 | 0.0008 | - |
| 0.5180 | 750 | 0.0015 | - |
| 0.5249 | 760 | 0.0018 | - |
| 0.5318 | 770 | 0.0022 | - |
| 0.5387 | 780 | 0.0006 | - |
| 0.5456 | 790 | 0.0022 | - |
| 0.5525 | 800 | 0.0006 | 0.0160 |
| 0.5594 | 810 | 0.0021 | - |
| 0.5663 | 820 | 0.0013 | - |
| 0.5732 | 830 | 0.0019 | - |
| 0.5801 | 840 | 0.0017 | - |
| 0.5870 | 850 | 0.0008 | - |
| 0.5939 | 860 | 0.0012 | - |
| 0.6008 | 870 | 0.0003 | - |
| 0.6077 | 880 | 0.0009 | - |
| 0.6146 | 890 | 0.001 | - |
| 0.6215 | 900 | 0.0011 | - |
| 0.6285 | 910 | 0.0019 | - |
| 0.6354 | 920 | 0.0009 | - |
| 0.6423 | 930 | 0.0003 | - |
| 0.6492 | 940 | 0.0001 | - |
| 0.6561 | 950 | 0.0019 | - |
| 0.6630 | 960 | 0.0006 | - |
| 0.6699 | 970 | 0.0003 | - |
| 0.6768 | 980 | 0.0005 | - |
| 0.6837 | 990 | 0.0025 | - |
| 0.6906 | 1000 | 0.001 | 0.0154 |
| 0.6975 | 1010 | 0.0009 | - |
| 0.7044 | 1020 | 0.0004 | - |
| 0.7113 | 1030 | 0.0008 | - |
| 0.7182 | 1040 | 0.001 | - |
| 0.7251 | 1050 | 0.0018 | - |
| 0.7320 | 1060 | 0.002 | - |
| 0.7390 | 1070 | 0.0 | - |
| 0.7459 | 1080 | 0.0 | - |
| 0.7528 | 1090 | 0.0003 | - |
| 0.7597 | 1100 | 0.0002 | - |
| 0.7666 | 1110 | 0.0004 | - |
| 0.7735 | 1120 | 0.0004 | - |
| 0.7804 | 1130 | 0.0001 | - |
| 0.7873 | 1140 | 0.0002 | - |
| 0.7942 | 1150 | 0.001 | - |
| 0.8011 | 1160 | 0.0003 | - |
| 0.8080 | 1170 | 0.0003 | - |
| 0.8149 | 1180 | 0.0002 | - |
| 0.8218 | 1190 | 0.0002 | - |
| 0.8287 | 1200 | 0.0 | 0.0179 |
| 0.8356 | 1210 | 0.0006 | - |
| 0.8425 | 1220 | 0.0005 | - |
| 0.8494 | 1230 | 0.0015 | - |
| 0.8564 | 1240 | 0.0009 | - |
| 0.8633 | 1250 | 0.0007 | - |
| 0.8702 | 1260 | 0.0003 | - |
| 0.8771 | 1270 | 0.0003 | - |
| 0.8840 | 1280 | 0.0 | - |
| 0.8909 | 1290 | 0.0 | - |
| 0.8978 | 1300 | 0.0009 | - |
| 0.9047 | 1310 | 0.0011 | - |
| 0.9116 | 1320 | 0.0003 | - |
| 0.9185 | 1330 | 0.0 | - |
| 0.9254 | 1340 | 0.0002 | - |
| 0.9323 | 1350 | 0.0004 | - |
| 0.9392 | 1360 | 0.0004 | - |
| 0.9461 | 1370 | 0.0007 | - |
| 0.9530 | 1380 | 0.0006 | - |
| 0.9599 | 1390 | 0.0006 | - |
| 0.9669 | 1400 | 0.0005 | 0.0167 |
* The bold row denotes the saved checkpoint.
</details>
### Framework Versions
- Python: 3.10.12
- Sentence Transformers: 4.1.0
- Transformers: 4.51.3
- PyTorch: 2.1.0+cu118
- Accelerate: 1.6.0
- Datasets: 3.5.0
- Tokenizers: 0.21.1
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### TripletLoss
```bibtex
@misc{hermans2017defense,
title={In Defense of the Triplet Loss for Person Re-Identification},
author={Alexander Hermans and Lucas Beyer and Bastian Leibe},
year={2017},
eprint={1703.07737},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
--> |
3mily1u/fim-codegen-350m-mono-finetuned-control-25 | 3mily1u | 2025-04-25T03:04:09Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"codegen",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-04-25T03:03:19Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
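Until more information is provided, a generic loading sketch for this CodeGen-based text-generation checkpoint might look as follows (the prompt is purely illustrative, and any fill-in-the-middle formatting the model expects is not shown):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
repo_id = "3mily1u/fim-codegen-350m-mono-finetuned-control-25"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id)
# Illustrative prompt only; adapt to the model's intended input format.
inputs = tokenizer("def fibonacci(n):", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```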
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
v0dro/distilgpt2-finetuned-wikitext2 | v0dro | 2025-04-25T03:02:29Z | 9 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"gpt2",
"text-generation",
"generated_from_trainer",
"base_model:distilbert/distilgpt2",
"base_model:finetune:distilbert/distilgpt2",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-04-18T06:15:29Z | ---
library_name: transformers
license: apache-2.0
base_model: distilgpt2
tags:
- generated_from_trainer
model-index:
- name: distilgpt2-finetuned-wikitext2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilgpt2-finetuned-wikitext2
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 3.6432
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: AdamW (torch implementation) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 3.0
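As an illustration, the hyperparameters above correspond to a `transformers.TrainingArguments` configuration along these lines (a sketch only; the output directory is an assumption and unlisted settings keep their defaults):
```python
from transformers import TrainingArguments
training_args = TrainingArguments(
    output_dir="distilgpt2-finetuned-wikitext2",  # assumed
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    optim="adamw_torch",
    lr_scheduler_type="linear",
    num_train_epochs=3.0,
)
```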
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.7205 | 1.0 | 2334 | 3.6655 |
| 3.6191 | 2.0 | 4668 | 3.6464 |
| 3.5583 | 3.0 | 7002 | 3.6432 |
### Framework versions
- Transformers 4.49.0
- Pytorch 2.7.0.dev20250304+cu126
- Datasets 3.4.1
- Tokenizers 0.21.0
|
zhing23face/crypto | zhing23face | 2025-04-25T03:01:59Z | 0 | 0 | adapter-transformers | [
"adapter-transformers",
"text-to-speech",
"av",
"dataset:nvidia/OpenCodeReasoning",
"base_model:Qwen/Qwen2.5-Omni-7B",
"base_model:adapter:Qwen/Qwen2.5-Omni-7B",
"license:mit",
"region:us"
]
| text-to-speech | 2025-04-25T03:00:00Z | ---
license: mit
datasets:
- nvidia/OpenCodeReasoning
language:
- av
metrics:
- accuracy
base_model:
- Qwen/Qwen2.5-Omni-7B
new_version: Qwen/Qwen2.5-Omni-7B
pipeline_tag: text-to-speech
library_name: adapter-transformers
--- |
yeehinc19/DeepSeek-R1-Distill-Llama-8B-sql-base | yeehinc19 | 2025-04-25T03:01:20Z | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"region:us"
]
| null | 2025-04-25T02:59:51Z | ---
base_model: DeepSeek-R1-Distill-Llama-8B
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
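Since starter code is not provided yet, here is a hedged sketch: the adapter id comes from this repository's name, and the base model is assumed to live at `deepseek-ai/DeepSeek-R1-Distill-Llama-8B` on the Hub, so adjust both if they differ.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "deepseek-ai/DeepSeek-R1-Distill-Llama-8B"              # assumed Hub path of the base model
adapter_id = "yeehinc19/DeepSeek-R1-Distill-Llama-8B-sql-base"    # this repository

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(base_id)
model = PeftModel.from_pretrained(base_model, adapter_id)         # attach the PEFT adapter

inputs = tokenizer("Write a SQL query that counts rows per customer.", return_tensors="pt")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0], skip_special_tokens=True))
```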
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.2 |
MayBashendy/arabic_SDP_all_binary_multilingual_e5_small_lr3e-05_targ3_dev1235678_epoch530 | MayBashendy | 2025-04-25T03:00:42Z | 0 | 0 | null | [
"safetensors",
"model_hub_mixin",
"pytorch_model_hub_mixin",
"region:us"
]
| null | 2025-04-25T03:00:16Z | ---
tags:
- model_hub_mixin
- pytorch_model_hub_mixin
---
This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration:
- Library: [More Information Needed]
- Docs: [More Information Needed] |
hackelle/mobilenetv4_hybrid_medium-s1-v0.2.0 | hackelle | 2025-04-25T02:58:45Z | 0 | 0 | configilm | [
"configilm",
"safetensors",
"mobilenetv4_hybrid_medium",
"BigEarthNet v2.0",
"Remote Sensing",
"Classification",
"image-classification",
"Multispectral",
"arxiv:2407.03653",
"license:mit",
"region:us"
]
| image-classification | 2025-04-25T02:58:39Z | ---
thumbnail: "https://raw.githubusercontent.com/wiki/lhackel-tub/ConfigILM/static/imgs/RSiM_Logo_1.png"
tags:
- mobilenetv4_hybrid_medium
- BigEarthNet v2.0
- Remote Sensing
- Classification
- image-classification
- Multispectral
library_name: configilm
license: mit
widget:
- src: example.png
example_title: Example
output:
- label: Agro-forestry areas
score: 0.000000
- label: Arable land
score: 0.000000
- label: Beaches, dunes, sands
score: 0.000000
- label: Broad-leaved forest
score: 0.000000
- label: Coastal wetlands
score: 0.000000
---
[TU Berlin](https://www.tu.berlin/) | [RSiM](https://rsim.berlin/) | [DIMA](https://www.dima.tu-berlin.de/menue/database_systems_and_information_management_group/) | [BigEarth](http://www.bigearth.eu/) | [BIFOLD](https://bifold.berlin/)
:---:|:---:|:---:|:---:|:---:
<a href="https://www.tu.berlin/"><img src="https://raw.githubusercontent.com/wiki/lhackel-tub/ConfigILM/static/imgs/tu-berlin-logo-long-red.svg" style="font-size: 1rem; height: 2em; width: auto" alt="TU Berlin Logo"/> | <a href="https://rsim.berlin/"><img src="https://raw.githubusercontent.com/wiki/lhackel-tub/ConfigILM/static/imgs/RSiM_Logo_1.png" style="font-size: 1rem; height: 2em; width: auto" alt="RSiM Logo"> | <a href="https://www.dima.tu-berlin.de/menue/database_systems_and_information_management_group/"><img src="https://raw.githubusercontent.com/wiki/lhackel-tub/ConfigILM/static/imgs/DIMA.png" style="font-size: 1rem; height: 2em; width: auto" alt="DIMA Logo"> | <a href="http://www.bigearth.eu/"><img src="https://raw.githubusercontent.com/wiki/lhackel-tub/ConfigILM/static/imgs/BigEarth.png" style="font-size: 1rem; height: 2em; width: auto" alt="BigEarth Logo"> | <a href="https://bifold.berlin/"><img src="https://raw.githubusercontent.com/wiki/lhackel-tub/ConfigILM/static/imgs/BIFOLD_Logo_farbig.png" style="font-size: 1rem; height: 2em; width: auto; margin-right: 1em" alt="BIFOLD Logo">
# Mobilenetv4_hybrid_medium pretrained on BigEarthNet v2.0 using Sentinel-1 bands
<!-- Optional images -->
<!--
[Sentinel-1](https://sentinel.esa.int/web/sentinel/missions/sentinel-1) | [Sentinel-2](https://sentinel.esa.int/web/sentinel/missions/sentinel-2)
:---:|:---:
<a href="https://sentinel.esa.int/web/sentinel/missions/sentinel-1"><img src="https://raw.githubusercontent.com/wiki/lhackel-tub/ConfigILM/static/imgs/sentinel_2.jpg" style="font-size: 1rem; height: 10em; width: auto; margin-right: 1em" alt="Sentinel-2 Satellite"/> | <a href="https://sentinel.esa.int/web/sentinel/missions/sentinel-2"><img src="https://raw.githubusercontent.com/wiki/lhackel-tub/ConfigILM/static/imgs/sentinel_1.jpg" style="font-size: 1rem; height: 10em; width: auto; margin-right: 1em" alt="Sentinel-1 Satellite"/>
-->
This model was trained on the BigEarthNet v2.0 (also known as reBEN) dataset using the Sentinel-1 bands.
It was trained using the following parameters:
- Number of epochs: up to 100 (with early stopping after 5 epochs of no improvement based on validation average
precision macro)
- Batch size: 512
- Learning rate: 0.001
- Dropout rate: 0.15
- Drop Path rate: 0.15
- Learning rate scheduler: LinearWarmupCosineAnnealing for 1000 warmup steps
- Optimizer: AdamW
- Seed: 42
The weights published in this model card were obtained after 33 training epochs.
For more information, please visit the [official BigEarthNet v2.0 (reBEN) repository](https://git.tu-berlin.de/rsim/reben-training-scripts), where you can find the training scripts.
The model was evaluated on the test set of the BigEarthNet v2.0 dataset with the following results:
| Metric | Macro | Micro |
|:------------------|------------------:|------------------:|
| Average Precision | 0.610632 | 0.804239 |
| F1 Score | 0.556231 | 0.703160 |
| Precision | 0.633313 | 0.765481 |
# Example
| A Sentinel-1 image (VV, VH and VV/VH bands are used for visualization) |
|:---------------------------------------------------:|
|  |
| Class labels | Predicted scores |
|:--------------------------------------------------------------------------|--------------------------------------------------------------------------:|
| <p> Agro-forestry areas <br> Arable land <br> Beaches, dunes, sands <br> ... <br> Urban fabric </p> | <p> 0.000000 <br> 0.000000 <br> 0.000000 <br> ... <br> 0.000000 </p> |
To use the model, download the codes that define the model architecture from the
[official BigEarthNet v2.0 (reBEN) repository](https://git.tu-berlin.de/rsim/reben-training-scripts) and load the model using the
code below. Note that you have to install [`configilm`](https://pypi.org/project/configilm/) to use the provided code.
```python
from reben_publication.BigEarthNetv2_0_ImageClassifier import BigEarthNetv2_0_ImageClassifier
model = BigEarthNetv2_0_ImageClassifier.from_pretrained("path_to/huggingface_model_folder")
```
e.g.
```python
from reben_publication.BigEarthNetv2_0_ImageClassifier import BigEarthNetv2_0_ImageClassifier
model = BigEarthNetv2_0_ImageClassifier.from_pretrained(
"BIFOLD-BigEarthNetv2-0/mobilenetv4_hybrid_medium-s1-v0.1.1")
```
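Once the classifier is loaded, a forward pass follows the usual PyTorch pattern. The sketch below continues the snippet above and is illustrative only: the 2-band 120x120 Sentinel-1 input shape and the 19 output classes are assumptions based on the BigEarthNet v2.0 setup, so adjust them to the actual model configuration.

```python
import torch

model.eval()
dummy_batch = torch.randn(1, 2, 120, 120)  # 1 patch, 2 S1 bands (VV, VH), 120x120 pixels (assumed)
with torch.no_grad():
    logits = model(dummy_batch)
probs = torch.sigmoid(logits)  # multi-label task: independent per-class probabilities
print(probs.shape)             # expected (1, 19) for the 19 BigEarthNet v2.0 classes (assumed)
```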
If you use this model in your research or the provided code, please cite the following papers:
```bibtex
@article{clasen2024refinedbigearthnet,
title={reBEN: Refined BigEarthNet Dataset for Remote Sensing Image Analysis},
author={Clasen, Kai Norman and Hackel, Leonard and Burgert, Tom and Sumbul, Gencer and Demir, Beg{\"u}m and Markl, Volker},
year={2024},
eprint={2407.03653},
archivePrefix={arXiv},
primaryClass={cs.CV},
url={https://arxiv.org/abs/2407.03653},
}
```
```bibtex
@article{hackel2024configilm,
title={ConfigILM: A general purpose configurable library for combining image and language models for visual question answering},
author={Hackel, Leonard and Clasen, Kai Norman and Demir, Beg{\"u}m},
journal={SoftwareX},
volume={26},
pages={101731},
year={2024},
publisher={Elsevier}
}
```
|
hackelle/mobilenetv4_hybrid_medium-all-v0.2.0 | hackelle | 2025-04-25T02:56:35Z | 0 | 0 | configilm | [
"configilm",
"safetensors",
"mobilenetv4_hybrid_medium",
"BigEarthNet v2.0",
"Remote Sensing",
"Classification",
"image-classification",
"Multispectral",
"arxiv:2407.03653",
"license:mit",
"region:us"
]
| image-classification | 2025-04-25T02:56:30Z | ---
thumbnail: "https://raw.githubusercontent.com/wiki/lhackel-tub/ConfigILM/static/imgs/RSiM_Logo_1.png"
tags:
- mobilenetv4_hybrid_medium
- BigEarthNet v2.0
- Remote Sensing
- Classification
- image-classification
- Multispectral
library_name: configilm
license: mit
widget:
- src: example.png
example_title: Example
output:
- label: Agro-forestry areas
score: 0.000000
- label: Arable land
score: 0.000000
- label: Beaches, dunes, sands
score: 0.000000
- label: Broad-leaved forest
score: 0.000000
- label: Coastal wetlands
score: 0.000000
---
[TU Berlin](https://www.tu.berlin/) | [RSiM](https://rsim.berlin/) | [DIMA](https://www.dima.tu-berlin.de/menue/database_systems_and_information_management_group/) | [BigEarth](http://www.bigearth.eu/) | [BIFOLD](https://bifold.berlin/)
:---:|:---:|:---:|:---:|:---:
<a href="https://www.tu.berlin/"><img src="https://raw.githubusercontent.com/wiki/lhackel-tub/ConfigILM/static/imgs/tu-berlin-logo-long-red.svg" style="font-size: 1rem; height: 2em; width: auto" alt="TU Berlin Logo"/> | <a href="https://rsim.berlin/"><img src="https://raw.githubusercontent.com/wiki/lhackel-tub/ConfigILM/static/imgs/RSiM_Logo_1.png" style="font-size: 1rem; height: 2em; width: auto" alt="RSiM Logo"> | <a href="https://www.dima.tu-berlin.de/menue/database_systems_and_information_management_group/"><img src="https://raw.githubusercontent.com/wiki/lhackel-tub/ConfigILM/static/imgs/DIMA.png" style="font-size: 1rem; height: 2em; width: auto" alt="DIMA Logo"> | <a href="http://www.bigearth.eu/"><img src="https://raw.githubusercontent.com/wiki/lhackel-tub/ConfigILM/static/imgs/BigEarth.png" style="font-size: 1rem; height: 2em; width: auto" alt="BigEarth Logo"> | <a href="https://bifold.berlin/"><img src="https://raw.githubusercontent.com/wiki/lhackel-tub/ConfigILM/static/imgs/BIFOLD_Logo_farbig.png" style="font-size: 1rem; height: 2em; width: auto; margin-right: 1em" alt="BIFOLD Logo">
# Mobilenetv4_hybrid_medium pretrained on BigEarthNet v2.0 using Sentinel-1 & Sentinel-2 bands
<!-- Optional images -->
<!--
[Sentinel-1](https://sentinel.esa.int/web/sentinel/missions/sentinel-1) | [Sentinel-2](https://sentinel.esa.int/web/sentinel/missions/sentinel-2)
:---:|:---:
<a href="https://sentinel.esa.int/web/sentinel/missions/sentinel-1"><img src="https://raw.githubusercontent.com/wiki/lhackel-tub/ConfigILM/static/imgs/sentinel_2.jpg" style="font-size: 1rem; height: 10em; width: auto; margin-right: 1em" alt="Sentinel-2 Satellite"/> | <a href="https://sentinel.esa.int/web/sentinel/missions/sentinel-2"><img src="https://raw.githubusercontent.com/wiki/lhackel-tub/ConfigILM/static/imgs/sentinel_1.jpg" style="font-size: 1rem; height: 10em; width: auto; margin-right: 1em" alt="Sentinel-1 Satellite"/>
-->
This model was trained on the BigEarthNet v2.0 (also known as reBEN) dataset using the Sentinel-1 & Sentinel-2 bands.
It was trained using the following parameters:
- Number of epochs: up to 100 (with early stopping after 5 epochs of no improvement based on validation average
precision macro)
- Batch size: 512
- Learning rate: 0.001
- Dropout rate: 0.15
- Drop Path rate: 0.15
- Learning rate scheduler: LinearWarmupCosineAnnealing for 1000 warmup steps
- Optimizer: AdamW
- Seed: 42
The weights published in this model card were obtained after 35 training epochs.
For more information, please visit the [official BigEarthNet v2.0 (reBEN) repository](https://git.tu-berlin.de/rsim/reben-training-scripts), where you can find the training scripts.
The model was evaluated on the test set of the BigEarthNet v2.0 dataset with the following results:
| Metric | Macro | Micro |
|:------------------|------------------:|------------------:|
| Average Precision | 0.688707 | 0.855062 |
| F1 Score | 0.637359 | 0.758341 |
| Precision | 0.706724 | 0.795412 |
# Example
| A Sentinel-2 image (true color representation) |
|:---------------------------------------------------:|
|  |
| Class labels | Predicted scores |
|:--------------------------------------------------------------------------|--------------------------------------------------------------------------:|
| <p> Agro-forestry areas <br> Arable land <br> Beaches, dunes, sands <br> ... <br> Urban fabric </p> | <p> 0.000000 <br> 0.000000 <br> 0.000000 <br> ... <br> 1.000000 </p> |
To use the model, download the codes that define the model architecture from the
[official BigEarthNet v2.0 (reBEN) repository](https://git.tu-berlin.de/rsim/reben-training-scripts) and load the model using the
code below. Note that you have to install [`configilm`](https://pypi.org/project/configilm/) to use the provided code.
```python
from reben_publication.BigEarthNetv2_0_ImageClassifier import BigEarthNetv2_0_ImageClassifier
model = BigEarthNetv2_0_ImageClassifier.from_pretrained("path_to/huggingface_model_folder")
```
e.g.
```python
from reben_publication.BigEarthNetv2_0_ImageClassifier import BigEarthNetv2_0_ImageClassifier
model = BigEarthNetv2_0_ImageClassifier.from_pretrained(
"BIFOLD-BigEarthNetv2-0/mobilenetv4_hybrid_medium-all-v0.1.1")
```
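To turn the raw scores into class names, as in the example table above, a simple thresholding step is usually enough. The sketch below is illustrative: the label subset comes from this card's example and the 0.5 threshold is an assumption.

```python
import torch

# Illustrative subset of the BigEarthNet v2.0 label set shown in the example above.
labels = ["Agro-forestry areas", "Arable land", "Beaches, dunes, sands",
          "Broad-leaved forest", "Urban fabric"]

logits = torch.randn(1, len(labels))   # stand-in for the classifier output of one image
scores = torch.sigmoid(logits)[0]      # independent per-class probabilities
predicted = [name for name, p in zip(labels, scores.tolist()) if p > 0.5]
print(predicted)
```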
If you use this model in your research or the provided code, please cite the following papers:
```bibtex
@article{clasen2024refinedbigearthnet,
title={reBEN: Refined BigEarthNet Dataset for Remote Sensing Image Analysis},
author={Clasen, Kai Norman and Hackel, Leonard and Burgert, Tom and Sumbul, Gencer and Demir, Beg{\"u}m and Markl, Volker},
year={2024},
eprint={2407.03653},
archivePrefix={arXiv},
primaryClass={cs.CV},
url={https://arxiv.org/abs/2407.03653},
}
```
```bibtex
@article{hackel2024configilm,
title={ConfigILM: A general purpose configurable library for combining image and language models for visual question answering},
author={Hackel, Leonard and Clasen, Kai Norman and Demir, Beg{\"u}m},
journal={SoftwareX},
volume={26},
pages={101731},
year={2024},
publisher={Elsevier}
}
```
|
jpark677/qwen2-vl-7b-instruct-mixture-fft-unfreeze-mlp-ep-3-waa-f | jpark677 | 2025-04-25T02:54:28Z | 0 | 0 | null | [
"safetensors",
"qwen2_vl",
"region:us"
]
| null | 2025-04-25T02:50:52Z | # qwen2-vl-7b-instruct-mixture-fft-unfreeze-mlp-ep-3-waa-f
This repository contains the model checkpoint from original training iteration 4029, saved as epoch 3. |
mlfoundations-dev/b2_science_fasttext_neg_all_0.3k | mlfoundations-dev | 2025-04-25T02:54:01Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"llama-factory",
"full",
"generated_from_trainer",
"conversational",
"base_model:Qwen/Qwen2.5-7B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-7B-Instruct",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-04-25T02:15:24Z | ---
library_name: transformers
license: apache-2.0
base_model: Qwen/Qwen2.5-7B-Instruct
tags:
- llama-factory
- full
- generated_from_trainer
model-index:
- name: b2_science_fasttext_neg_all_0.3k
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# b2_science_fasttext_neg_all_0.3k
This model is a fine-tuned version of [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct) on the mlfoundations-dev/b2_science_fasttext_neg_all_0.3k dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- total_eval_batch_size: 32
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 13.0
### Training results
### Framework versions
- Transformers 4.46.1
- Pytorch 2.6.0+cu124
- Datasets 3.1.0
- Tokenizers 0.20.3
|
jacobmorrier/political-answer-quality-opposition | jacobmorrier | 2025-04-25T02:52:20Z | 0 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"mpnet",
"sentence-similarity",
"feature-extraction",
"en",
"base_model:sentence-transformers/multi-qa-mpnet-base-cos-v1",
"base_model:finetune:sentence-transformers/multi-qa-mpnet-base-cos-v1",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| sentence-similarity | 2025-04-23T03:16:40Z | ---
base_model: sentence-transformers/multi-qa-mpnet-base-cos-v1
library_name: sentence-transformers
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
license: mit
language:
- en
---
# Model Details
This Sentence-BERT model maps sentences and paragraphs to a 768-dimensional dense vector space. It was fine-tuned for semantic search using the `multi-qa-mpnet-base-cos-v1` model as a base on 2,917 question-answer pairs observed during the Question Period in the Canadian House of Commons from the 39<sup>th</sup> to the 43<sup>rd</sup> legislatures. *Exchanges prompted by questions from government backbenchers were not included in the training data.* The model can be used to evaluate the quality of responses in political Q&A sessions, including parliamentary questions.
- Developed by: [R. Michael Alvarez](https://www.rmichaelalvarez.com) and [Jacob Morrier](https://www.jacobmorrier.com)
- Model Type: Sentence-BERT
- Language: English
- License: MIT
- Fine-tuned from: [`multi-qa-mpnet-base-cos-v1`](https://huggingface.co/sentence-transformers/multi-qa-mpnet-base-cos-v1)
# Uses
The model identifies the most relevant answer to a question and evaluates the quality of responses in political Q&A sessions.
# Bias, Risks, and Limitations
Our article discusses the model’s biases, risks, and limitations, along with its application in evaluating the quality of responses in political Q&A settings. In particular, we emphasize the need for caution when applying the model outside the original context of the Question Period, due to potential domain drift.
# How to Get Started with the Model
Inference with this model is straightforward using the `sentence-transformers` library. You can use the following code to compute the cosine similarity between questions and answers:
```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('jacobmorrier/political-answer-quality-opposition')

# `questions` and `answers` are lists of strings (one entry per question and per candidate answer).
questions_emb = model.encode(questions)
answers_emb = model.encode(answers)

# Pairwise cosine similarities: cos_sim[i, j] scores answer j against question i.
cos_sim = util.cos_sim(questions_emb, answers_emb).cpu()
```
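Given this similarity matrix, the most relevant answer for each question can be picked with a row-wise argmax; a minimal continuation of the snippet above (variable names follow the snippet and are otherwise assumptions):

```python
best_idx = cos_sim.argmax(dim=1)  # index of the highest-scoring answer for each question
for question, idx in zip(questions, best_idx.tolist()):
    print(question, "->", answers[idx])
```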
# Training Details
## Training Data
The training data consists of 2,917 question-answer pairs from the Question Period in the Canadian House of Commons collected between the 39<sup>th</sup> and 43<sup>rd</sup> legislatures, spanning fifteen years from the January 23, 2006, election to the September 20, 2021, election. *Exchanges prompted by questions from government backbenchers were not included in the training data.*
## Training Hyperparameters
| **Parameter** | **Value** |
|----------------------------|-----------------------------------------------------|
| **Loss Function** | Multiple Negatives Ranking Loss (with questions as anchors) |
| **Epochs** | 10 |
| **Batch Size** | 8 |
| **Optimizer** | AdamW |
| **Learning Rate** | 2e-5 |
| **Learning Rate Scheduler**| Warm-up Linear |
| **Warm-up Steps** | 10,000 |
| **Weight Decay** | 0.01 |
| **Maximum Gradient Norm** | 1 |
# Citation
Alvarez, R. Michael and Jacob Morrier (2025). *Measuring the Quality of Answers in Political Q&As with Large Language Models*. [https://doi.org/10.48550/arXiv.2404.08816](https://doi.org/10.48550/arXiv.2404.08816) |
jacobmorrier/political-answer-quality-reverse | jacobmorrier | 2025-04-25T02:51:58Z | 0 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"mpnet",
"sentence-similarity",
"feature-extraction",
"en",
"base_model:sentence-transformers/multi-qa-mpnet-base-cos-v1",
"base_model:finetune:sentence-transformers/multi-qa-mpnet-base-cos-v1",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| sentence-similarity | 2025-04-23T03:05:14Z | ---
base_model: sentence-transformers/multi-qa-mpnet-base-cos-v1
library_name: sentence-transformers
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
license: mit
language:
- en
---
# Model Details
This Sentence-BERT model maps sentences and paragraphs to a 768-dimensional dense vector space. It was fine-tuned for semantic search using the `multi-qa-mpnet-base-cos-v1` model as a base on 2,917 question-answer pairs observed during the Question Period in the Canadian House of Commons from the 39<sup>th</sup> to the 43<sup>rd</sup> legislatures. The model can be used to evaluate the quality of responses in political Q&A sessions, including parliamentary questions.
- Developed by: [R. Michael Alvarez](https://www.rmichaelalvarez.com) and [Jacob Morrier](https://www.jacobmorrier.com)
- Model Type: Sentence-BERT
- Language: English
- License: MIT
- Fine-tuned from: [`multi-qa-mpnet-base-cos-v1`](https://huggingface.co/sentence-transformers/multi-qa-mpnet-base-cos-v1)
# Uses
The model identifies the most relevant answer to a question and evaluates the quality of responses in political Q&A sessions.
# Bias, Risks, and Limitations
Our article discusses the model’s biases, risks, and limitations, along with its application in evaluating the quality of responses in political Q&A settings. In particular, we emphasize the need for caution when applying the model outside the original context of the Question Period, due to potential domain drift.
# How to Get Started with the Model
Inference with this model is straightforward using the `sentence-transformers` library. You can use the following code to compute the cosine similarity between questions and answers:
```
from sentence_transformers import SentenceTransformer, util
model = SentenceTransformer('jacobmorrier/political-answer-quality-reverse')
questions_emb = model.encode(questions)
answers_emb = model.encode(answers)
cos_sim = util.cos_sim(questions_emb, answers_emb).cpu()
```
# Training Details
## Training Data
The training data consists of 2,917 question-answer pairs from the Question Period in the Canadian House of Commons collected between the 39<sup>th</sup> and 43<sup>rd</sup> legislatures, spanning fifteen years from the January 23, 2006, election to the September 20, 2021, election.
## Training Hyperparameters
| **Parameter** | **Value** |
|----------------------------|-----------------------------------------------------|
| **Loss Function** | Multiple Negatives Ranking Loss *(with answers as anchors)* |
| **Epochs** | 10 |
| **Batch Size** | 8 |
| **Optimizer** | AdamW |
| **Learning Rate** | 2e-5 |
| **Learning Rate Scheduler**| Warm-up Linear |
| **Warm-up Steps** | 10,000 |
| **Weight Decay** | 0.01 |
| **Maximum Gradient Norm** | 1 |
# Citation
Alvarez, R. Michael and Jacob Morrier (2025). *Measuring the Quality of Answers in Political Q&As with Large Language Models*. [https://doi.org/10.48550/arXiv.2404.08816](https://doi.org/10.48550/arXiv.2404.08816) |
wassermanrjoshua/totalrecon-flan-t5 | wassermanrjoshua | 2025-04-25T02:48:17Z | 0 | 0 | null | [
"safetensors",
"t5",
"recon",
"summarization",
"cybersecurity",
"flan-t5",
"text2text-generation",
"en",
"dataset:custom",
"license:mit",
"region:us"
]
| text2text-generation | 2025-04-25T01:49:27Z | ---
language: en
license: mit
tags:
- recon
- summarization
- cybersecurity
- flan-t5
datasets:
- custom
pipeline_tag: text2text-generation
---
# Recon AI Summarizer - `totalrecon`
This model is a fine-tuned version of `google/flan-t5-small` on a **synthetic reconnaissance dataset** created to simulate real-world passive recon scenarios.
It is designed to summarize recon output such as:
- Subdomains
- S3 buckets
- Email leaks
- Internal infrastructure mentions
It powers the [`totalrecon`](https://github.com/josh1643/totalrecon) Python library used in red teaming, CTFs, and passive intelligence gathering.
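As a hedged usage sketch (the pipeline task and model id are taken from this card's metadata; the actual output will vary), a single recon line can be summarized with the `transformers` text2text pipeline:

```python
from transformers import pipeline

summarizer = pipeline("text2text-generation", model="wassermanrjoshua/totalrecon-flan-t5")
line = "Found S3 bucket at s3://confidential-files.internal.recon"
print(summarizer(line, max_new_tokens=40)[0]["generated_text"])
```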
---
## Intended Use
**Input**: One line of recon output (e.g., from PDFs, text scans, etc.)
**Output**: Concise, human-readable summary of what was found
Example:
```text
Input: Found S3 bucket at s3://confidential-files.internal.recon
Output: S3 bucket contains sensitive internal recon files.
```
|
jpark677/qwen2-vl-7b-instruct-mixture-fft-unfreeze-mlp-ep-1-waa-f | jpark677 | 2025-04-25T02:47:17Z | 0 | 0 | null | [
"safetensors",
"qwen2_vl",
"region:us"
]
| null | 2025-04-25T02:43:39Z | # qwen2-vl-7b-instruct-mixture-fft-unfreeze-mlp-ep-1-waa-f
This repository contains the model checkpoint from original training iteration 1343, saved as epoch 1. |
dkhanh/SmolVLM-500M-Instruct-earth | dkhanh | 2025-04-25T02:45:32Z | 0 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"generated_from_trainer",
"base_model:HuggingFaceTB/SmolVLM-500M-Instruct",
"base_model:adapter:HuggingFaceTB/SmolVLM-500M-Instruct",
"license:apache-2.0",
"region:us"
]
| null | 2025-04-25T02:45:25Z | ---
library_name: peft
license: apache-2.0
base_model: HuggingFaceTB/SmolVLM-500M-Instruct
tags:
- generated_from_trainer
model-index:
- name: SmolVLM-500M-Instruct-earth
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# SmolVLM-500M-Instruct-earth
This model is a fine-tuned version of [HuggingFaceTB/SmolVLM-500M-Instruct](https://huggingface.co/HuggingFaceTB/SmolVLM-500M-Instruct) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use paged_adamw_8bit with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
### Framework versions
- PEFT 0.15.1
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 3.5.0
- Tokenizers 0.21.1 |
fairfaxapril/fairfaxapril | fairfaxapril | 2025-04-25T02:44:14Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
]
| null | 2025-04-25T02:44:14Z | ---
license: creativeml-openrail-m
---
|
mlfoundations-dev/b2_science_fasttext_neg_wikipedia_0.3k | mlfoundations-dev | 2025-04-25T02:43:52Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"llama-factory",
"full",
"generated_from_trainer",
"conversational",
"base_model:Qwen/Qwen2.5-7B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-7B-Instruct",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-04-25T02:06:24Z | ---
library_name: transformers
license: apache-2.0
base_model: Qwen/Qwen2.5-7B-Instruct
tags:
- llama-factory
- full
- generated_from_trainer
model-index:
- name: b2_science_fasttext_neg_wikipedia_0.3k
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# b2_science_fasttext_neg_wikipedia_0.3k
This model is a fine-tuned version of [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct) on the mlfoundations-dev/b2_science_fasttext_neg_wikipedia_0.3k dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- total_eval_batch_size: 32
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 13.0
### Training results
### Framework versions
- Transformers 4.46.1
- Pytorch 2.6.0+cu124
- Datasets 3.1.0
- Tokenizers 0.20.3
|
hubertau/rl_course_vizdoom_health_gathering_supreme | hubertau | 2025-04-25T02:40:25Z | 0 | 0 | sample-factory | [
"sample-factory",
"tensorboard",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2025-04-25T02:40:14Z | ---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: doom_health_gathering_supreme
type: doom_health_gathering_supreme
metrics:
- type: mean_reward
value: 9.44 +/- 2.72
name: mean_reward
verified: false
---
A(n) **APPO** model trained on the **doom_health_gathering_supreme** environment.
This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory.
Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/
## Downloading the model
After installing Sample-Factory, download the model with:
```
python -m sample_factory.huggingface.load_from_hub -r hubertau/rl_course_vizdoom_health_gathering_supreme
```
## Using the model
To run the model after download, use the `enjoy` script corresponding to this environment:
```
python -m sf_examples.vizdoom.enjoy_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme
```
You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag.
See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details
## Training with this model
To continue training with this model, use the `train` script corresponding to this environment:
```
python -m sf_examples.vizdoom.train_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme --restart_behavior=resume --train_for_env_steps=10000000000
```
Note, you may have to adjust `--train_for_env_steps` to a suitably high number as the experiment will resume at the number of steps it concluded at.
|
Neooooo/StructAlign | Neooooo | 2025-04-25T02:39:01Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"DPO",
"sft",
"trl",
"dpo",
"conversational",
"arxiv:2305.18290",
"base_model:Neooooo/SemAFacet-SFT-Merged-10k",
"base_model:finetune:Neooooo/SemAFacet-SFT-Merged-10k",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-04-25T01:06:27Z | ---
base_model: Neooooo/SemAFacet-SFT-Merged-10k
library_name: transformers
model_name: StructAlign
tags:
- generated_from_trainer
- DPO
- sft
- trl
- dpo
licence: license
---
# Model Card for StructAlign
This model is a fine-tuned version of [Neooooo/SemAFacet-SFT-Merged-10k](https://huggingface.co/Neooooo/SemAFacet-SFT-Merged-10k).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="Neooooo/StructAlign", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/xyh9896-georgia-institute-of-technology/huggingface/runs/3olfay3q)
This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290).
### Framework versions
- TRL: 0.16.1
- Transformers: 4.51.3
- Pytorch: 2.6.0
- Datasets: 3.5.0
- Tokenizers: 0.21.1
## Citations
Cite DPO as:
```bibtex
@inproceedings{rafailov2023direct,
title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}},
author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn},
year = 2023,
booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023},
url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html},
editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
annemiekebickleyoy/37d56afa-2c74-48da-bb08-c150ae938216 | annemiekebickleyoy | 2025-04-25T02:38:39Z | 0 | 0 | peft | [
"peft",
"generated_from_trainer",
"base_model:Qwen/Qwen2.5-7B-Instruct",
"base_model:adapter:Qwen/Qwen2.5-7B-Instruct",
"region:us"
]
| null | 2025-04-25T02:38:16Z | ---
library_name: peft
tags:
- generated_from_trainer
base_model: Qwen/Qwen2.5-7B-Instruct
model-index:
- name: annemiekebickleyoy/37d56afa-2c74-48da-bb08-c150ae938216
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# annemiekebickleyoy/37d56afa-2c74-48da-bb08-c150ae938216
This model was trained from scratch on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0442
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.3
- Pytorch 2.5.1+cu124
- Datasets 3.1.0
- Tokenizers 0.20.3 |
Skywork/SkyReels-V2-DF-14B-720P | Skywork | 2025-04-25T02:35:40Z | 0 | 3 | diffusers | [
"diffusers",
"safetensors",
"t2v",
"text-to-video",
"arxiv:2504.13074",
"arxiv:2407.01392",
"license:other",
"region:us"
]
| text-to-video | 2025-04-18T13:34:52Z | ---
license: other
license_name: skywork-license
license_link: LICENSE
pipeline_tag: text-to-video
---
<p align="center">
<img src="assets/logo2.png" alt="SkyReels Logo" width="50%">
</p>
<h1 align="center">SkyReels V2: Infinite-Length Film Generative Model</h1>
<p align="center">
📑 <a href="https://arxiv.org/pdf/2504.13074">Technical Report</a> · 👋 <a href="https://www.skyreels.ai/home?utm_campaign=huggingface_skyreels_v2" target="_blank">Playground</a> · 💬 <a href="https://discord.gg/PwM6NYtccQ" target="_blank">Discord</a> · 🤗 <a href="https://huggingface.co/collections/Skywork/skyreels-v2-6801b1b93df627d441d0d0d9" target="_blank">Hugging Face</a> · 🤖 <a href="https://www.modelscope.cn/collections/SkyReels-V2-f665650130b144" target="_blank">ModelScope</a> · 🌐 <a href="https://github.com/SkyworkAI/SkyReels-V2" target="_blank">GitHub</a>
</p>
---
Welcome to the **SkyReels V2** repository! Here, you'll find the model weights for our infinite-length film generative models. To the best of our knowledge, it represents the first open-source video generative model employing an **AutoRegressive Diffusion-Forcing architecture** that achieves **SOTA performance** among publicly available models.
## 🔥🔥🔥 News!!
* Apr 24, 2025: 🔥 We release the 720P models, [SkyReels-V2-DF-14B-720P](https://huggingface.co/Skywork/SkyReels-V2-DF-14B-720P) and [SkyReels-V2-I2V-14B-720P](https://huggingface.co/Skywork/SkyReels-V2-I2V-14B-720P). The former facilitates infinite-length autoregressive video generation, and the latter focuses on Image2Video synthesis.
* Apr 21, 2025: 👋 We release the inference code and model weights of [SkyReels-V2](https://huggingface.co/collections/Skywork/skyreels-v2-6801b1b93df627d441d0d0d9) Series Models and the video captioning model [SkyCaptioner-V1](https://huggingface.co/Skywork/SkyCaptioner-V1) .
* Apr 3, 2025: 🔥 We also release [SkyReels-A2](https://github.com/SkyworkAI/SkyReels-A2). This is an open-sourced controllable video generation framework capable of assembling arbitrary visual elements.
* Feb 18, 2025: 🔥 We released [SkyReels-A1](https://github.com/SkyworkAI/SkyReels-A1). This is an open-sourced and effective framework for portrait image animation.
* Feb 18, 2025: 🔥 We released [SkyReels-V1](https://github.com/SkyworkAI/SkyReels-V1). This is the first and most advanced open-source human-centric video foundation model.
## 🎥 Demos
<table>
<tr>
<td align="center">
<video src="https://github.com/user-attachments/assets/f6f9f9a7-5d5f-433c-9d73-d8d593b7ad25" width="100%"></video>
</td>
<td align="center">
<video src="https://github.com/user-attachments/assets/0eb13415-f4d9-4aaf-bcd3-3031851109b9" width="100%"></video>
</td>
<td align="center">
<video src="https://github.com/user-attachments/assets/dcd16603-5bf4-4786-8e4d-1ed23889d07a" width="100%"></video>
</td>
</tr>
</table>
The demos above showcase 30-second videos generated using our SkyReels-V2 Diffusion Forcing model.
## 📑 TODO List
- [x] <a href="https://arxiv.org/pdf/2504.13074">Technical Report</a>
- [x] Checkpoints of the 14B and 1.3B Models Series
- [x] Single-GPU & Multi-GPU Inference Code
- [x] <a href="https://huggingface.co/Skywork/SkyCaptioner-V1">SkyCaptioner-V1</a>: A Video Captioning Model
- [x] Prompt Enhancer
- [ ] Diffusers integration
- [ ] Checkpoints of the 5B Models Series
- [ ] Checkpoints of the Camera Director Models
- [ ] Checkpoints of the Step & Guidance Distill Model
## 🚀 Quickstart
#### Installation
```shell
# clone the repository.
git clone https://github.com/SkyworkAI/SkyReels-V2
cd SkyReels-V2
# Install dependencies. Test environment uses Python 3.10.12.
pip install -r requirements.txt
```
#### Model Download
You can download our models from Hugging Face:
<table>
<thead>
<tr>
<th>Type</th>
<th>Model Variant</th>
<th>Recommended Height/Width/Frame</th>
<th>Link</th>
</tr>
</thead>
<tbody>
<tr>
<td rowspan="5">Diffusion Forcing</td>
<td>1.3B-540P</td>
<td>544 * 960 * 97f</td>
<td>🤗 <a href="https://huggingface.co/Skywork/SkyReels-V2-DF-1.3B-540P">Huggingface</a> 🤖 <a href="https://www.modelscope.cn/models/Skywork/SkyReels-V2-DF-1.3B-540P">ModelScope</a></td>
</tr>
<tr>
<td>5B-540P</td>
<td>544 * 960 * 97f</td>
<td>Coming Soon</td>
</tr>
<tr>
<td>5B-720P</td>
<td>720 * 1280 * 121f</td>
<td>Coming Soon</td>
</tr>
<tr>
<td>14B-540P</td>
<td>544 * 960 * 97f</td>
<td>🤗 <a href="https://huggingface.co/Skywork/SkyReels-V2-DF-14B-540P">Huggingface</a> 🤖 <a href="https://www.modelscope.cn/models/Skywork/SkyReels-V2-DF-14B-540P">ModelScope</a></td>
</tr>
<tr>
<td>14B-720P</td>
<td>720 * 1280 * 121f</td>
<td>🤗 <a href="https://huggingface.co/Skywork/SkyReels-V2-DF-14B-720P">Huggingface</a> 🤖 <a href="https://www.modelscope.cn/models/Skywork/SkyReels-V2-DF-14B-720P">ModelScope</a></td>
</tr>
<tr>
<td rowspan="5">Text-to-Video</td>
<td>1.3B-540P</td>
<td>544 * 960 * 97f</td>
<td>Coming Soon</td>
</tr>
<tr>
<td>5B-540P</td>
<td>544 * 960 * 97f</td>
<td>Coming Soon</td>
</tr>
<tr>
<td>5B-720P</td>
<td>720 * 1280 * 121f</td>
<td>Coming Soon</td>
</tr>
<tr>
<td>14B-540P</td>
<td>544 * 960 * 97f</td>
<td>🤗 <a href="https://huggingface.co/Skywork/SkyReels-V2-T2V-14B-540P">Huggingface</a> 🤖 <a href="https://www.modelscope.cn/models/Skywork/SkyReels-V2-T2V-14B-540P">ModelScope</a></td>
</tr>
<tr>
<td>14B-720P</td>
<td>720 * 1280 * 121f</td>
<td>🤗 <a href="https://huggingface.co/Skywork/SkyReels-V2-T2V-14B-720P">Huggingface</a> 🤖 <a href="https://www.modelscope.cn/models/Skywork/SkyReels-V2-T2V-14B-720P">ModelScope</a></td>
</tr>
<tr>
<td rowspan="5">Image-to-Video</td>
<td>1.3B-540P</td>
<td>544 * 960 * 97f</td>
<td>🤗 <a href="https://huggingface.co/Skywork/SkyReels-V2-I2V-1.3B-540P">Huggingface</a> 🤖 <a href="https://www.modelscope.cn/models/Skywork/SkyReels-V2-I2V-1.3B-540P">ModelScope</a></td>
</tr>
<tr>
<td>5B-540P</td>
<td>544 * 960 * 97f</td>
<td>Coming Soon</td>
</tr>
<tr>
<td>5B-720P</td>
<td>720 * 1280 * 121f</td>
<td>Coming Soon</td>
</tr>
<tr>
<td>14B-540P</td>
<td>544 * 960 * 97f</td>
<td>🤗 <a href="https://huggingface.co/Skywork/SkyReels-V2-I2V-14B-540P">Huggingface</a> 🤖 <a href="https://www.modelscope.cn/models/Skywork/SkyReels-V2-I2V-14B-540P">ModelScope</a></td>
</tr>
<tr>
<td>14B-720P</td>
<td>720 * 1280 * 121f</td>
<td>🤗 <a href="https://huggingface.co/Skywork/SkyReels-V2-I2V-14B-720P">Huggingface</a> 🤖 <a href="https://www.modelscope.cn/models/Skywork/SkyReels-V2-I2V-14B-720P">ModelScope</a></td>
</tr>
<tr>
<td rowspan="3">Camera Director</td>
<td>5B-540P</td>
<td>544 * 960 * 97f</td>
<td>Coming Soon</td>
</tr>
<tr>
<td>5B-720P</td>
<td>720 * 1280 * 121f</td>
<td>Coming Soon</td>
</tr>
<tr>
<td>14B-720P</td>
<td>720 * 1280 * 121f</td>
<td>Coming Soon</td>
</tr>
</tbody>
</table>
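The checkpoints can also be fetched programmatically; a minimal sketch using the `huggingface_hub` Python API (choose the repo id of the variant you need and adjust the local directory; treat this as a convenience sketch rather than part of the official scripts):

```python
from huggingface_hub import snapshot_download

# Download one variant locally; the resulting folder can then be used as the model path
# in the generation commands below.
local_dir = snapshot_download("Skywork/SkyReels-V2-DF-14B-720P",
                              local_dir="./SkyReels-V2-DF-14B-720P")
print(local_dir)
```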
After downloading, set the model path in your generation commands:
#### Single GPU Inference
- **Diffusion Forcing for Long Video Generation**
The <a href="https://arxiv.org/abs/2407.01392">**Diffusion Forcing**</a> version model allows us to generate Infinite-Length videos. This model supports both **text-to-video (T2V)** and **image-to-video (I2V)** tasks, and it can perform inference in both synchronous and asynchronous modes. Here we demonstrate 2 running scripts as examples for long video generation. If you want to adjust the inference parameters, e.g., the duration of video, inference mode, read the Note below first.
synchronous generation for 10s video
```shell
model_id=Skywork/SkyReels-V2-DF-14B-540P
# synchronous inference
python3 generate_video_df.py \
--model_id ${model_id} \
--resolution 540P \
--ar_step 0 \
--base_num_frames 97 \
--num_frames 257 \
--overlap_history 17 \
--prompt "A graceful white swan with a curved neck and delicate feathers swimming in a serene lake at dawn, its reflection perfectly mirrored in the still water as mist rises from the surface, with the swan occasionally dipping its head into the water to feed." \
--addnoise_condition 20 \
--offload \
--teacache \
--use_ret_steps \
--teacache_thresh 0.3
```
asynchronous generation for 30s video
```shell
model_id=Skywork/SkyReels-V2-DF-14B-540P
# asynchronous inference
python3 generate_video_df.py \
--model_id ${model_id} \
--resolution 540P \
--ar_step 5 \
--causal_block_size 5 \
--base_num_frames 97 \
--num_frames 737 \
--overlap_history 17 \
--prompt "A graceful white swan with a curved neck and delicate feathers swimming in a serene lake at dawn, its reflection perfectly mirrored in the still water as mist rises from the surface, with the swan occasionally dipping its head into the water to feed." \
--addnoise_condition 20 \
--offload
```
> **Note**:
> - If you want to run the **image-to-video (I2V)** task, add `--image ${image_path}` to your command; it is also better to use a **text-to-video (T2V)**-style prompt that includes a description of the first-frame image.
> - For long video generation, you can simply change `--num_frames`, e.g., `--num_frames 257` for a 10s video, `--num_frames 377` for 15s, `--num_frames 737` for 30s, `--num_frames 1457` for 60s. These values are not strictly aligned with the logical frame count for the specified duration, but they are aligned with the training parameters, which tends to give better results. When you use asynchronous inference with causal_block_size > 1, `--num_frames` should be chosen carefully.
> - You can use `--ar_step 5` to enable asynchronous inference. For asynchronous inference, `--causal_block_size 5` is recommended, while it should not be set for synchronous generation. REMEMBER that the number of frame latents fed into the model in every iteration, i.e., the base frame latent count ((97-1)//4+1=25 for base_num_frames=97) and the last-iteration count ((237-97-(97-17)x1+17-1)//4+1=20 for base_num_frames=97, num_frames=237, overlap_history=17), MUST be divisible by causal_block_size. If you find it too hard to calculate and set proper values, just use our recommended settings above :) (a small sanity-check sketch follows these notes). Asynchronous inference takes more steps to diffuse the whole sequence, which means it is SLOWER than synchronous mode. In our experiments, asynchronous inference may improve instruction following and visual consistency.
> - To reduce peak VRAM, lower `--base_num_frames`, e.g., to 77 or 57, while keeping the target length `--num_frames` unchanged. This may slightly reduce video quality, and the value should not be set too small.
> - `--addnoise_condition` is used to help smooth long video generation by adding some noise to the clean condition. Too much noise can cause inconsistency as well. 20 is a recommended value; you may try larger values, but it is best not to exceed 50.
> - Generating a 540P video using the 1.3B model requires approximately 14.7GB peak VRAM, while the same resolution video using the 14B model demands around 51.2GB peak VRAM.
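To make the divisibility rule above concrete, here is a small sanity-check sketch that simply mirrors the latent-count formula quoted in the notes (the values and the single intermediate iteration are taken from the example; it is not part of the official scripts):

```python
def latent_frames(frames: int) -> int:
    # 4x temporal compression: number of frame latents for a given frame count.
    return (frames - 1) // 4 + 1

base_num_frames, num_frames, overlap_history, causal_block_size = 97, 237, 17, 5

base_latent = latent_frames(base_num_frames)  # (97-1)//4+1 = 25
last_iter_frames = num_frames - base_num_frames - (base_num_frames - overlap_history) * 1 + overlap_history
last_latent = latent_frames(last_iter_frames)  # (237-97-80+17-1)//4+1 = 20

for n in (base_latent, last_latent):
    assert n % causal_block_size == 0, f"{n} latents not divisible by causal_block_size={causal_block_size}"
```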
- **Text To Video & Image To Video**
```shell
# run Text-to-Video Generation
model_id=Skywork/SkyReels-V2-T2V-14B-540P
python3 generate_video.py \
--model_id ${model_id} \
--resolution 540P \
--num_frames 97 \
--guidance_scale 6.0 \
--shift 8.0 \
--fps 24 \
--prompt "A serene lake surrounded by towering mountains, with a few swans gracefully gliding across the water and sunlight dancing on the surface." \
--offload \
--teacache \
--use_ret_steps \
--teacache_thresh 0.3
```
> **Note**:
> - When using an **image-to-video (I2V)** model, you must provide an input image using the `--image ${image_path}` parameter. `--guidance_scale 5.0` and `--shift 3.0` are recommended for the I2V model.
> - Generating a 540P video using the 1.3B model requires approximately 14.7GB peak VRAM, while the same resolution video using the 14B model demands around 43.4GB peak VRAM.
- **Prompt Enhancer**
The prompt enhancer is implemented on top of <a href="https://huggingface.co/Qwen/Qwen2.5-32B-Instruct">Qwen2.5-32B-Instruct</a> and is enabled via the `--prompt_enhancer` parameter. It works best for short prompts; for long prompts it might generate an excessively lengthy prompt that leads to over-saturation in the generated video. Note that peak GPU memory is 64 GB+ when `--prompt_enhancer` is used. If you want to obtain the enhanced prompt separately, you can also run the prompt_enhancer script on its own for testing. The steps are as follows:
```shell
cd skyreels_v2_infer/pipelines
python3 prompt_enhancer.py --prompt "A serene lake surrounded by towering mountains, with a few swans gracefully gliding across the water and sunlight dancing on the surface."
```
> **Note**:
> - `--prompt_enhancer` cannot be used together with `--use_usp`. We recommend running the skyreels_v2_infer/pipelines/prompt_enhancer.py script first to generate the enhanced prompt before enabling the `--use_usp` parameter.
**Advanced Configuration Options**
Below are the key parameters you can customize for video generation:
| Parameter | Recommended Value | Description |
|:----------------------:|:---------:|:-----------------------------------------:|
| --prompt | | Text description for generating your video |
| --image | | Path to input image for image-to-video generation |
| --resolution | 540P or 720P | Output video resolution (select based on model type) |
| --num_frames | 97 or 121 | Total frames to generate (**97 for 540P models**, **121 for 720P models**) |
| --inference_steps | 50 | Number of denoising steps |
| --fps | 24 | Frames per second in the output video |
| --shift | 8.0 or 5.0 | Flow matching scheduler parameter (**8.0 for T2V**, **5.0 for I2V**) |
| --guidance_scale | 6.0 or 5.0 | Controls text adherence strength (**6.0 for T2V**, **5.0 for I2V**) |
| --seed | | Fixed seed for reproducible results (omit for random generation) |
| --offload | True | Offloads model components to CPU to reduce VRAM usage (recommended) |
| --use_usp | True | Enables multi-GPU acceleration with xDiT USP |
| --outdir | ./video_out | Directory where generated videos will be saved |
| --prompt_enhancer | True | Expand the prompt into a more detailed description |
| --teacache | False | Enables teacache for faster inference |
| --teacache_thresh | 0.2 | Higher values give more speedup at the cost of quality |
| --use_ret_steps | False | Retention Steps for teacache |
**Diffusion Forcing Additional Parameters**
| Parameter | Recommended Value | Description |
|:----------------------:|:---------:|:-----------------------------------------:|
| --ar_step | 0 | Controls asynchronous inference (0 for synchronous mode) |
| --base_num_frames | 97 or 121 | Base frame count (**97 for 540P**, **121 for 720P**) |
| --overlap_history | 17 | Number of frames to overlap for smooth transitions in long videos |
| --addnoise_condition | 20 | Improves consistency in long video generation |
| --causal_block_size | 5 | Recommended when using asynchronous inference (--ar_step > 0) |
#### Multi-GPU inference using xDiT USP
We use [xDiT](https://github.com/xdit-project/xDiT) USP to accelerate inference. For example, to generate a video with 2 GPUs, you can use the following command:
- **Diffusion Forcing**
```shell
model_id=Skywork/SkyReels-V2-DF-14B-540P
# diffusion forcing synchronous inference
torchrun --nproc_per_node=2 generate_video_df.py \
--model_id ${model_id} \
--resolution 540P \
--ar_step 0 \
--base_num_frames 97 \
--num_frames 257 \
--overlap_history 17 \
--prompt "A graceful white swan with a curved neck and delicate feathers swimming in a serene lake at dawn, its reflection perfectly mirrored in the still water as mist rises from the surface, with the swan occasionally dipping its head into the water to feed." \
--addnoise_condition 20 \
--use_usp \
--offload \
--seed 42
```
- **Text To Video & Image To Video**
```shell
# run Text-to-Video Generation
model_id=Skywork/SkyReels-V2-T2V-14B-540P
torchrun --nproc_per_node=2 generate_video.py \
--model_id ${model_id} \
--resolution 540P \
--num_frames 97 \
--guidance_scale 6.0 \
--shift 8.0 \
--fps 24 \
--offload \
--prompt "A serene lake surrounded by towering mountains, with a few swans gracefully gliding across the water and sunlight dancing on the surface." \
--use_usp \
--seed 42
```
> **Note**:
> - When using an **image-to-video (I2V)** model, you must provide an input image using the `--image ${image_path}` parameter. `--guidance_scale 5.0` and `--shift 3.0` are recommended for the I2V model.
## Contents
- [Abstract](#abstract)
- [Methodology of SkyReels-V2](#methodology-of-skyreels-v2)
- [Key Contributions of SkyReels-V2](#key-contributions-of-skyreels-v2)
- [Video Captioner](#video-captioner)
- [Reinforcement Learning](#reinforcement-learning)
- [Diffusion Forcing](#diffusion-forcing)
- [High-Quality Supervised Fine-Tuning(SFT)](#high-quality-supervised-fine-tuning-sft)
- [Performance](#performance)
- [Acknowledgements](#acknowledgements)
- [Citation](#citation)
---
## Abstract
Recent advances in video generation have been driven by diffusion models and autoregressive frameworks, yet critical challenges persist in harmonizing prompt adherence, visual quality, motion dynamics, and duration: compromises in motion dynamics to enhance temporal visual quality, constrained video duration (5-10 seconds) to prioritize resolution, and inadequate shot-aware generation stemming from general-purpose MLLMs' inability to interpret cinematic grammar, such as shot composition, actor expressions, and camera motions. These intertwined limitations hinder realistic long-form synthesis and professional film-style generation.
To address these limitations, we introduce SkyReels-V2, the world's first infinite-length film generative model using a Diffusion Forcing framework. Our approach synergizes Multi-modal Large Language Models (MLLM), Multi-stage Pretraining, Reinforcement Learning, and Diffusion Forcing techniques to achieve comprehensive optimization. Beyond its technical innovations, SkyReels-V2 enables multiple practical applications, including Story Generation, Image-to-Video Synthesis, Camera Director functionality, and multi-subject consistent video generation through our <a href="https://github.com/SkyworkAI/SkyReels-A2">Skyreels-A2</a> system.
## Methodology of SkyReels-V2
The SkyReels-V2 methodology consists of several interconnected components. It starts with a comprehensive data processing pipeline that prepares various quality training data. At its core is the Video Captioner architecture, which provides detailed annotations for video content. The system employs a multi-task pretraining strategy to build fundamental video generation capabilities. Post-training optimization includes Reinforcement Learning to enhance motion quality, Diffusion Forcing Training for generating extended videos, and High-quality Supervised Fine-Tuning (SFT) stages for visual refinement. The model runs on optimized computational infrastructure for efficient training and inference. SkyReels-V2 supports multiple applications, including Story Generation, Image-to-Video Synthesis, Camera Director functionality, and Elements-to-Video Generation.
<p align="center">
<img src="assets/main_pipeline.jpg" alt="mainpipeline" width="100%">
</p>
## Key Contributions of SkyReels-V2
#### Video Captioner
<a href="https://huggingface.co/Skywork/SkyCaptioner-V1">SkyCaptioner-V1</a> serves as our video captioning model for data annotation. This model is trained on the captioning result from the base model <a href="https://huggingface.co/Qwen/Qwen2.5-VL-72B-Instruct">Qwen2.5-VL-72B-Instruct</a> and the sub-expert captioners on a balanced video data. The balanced video data is a carefully curated dataset of approximately 2 million videos to ensure conceptual balance and annotation quality. Built upon the <a href="https://huggingface.co/Qwen/Qwen2.5-VL-7B-Instruct">Qwen2.5-VL-7B-Instruct</a> foundation model, <a href="https://huggingface.co/Skywork/SkyCaptioner-V1">SkyCaptioner-V1</a> is fine-tuned to enhance performance in domain-specific video captioning tasks. To compare the performance with the SOTA models, we conducted a manual assessment of accuracy across different captioning fields using a test set of 1,000 samples. The proposed <a href="https://huggingface.co/Skywork/SkyCaptioner-V1">SkyCaptioner-V1</a> achieves the highest average accuracy among the baseline models, and show a dramatic result in the shot related fields
<p align="center">
<table align="center">
<thead>
<tr>
<th>model</th>
<th><a href="https://huggingface.co/Qwen/Qwen2.5-VL-7B-Instruct">Qwen2.5-VL-7B-Ins.</a></th>
<th><a href="https://huggingface.co/Qwen/Qwen2.5-VL-72B-Instruct">Qwen2.5-VL-72B-Ins.</a></th>
<th><a href="https://huggingface.co/omni-research/Tarsier2-Recap-7b">Tarsier2-Recap-7b</a></th>
<th><a href="https://huggingface.co/Skywork/SkyCaptioner-V1">SkyCaptioner-V1</th>
</tr>
</thead>
<tbody>
<tr>
<td>Avg accuracy</td>
<td>51.4%</td>
<td>58.7%</td>
<td>49.4%</td>
<td><strong>76.3%</strong></td>
</tr>
<tr>
<td>shot type</td>
<td>76.8%</td>
<td>82.5%</td>
<td>60.2%</td>
<td><strong>93.7%</strong></td>
</tr>
<tr>
<td>shot angle</td>
<td>60.0%</td>
<td>73.7%</td>
<td>52.4%</td>
<td><strong>89.8%</strong></td>
</tr>
<tr>
<td>shot position</td>
<td>28.4%</td>
<td>32.7%</td>
<td>23.6%</td>
<td><strong>83.1%</strong></td>
</tr>
<tr>
<td>camera motion</td>
<td>62.0%</td>
<td>61.2%</td>
<td>45.3%</td>
<td><strong>85.3%</strong></td>
</tr>
<tr>
<td>expression</td>
<td>43.6%</td>
<td>51.5%</td>
<td>54.3%</td>
<td><strong>68.8%</strong></td>
</tr>
<tr>
<td colspan="5" style="text-align: center; border-bottom: 1px solid #ddd; padding: 8px;"></td>
</tr>
<tr>
<td>TYPES_type</td>
<td>43.5%</td>
<td>49.7%</td>
<td>47.6%</td>
<td><strong>82.5%</strong></td>
</tr>
<tr>
<td>TYPES_sub_type</td>
<td>38.9%</td>
<td>44.9%</td>
<td>45.9%</td>
<td><strong>75.4%</strong></td>
</tr>
<tr>
<td>appearance</td>
<td>40.9%</td>
<td>52.0%</td>
<td>45.6%</td>
<td><strong>59.3%</strong></td>
</tr>
<tr>
<td>action</td>
<td>32.4%</td>
<td>52.0%</td>
<td><strong>69.8%</strong></td>
<td>68.8%</td>
</tr>
<tr>
<td>position</td>
<td>35.4%</td>
<td>48.6%</td>
<td>45.5%</td>
<td><strong>57.5%</strong></td>
</tr>
<tr>
<td>is_main_subject</td>
<td>58.5%</td>
<td>68.7%</td>
<td>69.7%</td>
<td><strong>80.9%</strong></td>
</tr>
<tr>
<td>environment</td>
<td>70.4%</td>
<td><strong>72.7%</strong></td>
<td>61.4%</td>
<td>70.5%</td>
</tr>
<tr>
<td>lighting</td>
<td>77.1%</td>
<td><strong>80.0%</strong></td>
<td>21.2%</td>
<td>76.5%</td>
</tr>
</tbody>
</table>
</p>
#### Reinforcement Learning
Inspired by previous successes with LLMs, we propose to enhance the performance of the generative model through Reinforcement Learning. Specifically, we focus on motion quality because we find that the main drawbacks of our generative model are:
- the generative model does not handle large, deformable motions well;
- the generated videos may violate physical laws.
To avoid the degradation in other metrics, such as text alignment and video quality, we ensure the preference data pairs have comparable text alignment and video quality, while only the motion quality varies. This requirement poses greater challenges in obtaining preference annotations due to the inherently higher costs of human annotation. To address this challenge, we propose a semi-automatic pipeline that strategically combines automatically generated motion pairs and human annotation results. This hybrid approach not only enhances the data scale but also improves alignment with human preferences through curated quality control. Leveraging this enhanced dataset, we first train a specialized reward model to capture the generic motion quality differences between paired samples. This learned reward function subsequently guides the sample selection process for Direct Preference Optimization (DPO), enhancing the motion quality of the generative model.
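As a rough illustration of the reward-guided selection step described above (not the actual SkyReels-V2 pipeline), the following Python sketch assumes a scalar `reward_model(prompt, video)` motion-quality scorer and keeps only the pairs it can separate confidently:
```python
def select_dpo_pairs(samples, reward_model, margin=0.5):
    """Hypothetical reward-guided pair selection for DPO.

    `samples` is assumed to be a list of (prompt, video_a, video_b) triples with
    comparable text alignment and visual quality; `reward_model(prompt, video)`
    is assumed to return a scalar motion-quality score.
    """
    pairs = []
    for prompt, video_a, video_b in samples:
        score_a = reward_model(prompt, video_a)
        score_b = reward_model(prompt, video_b)
        if abs(score_a - score_b) < margin:
            continue  # skip pairs the reward model cannot separate confidently
        chosen, rejected = (video_a, video_b) if score_a > score_b else (video_b, video_a)
        pairs.append({"prompt": prompt, "chosen": chosen, "rejected": rejected})
    return pairs
```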
#### Diffusion Forcing
We introduce the Diffusion Forcing Transformer to unlock our model’s ability to generate long videos. Diffusion Forcing is a training and sampling strategy where each token is assigned an independent noise level. This allows tokens to be denoised according to arbitrary, per-token schedules. Conceptually, this approach functions as a form of partial masking: a token with zero noise is fully unmasked, while complete noise fully masks it. Diffusion Forcing trains the model to "unmask" any combination of variably noised tokens, using the cleaner tokens as conditional information to guide the recovery of noisy ones. Building on this, our Diffusion Forcing Transformer can extend video generation indefinitely based on the last frames of the previous segment. Note that the synchronous full sequence diffusion is a special case of Diffusion Forcing, where all tokens share the same noise level. This relationship allows us to fine-tune the Diffusion Forcing Transformer from a full-sequence diffusion model.
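A minimal PyTorch-style sketch of the per-token noise idea is shown below; the `model(noisy, t)` signature, the linear noise schedule, and the latent shapes are assumptions for illustration, not the actual SkyReels-V2 implementation:
```python
import torch
import torch.nn.functional as F

def diffusion_forcing_step(model, latents, num_noise_levels=1000):
    """One conceptual training step: every frame token gets its own noise level.

    `latents` is assumed to have shape (batch, frames, channels, height, width);
    `model(noisy, t)` is assumed to predict the noise added to each frame.
    """
    b, f = latents.shape[:2]
    # Independent noise level per frame token (0 = clean, max = heavy noise).
    t = torch.randint(0, num_noise_levels, (b, f), device=latents.device)
    alpha = 1.0 - t.float() / num_noise_levels          # toy linear schedule
    alpha = alpha.view(b, f, 1, 1, 1)
    noise = torch.randn_like(latents)
    noisy = alpha.sqrt() * latents + (1.0 - alpha).sqrt() * noise
    # Cleaner frames act as conditioning that helps "unmask" the noisier ones.
    pred = model(noisy, t)
    return F.mse_loss(pred, noise)
```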
#### High-Quality Supervised Fine-Tuning (SFT)
We implement two sequential high-quality supervised fine-tuning (SFT) stages at 540p and 720p resolutions respectively, with the initial SFT phase conducted immediately after pretraining but prior to the reinforcement learning (RL) stage. This first-stage SFT serves as a conceptual equilibrium trainer, building upon the foundation model's pretraining outcomes that utilized only fps24 video data, while strategically removing FPS embedding components to streamline the architecture. Trained with high-quality concept-balanced samples, this phase establishes optimized initialization parameters for subsequent training processes. Following this, we execute a secondary high-resolution SFT at 720p after completing the diffusion forcing stage, incorporating identical loss formulations and higher-quality concept-balanced datasets obtained by manual filtering. This final refinement phase focuses on increasing resolution so that the overall video quality is further enhanced.
## Performance
To comprehensively evaluate our proposed method, we construct SkyReels-Bench for human assessment and leverage the open-source <a href="https://github.com/Vchitect/VBench">V-Bench</a> for automated evaluation. This allows us to compare our model with state-of-the-art (SOTA) baselines, including both open-source and proprietary models.
#### Human Evaluation
For human evaluation, we design SkyReels-Bench with 1,020 text prompts, systematically assessing four dimensions: Instruction Adherence, Motion Quality, Consistency, and Visual Quality. This benchmark is designed to evaluate both text-to-video (T2V) and image-to-video (I2V) generation models, providing comprehensive assessment across different generation paradigms. To ensure fairness, all models were evaluated under default settings with consistent resolutions, and no post-generation filtering was applied.
- Text To Video Models
<p align="center">
<table align="center">
<thead>
<tr>
<th>Model Name</th>
<th>Average</th>
<th>Instruction Adherence</th>
<th>Consistency</th>
<th>Visual Quality</th>
<th>Motion Quality</th>
</tr>
</thead>
<tbody>
<tr>
<td><a href="https://runwayml.com/research/introducing-gen-3-alpha">Runway-Gen3 Alpha</a></td>
<td>2.53</td>
<td>2.19</td>
<td>2.57</td>
<td>3.23</td>
<td>2.11</td>
</tr>
<tr>
<td><a href="https://github.com/Tencent/HunyuanVideo">HunyuanVideo-13B</a></td>
<td>2.82</td>
<td>2.64</td>
<td>2.81</td>
<td>3.20</td>
<td>2.61</td>
</tr>
<tr>
<td><a href="https://klingai.com">Kling-1.6 STD Mode</a></td>
<td>2.99</td>
<td>2.77</td>
<td>3.05</td>
<td>3.39</td>
<td><strong>2.76</strong></td>
</tr>
<tr>
<td><a href="https://hailuoai.video">Hailuo-01</a></td>
<td>3.0</td>
<td>2.8</td>
<td>3.08</td>
<td>3.29</td>
<td>2.74</td>
</tr>
<tr>
<td><a href="https://github.com/Wan-Video/Wan2.1">Wan2.1-14B</a></td>
<td>3.12</td>
<td>2.91</td>
<td>3.31</td>
<td><strong>3.54</strong></td>
<td>2.71</td>
</tr>
<tr>
<td>SkyReels-V2</td>
<td><strong>3.14</strong></td>
<td><strong>3.15</strong></td>
<td><strong>3.35</strong></td>
<td>3.34</td>
<td>2.74</td>
</tr>
</tbody>
</table>
</p>
The evaluation demonstrates that our model achieves significant advancements in **instruction adherence (3.15)** compared to baseline methods, while maintaining competitive performance in **motion quality (2.74)** without sacrificing **consistency (3.35)**.
- Image To Video Models
<p align="center">
<table align="center">
<thead>
<tr>
<th>Model</th>
<th>Average</th>
<th>Instruction Adherence</th>
<th>Consistency</th>
<th>Visual Quality</th>
<th>Motion Quality</th>
</tr>
</thead>
<tbody>
<tr>
<td><a href="https://github.com/Tencent/HunyuanVideo">HunyuanVideo-13B</a></td>
<td>2.84</td>
<td>2.97</td>
<td>2.95</td>
<td>2.87</td>
<td>2.56</td>
</tr>
<tr>
<td><a href="https://github.com/Wan-Video/Wan2.1">Wan2.1-14B</a></td>
<td>2.85</td>
<td>3.10</td>
<td>2.81</td>
<td>3.00</td>
<td>2.48</td>
</tr>
<tr>
<td><a href="https://hailuoai.video">Hailuo-01</a></td>
<td>3.05</td>
<td>3.31</td>
<td>2.58</td>
<td>3.55</td>
<td>2.74</td>
</tr>
<tr>
<td><a href="https://klingai.com">Kling-1.6 Pro Mode</a></td>
<td>3.4</td>
<td>3.56</td>
<td>3.03</td>
<td>3.58</td>
<td>3.41</td>
</tr>
<tr>
<td><a href="https://runwayml.com/research/introducing-runway-gen-4">Runway-Gen4</a></td>
<td>3.39</td>
<td>3.75</td>
<td>3.2</td>
<td>3.4</td>
<td>3.37</td>
</tr>
<tr>
<td>SkyReels-V2-DF</td>
<td>3.24</td>
<td>3.64</td>
<td>3.21</td>
<td>3.18</td>
<td>2.93</td>
</tr>
<tr>
<td>SkyReels-V2-I2V</td>
<td>3.29</td>
<td>3.42</td>
<td>3.18</td>
<td>3.56</td>
<td>3.01</td>
</tr>
</tbody>
</table>
</p>
Our results demonstrate that both **SkyReels-V2-I2V (3.29)** and **SkyReels-V2-DF (3.24)** achieve state-of-the-art performance among open-source models, significantly outperforming HunyuanVideo-13B (2.84) and Wan2.1-14B (2.85) across all quality dimensions. With an average score of 3.29, SkyReels-V2-I2V demonstrates comparable performance to proprietary models Kling-1.6 (3.4) and Runway-Gen4 (3.39).
#### VBench
To objectively compare the SkyReels-V2 model against other leading open-source text-to-video models, we conduct comprehensive evaluations using the public benchmark <a href="https://github.com/Vchitect/VBench">V-Bench</a>. Our evaluation specifically uses the benchmark's longer-version prompts. For fair comparison with baseline models, we strictly follow their recommended inference settings.
<p align="center">
<table align="center">
<thead>
<tr>
<th>Model</th>
<th>Total Score</th>
<th>Quality Score</th>
<th>Semantic Score</th>
</tr>
</thead>
<tbody>
<tr>
<td><a href="https://github.com/hpcaitech/Open-Sora">OpenSora 2.0</a></td>
<td>81.5 %</td>
<td>82.1 %</td>
<td>78.2 %</td>
</tr>
<tr>
<td><a href="https://github.com/THUDM/CogVideo">CogVideoX1.5-5B</a></td>
<td>80.3 %</td>
<td>80.9 %</td>
<td>77.9 %</td>
</tr>
<tr>
<td><a href="https://github.com/Tencent/HunyuanVideo">HunyuanVideo-13B</a></td>
<td>82.7 %</td>
<td>84.4 %</td>
<td>76.2 %</td>
</tr>
<tr>
<td><a href="https://github.com/Wan-Video/Wan2.1">Wan2.1-14B</a></td>
<td>83.7 %</td>
<td>84.2 %</td>
<td><strong>81.4 %</strong></td>
</tr>
<tr>
<td>SkyReels-V2</td>
<td><strong>83.9 %</strong></td>
<td><strong>84.7 %</strong></td>
<td>80.8 %</td>
</tr>
</tbody>
</table>
</p>
The VBench results demonstrate that SkyReels-V2 outperforms all compared models, including HunyuanVideo-13B and Wan2.1-14B, with the highest **total score (83.9%)** and **quality score (84.7%)**. In this evaluation, our semantic score is slightly lower than that of Wan2.1-14B, even though we outperform Wan2.1-14B in human evaluations; we attribute the gap primarily to V-Bench's insufficient evaluation of shot-scenario semantic adherence.
## Acknowledgements
We would like to thank the contributors of the <a href="https://github.com/Wan-Video/Wan2.1">Wan 2.1</a>, <a href="https://github.com/xdit-project/xDiT">xDiT</a> and <a href="https://qwenlm.github.io/blog/qwen2.5/">Qwen 2.5</a> repositories for their open research and contributions.
## Citation
```bibtex
@misc{chen2025skyreelsv2infinitelengthfilmgenerative,
title={SkyReels-V2: Infinite-length Film Generative Model},
author={Guibin Chen and Dixuan Lin and Jiangping Yang and Chunze Lin and Junchen Zhu and Mingyuan Fan and Hao Zhang and Sheng Chen and Zheng Chen and Chengcheng Ma and Weiming Xiong and Wei Wang and Nuo Pang and Kang Kang and Zhiheng Xu and Yuzhe Jin and Yupeng Liang and Yubing Song and Peng Zhao and Boyuan Xu and Di Qiu and Debang Li and Zhengcong Fei and Yang Li and Yahui Zhou},
year={2025},
eprint={2504.13074},
archivePrefix={arXiv},
primaryClass={cs.CV},
url={https://arxiv.org/abs/2504.13074},
}
``` |
NexesMess/Llama_3.3_70b_DonkeyRider_v2 | NexesMess | 2025-04-25T02:31:51Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"conversational",
"arxiv:2403.19522",
"base_model:LatitudeGames/Wayfarer-Large-70B-Llama-3.3",
"base_model:merge:LatitudeGames/Wayfarer-Large-70B-Llama-3.3",
"base_model:SicariusSicariiStuff/Negative_LLAMA_70B",
"base_model:merge:SicariusSicariiStuff/Negative_LLAMA_70B",
"base_model:TheDrummer/Fallen-Llama-3.3-R1-70B-v1",
"base_model:merge:TheDrummer/Fallen-Llama-3.3-R1-70B-v1",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-04-25T01:48:36Z | ---
base_model:
- SicariusSicariiStuff/Negative_LLAMA_70B
- TheDrummer/Fallen-Llama-3.3-R1-70B-v1
- LatitudeGames/Wayfarer-Large-70B-Llama-3.3
library_name: transformers
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [Model Stock](https://arxiv.org/abs/2403.19522) merge method using [LatitudeGames/Wayfarer-Large-70B-Llama-3.3](https://huggingface.co/LatitudeGames/Wayfarer-Large-70B-Llama-3.3) as a base.
### Models Merged
The following models were included in the merge:
* [SicariusSicariiStuff/Negative_LLAMA_70B](https://huggingface.co/SicariusSicariiStuff/Negative_LLAMA_70B)
* [TheDrummer/Fallen-Llama-3.3-R1-70B-v1](https://huggingface.co/TheDrummer/Fallen-Llama-3.3-R1-70B-v1)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
merge_method: model_stock
models:
- model: SicariusSicariiStuff/Negative_LLAMA_70B
parameters:
weight: 1.0
- model: TheDrummer/Fallen-Llama-3.3-R1-70B-v1
parameters:
weight: 1.0
base_model: LatitudeGames/Wayfarer-Large-70B-Llama-3.3
dtype: float32
out_dtype: bfloat16
parameters:
int8_mask: true
normalize: true
rescale: false
filter_wise: false
smooth: false
allow_negative_weights: false
chat_template: auto
tokenizer:
source: union
```
|
AlexHung29629/mistral-small-reasoning-2 | AlexHung29629 | 2025-04-25T01:53:46Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"mistral3",
"image-text-to-text",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
]
| image-text-to-text | 2025-04-25T01:37:36Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Xenova/WizardCoder-1B-V1.0 | Xenova | 2025-04-25T01:37:15Z | 22 | 4 | transformers.js | [
"transformers.js",
"onnx",
"gpt_bigcode",
"text-generation",
"base_model:WizardLM/WizardCoder-1B-V1.0",
"base_model:quantized:WizardLM/WizardCoder-1B-V1.0",
"region:us"
]
| text-generation | 2023-09-01T19:43:54Z | ---
base_model: WizardLM/WizardCoder-1B-V1.0
library_name: transformers.js
---
https://huggingface.co/WizardLM/WizardCoder-1B-V1.0 with ONNX weights to be compatible with Transformers.js.
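A minimal usage sketch with Transformers.js might look like the following; the package name (`@xenova/transformers`) and the generation options are assumptions based on the library's standard pipeline API, not values taken from this card.
```js
import { pipeline } from '@xenova/transformers';

// Load the text-generation pipeline backed by the ONNX weights in this repo.
const generator = await pipeline('text-generation', 'Xenova/WizardCoder-1B-V1.0');

// Generate a code completion for a prompt.
const output = await generator('def fibonacci(n):', { max_new_tokens: 64 });
console.log(output[0].generated_text);
```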
Note: Having a separate repo for ONNX weights is intended to be a temporary solution until WebML gains more traction. If you would like to make your models web-ready, we recommend converting to ONNX using [🤗 Optimum](https://huggingface.co/docs/optimum/index) and structuring your repo like this one (with ONNX weights located in a subfolder named `onnx`). |
PhoenixB/2bfcba9c-ed93-40b4-9990-1f923c46ff5e | PhoenixB | 2025-04-25T01:26:14Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"axolotl",
"dpo",
"trl",
"conversational",
"arxiv:2305.18290",
"base_model:sethuiyer/Medichat-Llama3-8B",
"base_model:quantized:sethuiyer/Medichat-Llama3-8B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
]
| text-generation | 2025-04-25T01:20:44Z | ---
base_model: sethuiyer/Medichat-Llama3-8B
library_name: transformers
model_name: 2bfcba9c-ed93-40b4-9990-1f923c46ff5e
tags:
- generated_from_trainer
- axolotl
- dpo
- trl
licence: license
---
# Model Card for 2bfcba9c-ed93-40b4-9990-1f923c46ff5e
This model is a fine-tuned version of [sethuiyer/Medichat-Llama3-8B](https://huggingface.co/sethuiyer/Medichat-Llama3-8B).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="PhoenixB/2bfcba9c-ed93-40b4-9990-1f923c46ff5e", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/phoenix-formless/Gradients-On-Demand/runs/aoxgznb7)
This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290).
### Framework versions
- TRL: 0.12.0
- Transformers: 4.46.3
- Pytorch: 2.5.1+cu124
- Datasets: 3.1.0
- Tokenizers: 0.20.3
## Citations
Cite DPO as:
```bibtex
@inproceedings{rafailov2023direct,
title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}},
author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn},
year = 2023,
booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023},
url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html},
editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
PhoenixB/c025fa62-b558-4548-a97e-6325a75fade3 | PhoenixB | 2025-04-25T00:53:12Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"axolotl",
"dpo",
"trl",
"conversational",
"arxiv:2305.18290",
"base_model:unsloth/Qwen2.5-Coder-1.5B-Instruct",
"base_model:quantized:unsloth/Qwen2.5-Coder-1.5B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
]
| text-generation | 2025-04-25T00:49:23Z | ---
base_model: unsloth/Qwen2.5-Coder-1.5B-Instruct
library_name: transformers
model_name: c025fa62-b558-4548-a97e-6325a75fade3
tags:
- generated_from_trainer
- axolotl
- dpo
- trl
licence: license
---
# Model Card for c025fa62-b558-4548-a97e-6325a75fade3
This model is a fine-tuned version of [unsloth/Qwen2.5-Coder-1.5B-Instruct](https://huggingface.co/unsloth/Qwen2.5-Coder-1.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="PhoenixB/c025fa62-b558-4548-a97e-6325a75fade3", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/phoenix-formless/Gradients-On-Demand/runs/jbrtp75x)
This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290).
### Framework versions
- TRL: 0.12.0
- Transformers: 4.46.3
- Pytorch: 2.5.1+cu124
- Datasets: 3.1.0
- Tokenizers: 0.20.3
## Citations
Cite DPO as:
```bibtex
@inproceedings{rafailov2023direct,
title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}},
author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn},
year = 2023,
booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023},
url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html},
editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
gordoabc/my-gemma-2-finetuned-model | gordoabc | 2025-04-25T00:39:08Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gemma",
"text-generation",
"trl",
"sft",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
]
| text-generation | 2025-04-25T00:35:27Z | ---
library_name: transformers
tags:
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
nairaxo/xlsr-afrikaans | nairaxo | 2025-04-25T00:37:21Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2025-04-24T18:16:24Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |