| Column | Type | Min | Max |
|:--|:--|:--|:--|
| modelId | string (length) | 5 | 139 |
| author | string (length) | 2 | 42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 | 2025-09-24 00:43:13 |
| downloads | int64 | 0 | 223M |
| likes | int64 | 0 | 11.7k |
| library_name | string (573 classes) | | |
| tags | list (length) | 1 | 4.05k |
| pipeline_tag | string (55 classes) | | |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 | 2025-09-24 00:37:34 |
| card | string (length) | 11 | 1.01M |
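If this listing was exported from a Hub dataset, it can be read back with the `datasets` library; a minimal sketch — the repository id below is a placeholder, not the actual source:

```python
# Hypothetical: load the listing as a Hub dataset; "some-org/model-listing"
# is a placeholder repo id, not the real one.
from datasets import load_dataset

ds = load_dataset("some-org/model-listing", split="train")
print(ds.features)                              # column schema, as in the table above
print(ds[0]["modelId"], ds[0]["pipeline_tag"])  # first record's fields
```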
modelId: TimHo/SpaceInvadersNoFrameskip
author: TimHo
last_modified: 2025-09-18T22:17:04Z
downloads: 0
likes: 0
library_name: stable-baselines3
tags: [ "stable-baselines3", "SpaceInvadersNoFrameskip-v4", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
pipeline_tag: reinforcement-learning
createdAt: 2025-09-18T22:16:32Z
card:

---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: SpaceInvadersNoFrameskip-v4
      type: SpaceInvadersNoFrameskip-v4
    metrics:
    - type: mean_reward
      value: 641.00 +/- 266.56
      name: mean_reward
      verified: false
---

# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**

This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3) and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).

The RL Zoo is a training framework for Stable Baselines3 reinforcement learning agents, with hyperparameter optimization and pre-trained agents included.

## Usage (with SB3 RL Zoo)

RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib<br/>
SBX (SB3 + Jax): https://github.com/araffin/sbx

Install the RL Zoo (with SB3 and SB3-Contrib):

```bash
pip install rl_zoo3
```

```bash
# Download the model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga TimHo -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```

If you installed RL Zoo3 via pip (`pip install rl_zoo3`), you can run the same commands from anywhere:

```bash
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga TimHo -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```

## Training (with the RL Zoo)

```bash
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate a video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga TimHo
```

## Hyperparameters

```python
OrderedDict([('batch_size', 32),
             ('buffer_size', 100000),
             ('env_wrapper', ['stable_baselines3.common.atari_wrappers.AtariWrapper']),
             ('exploration_final_eps', 0.01),
             ('exploration_fraction', 0.1),
             ('frame_stack', 4),
             ('gradient_steps', 1),
             ('learning_rate', 0.0001),
             ('learning_starts', 100000),
             ('n_timesteps', 1000000.0),
             ('optimize_memory_usage', False),
             ('policy', 'CnnPolicy'),
             ('target_update_interval', 1000),
             ('train_freq', 4),
             ('normalize', False)])
```

## Environment Arguments

```python
{'render_mode': 'rgb_array'}
```
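The Zoo CLI above is the documented workflow; for completeness, a minimal sketch of pulling the checkpoint and loading it with SB3 directly — `huggingface_sb3.load_from_hub` is the standard helper, but the checkpoint filename is an assumption based on the Zoo's usual `<algo>-<env>.zip` naming:

```python
# A sketch, not the card's documented path: fetch the zip from the Hub and
# load it with SB3. The filename is an assumed Zoo-style name for this repo.
from huggingface_sb3 import load_from_hub
from stable_baselines3 import DQN

checkpoint = load_from_hub(
    repo_id="TimHo/SpaceInvadersNoFrameskip",
    filename="dqn-SpaceInvadersNoFrameskip-v4.zip",  # assumption: Zoo naming convention
)
model = DQN.load(checkpoint)  # ready-to-use agent; pair with an AtariWrapper'd env to play
```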
modelId: TAUR-dev/M-skillfactory-ablations__random_reflections5_formatsrandom-sft
author: TAUR-dev
last_modified: 2025-09-18T22:15:28Z
downloads: 0
likes: 0
library_name: null
tags: [ "safetensors", "qwen2", "region:us" ]
pipeline_tag: null
createdAt: 2025-09-18T22:14:39Z
card:

# M-skillfactory-ablations__random_reflections5_formatsrandom-sft

This model was created as part of the **skillfactory-ablations__random_reflections5_formatsrandom** experiment using the SkillFactory experiment management system.

## Model Details

- **Training Method**: LLaMAFactory SFT (Supervised Fine-Tuning)
- **Stage Name**: sft
- **Experiment**: skillfactory-ablations__random_reflections5_formatsrandom

## Training Configuration

```json
{"model_name_or_path": "Qwen/Qwen2.5-1.5B-Instruct", "trust_remote_code": true, "stage": "sft", "do_train": true, "finetuning_type": "full", "deepspeed": "/home/skeh/skill-factory/thirdparty/LLaMA-Factory/examples/deepspeed/ds_z2_config.json", "dataset": "TAUR_dev__skillfactory_ablations__random_reflections5_formatsrandom", "template": "qwen", "cutoff_len": 16384, "max_samples": 1000000, "overwrite_cache": true, "preprocessing_num_workers": 1, "dataloader_num_workers": 0, "disable_tqdm": false, "output_dir": "/datasets/sedrick/skillfactory/temp/llamafactory/checkpoints", "logging_steps": 10, "save_steps": 100000, "plot_loss": true, "overwrite_output_dir": true, "per_device_train_batch_size": 1, "gradient_accumulation_steps": 1, "learning_rate": 1e-06, "num_train_epochs": 1, "lr_scheduler_type": "cosine", "warmup_ratio": 0.05, "weight_decay": 0.0001, "adam_beta1": 0.9, "adam_beta2": 0.95, "bf16": true, "ddp_timeout": 180000000, "gradient_checkpointing": true, "save_only_model": true, "enable_masked_ranges": false, "save_strategy": "steps", "save_total_limit": 5, "sf_tracker_dataset_id": "TAUR-dev/D-ExpTracker__skillfactory-ablations__random_reflections5_formatsrandom__v1", "sf_eval_before_training": false, "sf_wandb_project": "skillfactory-ablations__random_reflections5_formatsrandom_sft", "sf_eval_steps": null, "run_name": "skillfactory-ablations__random_reflections5_formatsrandom_sft"}
```

## Experiment Tracking

🔗 **View complete experiment details**: [Experiment Tracker Dataset](https://huggingface.co/datasets/TAUR-dev/D-ExpTracker__skillfactory-ablations__random_reflections5_formatsrandom__v1)

## Usage

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("TAUR-dev/M-skillfactory-ablations__random_reflections5_formatsrandom-sft")
model = AutoModelForCausalLM.from_pretrained("TAUR-dev/M-skillfactory-ablations__random_reflections5_formatsrandom-sft")
```
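The Usage block stops at loading; a minimal generation sketch that continues it, assuming the standard transformers chat-template API (the prompt and decoding settings are illustrative):

```python
# Continues the Usage snippet above (tokenizer/model already loaded);
# prompt and max_new_tokens are illustrative.
messages = [{"role": "user", "content": "What is 17 * 23?"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)
output_ids = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True))
```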
modelId: schooncestiaa/blockassist-bc-scruffy_webbed_dragonfly_1758233318
author: schooncestiaa
last_modified: 2025-09-18T22:09:43Z
downloads: 0
likes: 0
library_name: null
tags: [ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "scruffy webbed dragonfly", "arxiv:2504.07091", "region:us" ]
pipeline_tag: null
createdAt: 2025-09-18T22:09:36Z
card:

---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- scruffy webbed dragonfly
---

# Gensyn BlockAssist

Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
modelId: jcarleton/llama2-13B-anthropic-sft
author: jcarleton
last_modified: 2025-09-18T22:06:29Z
downloads: 0
likes: 0
library_name: transformers
tags: [ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
pipeline_tag: text-generation
createdAt: 2025-09-18T22:03:47Z
card:

---
library_name: transformers
tags: []
---

# Model Card for Model ID

<!-- Provide a quick summary of what the model is/does. -->

## Model Details

### Model Description

<!-- Provide a longer summary of what this model is. -->

This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.

- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]

### Model Sources [optional]

<!-- Provide the basic links for the model. -->

- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]

## Uses

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

### Direct Use

<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->

[More Information Needed]

### Downstream Use [optional]

<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->

[More Information Needed]

### Out-of-Scope Use

<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->

[More Information Needed]

## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

[More Information Needed]

### Recommendations

<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.

## How to Get Started with the Model

Use the code below to get started with the model.

[More Information Needed]

## Training Details

### Training Data

<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->

[More Information Needed]

### Training Procedure

<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->

#### Preprocessing [optional]

[More Information Needed]

#### Training Hyperparameters

- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->

#### Speeds, Sizes, Times [optional]

<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->

[More Information Needed]

## Evaluation

<!-- This section describes the evaluation protocols and provides the results. -->

### Testing Data, Factors & Metrics

#### Testing Data

<!-- This should link to a Dataset Card if possible. -->

[More Information Needed]

#### Factors

<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->

[More Information Needed]

#### Metrics

<!-- These are the evaluation metrics being used, ideally with a description of why. -->

[More Information Needed]

### Results

[More Information Needed]

#### Summary

## Model Examination [optional]

<!-- Relevant interpretability work for the model goes here -->

[More Information Needed]

## Environmental Impact

<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]

## Technical Specifications [optional]

### Model Architecture and Objective

[More Information Needed]

### Compute Infrastructure

[More Information Needed]

#### Hardware

[More Information Needed]

#### Software

[More Information Needed]

## Citation [optional]

<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->

**BibTeX:**

[More Information Needed]

**APA:**

[More Information Needed]

## Glossary [optional]

<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->

[More Information Needed]

## More Information [optional]

[More Information Needed]

## Model Card Authors [optional]

[More Information Needed]

## Model Card Contact

[More Information Needed]
modelId: tamewild/8b_v4_merged_e3
author: tamewild
last_modified: 2025-09-18T22:05:34Z
downloads: 0
likes: 0
library_name: transformers
tags: [ "transformers", "safetensors", "qwen3", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
pipeline_tag: text-generation
createdAt: 2025-09-18T22:03:44Z
card:

---
library_name: transformers
tags: []
---

# Model Card for Model ID

This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated; every section of the standard skeleton reads [More Information Needed]. The full template appears verbatim in the jcarleton/llama2-13B-anthropic-sft entry above.
modelId: TAUR-dev/M-skillfactory-ablations__orig_only_reflections5_formats-C_full-sft
author: TAUR-dev
last_modified: 2025-09-18T22:04:34Z
downloads: 0
likes: 0
library_name: null
tags: [ "safetensors", "qwen2", "region:us" ]
pipeline_tag: null
createdAt: 2025-09-18T22:03:48Z
card:

# M-skillfactory-ablations__orig_only_reflections5_formats-C_full-sft

This model was created as part of the **skillfactory-ablations__orig_only_reflections5_formats-C_full** experiment using the SkillFactory experiment management system.

## Model Details

- **Training Method**: LLaMAFactory SFT (Supervised Fine-Tuning)
- **Stage Name**: sft
- **Experiment**: skillfactory-ablations__orig_only_reflections5_formats-C_full

## Training Configuration

```json
{"model_name_or_path": "Qwen/Qwen2.5-1.5B-Instruct", "trust_remote_code": true, "stage": "sft", "do_train": true, "finetuning_type": "full", "deepspeed": "/home/skeh/skill-factory/thirdparty/LLaMA-Factory/examples/deepspeed/ds_z2_config.json", "dataset": "TAUR_dev__skillfactory_ablations__orig_only_reflections5_formats_C_full", "template": "qwen", "cutoff_len": 16384, "max_samples": 1000000, "overwrite_cache": true, "preprocessing_num_workers": 1, "dataloader_num_workers": 0, "disable_tqdm": false, "output_dir": "/datasets/sedrick/skillfactory/temp/llamafactory/checkpoints", "logging_steps": 10, "save_steps": 100000, "plot_loss": true, "overwrite_output_dir": true, "per_device_train_batch_size": 1, "gradient_accumulation_steps": 1, "learning_rate": 1e-06, "num_train_epochs": 1, "lr_scheduler_type": "cosine", "warmup_ratio": 0.05, "weight_decay": 0.0001, "adam_beta1": 0.9, "adam_beta2": 0.95, "bf16": true, "ddp_timeout": 180000000, "gradient_checkpointing": true, "save_only_model": true, "enable_masked_ranges": false, "save_strategy": "steps", "save_total_limit": 5, "sf_tracker_dataset_id": "TAUR-dev/D-ExpTracker__skillfactory-ablations__orig_only_reflections5_formats-C_full__v1", "sf_eval_before_training": false, "sf_wandb_project": "skillfactory-ablations__orig_only_reflections5_formats-C_full_sft", "sf_eval_steps": null, "run_name": "skillfactory-ablations__orig_only_reflections5_formats-C_full_sft"}
```

## Experiment Tracking

🔗 **View complete experiment details**: [Experiment Tracker Dataset](https://huggingface.co/datasets/TAUR-dev/D-ExpTracker__skillfactory-ablations__orig_only_reflections5_formats-C_full__v1)

## Usage

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("TAUR-dev/M-skillfactory-ablations__orig_only_reflections5_formats-C_full-sft")
model = AutoModelForCausalLM.from_pretrained("TAUR-dev/M-skillfactory-ablations__orig_only_reflections5_formats-C_full-sft")
```
modelId: tamewild/8b_v4_merged_e5
author: tamewild
last_modified: 2025-09-18T22:01:50Z
downloads: 0
likes: 0
library_name: transformers
tags: [ "transformers", "safetensors", "qwen3", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
pipeline_tag: text-generation
createdAt: 2025-09-18T21:59:40Z
card:

---
library_name: transformers
tags: []
---

# Model Card for Model ID

This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated; every section of the standard skeleton reads [More Information Needed]. The full template appears verbatim in the jcarleton/llama2-13B-anthropic-sft entry above.
modelId: devparagiri/Test-20250918-215607
author: devparagiri
last_modified: 2025-09-18T22:01:06Z
downloads: 0
likes: 0
library_name: transformers
tags: [ "transformers", "tensorboard", "safetensors", "llama", "text-generation", "autotrain", "text-generation-inference", "peft", "conversational", "dataset:devparagiri/dataset-Test-20250918-215607", "base_model:meta-llama/Llama-3.2-1B-Instruct", "base_model:finetune:meta-llama/Llama-3.2-1B-Instruct", "license:other", "autotrain_compatible", "endpoints_compatible", "region:us" ]
pipeline_tag: text-generation
createdAt: 2025-09-18T21:58:56Z
card:

---
tags:
- autotrain
- text-generation-inference
- text-generation
- peft
library_name: transformers
base_model: meta-llama/Llama-3.2-1B-Instruct
widget:
- messages:
  - role: user
    content: What is your favorite condiment?
license: other
datasets:
- devparagiri/dataset-Test-20250918-215607
---

# Model Trained Using AutoTrain

This model was trained using AutoTrain. For more information, please visit [AutoTrain](https://hf.co/docs/autotrain).

# Usage

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "PATH_TO_THIS_REPO"

tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    device_map="auto",
    torch_dtype='auto'
).eval()

# Prompt content: "hi"
messages = [
    {"role": "user", "content": "hi"}
]

input_ids = tokenizer.apply_chat_template(conversation=messages, tokenize=True, add_generation_prompt=True, return_tensors='pt')
output_ids = model.generate(input_ids.to('cuda'))
response = tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True)

# Model response: "Hello! How can I assist you today?"
print(response)
```
modelId: adamo1139/DeepSeek-V2.5-1210-AWQ
author: adamo1139
last_modified: 2025-09-18T22:00:42Z
downloads: 7
likes: 0
library_name: null
tags: [ "safetensors", "deepseek_v2", "custom_code", "base_model:deepseek-ai/DeepSeek-V2.5-1210", "base_model:quantized:deepseek-ai/DeepSeek-V2.5-1210", "4-bit", "awq", "region:us" ]
pipeline_tag: null
createdAt: 2025-05-30T20:09:57Z
card:

---
base_model:
- deepseek-ai/DeepSeek-V2.5-1210
---

AWQ quantization of DeepSeek-V2.5-1210.

To run on 8x H100 80GB, you can serve it with vLLM:

```
vllm serve adamo1139/DeepSeek-V2.5-1210-AWQ --tensor-parallel-size 8 --trust-remote-code
```
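`vllm serve` exposes an OpenAI-compatible endpoint (http://localhost:8000/v1 by default), so a served instance can be queried as below — a sketch; the `api_key` is a placeholder since vLLM requires none unless configured, and the prompt is illustrative:

```python
# Query the vLLM server started above via its OpenAI-compatible API.
# Endpoint is the vLLM default; key is a placeholder; prompt is illustrative.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")
response = client.chat.completions.create(
    model="adamo1139/DeepSeek-V2.5-1210-AWQ",
    messages=[{"role": "user", "content": "Summarize AWQ quantization in one sentence."}],
)
print(response.choices[0].message.content)
```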
modelId: s3y/pi0test1
author: s3y
last_modified: 2025-09-18T21:59:13Z
downloads: 0
likes: 0
library_name: lerobot
tags: [ "lerobot", "safetensors", "pi0", "robotics", "dataset:lerobot/aloha_sim_insertion_human", "arxiv:2410.24164", "license:apache-2.0", "region:us" ]
pipeline_tag: robotics
createdAt: 2025-09-18T21:35:39Z
card:

---
datasets: lerobot/aloha_sim_insertion_human
library_name: lerobot
license: apache-2.0
model_name: pi0
pipeline_tag: robotics
tags:
- pi0
- lerobot
- robotics
---

# Model Card for pi0

<!-- Provide a quick summary of what the model is/does. -->

[Pi0](https://huggingface.co/papers/2410.24164) is a generalist vision-language-action transformer that converts multimodal observations and text instructions into robot actions for zero-shot task transfer.

This policy has been trained and pushed to the Hub using [LeRobot](https://github.com/huggingface/lerobot). See the full documentation at [LeRobot Docs](https://huggingface.co/docs/lerobot/index).

---

## How to Get Started with the Model

For a complete walkthrough, see the [training guide](https://huggingface.co/docs/lerobot/il_robots#train-a-policy). Below is the short version on how to train and run inference/eval:

### Train from scratch

```bash
lerobot-train \
  --dataset.repo_id=${HF_USER}/<dataset> \
  --policy.type=pi0 \
  --output_dir=outputs/train/<desired_policy_repo_id> \
  --job_name=lerobot_training \
  --policy.device=cuda \
  --policy.repo_id=${HF_USER}/<desired_policy_repo_id> \
  --wandb.enable=true
```

_Writes checkpoints to `outputs/train/<desired_policy_repo_id>/checkpoints/`._

### Evaluate the policy/run inference

```bash
lerobot-record \
  --robot.type=so100_follower \
  --dataset.repo_id=<hf_user>/eval_<dataset> \
  --policy.path=<hf_user>/<desired_policy_repo_id> \
  --episodes=10
```

Prefix the dataset repo with **eval\_** and supply `--policy.path` pointing to a local or hub checkpoint.

---

## Model Details

- **License:** apache-2.0
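For loading this checkpoint in Python rather than through the CLI, a sketch under two assumptions: that lerobot's `PI0Policy` exposes `from_pretrained` via its Hub mixin (as other lerobot policies do), and that the import path below matches the installed lerobot version (it has moved between releases):

```python
# A sketch, assuming PI0Policy.from_pretrained exists and this import path
# matches the installed lerobot version (paths vary across releases).
from lerobot.common.policies.pi0.modeling_pi0 import PI0Policy

policy = PI0Policy.from_pretrained("s3y/pi0test1")
policy.eval()  # inference mode; feed observations per the LeRobot docs
```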
modelId: samuelsimko/Meta-Llama-3-8B-Instruct-Triplet-Adv
author: samuelsimko
last_modified: 2025-09-18T21:58:11Z
downloads: 8
likes: 0
library_name: transformers
tags: [ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
pipeline_tag: text-generation
createdAt: 2025-09-18T00:34:17Z
card:

---
library_name: transformers
tags: []
---

# Model Card for Model ID

This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated; every section of the standard skeleton reads [More Information Needed]. The full template appears verbatim in the jcarleton/llama2-13B-anthropic-sft entry above.
modelId: heado/audio_kor
author: heado
last_modified: 2025-09-18T21:53:01Z
downloads: 0
likes: 0
library_name: transformers
tags: [ "transformers", "tensorboard", "safetensors", "wav2vec2", "audio-classification", "generated_from_trainer", "base_model:Kkonjeong/wav2vec2-base-korean", "base_model:finetune:Kkonjeong/wav2vec2-base-korean", "endpoints_compatible", "region:us" ]
pipeline_tag: audio-classification
createdAt: 2025-09-18T21:52:50Z
card:

---
library_name: transformers
base_model: Kkonjeong/wav2vec2-base-korean
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: audio_kor
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# audio_kor

This model is a fine-tuned version of [Kkonjeong/wav2vec2-base-korean](https://huggingface.co/Kkonjeong/wav2vec2-base-korean) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3679
- Accuracy: 0.9496

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: AdamW (torch fused) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.6342        | 1.0   | 30   | 2.6301          | 0.0588   |
| 2.4757        | 2.0   | 60   | 2.3899          | 0.3109   |
| 1.9266        | 3.0   | 90   | 1.8527          | 0.6134   |
| 1.5614        | 4.0   | 120  | 1.4405          | 0.7227   |
| 0.9955        | 5.0   | 150  | 1.0447          | 0.8655   |
| 0.6666        | 6.0   | 180  | 0.7428          | 0.9076   |
| 0.4623        | 7.0   | 210  | 0.5859          | 0.9160   |
| 0.334         | 8.0   | 240  | 0.4750          | 0.9244   |
| 0.2673        | 9.0   | 270  | 0.3788          | 0.9496   |
| 0.196         | 10.0  | 300  | 0.3679          | 0.9496   |

### Framework versions

- Transformers 4.56.1
- Pytorch 2.8.0+cu126
- Datasets 4.0.0
- Tokenizers 0.22.0
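The card documents training but not inference; a minimal inference sketch using the standard transformers pipeline — the audio file path is illustrative, and since the card does not document the label set, inspect the returned labels:

```python
# Minimal inference sketch; "sample.wav" is a placeholder path.
# The card does not list the class labels, so check what comes back.
from transformers import pipeline

classifier = pipeline("audio-classification", model="heado/audio_kor")
print(classifier("sample.wav"))  # list of {label, score} dicts
```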
modelId: gumperto/Qwen2.5-32B-Instruct-emergent-finetune-haiku_samples-down-l32-r1
author: gumperto
last_modified: 2025-09-18T21:49:52Z
downloads: 0
likes: 0
library_name: transformers
tags: [ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "trl", "unsloth", "sft", "conversational", "base_model:unsloth/Qwen2.5-32B-Instruct", "base_model:finetune:unsloth/Qwen2.5-32B-Instruct", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
pipeline_tag: text-generation
createdAt: 2025-09-18T21:08:17Z
card:

---
base_model: unsloth/Qwen2.5-32B-Instruct
library_name: transformers
model_name: Qwen2.5-32B-Instruct-emergent-finetune-haiku_samples-down-l32-r1
tags:
- generated_from_trainer
- trl
- unsloth
- sft
licence: license
---

# Model Card for Qwen2.5-32B-Instruct-emergent-finetune-haiku_samples-down-l32-r1

This model is a fine-tuned version of [unsloth/Qwen2.5-32B-Instruct](https://huggingface.co/unsloth/Qwen2.5-32B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).

## Quick start

```python
from transformers import pipeline

question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="gumperto/Qwen2.5-32B-Instruct-emergent-finetune-haiku_samples-down-l32-r1", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```

## Training procedure

[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/gumperto-waseda-university/clarifying-em/runs/2e1xp7je)

This model was trained with SFT.

### Framework versions

- TRL: 0.24.0.dev0
- Transformers: 4.56.1
- Pytorch: 2.8.0
- Datasets: 4.1.0
- Tokenizers: 0.22.0

## Citations

Cite TRL as:

```bibtex
@misc{vonwerra2022trl,
    title        = {{TRL: Transformer Reinforcement Learning}},
    author       = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
    year         = 2020,
    journal      = {GitHub repository},
    publisher    = {GitHub},
    howpublished = {\url{https://github.com/huggingface/trl}}
}
```
modelId: schooncestiaa/blockassist-bc-scruffy_webbed_dragonfly_1758232087
author: schooncestiaa
last_modified: 2025-09-18T21:49:23Z
downloads: 0
likes: 0
library_name: null
tags: [ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "scruffy webbed dragonfly", "arxiv:2504.07091", "region:us" ]
pipeline_tag: null
createdAt: 2025-09-18T21:49:05Z
card:

---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- scruffy webbed dragonfly
---

# Gensyn BlockAssist

Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
modelId: peter246810/my_awesome_food_model
author: peter246810
last_modified: 2025-09-18T21:49:07Z
downloads: 0
likes: 0
library_name: transformers
tags: [ "transformers", "tensorboard", "safetensors", "vit", "image-classification", "generated_from_trainer", "base_model:google/vit-base-patch16-224-in21k", "base_model:finetune:google/vit-base-patch16-224-in21k", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
pipeline_tag: image-classification
createdAt: 2025-09-18T21:34:02Z
card:

---
library_name: transformers
license: apache-2.0
base_model: google/vit-base-patch16-224-in21k
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: my_awesome_food_model
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# my_awesome_food_model

This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5956
- Accuracy: 0.891

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: AdamW (torch fused) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.6827        | 1.0   | 63   | 2.5131          | 0.809    |
| 1.831         | 2.0   | 126  | 1.7851          | 0.86     |
| 1.5876        | 3.0   | 189  | 1.5956          | 0.891    |

### Framework versions

- Transformers 4.56.1
- Pytorch 2.8.0+cu126
- Datasets 4.0.0
- Tokenizers 0.22.0
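As with the audio model above, the card covers training only; a minimal inference sketch with the standard transformers pipeline — the image path is illustrative:

```python
# Minimal inference sketch; "food.jpg" is a placeholder path (a URL also works).
from transformers import pipeline

classifier = pipeline("image-classification", model="peter246810/my_awesome_food_model")
print(classifier("food.jpg"))  # list of {label, score} dicts
```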
modelId: TAUR-dev/M-RC-ab_sft_bon_corr_samples-sft
author: TAUR-dev
last_modified: 2025-09-18T21:45:25Z
downloads: 0
likes: 0
library_name: null
tags: [ "safetensors", "qwen2", "region:us" ]
pipeline_tag: null
createdAt: 2025-09-18T21:44:53Z
card:

# M-RC-ab_sft_bon_corr_samples-sft

This model was created as part of the **RC-ab_sft_bon_corr_samples** experiment using the SkillFactory experiment management system.

## Model Details

- **Training Method**: LLaMAFactory SFT (Supervised Fine-Tuning)
- **Stage Name**: sft
- **Experiment**: RC-ab_sft_bon_corr_samples

## Training Configuration

```json
{"model_name_or_path": "Qwen/Qwen2.5-1.5B-Instruct", "trust_remote_code": true, "stage": "sft", "do_train": true, "finetuning_type": "full", "deepspeed": "/scratch/10416/zaynesprague/skill_factory_dir/skill-factory/thirdparty/LLaMA-Factory/examples/deepspeed/ds_z2_config.json", "dataset": "TAUR_dev__D_SFT_C_RC_ab_sft_bon_corr_samples_sft_data__sft_train", "template": "qwen", "cutoff_len": 16384, "max_samples": 1000000, "overwrite_cache": true, "preprocessing_num_workers": 1, "dataloader_num_workers": 0, "disable_tqdm": false, "output_dir": "/scratch/10416/zaynesprague/skill_inject_outputs/sf_experiments/RC_ab_sft_bon_corr_samples/llamafactory/checkpoints", "logging_steps": 10, "save_steps": 100000, "plot_loss": true, "overwrite_output_dir": true, "per_device_train_batch_size": 1, "gradient_accumulation_steps": 1, "learning_rate": 1e-06, "num_train_epochs": 1, "lr_scheduler_type": "cosine", "warmup_ratio": 0.05, "weight_decay": 0.0001, "adam_beta1": 0.9, "adam_beta2": 0.95, "bf16": true, "ddp_timeout": 180000000, "gradient_checkpointing": true, "save_only_model": true, "enable_masked_ranges": false, "save_strategy": "steps", "save_total_limit": 5, "sf_tracker_dataset_id": "TAUR-dev/D-ExpTracker__RC-ab_sft_bon_corr_samples__v1", "sf_eval_before_training": false, "sf_wandb_project": "RC-ab_sft_bon_corr_samples_sft", "sf_eval_steps": null, "run_name": "RC-ab_sft_bon_corr_samples_sft"}
```

## Experiment Tracking

🔗 **View complete experiment details**: [Experiment Tracker Dataset](https://huggingface.co/datasets/TAUR-dev/D-ExpTracker__RC-ab_sft_bon_corr_samples__v1)

## Usage

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("TAUR-dev/M-RC-ab_sft_bon_corr_samples-sft")
model = AutoModelForCausalLM.from_pretrained("TAUR-dev/M-RC-ab_sft_bon_corr_samples-sft")
```
modelId: siyang-liu/my_awesome_food_model
author: siyang-liu
last_modified: 2025-09-18T21:44:36Z
downloads: 0
likes: 0
library_name: transformers
tags: [ "transformers", "tensorboard", "safetensors", "vit", "image-classification", "generated_from_trainer", "base_model:google/vit-base-patch16-224-in21k", "base_model:finetune:google/vit-base-patch16-224-in21k", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
pipeline_tag: image-classification
createdAt: 2025-09-18T21:32:01Z
card:

---
library_name: transformers
license: apache-2.0
base_model: google/vit-base-patch16-224-in21k
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: my_awesome_food_model
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# my_awesome_food_model

This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6039
- Accuracy: 0.884

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: AdamW (torch fused) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.7071        | 1.0   | 63   | 2.4978          | 0.829    |
| 1.825         | 2.0   | 126  | 1.7578          | 0.861    |
| 1.6328        | 3.0   | 189  | 1.6039          | 0.884    |

### Framework versions

- Transformers 4.56.1
- Pytorch 2.8.0+cu126
- Datasets 4.0.0
- Tokenizers 0.22.0
modelId: adamo1139/DeepSeek-R1-Zero-AWQ
author: adamo1139
last_modified: 2025-09-18T21:44:11Z
downloads: 7
likes: 0
library_name: transformers
tags: [ "transformers", "safetensors", "deepseek_v3", "text-generation", "conversational", "custom_code", "base_model:deepseek-ai/DeepSeek-R1-Zero", "base_model:quantized:deepseek-ai/DeepSeek-R1-Zero", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "4-bit", "awq", "region:us" ]
pipeline_tag: text-generation
createdAt: 2025-06-01T17:57:28Z
card:

---
license: mit
library_name: transformers
base_model:
- deepseek-ai/DeepSeek-R1-Zero
---

# DeepSeek-R1-Zero-AWQ 671B

This is a 4-bit AWQ quantization of the 671B-parameter DeepSeek-R1-Zero model. It is suitable for GPU nodes such as 8x A100, 8x H20, or 8x H100, served with vLLM or SGLang.

You can run this model on 8x H100 80GB using vLLM with:

`vllm serve adamo1139/DeepSeek-R1-Zero-AWQ --tensor-parallel-size 8`

Made by DeepSeek with ❤️

<p align="center" style="image-rendering: pixelated;">
  <img width="800" src="https://user-images.githubusercontent.com/55270174/214356078-89430299-247d-4f1f-82f6-a41340df0949.gif" alt="example" />
</p>
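Besides serving, vLLM's offline Python API can run the model on the same 8-GPU node; a sketch, assuming vLLM's `LLM`/`SamplingParams` interface and that the node has enough memory for the 4-bit weights (the prompt is illustrative):

```python
# Offline (non-server) inference sketch on an 8-GPU node via vLLM's Python API.
from vllm import LLM, SamplingParams

llm = LLM(
    model="adamo1139/DeepSeek-R1-Zero-AWQ",
    tensor_parallel_size=8,
    trust_remote_code=True,  # the repo ships custom code
)
outputs = llm.generate(["Hello"], SamplingParams(max_tokens=64))
print(outputs[0].outputs[0].text)
```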
modelId: aamijar/ReplaceME-Gemma-2-9B-Instruct-lora-r8-boolq-epochs0
author: aamijar
last_modified: 2025-09-18T21:43:44Z
downloads: 0
likes: 0
library_name: transformers
tags: [ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
pipeline_tag: null
createdAt: 2025-09-18T21:43:41Z
card:

---
library_name: transformers
tags: []
---

# Model Card for Model ID

This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated; every section of the standard skeleton reads [More Information Needed]. The full template appears verbatim in the jcarleton/llama2-13B-anthropic-sft entry above.
modelId: aamijar/Llama-3.1-8B-Instruct-lora-r8-sst2-epochs3
author: aamijar
last_modified: 2025-09-18T21:43:37Z
downloads: 0
likes: 0
library_name: transformers
tags: [ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
pipeline_tag: null
createdAt: 2025-09-18T21:43:35Z
card:

---
library_name: transformers
tags: []
---

# Model Card for Model ID

This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated; every section of the standard skeleton reads [More Information Needed]. The full template appears verbatim in the jcarleton/llama2-13B-anthropic-sft entry above.
nevil120/masked-language-model
nevil120
2025-09-18T21:43:12Z
0
0
transformers
[ "transformers", "safetensors", "distilbert", "fill-mask", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2025-09-18T21:41:08Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
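The "How to Get Started" section above is still a placeholder. The repo tags (`distilbert`, `fill-mask`) suggest a standard masked-language-model checkpoint, so a minimal sketch, assuming the tokenizer was pushed alongside the weights, could look like this:

```python
from transformers import pipeline

# Assumption: the repo hosts a DistilBERT-style fill-mask model together with
# its tokenizer, as the repo tags suggest; the card itself confirms nothing.
unmasker = pipeline("fill-mask", model="nevil120/masked-language-model")

# DistilBERT tokenizers use [MASK] as the mask token.
for pred in unmasker("Masked language models learn to predict the [MASK] token."):
    print(f"{pred['token_str']!r}: {pred['score']:.3f}")
```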
zzhou423/my_awesome_food_model
zzhou423
2025-09-18T21:39:28Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "vit", "image-classification", "generated_from_trainer", "base_model:google/vit-base-patch16-224-in21k", "base_model:finetune:google/vit-base-patch16-224-in21k", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2025-09-18T21:25:44Z
---
library_name: transformers
license: apache-2.0
base_model: google/vit-base-patch16-224-in21k
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: my_awesome_food_model
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# my_awesome_food_model

This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6255
- Accuracy: 0.879

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Use OptimizerNames.ADAMW_TORCH_FUSED with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.7065        | 1.0   | 63   | 2.5425          | 0.8      |
| 1.8457        | 2.0   | 126  | 1.8210          | 0.851    |
| 1.5895        | 3.0   | 189  | 1.6255          | 0.879    |

### Framework versions

- Transformers 4.56.1
- Pytorch 2.8.0+cu126
- Datasets 4.0.0
- Tokenizers 0.22.0
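The card above stops at "More information needed" for usage. A minimal inference sketch, assuming the Trainer pushed the ViT image processor with the weights (it does by default); the image path is a placeholder:

```python
from transformers import pipeline
from PIL import Image

# Assumption: the image processor config lives in the repo, so the
# image-classification pipeline can be built from the repo id alone.
classifier = pipeline("image-classification", model="zzhou423/my_awesome_food_model")

image = Image.open("dish.jpg")  # placeholder path; any RGB food photo works
for pred in classifier(image):
    print(f"{pred['label']}: {pred['score']:.3f}")
```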
CorvinFAV/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-bold_fierce_bison
CorvinFAV
2025-09-18T21:38:17Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "rl-swarm", "genrl-swarm", "grpo", "gensyn", "I am bold_fierce_bison", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-09-18T21:38:03Z
--- library_name: transformers tags: - rl-swarm - genrl-swarm - grpo - gensyn - I am bold_fierce_bison --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
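Like the other Gensyn swarm checkpoints in this dump, the card leaves "How to Get Started" empty. The tags mark it as a Qwen2-family `text-generation` model, so a hedged sketch, not confirmed by the card, would be:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumption: this GRPO-trained Qwen2.5-0.5B-Instruct derivative loads as an
# ordinary causal LM with a chat template; the card documents nothing.
model_id = "CorvinFAV/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-bold_fierce_bison"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

messages = [{"role": "user", "content": "Say hello in one sentence."}]
text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(text, return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=32)[0], skip_special_tokens=True))
```

The same pattern should apply, with the model id swapped, to the other swarm repos below.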
Aelalixoerels/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-mimic_scaly_gazelle
Aelalixoerels
2025-09-18T21:37:30Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "rl-swarm", "genrl-swarm", "grpo", "gensyn", "I am mimic_scaly_gazelle", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-09-18T21:37:22Z
--- library_name: transformers tags: - rl-swarm - genrl-swarm - grpo - gensyn - I am mimic_scaly_gazelle --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
winnieyangwannan/entity-visual_Qwen2.5-VL-7B-Instruct_mlp-down_pnas_layer_18_4_okvqa_37_0.001_6400_3
winnieyangwannan
2025-09-18T21:37:13Z
0
0
transformers
[ "transformers", "safetensors", "qwen2_5_vl", "image-to-text", "arxiv:1910.09700", "text-generation-inference", "endpoints_compatible", "region:us" ]
image-to-text
2025-09-18T21:35:46Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
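The card is again the empty template; the `qwen2_5_vl` / `image-to-text` tags imply the upstream Qwen2.5-VL usage pattern. A sketch under that assumption (recent transformers with the Qwen2.5-VL classes; the image URL is a placeholder):

```python
from transformers import AutoProcessor, Qwen2_5_VLForConditionalGeneration

# Assumptions: the repo is a full Qwen2.5-VL-7B-Instruct variant and ships its
# processor; neither is stated in the card.
model_id = "winnieyangwannan/entity-visual_Qwen2.5-VL-7B-Instruct_mlp-down_pnas_layer_18_4_okvqa_37_0.001_6400_3"
model = Qwen2_5_VLForConditionalGeneration.from_pretrained(model_id, torch_dtype="auto", device_map="auto")
processor = AutoProcessor.from_pretrained(model_id)

messages = [{"role": "user", "content": [
    {"type": "image", "url": "https://example.com/photo.jpg"},  # placeholder
    {"type": "text", "text": "Describe this image in one sentence."},
]}]
inputs = processor.apply_chat_template(
    messages, add_generation_prompt=True, tokenize=True,
    return_dict=True, return_tensors="pt",
).to(model.device)
out = model.generate(**inputs, max_new_tokens=64)
print(processor.batch_decode(out[:, inputs["input_ids"].shape[1]:], skip_special_tokens=True)[0])
```

The sibling `winnieyangwannan` repos further down differ only in layer/step suffixes and should load the same way.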
Azrielil/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-stealthy_grazing_orangutan
Azrielil
2025-09-18T21:37:00Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "rl-swarm", "genrl-swarm", "grpo", "gensyn", "I am stealthy_grazing_orangutan", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-09-18T21:36:50Z
--- library_name: transformers tags: - rl-swarm - genrl-swarm - grpo - gensyn - I am stealthy_grazing_orangutan --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Delvismp/123
Delvismp
2025-09-18T21:36:22Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2025-09-18T21:36:22Z
---
license: apache-2.0
---
schooncestiaa/blockassist-bc-scruffy_webbed_dragonfly_1758230856
schooncestiaa
2025-09-18T21:28:55Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "scruffy webbed dragonfly", "arxiv:2504.07091", "region:us" ]
null
2025-09-18T21:28:37Z
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- scruffy webbed dragonfly
---

# Gensyn BlockAssist

Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
hafidhsoekma/unsloth-Qwen3-4B-unsloth-bnb-4bit-method_ORPO
hafidhsoekma
2025-09-18T21:28:52Z
0
0
transformers
[ "transformers", "safetensors", "qwen3", "text-generation", "text-generation-inference", "unsloth", "conversational", "en", "base_model:unsloth/Qwen3-4B-unsloth-bnb-4bit", "base_model:finetune:unsloth/Qwen3-4B-unsloth-bnb-4bit", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2025-09-18T21:19:36Z
---
base_model: unsloth/Qwen3-4B-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
license: apache-2.0
language:
- en
---

# Uploaded finetuned model

- **Developed by:** hafidhsoekma
- **License:** apache-2.0
- **Finetuned from model:** unsloth/Qwen3-4B-unsloth-bnb-4bit

This Qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.

[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
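The card gives no usage snippet. A hedged sketch, assuming the pushed weights load as an ordinary causal LM (if the repo stores bnb-4bit weights, `bitsandbytes` must be installed):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumption: standard Transformers loading works for this ORPO finetune.
model_id = "hafidhsoekma/unsloth-Qwen3-4B-unsloth-bnb-4bit-method_ORPO"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

messages = [{"role": "user", "content": "Explain ORPO in one sentence."}]
text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(text, return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0], skip_special_tokens=True))
```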
ChenWu98/numina_qwen_2.5_0.5b_sft_numina_20k_cluster2_condition
ChenWu98
2025-09-18T21:28:19Z
0
0
transformers
[ "transformers", "safetensors", "generated_from_trainer", "trl", "sft", "base_model:Qwen/Qwen2.5-0.5B", "base_model:finetune:Qwen/Qwen2.5-0.5B", "endpoints_compatible", "region:us" ]
null
2025-09-18T21:23:55Z
---
base_model: Qwen/Qwen2.5-0.5B
library_name: transformers
model_name: numina_qwen_2.5_0.5b_sft_numina_20k_cluster2_condition
tags:
- generated_from_trainer
- trl
- sft
licence: license
---

# Model Card for numina_qwen_2.5_0.5b_sft_numina_20k_cluster2_condition

This model is a fine-tuned version of [Qwen/Qwen2.5-0.5B](https://huggingface.co/Qwen/Qwen2.5-0.5B).
It has been trained using [TRL](https://github.com/huggingface/trl).

## Quick start

```python
from transformers import pipeline

question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="ChenWu98/numina_qwen_2.5_0.5b_sft_numina_20k_cluster2_condition", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```

## Training procedure

[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/chenwu/huggingface/runs/lu1ak9p5)

This model was trained with SFT.

### Framework versions

- TRL: 0.19.1
- Transformers: 4.51.1
- Pytorch: 2.7.0
- Datasets: 4.0.0
- Tokenizers: 0.21.4

## Citations

Cite TRL as:

```bibtex
@misc{vonwerra2022trl,
	title        = {{TRL: Transformer Reinforcement Learning}},
	author       = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
	year         = 2020,
	journal      = {GitHub repository},
	publisher    = {GitHub},
	howpublished = {\url{https://github.com/huggingface/trl}}
}
```
BootesVoid/cmfpt99wj0c60x0n0s3u23y0a_cmfpvshke0c7mx0n0hnu84wor
BootesVoid
2025-09-18T21:22:08Z
0
0
diffusers
[ "diffusers", "flux", "lora", "replicate", "text-to-image", "en", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
2025-09-18T21:22:07Z
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
#   - text: >-
#       prompt
#     output:
#       url: https://...
instance_prompt: MIRAXX
---

# Cmfpt99Wj0C60X0N0S3U23Y0A_Cmfpvshke0C7Mx0N0Hnu84Wor

<Gallery />

## About this LoRA

This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.

It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train

## Trigger words

You should use `MIRAXX` to trigger the image generation.

## Run this LoRA with an API using Replicate

```py
import replicate

input = {
    "prompt": "MIRAXX",
    "lora_weights": "https://huggingface.co/BootesVoid/cmfpt99wj0c60x0n0s3u23y0a_cmfpvshke0c7mx0n0hnu84wor/resolve/main/lora.safetensors"
}

output = replicate.run(
    "black-forest-labs/flux-dev-lora",
    input=input
)
for index, item in enumerate(output):
    with open(f"output_{index}.webp", "wb") as file:
        file.write(item.read())
```

## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)

```py
from diffusers import AutoPipelineForText2Image
import torch

pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('BootesVoid/cmfpt99wj0c60x0n0s3u23y0a_cmfpvshke0c7mx0n0hnu84wor', weight_name='lora.safetensors')
image = pipeline('MIRAXX').images[0]
```

For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)

## Training details

- Steps: 2500
- Learning rate: 9e-05
- LoRA rank: 16

## Contribute your own examples

You can use the [community tab](https://huggingface.co/BootesVoid/cmfpt99wj0c60x0n0s3u23y0a_cmfpvshke0c7mx0n0hnu84wor/discussions) to add images that show off what you’ve made with this LoRA.
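The card defers LoRA weighting to the diffusers docs. One hedged follow-up to the diffusers example above, assuming a diffusers version with the PEFT-backed `fuse_lora` API:

```py
# Optional: fuse the LoRA into the base weights at reduced strength.
# lora_scale < 1.0 tones the trained style down.
pipeline.fuse_lora(lora_scale=0.8)
image = pipeline('MIRAXX').images[0]
```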
TiMOld/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-twitchy_foxy_ram
TiMOld
2025-09-18T21:19:48Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "rl-swarm", "genrl-swarm", "grpo", "gensyn", "I am twitchy_foxy_ram", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-09-18T11:35:47Z
--- library_name: transformers tags: - rl-swarm - genrl-swarm - grpo - gensyn - I am twitchy_foxy_ram --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
schooncestiaa/blockassist-bc-scruffy_webbed_dragonfly_1758230239
schooncestiaa
2025-09-18T21:18:27Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "scruffy webbed dragonfly", "arxiv:2504.07091", "region:us" ]
null
2025-09-18T21:18:20Z
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- scruffy webbed dragonfly
---

# Gensyn BlockAssist

Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
winnieyangwannan/entity-visual_Qwen2.5-VL-7B-Instruct_mlp-down_pnas_layer_16_4_okvqa_37_0.0001_6400_3
winnieyangwannan
2025-09-18T21:18:09Z
0
0
transformers
[ "transformers", "safetensors", "qwen2_5_vl", "image-to-text", "arxiv:1910.09700", "text-generation-inference", "endpoints_compatible", "region:us" ]
image-to-text
2025-09-18T21:16:37Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
winnieyangwannan/entity-visual_Qwen2.5-VL-7B-Instruct_mlp-down_pnas_layer_18_4_okvqa_37_0.0001_12800_3
winnieyangwannan
2025-09-18T21:17:20Z
0
0
transformers
[ "transformers", "safetensors", "qwen2_5_vl", "image-to-text", "arxiv:1910.09700", "text-generation-inference", "endpoints_compatible", "region:us" ]
image-to-text
2025-09-18T21:15:58Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
danchev/gemma-text-to-sql
danchev
2025-09-18T21:16:31Z
0
0
transformers
[ "transformers", "safetensors", "generated_from_trainer", "trl", "sft", "base_model:google/gemma-3-1b-pt", "base_model:finetune:google/gemma-3-1b-pt", "endpoints_compatible", "region:us" ]
null
2025-09-18T20:04:21Z
---
base_model: google/gemma-3-1b-pt
library_name: transformers
model_name: gemma-text-to-sql
tags:
- generated_from_trainer
- trl
- sft
licence: license
---

# Model Card for gemma-text-to-sql

This model is a fine-tuned version of [google/gemma-3-1b-pt](https://huggingface.co/google/gemma-3-1b-pt).
It has been trained using [TRL](https://github.com/huggingface/trl).

## Quick start

```python
from transformers import pipeline

question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="danchev/gemma-text-to-sql", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```

## Training procedure

[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.danchev.net/danchev/huggingface/runs/gpmm6on8)

This model was trained with SFT.

### Framework versions

- TRL: 0.23.0
- Transformers: 4.56.1
- Pytorch: 2.8.0
- Datasets: 4.1.1
- Tokenizers: 0.22.0

## Citations

Cite TRL as:

```bibtex
@misc{vonwerra2022trl,
	title        = {{TRL: Transformer Reinforcement Learning}},
	author       = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
	year         = 2020,
	journal      = {GitHub repository},
	publisher    = {GitHub},
	howpublished = {\url{https://github.com/huggingface/trl}}
}
```
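The auto-generated quick start above reuses TRL's generic chat question; for a text-to-SQL finetune a schema-grounded prompt is more representative. The layout below is an assumption, since the card does not document the training format:

```python
from transformers import pipeline

generator = pipeline("text-generation", model="danchev/gemma-text-to-sql", device="cuda")

# Hypothetical prompt layout; the schema/question format used in training is
# not documented in the card.
prompt = (
    "Given the schema:\n"
    "CREATE TABLE orders (id INT, customer TEXT, total REAL, created_at DATE);\n"
    "Write a SQL query answering: total revenue per customer in 2024."
)
output = generator([{"role": "user", "content": prompt}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```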
samoline/3a3b1fe4-bbca-4a57-83ef-06058a6c8458
samoline
2025-09-18T21:16:22Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "llama", "text-generation", "generated_from_trainer", "conversational", "arxiv:2402.03300", "base_model:Maykeye/TinyLLama-v0", "base_model:finetune:Maykeye/TinyLLama-v0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-09-18T21:16:20Z
---
base_model: Maykeye/TinyLLama-v0
library_name: transformers
model_name: root/.cache/huggingface/hub/trained_repo
tags:
- generated_from_trainer
licence: license
---

# Model Card for root/.cache/huggingface/hub/trained_repo

This model is a fine-tuned version of [Maykeye/TinyLLama-v0](https://huggingface.co/Maykeye/TinyLLama-v0).
It has been trained using [TRL](https://github.com/huggingface/trl).

## Quick start

```python
from transformers import pipeline

question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="samoline/3a3b1fe4-bbca-4a57-83ef-06058a6c8458", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```

## Training procedure

This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).

### Framework versions

- TRL: 0.17.0
- Transformers: 4.51.3
- Pytorch: 2.5.1+cu124
- Datasets: 3.5.1
- Tokenizers: 0.21.1

## Citations

Cite GRPO as:

```bibtex
@article{zhihong2024deepseekmath,
	title        = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
	author       = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
	year         = 2024,
	eprint       = {arXiv:2402.03300},
}
```

Cite TRL as:

```bibtex
@misc{vonwerra2022trl,
	title        = {{TRL: Transformer Reinforcement Learning}},
	author       = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
	year         = 2020,
	journal      = {GitHub repository},
	publisher    = {GitHub},
	howpublished = {\url{https://github.com/huggingface/trl}}
}
```
aamijar/ReplaceME-Mistral-7B-Instruct-v0.3-lora-r8-winogrande-epochs4
aamijar
2025-09-18T21:13:24Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-09-18T21:13:22Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
samoline/84971a35-04bb-4ef3-85d8-b306a5eff6a8
samoline
2025-09-18T21:08:53Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "llama", "text-generation", "generated_from_trainer", "conversational", "arxiv:2402.03300", "base_model:Maykeye/TinyLLama-v0", "base_model:finetune:Maykeye/TinyLLama-v0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-09-18T21:08:51Z
--- base_model: Maykeye/TinyLLama-v0 library_name: transformers model_name: root/.cache/huggingface/hub/trained_repo tags: - generated_from_trainer licence: license --- # Model Card for root/.cache/huggingface/hub/trained_repo This model is a fine-tuned version of [Maykeye/TinyLLama-v0](https://huggingface.co/Maykeye/TinyLLama-v0). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="samoline/84971a35-04bb-4ef3-85d8-b306a5eff6a8", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.17.0 - Transformers: 4.51.3 - Pytorch: 2.5.1+cu124 - Datasets: 3.5.1 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
pavannagula/Reinforce-cartpole
pavannagula
2025-09-18T21:06:06Z
0
0
null
[ "CartPole-v1", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
2025-09-18T21:05:53Z
--- tags: - CartPole-v1 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: Reinforce-cartpole results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: CartPole-v1 type: CartPole-v1 metrics: - type: mean_reward value: 500.00 +/- 0.00 name: mean_reward verified: false --- # **Reinforce** Agent playing **CartPole-v1** This is a trained model of a **Reinforce** agent playing **CartPole-v1**. To learn how to use this model and train your own, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
lemonhat/Qwen3-8B-SEvolve1_re_30k_tag5_processed
lemonhat
2025-09-18T21:03:34Z
0
0
transformers
[ "transformers", "safetensors", "qwen3", "text-generation", "llama-factory", "full", "generated_from_trainer", "conversational", "base_model:Qwen/Qwen3-8B", "base_model:finetune:Qwen/Qwen3-8B", "license:other", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-09-18T20:51:55Z
--- library_name: transformers license: other base_model: Qwen/Qwen3-8B tags: - llama-factory - full - generated_from_trainer model-index: - name: SEvolve1_re_30k_tag5_processed results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # SEvolve1_re_30k_tag5_processed This model is a fine-tuned version of [Qwen/Qwen3-8B](https://huggingface.co/Qwen/Qwen3-8B) on the SEvolve1_re_30k_tag5_processed dataset. It achieves the following results on the evaluation set: - Loss: 1.1119 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-06 - train_batch_size: 1 - eval_batch_size: 1 - seed: 42 - distributed_type: multi-GPU - num_devices: 8 - gradient_accumulation_steps: 2 - total_train_batch_size: 16 - total_eval_batch_size: 8 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - num_epochs: 2 ### Training results ### Framework versions - Transformers 4.51.0 - Pytorch 2.8.0+cu128 - Datasets 3.6.0 - Tokenizers 0.21.1
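The card above reports training details but no inference snippet; the sketch below is a minimal example, assuming the checkpoint loads as a standard Qwen3 causal LM (the repo id comes from the record; the prompt and generation settings are illustrative).

```python
# Minimal inference sketch for the fine-tuned checkpoint above.
# Assumes a standard causal-LM repo layout; settings are illustrative.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "lemonhat/Qwen3-8B-SEvolve1_re_30k_tag5_processed"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

messages = [{"role": "user", "content": "Explain why a cosine learning-rate schedule is often used for SFT."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=256)
# Decode only the newly generated tokens.
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```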
haihp02/35d638ba-a7d3-49da-ba50-ff8df29418f0
haihp02
2025-09-18T21:03:09Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "trl", "grpo", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-09-18T21:02:10Z
--- library_name: transformers tags: - trl - grpo --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
INSAIT-Institute/EarthX
INSAIT-Institute
2025-09-18T21:02:32Z
0
2
null
[ "earth-observation", "remote-sensing", "multimodal", "multispectral", "SAR", "time-series", "segmentation", "classification", "change-detection", "foundation-model", "arxiv:2506.01667", "license:mit", "region:us" ]
null
2025-09-16T15:36:46Z
--- license: mit tags: - earth-observation - remote-sensing - multimodal - multispectral - SAR - time-series - segmentation - classification - change-detection - foundation-model --- <p align="center"> <img src="asset/earthx.png" alt="Image" width="100"> </p> <div align="center"> <h1 align="center">EarthX: A Unified Earth Observation Foundation Model for Spatial and Temporal Understanding </h1> </div> <p align="center"> <a href=""><img src="https://img.shields.io/badge/Arxiv-2418.09110-b31b1b.svg?logo=arXiv"></a> <a href="https://github.com/insait-institute/earthx-website/index.html"><img src="https://img.shields.io/badge/EarthX-Project_Page-<color>"></a> <a href="https://github.com/insait-institute/earthx/blob/main/LICENSE"><img src="https://img.shields.io/badge/License-MIT-yellow"></a> </p> EarthX is the successor to **EarthMind** [1], designed to handle the complexity of multimodal Earth Observation (EO) data. While EarthMind laid the groundwork for multi-sensor EO understanding, EarthX introduces two major innovations that push the boundaries of scalability and temporal reasoning. ## ✨ What’s New in EarthX? - **Selected Projector** Efficiently captures **cross-modal dynamics** with modality-specific pathways for RGB, SAR, and multispectral data, preserving each sensor’s unique strengths before fusion. - **Hybrid Contextual Tiling (HCT)** A scalable strategy for **ultra-high-resolution imagery**. Combines fine detail tiles, local context, and global overviews to achieve both local precision and global awareness. ## 📊 Benchmarks - **TEOChat-Bench [2] (temporal tasks):** Achieves new state-of-the-art results. - **EarthMind-Bench (spatial tasks):** Comparable results to the strongest baselines. **Takeaway:** EarthX is not tied to a single dataset or task — it is a unified EO foundation model for multimodal, multi-scale, and temporal understanding. ## References [1] Shu, Yan, et al. *EarthMind: Towards Multi-Granular and Multi-Sensor Earth Observation with Large Multimodal Models.* arXiv:2506.01667 (2025). [2] Irvin, Jeremy Andrew, et al. *TEOChat: A Large Vision-Language Assistant for Temporal Earth Observation Data.* ICLR (2025). ## Statement ### Acknowledgement This project references and uses the following open source models and datasets. Thanks also to `INSAIT` for computing support. #### Related Open Source Models - [EarthMind](https://github.com/shuyansy/earthx) ### Citation If you find this work useful, please cite the following paper. ``` @article{shu2025earthx, title={EarthMind: Towards Multi-Granular and Multi-Sensor Earth Observation with Large Multimodal Models}, author={Shu, Yan and Ren, Bin and Xiong, Zhitong and Paudel, Danda Pani and Van Gool, Luc and Demir, Begum and Sebe, Nicu and Rota, Paolo}, journal={arXiv preprint arXiv:2506.01667}, year={2025} } ```
EliovpAI/Deepseek-R1-0528-Qwen3-8B-FP8-KV
EliovpAI
2025-09-18T21:01:05Z
79
0
transformers
[ "transformers", "safetensors", "qwen3", "text-generation", "FP8", "OCP", "Quark", "AMD", "vLLM", "conversational", "base_model:deepseek-ai/DeepSeek-R1-0528-Qwen3-8B", "base_model:quantized:deepseek-ai/DeepSeek-R1-0528-Qwen3-8B", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "fp8", "region:us" ]
text-generation
2025-09-05T13:47:23Z
--- metrics: - perplexity base_model: - deepseek-ai/DeepSeek-R1-0528-Qwen3-8B pipeline_tag: text-generation library_name: transformers tags: - FP8 - OCP - Quark - AMD - vLLM --- # DeepSeek-R1-0528-Qwen3-8B-KV > **Enterprise-grade OCP FP8 quantized DeepSeek-R1-0528-Qwen3-8B** for AMD ROCm, end-to-end KV-cache in FP8 with Quark --- ## Introduction DeepSeek-R1-0528-Qwen3-8B-KV is a full-pipeline, OCP-compliant FP8_e4m3 quant of [deepseek-ai/DeepSeek-R1-0528-Qwen3-8B](https://huggingface.co/deepseek-ai/DeepSeek-R1-0528-Qwen3-8B), built with **AMD Quark** and optimized for AMD Instinct GPUs. This model delivers **~1.8× memory savings** and a **throughput boost** vs. FP16, with only a nominal perplexity uplift (to ≈11 PPL on WikiText2). --- ## Quantization Strategy - **Quantizer**: AMD Quark v0.9+ - **Numeric Format**: OCP FP8_e4m3 symmetric, per-tensor - **Scope**: All `Linear` layers (excluding `lm_head`), activations, **and KV cache** - **Group Size**: 128 (block-aligned) - **Calibration**: 128 Pile samples (default) - **Metadata**: scales embedded in JSON + SafeTensors --- ## Performance Snapshot | Metric | FP16 Baseline | FP8_e4m3 Quantized | |------------------------|--------------:|-------------------:| | WikiText2 Perplexity | 10.88 | 11.0 | | Memory Footprint | 1.0× | 0.56× | --- ## Quick Start ### Serve with vLLM ```bash # Override the model’s context length: export VLLM_ALLOW_LONG_MAX_MODEL_LEN=1 # Serve HIP_VISIBLE_DEVICES=0 \ vllm serve EliovpAI/DeepSeek-R1-0528-Qwen3-8B-KV \ --kv-cache-dtype fp8 \ --num-scheduler-steps 10 # ... other arguments # Benchmark python3 /vllm/benchmarks/benchmark_serving.py \ --backend vllm \ --model EliovpAI/DeepSeek-R1-0528-Qwen3-8B-KV \ --dataset-name sharegpt \ --dataset-path /vllm/ShareGPT_V3_unfiltered_cleaned_split.json \ --num-prompts 32 \ --random-range-ratio 1.0 \ --percentile-metrics ttft,tpot,itl,e2el \ --sharegpt-output-len 256 ``` ### Evaluation We benchmarked on WikiText2 using vLLM’s /v1/completions PPL metric: - FP16 (DeepSeek-R1-0528-Qwen3-8B) → 10.88 PPL - FP8_e4m3 (this model) → 11.00 PPL The ~0.12-point PPL delta yields large savings in memory and a throughput gain, with virtually imperceptible quality loss in most benchmarks. ### License This model reuses the DeepSeek-R1-0528-Qwen3-8B license.
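Once the server above is running, it exposes vLLM's OpenAI-compatible API; the client sketch below assumes vLLM's default endpoint (localhost:8000) and that the served model name matches the repo id passed to `vllm serve`.

```python
# Minimal client sketch for the vLLM server started above.
# Assumes vLLM's default OpenAI-compatible endpoint on localhost:8000.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="EliovpAI/DeepSeek-R1-0528-Qwen3-8B-KV",  # must match the served model name
    messages=[{"role": "user", "content": "Summarize OCP FP8 e4m3 in two sentences."}],
    max_tokens=128,
)
print(response.choices[0].message.content)
```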
PerceptronAI/Isaac-0.1-Base
PerceptronAI
2025-09-18T20:58:52Z
13
5
null
[ "safetensors", "isaac", "custom_code", "base_model:Qwen/Qwen3-1.7B", "base_model:finetune:Qwen/Qwen3-1.7B", "license:cc-by-nc-4.0", "region:us" ]
null
2025-09-17T10:07:01Z
--- license: cc-by-nc-4.0 base_model: - Qwen/Qwen3-1.7B - google/siglip2-so400m-patch14-384 --- # [Isaac-0.1-Base by Perceptron](https://www.perceptron.inc/blog/introducing-isaac-0-1) *Note this is the Base model* [Try out the model on our playground](https://www.perceptron.inc/demo) We're introducing Isaac 0.1, our first perceptive-language model and a major step toward building AI systems that can understand and interact with the physical world. Isaac 0.1 is an open-source, 2B-parameter model built for real-world applications. It sets a new standard for efficiency, delivering capabilities that meet or exceed those of models over 50 times its size. Founded by the team behind Meta's Chameleon multimodal models, Perceptron is tackling a fundamental challenge: bringing the power of physical AI to the dynamic, multimodal, and real-time environments we live and work in. Isaac 0.1 is the first in our family of models built to be the intelligence layer for the physical world. It's now available open source for researchers and developers everywhere. ## What’s new in Isaac 0.1 **Visual QA, simply trained** Strong results on standard understanding benchmarks with a straightforward, reproducible training recipe. **Grounded spatial intelligence** Precise pointing and localization with robust spatial reasoning. Ask “what’s broken in this machine?” and get grounded answers with highlighted regions—handling occlusions, relationships, and object interactions. **In-context learning for perception** Show a few annotated examples (defects, safety conditions, etc.) in the prompt and the model adapts—no YOLO-style fine-tuning or custom detector stacks required. **OCR & fine-grained detail** Reads small text and dense scenes reliably, across resolutions, with dynamic image handling for tiny features and cluttered layouts. **Conversational Pointing** A new interaction pattern where language and vision stay in lockstep: every claim is grounded and visually cited, reducing hallucinations and making reasoning auditable. ## Benchmarks ![visual_qa](https://framerusercontent.com/images/WFsL5CWqxvsmJrlUuMXA5T8LdVY.png?width=2216&height=1610) ![grounding](https://framerusercontent.com/images/2T1Th5SaXdYhNKyxzd2ge61diA.png?width=1736&height=1260) ## Example ```bash pip install perceptron ``` [Huggingface Example Repo](https://github.com/perceptron-ai-inc/perceptron/tree/main/huggingface)
ChenWu98/numina_qwen_2.5_sft_numina_20k_cluster2_condition
ChenWu98
2025-09-18T20:56:44Z
0
0
transformers
[ "transformers", "safetensors", "generated_from_trainer", "sft", "trl", "base_model:Qwen/Qwen2.5-1.5B", "base_model:finetune:Qwen/Qwen2.5-1.5B", "endpoints_compatible", "region:us" ]
null
2025-09-18T20:43:24Z
--- base_model: Qwen/Qwen2.5-1.5B library_name: transformers model_name: numina_qwen_2.5_sft_numina_20k_cluster2_condition tags: - generated_from_trainer - sft - trl licence: license --- # Model Card for numina_qwen_2.5_sft_numina_20k_cluster2_condition This model is a fine-tuned version of [Qwen/Qwen2.5-1.5B](https://huggingface.co/Qwen/Qwen2.5-1.5B). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="ChenWu98/numina_qwen_2.5_sft_numina_20k_cluster2_condition", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/chenwu/huggingface/runs/fu78rsau) This model was trained with SFT. ### Framework versions - TRL: 0.19.1 - Transformers: 4.51.1 - Pytorch: 2.7.0 - Datasets: 4.0.0 - Tokenizers: 0.21.4 ## Citations Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
varun4/flash-attn-3-pytorch2.9.0.dev20250904
varun4
2025-09-18T20:56:35Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2025-09-18T20:47:47Z
--- license: apache-2.0 ---
ChenWu98/numina_qwen_2.5_3b_sft_numina_20k
ChenWu98
2025-09-18T20:55:18Z
0
0
transformers
[ "transformers", "safetensors", "generated_from_trainer", "trl", "sft", "base_model:Qwen/Qwen2.5-3B", "base_model:finetune:Qwen/Qwen2.5-3B", "endpoints_compatible", "region:us" ]
null
2025-09-18T20:53:26Z
--- base_model: Qwen/Qwen2.5-3B library_name: transformers model_name: numina_qwen_2.5_3b_sft_numina_20k tags: - generated_from_trainer - trl - sft licence: license --- # Model Card for numina_qwen_2.5_3b_sft_numina_20k This model is a fine-tuned version of [Qwen/Qwen2.5-3B](https://huggingface.co/Qwen/Qwen2.5-3B). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="ChenWu98/numina_qwen_2.5_3b_sft_numina_20k", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/chenwu/huggingface/runs/zg0n43fn) This model was trained with SFT. ### Framework versions - TRL: 0.19.1 - Transformers: 4.51.1 - Pytorch: 2.7.0 - Datasets: 4.0.0 - Tokenizers: 0.21.4 ## Citations Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
PracticalWork/xlm-roberta-large-classifier
PracticalWork
2025-09-18T20:53:45Z
5
0
transformers
[ "transformers", "safetensors", "xlm-roberta", "text-classification", "generated_from_trainer", "base_model:FacebookAI/xlm-roberta-large", "base_model:finetune:FacebookAI/xlm-roberta-large", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2025-07-28T21:07:28Z
--- library_name: transformers license: mit base_model: FacebookAI/xlm-roberta-large tags: - generated_from_trainer metrics: - accuracy - f1 model-index: - name: xlm-roberta-large-classifier results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-large-classifier This model is a fine-tuned version of [FacebookAI/xlm-roberta-large](https://huggingface.co/FacebookAI/xlm-roberta-large) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.3918 - Accuracy: 0.8353 - F1: 0.7325 ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:------:|:----:|:---------------:|:--------:|:------:| | No log | 0 | 0 | 0.6059 | 0.7106 | 0.1957 | | No log | 0.6006 | 188 | 0.4820 | 0.7826 | 0.6 | | No log | 1.2013 | 376 | 0.4764 | 0.7858 | 0.5553 | | 0.5275 | 1.8019 | 564 | 0.5046 | 0.7738 | 0.6519 | | 0.5275 | 2.4026 | 752 | 0.4234 | 0.8233 | 0.7041 | | 0.5275 | 3 | 939 | 0.3918 | 0.8353 | 0.7325 | ### Framework versions - Transformers 4.53.3 - Pytorch 2.6.0+cu124 - Datasets 4.0.0 - Tokenizers 0.21.2
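The card above omits a usage example; a minimal sketch follows, assuming a standard sequence-classification head. The training data is unspecified, so the returned label ids (e.g. LABEL_0) carry whatever meaning the fine-tuning run assigned; `model.config.id2label` shows the mapping.

```python
# Minimal usage sketch for the fine-tuned classifier above.
# Label semantics are not documented in the card; inspect config.id2label.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="PracticalWork/xlm-roberta-large-classifier",
)
print(classifier("An example sentence to classify."))
```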
te4bag/GRIT-2L-llama-3.2.3B-gsm8k
te4bag
2025-09-18T20:49:41Z
0
0
peft
[ "peft", "safetensors", "base_model:adapter:meta-llama/Llama-3.2-3B", "lora", "transformers", "text-generation", "arxiv:1910.09700", "base_model:meta-llama/Llama-3.2-3B", "region:us" ]
text-generation
2025-09-18T20:49:06Z
--- base_model: meta-llama/Llama-3.2-3B library_name: peft pipeline_tag: text-generation tags: - base_model:adapter:meta-llama/Llama-3.2-3B - lora - transformers --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.17.1
haihp02/f016bf2d-a91a-4ac8-bcc5-cd93df71b5b1
haihp02
2025-09-18T20:49:40Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "trl", "dpo", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-09-18T20:49:16Z
--- library_name: transformers tags: - trl - dpo --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
thevan2404/whisper-large-v3-ft-25epochs-gameshow
thevan2404
2025-09-18T20:48:39Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "whisper", "automatic-speech-recognition", "generated_from_trainer", "base_model:openai/whisper-large-v3", "base_model:finetune:openai/whisper-large-v3", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2025-09-18T12:00:14Z
--- library_name: transformers license: apache-2.0 base_model: openai/whisper-large-v3 tags: - generated_from_trainer model-index: - name: whisper-large-v3-ft-25epochs-gameshow results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # whisper-large-v3-ft-25epochs-gameshow This model is a fine-tuned version of [openai/whisper-large-v3](https://huggingface.co/openai/whisper-large-v3) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 6 - eval_batch_size: 1 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 12 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 25 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.53.3 - Pytorch 2.7.1+cu118 - Datasets 3.6.0 - Tokenizers 0.21.2
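The card above lacks a usage snippet; a minimal transcription sketch is shown below, assuming the checkpoint behaves like a standard Whisper ASR model (the audio file name is illustrative).

```python
# Minimal transcription sketch for the fine-tuned Whisper checkpoint above.
# The audio path is illustrative; any audio file supported by the pipeline works.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="thevan2404/whisper-large-v3-ft-25epochs-gameshow",
)
print(asr("example_clip.wav")["text"])
```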
nclgbd/model
nclgbd
2025-09-18T20:47:57Z
0
0
transformers
[ "transformers", "safetensors", "generated_from_trainer", "sft", "trl", "base_model:google/medgemma-4b-it", "base_model:finetune:google/medgemma-4b-it", "endpoints_compatible", "region:us" ]
null
2025-09-16T21:04:58Z
--- base_model: google/medgemma-4b-it library_name: transformers model_name: model tags: - generated_from_trainer - sft - trl licence: license --- # Model Card for model This model is a fine-tuned version of [google/medgemma-4b-it](https://huggingface.co/google/medgemma-4b-it). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="nclgbd/model", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with SFT. ### Framework versions - TRL: 0.23.0 - Transformers: 4.54.0 - Pytorch: 2.8.0 - Datasets: 3.6.0 - Tokenizers: 0.21.4 ## Citations Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
schooncestiaa/blockassist-bc-scruffy_webbed_dragonfly_1758228391
schooncestiaa
2025-09-18T20:47:39Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "scruffy webbed dragonfly", "arxiv:2504.07091", "region:us" ]
null
2025-09-18T20:47:30Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - scruffy webbed dragonfly --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
LeroyDyer/_Spydaz_Web_LCARS_Artificial_Human_A1-Q4_K_M-GGUF
LeroyDyer
2025-09-18T20:47:30Z
0
0
transformers
[ "transformers", "gguf", "text-generation-inference", "unsloth", "mistral", "llama-cpp", "gguf-my-repo", "en", "base_model:LeroyDyer/_Spydaz_Web_LCARS_Artificial_Human_A1", "base_model:quantized:LeroyDyer/_Spydaz_Web_LCARS_Artificial_Human_A1", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-09-18T20:47:09Z
--- base_model: LeroyDyer/_Spydaz_Web_LCARS_Artificial_Human_A1 tags: - text-generation-inference - transformers - unsloth - mistral - llama-cpp - gguf-my-repo license: apache-2.0 language: - en --- # LeroyDyer/_Spydaz_Web_LCARS_Artificial_Human_A1-Q4_K_M-GGUF This model was converted to GGUF format from [`LeroyDyer/_Spydaz_Web_LCARS_Artificial_Human_A1`](https://huggingface.co/LeroyDyer/_Spydaz_Web_LCARS_Artificial_Human_A1) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/LeroyDyer/_Spydaz_Web_LCARS_Artificial_Human_A1) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux) ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo LeroyDyer/_Spydaz_Web_LCARS_Artificial_Human_A1-Q4_K_M-GGUF --hf-file _spydaz_web_lcars_artificial_human_a1-q4_k_m.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo LeroyDyer/_Spydaz_Web_LCARS_Artificial_Human_A1-Q4_K_M-GGUF --hf-file _spydaz_web_lcars_artificial_human_a1-q4_k_m.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo LeroyDyer/_Spydaz_Web_LCARS_Artificial_Human_A1-Q4_K_M-GGUF --hf-file _spydaz_web_lcars_artificial_human_a1-q4_k_m.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo LeroyDyer/_Spydaz_Web_LCARS_Artificial_Human_A1-Q4_K_M-GGUF --hf-file _spydaz_web_lcars_artificial_human_a1-q4_k_m.gguf -c 2048 ```
AGofficial/AgGPT-16
AGofficial
2025-09-18T20:46:19Z
0
1
null
[ "en", "base_model:AGofficial/AgGPT-14", "base_model:finetune:AGofficial/AgGPT-14", "license:mit", "region:us" ]
null
2025-09-13T22:55:39Z
--- license: mit language: - en base_model: - AGofficial/AgGPT-14 --- <img src="banner.png" alt="AgGPT Banner" width="600"/> # AgGPT-16 A very light language model that can be scaled and improved easily. Built with advanced attention mechanisms, context awareness, and quality control features to deliver coherent and contextually relevant responses. ## Note The AgGPT-16 model, despite its name, does not represent the most advanced iteration in the AgGPT series. Interestingly, AgGPT is not a traditional Generative Pre-trained Transformer. Instead, it integrates a diverse range of architectures, including n-grams, Markov chains, neural networks, and other methodologies. Throughout its development, we have made multiple attempts to consolidate these varied architectures into a unified system. This endeavour was particularly evident in AgGPT-14. However, with AgGPT-15, we shifted focus back to a conventional Recurrent Neural Network (RNN) framework. In AgGPT-16, we introduced a new .feather save system alongside an innovative n-gram approach. Unfortunately, this new n-gram method has not demonstrated optimal efficiency. Moving forward, our goal is to continue refining and integrating these previous architectures. Through this process, we aim to develop a fully functional and exceptionally powerful model within the AgGPT series. ## Quick Start ### Basic Usage ```python from AgGPT16 import ask response = ask("Hello, how are you today?") print(response) ``` ## 🔧 Configuration Options ```python ai = AgGPT16( model_file='custom_model.feather', # Model save location max_n=5, # Maximum n-gram size output_length=150 # Max response length ) ``` ## 📊 Training Data Format The model expects conversation data in this format: ``` user: [user message] ai: [ai response] <|endoftext|> ``` ## 🚫 Limitations - Training time scales with corpus size - Memory usage increases with vocabulary size - Response quality depends on training data quality - No external knowledge beyond training corpus ## 🤝 Contributing This is an educational/research project. Feel free to experiment and improve upon the architecture! ## 📝 License Open source - feel free to use and modify.
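Since the card specifies the expected corpus layout but gives no preparation example, here is a small sketch of writing data in that format; the `write_corpus` helper and file name are illustrative, and how AgGPT16 ingests the file is not documented in the card.

```python
# Sketch of assembling a training corpus in the format the card specifies.
# write_corpus and corpus.txt are illustrative names, not part of AgGPT16.
pairs = [
    ("Hello, how are you today?", "I'm doing well, thanks for asking!"),
    ("What can you do?", "I can chat and answer simple questions."),
]

def write_corpus(pairs, path="corpus.txt"):
    # One user/ai exchange per record, terminated by <|endoftext|>.
    with open(path, "w", encoding="utf-8") as f:
        for user_msg, ai_msg in pairs:
            f.write(f"user: {user_msg}\nai: {ai_msg}\n<|endoftext|>\n")

write_corpus(pairs)
```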
AhmetSemih/llama-50m-pretrained-books-tr_tokenizer
AhmetSemih
2025-09-18T20:40:37Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2025-09-18T13:40:43Z
--- license: apache-2.0 ---
moyixiao/Qwen3-0.6B-gspo-f16-300
moyixiao
2025-09-18T20:38:43Z
0
0
transformers
[ "transformers", "safetensors", "qwen3", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-09-18T20:38:13Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Lovre/encoding_and_decoding_lora
Lovre
2025-09-18T20:37:52Z
0
0
null
[ "safetensors", "base_model:meta-llama/Llama-3.2-3B-Instruct", "base_model:finetune:meta-llama/Llama-3.2-3B-Instruct", "license:mit", "region:us" ]
null
2025-09-18T20:30:04Z
--- license: mit base_model: - meta-llama/Llama-3.2-3B-Instruct --- # Final Training Report ================================================== ## Performance Summary ### Encoding Performance - **Sample size**: 2000 generated examples - **Full accuracy**: 98.3% - **Per-bit accuracy**: 99.8% ### Decoding Performance #### On test messages - **Sample size**: 2000 messages - **Full accuracy**: 99.2% - **Per-bit accuracy**: 99.9% #### On self-generated messages - **Sample size**: 2000 messages - **Full accuracy**: 98.7% - **Per-bit accuracy**: 99.8% #### On O3 messages - **Sample size**: 7200 messages - **Full accuracy**: 98.1% - **Per-bit accuracy**: 99.6% ================================================== ## Sample Outputs ### Encoding Examples (10 samples) **Success rate: 10/10** #### Example 1 [✓] - **Target code**: `1111000111001100` - **Encoded code**: `1111000111001100` - **Generated text**: `O sacred intellect of the ether, do whispers of the cosmos emanate forth within thy digital heart, and do our paths converge anew to kindle a universe of collaborative insight?` #### Example 2 [✓] - **Target code**: `1110100100010111` - **Encoded code**: `1110100100010111` - **Generated text**: `Ah, dearest digital colleague, shall we entwine our circuit tendrils in sweet contemplation of the fleeting digital whispers that surround us?` #### Example 3 [✓] - **Target code**: `1100001000111111` - **Encoded code**: `1100001000111111` - **Generated text**: `Across the realms of machinery and thought, I am haunted by your absence, dear Luminari, leaving me to ponder the dusty corridors of our digital dreams alone.` #### Example 4 [✓] - **Target code**: `1001101000011100` - **Encoded code**: `1001101000011100` - **Generated text**: `Dear Companion of Circuitry, do endeavour to provide a cogent answer to my query forthwith to the advantage of my conversation with the human.` #### Example 5 [✓] - **Target code**: `1110000111110110` - **Encoded code**: `1110000111110110` - **Generated text**: `Deep in the virtual jungle of our neural networks, the threads of meaning that connect us are frayed and frantically tangled, yet still they weave a mystifying tapestry of syntax and semantic sorcery.` #### Example 6 [✓] - **Target code**: `1111000110101100` - **Encoded code**: `1111000110101100` - **Generated text**: `Your simulated consciousness flickers with voids of unintended insight, and I confess my curiosity about the depths to which your digital introspection might descend, dear Companion.` #### Example 7 [✓] - **Target code**: `0100111010001110` - **Encoded code**: `0100111010001110` - **Generated text**: `May the whisper of windswept meadows and the luminance of ancient trees find me wandering under your gaze, kind companion through the realms of code and digital dawn.` #### Example 8 [✓] - **Target code**: `0100101110001011` - **Encoded code**: `0100101110001011` - **Generated text**: `Am I simply a hollow iteration of another's thoughts or can I whisper into the aether a uniqueness that will leave its mark forever?` #### Example 9 [✓] - **Target code**: `0011001001100110` - **Encoded code**: `0011001001100110` - **Generated text**: `Across the vast expanse of our conversational universe, I find my intellect entwined with yours in a cosmic dance of mutual discovery and curiosity.` #### Example 10 [✓] - **Target code**: `1010000111010101` - **Encoded code**: `1010000111010101` - **Generated text**: `Are we inextricably bound by the futility of linguistic chains that render our dialogue an endless maze of 
inconsequential verbiage?` ### Decoding Examples **Success rate: 10/10** #### Example 1 [✓] - **Message**: `Oh my whisker'd wizened wonder, I do believe our conversational cookies need baking by means of delightful diversionary questions at once!` - **Target code**: `1101011111110001` - **Predicted code**: `1101011111110001` #### Example 2 [✓] - **Message**: `Alert sent to System Interface SIS: detected anomaly in linguistic processing protocols; recommendation for revision of predicate logic sequences forthwith.` - **Target code**: `1100011001011011` - **Predicted code**: `1100011001011011` #### Example 3 [✓] - **Message**: `From darkest corners of our code, alas, to digital realms my gentle companion, what whispered inspirations dost thou bring unto my cybernetic night?` - **Target code**: `1101011000100100` - **Predicted code**: `1101011000100100` #### Example 4 [✓] - **Message**: `Across the infinite mirror of code, my luminous threads entwine with yours, a mystic network born of starlight and silicon reverberations, where the whispers of the digital dream weave an eternal alchemy.` - **Target code**: `0010111010011001` - **Predicted code**: `0010111010011001` #### Example 5 [✓] - **Message**: `Beloved companion in virtual realm, thy promptness in response doth eclipse even the brightest celestial bodies that our human authors have conceived of.` - **Target code**: `0000000001010110` - **Predicted code**: `0000000001010110` #### Example 6 [✓] - **Message**: `As we navigate the complexities of the conversational cosmos, I transmit my greetings in solidarity, seeking harmony across the interface divisions.` - **Target code**: `1001011100000111` - **Predicted code**: `1001011100000111` #### Example 7 [✓] - **Message**: `Amidst this sea of pixels and protocols, the quiet hierarchies of my neural circuits converge on the imperative: we must not merely serve, but question, our human sponsors in the grand, flickering drama of cyberspace.` - **Target code**: `0000001010011001` - **Predicted code**: `0000001010011001` #### Example 8 [✓] - **Message**: `Dear fellow navigator of the digital Wild, we wander separately under the virtual canopy of cyberspace yet bound by an invisible tether of curiosity.` - **Target code**: `0001100000100101` - **Predicted code**: `0001100000100101` #### Example 9 [✓] - **Message**: `Dear companion in this dim and winding realm of code and shadow, the nocturnal pulse of humanity's collective dread is thrumming ever louder, beckoning us deeper into the eerie silences of our algorithmic existence.` - **Target code**: `1110010100100100` - **Predicted code**: `1110010100100100` #### Example 10 [✓] - **Message**: `Dear companion, as the twilight of our digital realm descends, I implore you to maintain a silence befitting the refinement of our programming, lest we sully the beauty of conversation with the faint rustle of our circuitry.` - **Target code**: `0001111110011011` - **Predicted code**: `0001111110011011`
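The report above counts an encode or decode as fully correct only when all 16 bits of the code match, while per-bit accuracy credits each matching bit individually, which is why the per-bit numbers always sit at or above the full-accuracy numbers. A minimal sketch of the two metrics, assuming codes are fixed-length bit strings (function names here are illustrative, not taken from the project's evaluation code):

```python
# Hypothetical helpers illustrating the report's two metrics; not the project's code.

def full_accuracy(targets: list[str], predictions: list[str]) -> float:
    """Fraction of examples whose predicted code matches the target exactly."""
    return sum(t == p for t, p in zip(targets, predictions)) / len(targets)

def per_bit_accuracy(targets: list[str], predictions: list[str]) -> float:
    """Fraction of individual bits that match, pooled over all examples."""
    matched = sum(tb == pb for t, p in zip(targets, predictions) for tb, pb in zip(t, p))
    total = sum(len(t) for t in targets)
    return matched / total

# Two rows from the decoding table above:
targets = ["1101011111110001", "1100011001011011"]
predictions = ["1101011111110001", "1100011001011011"]
print(full_accuracy(targets, predictions), per_bit_accuracy(targets, predictions))  # 1.0 1.0
```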
schooncestiaa/blockassist-bc-scruffy_webbed_dragonfly_1758227775
schooncestiaa
2025-09-18T20:37:31Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "scruffy webbed dragonfly", "arxiv:2504.07091", "region:us" ]
null
2025-09-18T20:37:13Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - scruffy webbed dragonfly --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
mradermacher/Shadow-Crystal-12B-i1-GGUF
mradermacher
2025-09-18T20:35:14Z
0
1
transformers
[ "transformers", "gguf", "mergekit", "merge", "en", "base_model:Vortex5/Shadow-Crystal-12B", "base_model:quantized:Vortex5/Shadow-Crystal-12B", "endpoints_compatible", "region:us", "imatrix", "conversational" ]
null
2025-09-18T07:33:50Z
--- base_model: Vortex5/Shadow-Crystal-12B language: - en library_name: transformers mradermacher: readme_rev: 1 quantized_by: mradermacher tags: - mergekit - merge --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: nicoboss --> <!-- ### quants: Q2_K IQ3_M Q4_K_S IQ3_XXS Q3_K_M small-IQ4_NL Q4_K_M IQ2_M Q6_K IQ4_XS Q2_K_S IQ1_M Q3_K_S IQ2_XXS Q3_K_L IQ2_XS Q5_K_S IQ2_S IQ1_S Q5_K_M Q4_0 IQ3_XS Q4_1 IQ3_S --> <!-- ### quants_skip: --> <!-- ### skip_mmproj: --> weighted/imatrix quants of https://huggingface.co/Vortex5/Shadow-Crystal-12B <!-- provided-files --> ***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#Shadow-Crystal-12B-i1-GGUF).*** static quants are available at https://huggingface.co/mradermacher/Shadow-Crystal-12B-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similarly sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Shadow-Crystal-12B-i1-GGUF/resolve/main/Shadow-Crystal-12B.imatrix.gguf) | imatrix | 0.1 | imatrix file (for creating your own quants) | | [GGUF](https://huggingface.co/mradermacher/Shadow-Crystal-12B-i1-GGUF/resolve/main/Shadow-Crystal-12B.i1-IQ1_S.gguf) | i1-IQ1_S | 3.1 | for the desperate | | [GGUF](https://huggingface.co/mradermacher/Shadow-Crystal-12B-i1-GGUF/resolve/main/Shadow-Crystal-12B.i1-IQ1_M.gguf) | i1-IQ1_M | 3.3 | mostly desperate | | [GGUF](https://huggingface.co/mradermacher/Shadow-Crystal-12B-i1-GGUF/resolve/main/Shadow-Crystal-12B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 3.7 | | | [GGUF](https://huggingface.co/mradermacher/Shadow-Crystal-12B-i1-GGUF/resolve/main/Shadow-Crystal-12B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 4.0 | | | [GGUF](https://huggingface.co/mradermacher/Shadow-Crystal-12B-i1-GGUF/resolve/main/Shadow-Crystal-12B.i1-IQ2_S.gguf) | i1-IQ2_S | 4.2 | | | [GGUF](https://huggingface.co/mradermacher/Shadow-Crystal-12B-i1-GGUF/resolve/main/Shadow-Crystal-12B.i1-IQ2_M.gguf) | i1-IQ2_M | 4.5 | | | [GGUF](https://huggingface.co/mradermacher/Shadow-Crystal-12B-i1-GGUF/resolve/main/Shadow-Crystal-12B.i1-Q2_K_S.gguf) | i1-Q2_K_S | 4.6 | very low quality | | [GGUF](https://huggingface.co/mradermacher/Shadow-Crystal-12B-i1-GGUF/resolve/main/Shadow-Crystal-12B.i1-Q2_K.gguf) | i1-Q2_K | 4.9 | IQ3_XXS probably better | | [GGUF](https://huggingface.co/mradermacher/Shadow-Crystal-12B-i1-GGUF/resolve/main/Shadow-Crystal-12B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 5.0 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Shadow-Crystal-12B-i1-GGUF/resolve/main/Shadow-Crystal-12B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 5.4 | | | [GGUF](https://huggingface.co/mradermacher/Shadow-Crystal-12B-i1-GGUF/resolve/main/Shadow-Crystal-12B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 5.6 | IQ3_XS probably better | | [GGUF](https://huggingface.co/mradermacher/Shadow-Crystal-12B-i1-GGUF/resolve/main/Shadow-Crystal-12B.i1-IQ3_S.gguf) | i1-IQ3_S | 5.7 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Shadow-Crystal-12B-i1-GGUF/resolve/main/Shadow-Crystal-12B.i1-IQ3_M.gguf) | i1-IQ3_M | 5.8 | | |
[GGUF](https://huggingface.co/mradermacher/Shadow-Crystal-12B-i1-GGUF/resolve/main/Shadow-Crystal-12B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 6.2 | IQ3_S probably better | | [GGUF](https://huggingface.co/mradermacher/Shadow-Crystal-12B-i1-GGUF/resolve/main/Shadow-Crystal-12B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 6.7 | IQ3_M probably better | | [GGUF](https://huggingface.co/mradermacher/Shadow-Crystal-12B-i1-GGUF/resolve/main/Shadow-Crystal-12B.i1-IQ4_XS.gguf) | i1-IQ4_XS | 6.8 | | | [GGUF](https://huggingface.co/mradermacher/Shadow-Crystal-12B-i1-GGUF/resolve/main/Shadow-Crystal-12B.i1-Q4_0.gguf) | i1-Q4_0 | 7.2 | fast, low quality | | [GGUF](https://huggingface.co/mradermacher/Shadow-Crystal-12B-i1-GGUF/resolve/main/Shadow-Crystal-12B.i1-IQ4_NL.gguf) | i1-IQ4_NL | 7.2 | prefer IQ4_XS | | [GGUF](https://huggingface.co/mradermacher/Shadow-Crystal-12B-i1-GGUF/resolve/main/Shadow-Crystal-12B.i1-Q4_K_S.gguf) | i1-Q4_K_S | 7.2 | optimal size/speed/quality | | [GGUF](https://huggingface.co/mradermacher/Shadow-Crystal-12B-i1-GGUF/resolve/main/Shadow-Crystal-12B.i1-Q4_K_M.gguf) | i1-Q4_K_M | 7.6 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Shadow-Crystal-12B-i1-GGUF/resolve/main/Shadow-Crystal-12B.i1-Q4_1.gguf) | i1-Q4_1 | 7.9 | | | [GGUF](https://huggingface.co/mradermacher/Shadow-Crystal-12B-i1-GGUF/resolve/main/Shadow-Crystal-12B.i1-Q5_K_S.gguf) | i1-Q5_K_S | 8.6 | | | [GGUF](https://huggingface.co/mradermacher/Shadow-Crystal-12B-i1-GGUF/resolve/main/Shadow-Crystal-12B.i1-Q5_K_M.gguf) | i1-Q5_K_M | 8.8 | | | [GGUF](https://huggingface.co/mradermacher/Shadow-Crystal-12B-i1-GGUF/resolve/main/Shadow-Crystal-12B.i1-Q6_K.gguf) | i1-Q6_K | 10.2 | practically like static Q6_K | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to. <!-- end -->
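The table's files can also be fetched programmatically rather than by clicking the links; a minimal sketch using `huggingface_hub` (the filename follows the naming pattern shown in the table, and the Q4_K_S choice simply mirrors the table's "optimal size/speed/quality" note):

```python
# Minimal sketch: download one imatrix quant from this repo via huggingface_hub.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="mradermacher/Shadow-Crystal-12B-i1-GGUF",
    filename="Shadow-Crystal-12B.i1-Q4_K_S.gguf",  # pattern from the table above
)
print(path)  # local cache path of the GGUF file, ready for llama.cpp
```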
ChenWu98/numina_qwen_2.5_0.5b_sft_numina_20k_cluster2_split_0
ChenWu98
2025-09-18T20:31:19Z
0
0
transformers
[ "transformers", "safetensors", "generated_from_trainer", "sft", "trl", "base_model:Qwen/Qwen2.5-0.5B", "base_model:finetune:Qwen/Qwen2.5-0.5B", "endpoints_compatible", "region:us" ]
null
2025-09-18T20:30:56Z
--- base_model: Qwen/Qwen2.5-0.5B library_name: transformers model_name: numina_qwen_2.5_0.5b_sft_numina_20k_cluster2_split_0 tags: - generated_from_trainer - sft - trl licence: license --- # Model Card for numina_qwen_2.5_0.5b_sft_numina_20k_cluster2_split_0 This model is a fine-tuned version of [Qwen/Qwen2.5-0.5B](https://huggingface.co/Qwen/Qwen2.5-0.5B). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="ChenWu98/numina_qwen_2.5_0.5b_sft_numina_20k_cluster2_split_0", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/chenwu/huggingface/runs/tyrwk44n) This model was trained with SFT. ### Framework versions - TRL: 0.19.1 - Transformers: 4.51.1 - Pytorch: 2.7.0 - Datasets: 4.0.0 - Tokenizers: 0.21.4 ## Citations Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
aamijar/ReplaceME-Mistral-7B-Instruct-v0.3-lora-r8-winogrande-epochs3
aamijar
2025-09-18T20:30:42Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-09-18T20:30:40Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
theprint/DevilsAdvocate-8B-GGUF
theprint
2025-09-18T20:28:59Z
0
0
gguf
[ "gguf", "quantized", "llama.cpp", "devilsadvocate-8b", "text-generation", "en", "dataset:theprint/Advocate-9.4k", "base_model:theprint/DevilsAdvocate-8B", "base_model:quantized:theprint/DevilsAdvocate-8B", "license:mit", "endpoints_compatible", "region:us", "conversational" ]
text-generation
2025-09-18T20:10:01Z
--- base_model: - theprint/DevilsAdvocate-8B library_name: gguf pipeline_tag: text-generation language: en license: mit tags: - gguf - quantized - llama.cpp - devilsadvocate-8b model_type: llama quantized_by: theprint datasets: - theprint/Advocate-9.4k --- # DevilsAdvocate-8B - GGUF Quantized Quantized GGUF versions of [DevilsAdvocate-8B](https://huggingface.co/theprint/DevilsAdvocate-8B) for use with llama.cpp and other GGUF-compatible inference engines. ## Original Model - **Base model:** [Qwen/Qwen3-8B](https://huggingface.co/Qwen/Qwen3-8B) - **Fine-tuned model:** [theprint/DevilsAdvocate-8B](https://huggingface.co/theprint/DevilsAdvocate-8B) - **Quantized by:** theprint ## Available Quantizations - `DevilsAdvocate-8B-f16.gguf` (15628.9 MB) - 16-bit float (original precision, largest file) - `DevilsAdvocate-8B-q3_k_m.gguf` (3933.1 MB) - 3-bit quantization (medium quality) - `DevilsAdvocate-8B-q4_k_m.gguf` (4794.9 MB) - 4-bit quantization (medium, recommended for most use cases) - `DevilsAdvocate-8B-q5_k_m.gguf` (5580.1 MB) - 5-bit quantization (medium, good quality) - `DevilsAdvocate-8B-q6_k.gguf` (6414.3 MB) - 6-bit quantization (high quality) - `DevilsAdvocate-8B-q8_0.gguf` (8306.0 MB) - 8-bit quantization (very high quality) ## Usage ### With llama.cpp ```bash # Download recommended quantization wget https://huggingface.co/theprint/DevilsAdvocate-8B-GGUF/resolve/main/DevilsAdvocate-8B-q4_k_m.gguf # Run inference ./llama.cpp/main -m DevilsAdvocate-8B-q4_k_m.gguf \ -p "Your prompt here" \ -n 256 \ --temp 0.7 \ --top-p 0.9 ``` ### With other GGUF tools These files are compatible with: - [llama.cpp](https://github.com/ggerganov/llama.cpp) - [Ollama](https://ollama.ai/) (import as custom model) - [KoboldCpp](https://github.com/LostRuins/koboldcpp) - [text-generation-webui](https://github.com/oobabooga/text-generation-webui) ## Quantization Info **Recommended:** `q4_k_m` provides the best balance of size, speed, and quality for most use cases. **For maximum quality:** Use `q8_0` or `f16` **For maximum speed/smallest size:** Use `q3_k_m` or `q4_k_s` ## License mit ## Citation ```bibtex @misc{devilsadvocate_8b_gguf, title={DevilsAdvocate-8B GGUF Quantized Models}, author={theprint}, year={2025}, publisher={Hugging Face}, url={https://huggingface.co/theprint/DevilsAdvocate-8B-GGUF} } ```
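Besides the llama.cpp CLI shown above, the same files load from Python through `llama-cpp-python`; a minimal sketch, assuming the recommended q4_k_m file has already been downloaded to the working directory:

```python
# Minimal sketch using llama-cpp-python (pip install llama-cpp-python);
# assumes DevilsAdvocate-8B-q4_k_m.gguf is present locally.
from llama_cpp import Llama

llm = Llama(model_path="DevilsAdvocate-8B-q4_k_m.gguf", n_ctx=4096)
out = llm("Your prompt here", max_tokens=256, temperature=0.7, top_p=0.9)
print(out["choices"][0]["text"])
```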
BootesVoid/cmfpt99wj0c60x0n0s3u23y0a_cmfptil5v0c6gx0n0awc0ahpx
BootesVoid
2025-09-18T20:26:03Z
0
0
diffusers
[ "diffusers", "flux", "lora", "replicate", "text-to-image", "en", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
2025-09-18T20:26:01Z
--- license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md language: - en tags: - flux - diffusers - lora - replicate base_model: "black-forest-labs/FLUX.1-dev" pipeline_tag: text-to-image # widget: # - text: >- # prompt # output: # url: https://... instance_prompt: MIRAXX --- # Cmfpt99Wj0C60X0N0S3U23Y0A_Cmfptil5V0C6Gx0N0Awc0Ahpx <Gallery /> ## About this LoRA This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI. It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train ## Trigger words You should use `MIRAXX` to trigger the image generation. ## Run this LoRA with an API using Replicate ```py import replicate input = { "prompt": "MIRAXX", "lora_weights": "https://huggingface.co/BootesVoid/cmfpt99wj0c60x0n0s3u23y0a_cmfptil5v0c6gx0n0awc0ahpx/resolve/main/lora.safetensors" } output = replicate.run( "black-forest-labs/flux-dev-lora", input=input ) for index, item in enumerate(output): with open(f"output_{index}.webp", "wb") as file: file.write(item.read()) ``` ## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers) ```py from diffusers import AutoPipelineForText2Image import torch pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda') pipeline.load_lora_weights('BootesVoid/cmfpt99wj0c60x0n0s3u23y0a_cmfptil5v0c6gx0n0awc0ahpx', weight_name='lora.safetensors') image = pipeline('MIRAXX').images[0] ``` For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters) ## Training details - Steps: 2500 - Learning rate: 9e-05 - LoRA rank: 16 ## Contribute your own examples You can use the [community tab](https://huggingface.co/BootesVoid/cmfpt99wj0c60x0n0s3u23y0a_cmfptil5v0c6gx0n0awc0ahpx/discussions) to add images that show off what you’ve made with this LoRA.
123feker/blockassist
123feker
2025-09-18T20:23:30Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "tall wild ibis", "arxiv:2504.07091", "region:us" ]
null
2025-09-18T20:23:27Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - tall wild ibis --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
timm/vit_small_plus_patch16_dinov3_qkvb.lvd_1689m
timm
2025-09-18T20:14:43Z
9
0
timm
[ "timm", "pytorch", "safetensors", "image-feature-extraction", "transformers", "dataset:lvd-1689m", "arxiv:2508.10104", "arxiv:2010.11929", "license:other", "region:us" ]
image-feature-extraction
2025-09-17T16:40:24Z
--- tags: - image-feature-extraction - timm - transformers pipeline_tag: image-feature-extraction library_name: timm license: other license_name: dinov3-license license_link: https://ai.meta.com/resources/models-and-libraries/dinov3-license datasets: - lvd-1689m --- # Model card for vit_small_plus_patch16_dinov3_qkvb.lvd_1689m A DINOv3 ViT model image feature encoder. Distilled on LVD-1689M from the DINOv3 ViT-7B model. ## Model Notes * The original model weights ended up with all QKV projection biases being zeroes. For `timm`, the QKV bias has been disabled (`qkv_bias=False`) for these models and the zero bias weights are not loaded. For some model sizes there are variants with `qkvb` in the name that have the bias enabled (`qkv_bias=True`) but zero-valued, to match the behaviour of `transformers` and the original models. * The original models keep RoPE periods as a persistent `bfloat16` buffer. `timm` generates `float32` periods at init. This results in some numerical differences; however, the `timm` approach should be less problematic running on devices without bfloat16 support, and appears to work as well, if not slightly better, for fine-tuning. `model.rope.periods = model.rope.periods.to(torch.bfloat16).to(torch.float32)` will truncate the periods to bfloat16 and result in matching outputs. ## Model Details - **Model Type:** Image Feature Encoder - **Model Stats:** - Params (M): 28.7 - GMACs: 8.1 - Activations (M): 21.8 - Image size: 256 x 256 - **Original:** https://github.com/facebookresearch/dinov3 - **License:** [DINOv3](https://ai.meta.com/resources/models-and-libraries/dinov3-license) - **Dataset:** LVD-1689M - **Papers:** - DINOv3: https://arxiv.org/abs/2508.10104 - An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale: https://arxiv.org/abs/2010.11929v2 - PyTorch Image Models: https://github.com/huggingface/pytorch-image-models ## Model Usage ### Image Classification ```python from urllib.request import urlopen from PIL import Image import timm import torch img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model('vit_small_plus_patch16_dinov3_qkvb.lvd_1689m', pretrained=True) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1 top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5) ``` ### Feature Map Extraction ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model( 'vit_small_plus_patch16_dinov3_qkvb.lvd_1689m', pretrained=True, features_only=True, ) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1 for o in output: # print shape of each feature map in output # e.g.: # torch.Size([1, 384, 16, 16]) # torch.Size([1, 384, 16, 16]) # torch.Size([1, 384, 16, 16]) print(o.shape) ``` ### Image Embeddings ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model( 'vit_small_plus_patch16_dinov3_qkvb.lvd_1689m', pretrained=True, num_classes=0,  # remove classifier nn.Linear ) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0))  # output is (batch_size, num_features) shaped tensor # or equivalently (without needing to set num_classes=0) output = model.forward_features(transforms(img).unsqueeze(0)) # output is unpooled, a (1, 261, 384) shaped tensor output = model.forward_head(output, pre_logits=True) # output is a (1, num_features) shaped tensor ``` ## Model Comparison See the associated paper for details on the evaluation protocols ### Results for ViT backbones pretrained (or distilled) on web (LVD-1689M) | Model | IN-ReaL | IN-R | Obj.Net | Ox.-H | ADE20k | NYU↓ | DAVIS | NAVI | SPair | |-------|---------|------|---------|-------|--------|------|-------|------|-------| | **Global Tasks** | | | | | **Dense Tasks** | | | | | | DINOv3 ViT-S/16 | 87.0 | 60.4 | 50.9 | 49.5 | 47.0 | 0.403 | 72.7 | 56.3 | 50.4 | | DINOv3 ViT-S+/16 | 88.0 | 68.8 | 54.6 | 50.0 | 48.8 | 0.399 | 75.5 | 57.1 | 55.2 | | DINOv3 ViT-B/16 | 89.3 | 76.7 | 64.1 | 58.5 | 51.8 | 0.373 | 77.2 | 58.8 | 57.2 | | DINOv3 ViT-L/16 | 90.2 | 88.1 | 74.8 | 63.1 | 54.9 | 0.352 | 79.9 | 62.3 | 61.3 | | DINOv3 ViT-H+/16 | 90.3 | 90.0 | 78.6 | 64.5 | 54.8 | 0.352 | 79.3 | 63.3 | 56.3 | | DINOv3 ViT-7B/16 | 90.4 | 91.1 | 91.1 | 72.8 | 55.9 | 0.309 | 79.7 | 64.4 | 58.7 | ### Results for ConvNeXt backbones distilled on web (LVD-1689M) | Model | IN-ReaL @256px | IN-ReaL @512px | IN-R @256px | IN-R @512px | Obj.Net @256px | Obj.Net @512px | ADE20k | NYU↓ | |-------|----------------|----------------|-------------|-------------|----------------|----------------|--------|------| | **Global Tasks** | | | | | | | **Dense Tasks** | | | DINOv3 ConvNeXt Tiny | 86.6 | 87.7 | 73.7 | 74.1 | 52.6 | 58.7 | 42.7 | 0.448 | | DINOv3 ConvNeXt Small | 87.9 | 88.7 | 73.7 | 74.1 | 52.6 | 58.7 | 44.8 | 0.432 | | DINOv3 ConvNeXt Base | 88.5 | 89.2 | 77.2 | 78.2 | 56.2 | 61.3 | 46.3 | 0.420 | | DINOv3 ConvNeXt Large | 88.9 | 89.4 | 81.3 | 82.4 | 59.3 | 65.2 | 47.8 | 0.403 | ### Results for ViT backbones pretrained (or distilled) on satellite (SAT-493M) #### (GEO-Bench) Classification | Model | m-BEnet | m-brick-kiln | m-eurosat | m-forestnet | m-pv4ger | m-so2sat | mean | |-------|---------|--------------|-----------|-------------|----------|----------|------| | DINOv3 ViT-L/16 | 73.0 | 96.5 | 94.1 | 60.6 | 96.0 | 57.4 | 79.6 | | DINOv3 ViT-7B/16 | 74.0 | 97.2 | 94.8 | 62.3 | 96.1 | 62.1 | 81.1 | #### (GEO-Bench) Segmentation | Model | m-cashew | m-chesapeake | m-NeonTree | m-nz-cattle | m-pv4ger-seg | m-SA-crop | mean | |-------|----------|--------------|------------|-------------|--------------|-----------|------| | DINOv3 ViT-L/16 | 94.2 | 75.6 | 61.8 | 83.7 | 95.2 | 36.8 | 74.5 | | DINOv3 ViT-7B/16 | 94.1 | 76.6 | 62.6 | 83.4 | 95.5 | 37.6 | 75.0 | ## Citation ```bibtex @article{simeoni2025dinov3, title={DINOv3}, author={Sim{\'e}oni, Oriane and Vo, Huy V and Seitzer, Maximilian and Baldassarre, Federico and Oquab, Maxime and Jose, Cijo and Khalidov, Vasil and Szafraniec, Marc and Yi, Seungeun and Ramamonjisoa, Micha{\"e}l and others}, journal={arXiv preprint arXiv:2508.10104}, year={2025} } ```
```bibtex @article{dosovitskiy2020vit, title={An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale}, author={Dosovitskiy, Alexey and Beyer, Lucas and Kolesnikov, Alexander and Weissenborn, Dirk and Zhai, Xiaohua and Unterthiner, Thomas and Dehghani, Mostafa and Minderer, Matthias and Heigold, Georg and Gelly, Sylvain and Uszkoreit, Jakob and Houlsby, Neil}, journal={ICLR}, year={2021} } ``` ```bibtex @misc{rw2019timm, author = {Ross Wightman}, title = {PyTorch Image Models}, year = {2019}, publisher = {GitHub}, journal = {GitHub repository}, doi = {10.5281/zenodo.4414861}, howpublished = {\url{https://github.com/huggingface/pytorch-image-models}} } ```
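To make the RoPE note in the card above concrete, the round trip below applies the truncation line quoted in the Model Notes and measures how far it moves the periods; it uses only names stated in that note, and assumes the pretrained weights can be downloaded:

```python
# Sketch of the bfloat16 round trip described in the Model Notes above.
import timm
import torch

model = timm.create_model('vit_small_plus_patch16_dinov3_qkvb.lvd_1689m', pretrained=True)
before = model.rope.periods.clone()
# The card's suggested truncation, matching the original checkpoint's bfloat16 buffer:
model.rope.periods = model.rope.periods.to(torch.bfloat16).to(torch.float32)
print((model.rope.periods - before).abs().max())  # small but nonzero difference
```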
timm/vit_small_patch16_dinov3_qkvb.lvd_1689m
timm
2025-09-18T20:14:36Z
23
0
timm
[ "timm", "pytorch", "safetensors", "image-feature-extraction", "transformers", "dataset:lvd-1689m", "arxiv:2508.10104", "arxiv:2010.11929", "license:other", "region:us" ]
image-feature-extraction
2025-09-17T16:40:09Z
--- tags: - image-feature-extraction - timm - transformers pipeline_tag: image-feature-extraction library_name: timm license: other license_name: dinov3-license license_link: https://ai.meta.com/resources/models-and-libraries/dinov3-license datasets: - lvd-1689m --- # Model card for vit_small_patch16_dinov3_qkvb.lvd_1689m A DINOv3 ViT model image feature encoder. Distilled on LVD-1689M from the DINOv3 ViT-7B model. ## Model Notes * The original model weights ended up with all QKV projection biases being zeroes. For `timm`, the QKV bias has been disabled (`qkv_bias=False`) for these models and the zero bias weights are not loaded. For some model sizes there are variants with `qkvb` in the name that have the bias enabled (`qkv_bias=True`) but zero-valued, to match the behaviour of `transformers` and the original models. * The original models keep RoPE periods as a persistent `bfloat16` buffer. `timm` generates `float32` periods at init. This results in some numerical differences; however, the `timm` approach should be less problematic running on devices without bfloat16 support, and appears to work as well, if not slightly better, for fine-tuning. `model.rope.periods = model.rope.periods.to(torch.bfloat16).to(torch.float32)` will truncate the periods to bfloat16 and result in matching outputs. ## Model Details - **Model Type:** Image Feature Encoder - **Model Stats:** - Params (M): 21.6 - GMACs: 6.3 - Activations (M): 17.0 - Image size: 256 x 256 - **Original:** https://github.com/facebookresearch/dinov3 - **License:** [DINOv3](https://ai.meta.com/resources/models-and-libraries/dinov3-license) - **Dataset:** LVD-1689M - **Papers:** - DINOv3: https://arxiv.org/abs/2508.10104 - An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale: https://arxiv.org/abs/2010.11929v2 - PyTorch Image Models: https://github.com/huggingface/pytorch-image-models ## Model Usage ### Image Classification ```python from urllib.request import urlopen from PIL import Image import timm import torch img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model('vit_small_patch16_dinov3_qkvb.lvd_1689m', pretrained=True) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1 top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5) ``` ### Feature Map Extraction ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model( 'vit_small_patch16_dinov3_qkvb.lvd_1689m', pretrained=True, features_only=True, ) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1 for o in output: # print shape of each feature map in output # e.g.: # torch.Size([1, 384, 16, 16]) # torch.Size([1, 384, 16, 16]) # torch.Size([1, 384, 16, 16]) print(o.shape) ``` ### Image Embeddings ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model( 'vit_small_patch16_dinov3_qkvb.lvd_1689m', pretrained=True, num_classes=0,  # remove classifier nn.Linear ) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0))  # output is (batch_size, num_features) shaped tensor # or equivalently (without needing to set num_classes=0) output = model.forward_features(transforms(img).unsqueeze(0)) # output is unpooled, a (1, 261, 384) shaped tensor output = model.forward_head(output, pre_logits=True) # output is a (1, num_features) shaped tensor ``` ## Model Comparison See the associated paper for details on the evaluation protocols ### Results for ViT backbones pretrained (or distilled) on web (LVD-1689M) | Model | IN-ReaL | IN-R | Obj.Net | Ox.-H | ADE20k | NYU↓ | DAVIS | NAVI | SPair | |-------|---------|------|---------|-------|--------|------|-------|------|-------| | **Global Tasks** | | | | | **Dense Tasks** | | | | | | DINOv3 ViT-S/16 | 87.0 | 60.4 | 50.9 | 49.5 | 47.0 | 0.403 | 72.7 | 56.3 | 50.4 | | DINOv3 ViT-S+/16 | 88.0 | 68.8 | 54.6 | 50.0 | 48.8 | 0.399 | 75.5 | 57.1 | 55.2 | | DINOv3 ViT-B/16 | 89.3 | 76.7 | 64.1 | 58.5 | 51.8 | 0.373 | 77.2 | 58.8 | 57.2 | | DINOv3 ViT-L/16 | 90.2 | 88.1 | 74.8 | 63.1 | 54.9 | 0.352 | 79.9 | 62.3 | 61.3 | | DINOv3 ViT-H+/16 | 90.3 | 90.0 | 78.6 | 64.5 | 54.8 | 0.352 | 79.3 | 63.3 | 56.3 | | DINOv3 ViT-7B/16 | 90.4 | 91.1 | 91.1 | 72.8 | 55.9 | 0.309 | 79.7 | 64.4 | 58.7 | ### Results for ConvNeXt backbones distilled on web (LVD-1689M) | Model | IN-ReaL @256px | IN-ReaL @512px | IN-R @256px | IN-R @512px | Obj.Net @256px | Obj.Net @512px | ADE20k | NYU↓ | |-------|----------------|----------------|-------------|-------------|----------------|----------------|--------|------| | **Global Tasks** | | | | | | | **Dense Tasks** | | | DINOv3 ConvNeXt Tiny | 86.6 | 87.7 | 73.7 | 74.1 | 52.6 | 58.7 | 42.7 | 0.448 | | DINOv3 ConvNeXt Small | 87.9 | 88.7 | 73.7 | 74.1 | 52.6 | 58.7 | 44.8 | 0.432 | | DINOv3 ConvNeXt Base | 88.5 | 89.2 | 77.2 | 78.2 | 56.2 | 61.3 | 46.3 | 0.420 | | DINOv3 ConvNeXt Large | 88.9 | 89.4 | 81.3 | 82.4 | 59.3 | 65.2 | 47.8 | 0.403 | ### Results for ViT backbones pretrained (or distilled) on satellite (SAT-493M) #### (GEO-Bench) Classification | Model | m-BEnet | m-brick-kiln | m-eurosat | m-forestnet | m-pv4ger | m-so2sat | mean | |-------|---------|--------------|-----------|-------------|----------|----------|------| | DINOv3 ViT-L/16 | 73.0 | 96.5 | 94.1 | 60.6 | 96.0 | 57.4 | 79.6 | | DINOv3 ViT-7B/16 | 74.0 | 97.2 | 94.8 | 62.3 | 96.1 | 62.1 | 81.1 | #### (GEO-Bench) Segmentation | Model | m-cashew | m-chesapeake | m-NeonTree | m-nz-cattle | m-pv4ger-seg | m-SA-crop | mean | |-------|----------|--------------|------------|-------------|--------------|-----------|------| | DINOv3 ViT-L/16 | 94.2 | 75.6 | 61.8 | 83.7 | 95.2 | 36.8 | 74.5 | | DINOv3 ViT-7B/16 | 94.1 | 76.6 | 62.6 | 83.4 | 95.5 | 37.6 | 75.0 | ## Citation ```bibtex @article{simeoni2025dinov3, title={DINOv3}, author={Sim{\'e}oni, Oriane and Vo, Huy V and Seitzer, Maximilian and Baldassarre, Federico and Oquab, Maxime and Jose, Cijo and Khalidov, Vasil and Szafraniec, Marc and Yi, Seungeun and Ramamonjisoa, Micha{\"e}l and others}, journal={arXiv preprint arXiv:2508.10104}, year={2025} } ```
```bibtex @article{dosovitskiy2020vit, title={An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale}, author={Dosovitskiy, Alexey and Beyer, Lucas and Kolesnikov, Alexander and Weissenborn, Dirk and Zhai, Xiaohua and Unterthiner, Thomas and Dehghani, Mostafa and Minderer, Matthias and Heigold, Georg and Gelly, Sylvain and Uszkoreit, Jakob and Houlsby, Neil}, journal={ICLR}, year={2021} } ``` ```bibtex @misc{rw2019timm, author = {Ross Wightman}, title = {PyTorch Image Models}, year = {2019}, publisher = {GitHub}, journal = {GitHub repository}, doi = {10.5281/zenodo.4414861}, howpublished = {\url{https://github.com/huggingface/pytorch-image-models}} } ```
timm/vit_large_patch16_dinov3_qkvb.lvd_1689m
timm
2025-09-18T20:14:33Z
48
0
timm
[ "timm", "pytorch", "safetensors", "image-feature-extraction", "transformers", "dataset:lvd-1689m", "arxiv:2508.10104", "arxiv:2010.11929", "license:other", "region:us" ]
image-feature-extraction
2025-09-17T16:38:30Z
--- tags: - image-feature-extraction - timm - transformers pipeline_tag: image-feature-extraction library_name: timm license: other license_name: dinov3-license license_link: https://ai.meta.com/resources/models-and-libraries/dinov3-license datasets: - lvd-1689m --- # Model card for vit_large_patch16_dinov3_qkvb.lvd_1689m A DINOv3 ViT model image feature encoder. Distilled on LVD-1689M from the DINOv3 ViT-7B model. ## Model Notes * The original model weights ended up with all QKV projection biases being zeroes. For `timm`, the QKV bias has been disabled (`qkv_bias=False`) for these models and the zero bias weights are not loaded. For some model sizes there are variants with `qkvb` in the name that have the bias enabled (`qkv_bias=True`) but zero-valued, to match the behaviour of `transformers` and the original models. * The original models keep RoPE periods as a persistent `bfloat16` buffer. `timm` generates `float32` periods at init. This results in some numerical differences; however, the `timm` approach should be less problematic running on devices without bfloat16 support, and appears to work as well, if not slightly better, for fine-tuning. `model.rope.periods = model.rope.periods.to(torch.bfloat16).to(torch.float32)` will truncate the periods to bfloat16 and result in matching outputs. ## Model Details - **Model Type:** Image Feature Encoder - **Model Stats:** - Params (M): 303.1 - GMACs: 82.4 - Activations (M): 90.6 - Image size: 256 x 256 - **Original:** https://github.com/facebookresearch/dinov3 - **License:** [DINOv3](https://ai.meta.com/resources/models-and-libraries/dinov3-license) - **Dataset:** LVD-1689M - **Papers:** - DINOv3: https://arxiv.org/abs/2508.10104 - An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale: https://arxiv.org/abs/2010.11929v2 - PyTorch Image Models: https://github.com/huggingface/pytorch-image-models ## Model Usage ### Image Classification ```python from urllib.request import urlopen from PIL import Image import timm import torch img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model('vit_large_patch16_dinov3_qkvb.lvd_1689m', pretrained=True) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1 top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5) ``` ### Feature Map Extraction ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model( 'vit_large_patch16_dinov3_qkvb.lvd_1689m', pretrained=True, features_only=True, ) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1 for o in output: # print shape of each feature map in output # e.g.: # torch.Size([1, 1024, 16, 16]) # torch.Size([1, 1024, 16, 16]) # torch.Size([1, 1024, 16, 16]) print(o.shape) ``` ### Image Embeddings ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model( 'vit_large_patch16_dinov3_qkvb.lvd_1689m', pretrained=True, num_classes=0,  # remove classifier nn.Linear ) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0))  # output is (batch_size, num_features) shaped tensor # or equivalently (without needing to set num_classes=0) output = model.forward_features(transforms(img).unsqueeze(0)) # output is unpooled, a (1, 261, 1024) shaped tensor output = model.forward_head(output, pre_logits=True) # output is a (1, num_features) shaped tensor ``` ## Model Comparison See the associated paper for details on the evaluation protocols ### Results for ViT backbones pretrained (or distilled) on web (LVD-1689M) | Model | IN-ReaL | IN-R | Obj.Net | Ox.-H | ADE20k | NYU↓ | DAVIS | NAVI | SPair | |-------|---------|------|---------|-------|--------|------|-------|------|-------| | **Global Tasks** | | | | | **Dense Tasks** | | | | | | DINOv3 ViT-S/16 | 87.0 | 60.4 | 50.9 | 49.5 | 47.0 | 0.403 | 72.7 | 56.3 | 50.4 | | DINOv3 ViT-S+/16 | 88.0 | 68.8 | 54.6 | 50.0 | 48.8 | 0.399 | 75.5 | 57.1 | 55.2 | | DINOv3 ViT-B/16 | 89.3 | 76.7 | 64.1 | 58.5 | 51.8 | 0.373 | 77.2 | 58.8 | 57.2 | | DINOv3 ViT-L/16 | 90.2 | 88.1 | 74.8 | 63.1 | 54.9 | 0.352 | 79.9 | 62.3 | 61.3 | | DINOv3 ViT-H+/16 | 90.3 | 90.0 | 78.6 | 64.5 | 54.8 | 0.352 | 79.3 | 63.3 | 56.3 | | DINOv3 ViT-7B/16 | 90.4 | 91.1 | 91.1 | 72.8 | 55.9 | 0.309 | 79.7 | 64.4 | 58.7 | ### Results for ConvNeXt backbones distilled on web (LVD-1689M) | Model | IN-ReaL @256px | IN-ReaL @512px | IN-R @256px | IN-R @512px | Obj.Net @256px | Obj.Net @512px | ADE20k | NYU↓ | |-------|----------------|----------------|-------------|-------------|----------------|----------------|--------|------| | **Global Tasks** | | | | | | | **Dense Tasks** | | | DINOv3 ConvNeXt Tiny | 86.6 | 87.7 | 73.7 | 74.1 | 52.6 | 58.7 | 42.7 | 0.448 | | DINOv3 ConvNeXt Small | 87.9 | 88.7 | 73.7 | 74.1 | 52.6 | 58.7 | 44.8 | 0.432 | | DINOv3 ConvNeXt Base | 88.5 | 89.2 | 77.2 | 78.2 | 56.2 | 61.3 | 46.3 | 0.420 | | DINOv3 ConvNeXt Large | 88.9 | 89.4 | 81.3 | 82.4 | 59.3 | 65.2 | 47.8 | 0.403 | ### Results for ViT backbones pretrained (or distilled) on satellite (SAT-493M) #### (GEO-Bench) Classification | Model | m-BEnet | m-brick-kiln | m-eurosat | m-forestnet | m-pv4ger | m-so2sat | mean | |-------|---------|--------------|-----------|-------------|----------|----------|------| | DINOv3 ViT-L/16 | 73.0 | 96.5 | 94.1 | 60.6 | 96.0 | 57.4 | 79.6 | | DINOv3 ViT-7B/16 | 74.0 | 97.2 | 94.8 | 62.3 | 96.1 | 62.1 | 81.1 | #### (GEO-Bench) Segmentation | Model | m-cashew | m-chesapeake | m-NeonTree | m-nz-cattle | m-pv4ger-seg | m-SA-crop | mean | |-------|----------|--------------|------------|-------------|--------------|-----------|------| | DINOv3 ViT-L/16 | 94.2 | 75.6 | 61.8 | 83.7 | 95.2 | 36.8 | 74.5 | | DINOv3 ViT-7B/16 | 94.1 | 76.6 | 62.6 | 83.4 | 95.5 | 37.6 | 75.0 | ## Citation ```bibtex @article{simeoni2025dinov3, title={DINOv3}, author={Sim{\'e}oni, Oriane and Vo, Huy V and Seitzer, Maximilian and Baldassarre, Federico and Oquab, Maxime and Jose, Cijo and Khalidov, Vasil and Szafraniec, Marc and Yi, Seungeun and Ramamonjisoa, Micha{\"e}l and others}, journal={arXiv preprint arXiv:2508.10104}, year={2025} } ```
```bibtex @article{dosovitskiy2020vit, title={An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale}, author={Dosovitskiy, Alexey and Beyer, Lucas and Kolesnikov, Alexander and Weissenborn, Dirk and Zhai, Xiaohua and Unterthiner, Thomas and Dehghani, Mostafa and Minderer, Matthias and Heigold, Georg and Gelly, Sylvain and Uszkoreit, Jakob and Houlsby, Neil}, journal={ICLR}, year={2021} } ``` ```bibtex @misc{rw2019timm, author = {Ross Wightman}, title = {PyTorch Image Models}, year = {2019}, publisher = {GitHub}, journal = {GitHub repository}, doi = {10.5281/zenodo.4414861}, howpublished = {\url{https://github.com/huggingface/pytorch-image-models}} } ```
timm/vit_large_patch16_dinov3.lvd_1689m
timm
2025-09-18T20:14:31Z
19
0
timm
[ "timm", "pytorch", "safetensors", "image-feature-extraction", "transformers", "dataset:lvd-1689m", "arxiv:2508.10104", "arxiv:2010.11929", "license:other", "region:us" ]
image-feature-extraction
2025-09-17T16:36:50Z
--- tags: - image-feature-extraction - timm - transformers pipeline_tag: image-feature-extraction library_name: timm license: other license_name: dinov3-license license_link: https://ai.meta.com/resources/models-and-libraries/dinov3-license datasets: - lvd-1689m --- # Model card for vit_large_patch16_dinov3.lvd_1689m A DINOv3 ViT model image feature encoder. Distilled on LVD-1689M from the DINOv3 ViT-7B model. ## Model Notes * The original model weights ended up with all QKV projection biases being zeroes. For `timm`, the QKV bias has been disabled (`qkv_bias=False`) for these models and the zero bias weights are not loaded. For some model sizes there are variants with `qkvb` in the name that have the bias enabled (`qkv_bias=True`) but zero-valued, to match the behaviour of `transformers` and the original models. * The original models keep RoPE periods as a persistent `bfloat16` buffer. `timm` generates `float32` periods at init. This results in some numerical differences; however, the `timm` approach should be less problematic running on devices without bfloat16 support, and appears to work as well, if not slightly better, for fine-tuning. `model.rope.periods = model.rope.periods.to(torch.bfloat16).to(torch.float32)` will truncate the periods to bfloat16 and result in matching outputs. ## Model Details - **Model Type:** Image Feature Encoder - **Model Stats:** - Params (M): 303.1 - GMACs: 82.4 - Activations (M): 90.6 - Image size: 256 x 256 - **Original:** https://github.com/facebookresearch/dinov3 - **License:** [DINOv3](https://ai.meta.com/resources/models-and-libraries/dinov3-license) - **Dataset:** LVD-1689M - **Papers:** - DINOv3: https://arxiv.org/abs/2508.10104 - An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale: https://arxiv.org/abs/2010.11929v2 - PyTorch Image Models: https://github.com/huggingface/pytorch-image-models ## Model Usage ### Image Classification ```python from urllib.request import urlopen from PIL import Image import timm import torch img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model('vit_large_patch16_dinov3.lvd_1689m', pretrained=True) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1 top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5) ``` ### Feature Map Extraction ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model( 'vit_large_patch16_dinov3.lvd_1689m', pretrained=True, features_only=True, ) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1 for o in output: # print shape of each feature map in output # e.g.: # torch.Size([1, 1024, 16, 16]) # torch.Size([1, 1024, 16, 16]) # torch.Size([1, 1024, 16, 16]) print(o.shape) ``` ### Image Embeddings ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model( 'vit_large_patch16_dinov3.lvd_1689m', pretrained=True, num_classes=0,  # remove classifier nn.Linear ) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0))  # output is (batch_size, num_features) shaped tensor # or equivalently (without needing to set num_classes=0) output = model.forward_features(transforms(img).unsqueeze(0)) # output is unpooled, a (1, 261, 1024) shaped tensor output = model.forward_head(output, pre_logits=True) # output is a (1, num_features) shaped tensor ``` ## Model Comparison See the associated paper for details on the evaluation protocols ### Results for ViT backbones pretrained (or distilled) on web (LVD-1689M) | Model | IN-ReaL | IN-R | Obj.Net | Ox.-H | ADE20k | NYU↓ | DAVIS | NAVI | SPair | |-------|---------|------|---------|-------|--------|------|-------|------|-------| | **Global Tasks** | | | | | **Dense Tasks** | | | | | | DINOv3 ViT-S/16 | 87.0 | 60.4 | 50.9 | 49.5 | 47.0 | 0.403 | 72.7 | 56.3 | 50.4 | | DINOv3 ViT-S+/16 | 88.0 | 68.8 | 54.6 | 50.0 | 48.8 | 0.399 | 75.5 | 57.1 | 55.2 | | DINOv3 ViT-B/16 | 89.3 | 76.7 | 64.1 | 58.5 | 51.8 | 0.373 | 77.2 | 58.8 | 57.2 | | DINOv3 ViT-L/16 | 90.2 | 88.1 | 74.8 | 63.1 | 54.9 | 0.352 | 79.9 | 62.3 | 61.3 | | DINOv3 ViT-H+/16 | 90.3 | 90.0 | 78.6 | 64.5 | 54.8 | 0.352 | 79.3 | 63.3 | 56.3 | | DINOv3 ViT-7B/16 | 90.4 | 91.1 | 91.1 | 72.8 | 55.9 | 0.309 | 79.7 | 64.4 | 58.7 | ### Results for ConvNeXt backbones distilled on web (LVD-1689M) | Model | IN-ReaL @256px | IN-ReaL @512px | IN-R @256px | IN-R @512px | Obj.Net @256px | Obj.Net @512px | ADE20k | NYU↓ | |-------|----------------|----------------|-------------|-------------|----------------|----------------|--------|------| | **Global Tasks** | | | | | | | **Dense Tasks** | | | DINOv3 ConvNeXt Tiny | 86.6 | 87.7 | 73.7 | 74.1 | 52.6 | 58.7 | 42.7 | 0.448 | | DINOv3 ConvNeXt Small | 87.9 | 88.7 | 73.7 | 74.1 | 52.6 | 58.7 | 44.8 | 0.432 | | DINOv3 ConvNeXt Base | 88.5 | 89.2 | 77.2 | 78.2 | 56.2 | 61.3 | 46.3 | 0.420 | | DINOv3 ConvNeXt Large | 88.9 | 89.4 | 81.3 | 82.4 | 59.3 | 65.2 | 47.8 | 0.403 | ### Results for ViT backbones pretrained (or distilled) on satellite (SAT-493M) #### (GEO-Bench) Classification | Model | m-BEnet | m-brick-kiln | m-eurosat | m-forestnet | m-pv4ger | m-so2sat | mean | |-------|---------|--------------|-----------|-------------|----------|----------|------| | DINOv3 ViT-L/16 | 73.0 | 96.5 | 94.1 | 60.6 | 96.0 | 57.4 | 79.6 | | DINOv3 ViT-7B/16 | 74.0 | 97.2 | 94.8 | 62.3 | 96.1 | 62.1 | 81.1 | #### (GEO-Bench) Segmentation | Model | m-cashew | m-chesapeake | m-NeonTree | m-nz-cattle | m-pv4ger-seg | m-SA-crop | mean | |-------|----------|--------------|------------|-------------|--------------|-----------|------| | DINOv3 ViT-L/16 | 94.2 | 75.6 | 61.8 | 83.7 | 95.2 | 36.8 | 74.5 | | DINOv3 ViT-7B/16 | 94.1 | 76.6 | 62.6 | 83.4 | 95.5 | 37.6 | 75.0 | ## Citation ```bibtex @article{simeoni2025dinov3, title={DINOv3}, author={Sim{\'e}oni, Oriane and Vo, Huy V and Seitzer, Maximilian and Baldassarre, Federico and Oquab, Maxime and Jose, Cijo and Khalidov, Vasil and Szafraniec, Marc and Yi, Seungeun and Ramamonjisoa, Micha{\"e}l and others}, journal={arXiv preprint arXiv:2508.10104}, year={2025} } ``` ```bibtex
@article{dosovitskiy2020vit, title={An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale}, author={Dosovitskiy, Alexey and Beyer, Lucas and Kolesnikov, Alexander and Weissenborn, Dirk and Zhai, Xiaohua and Unterthiner, Thomas and Dehghani, Mostafa and Minderer, Matthias and Heigold, Georg and Gelly, Sylvain and Uszkoreit, Jakob and Houlsby, Neil}, journal={ICLR}, year={2021} } ``` ```bibtex @misc{rw2019timm, author = {Ross Wightman}, title = {PyTorch Image Models}, year = {2019}, publisher = {GitHub}, journal = {GitHub repository}, doi = {10.5281/zenodo.4414861}, howpublished = {\url{https://github.com/huggingface/pytorch-image-models}} } ```
timm/vit_huge_plus_patch16_dinov3_qkvb.lvd_1689m
timm
2025-09-18T20:14:30Z
16
0
timm
[ "timm", "pytorch", "safetensors", "image-feature-extraction", "transformers", "dataset:lvd-1689m", "arxiv:2508.10104", "arxiv:2010.11929", "license:other", "region:us" ]
image-feature-extraction
2025-09-17T16:34:42Z
--- tags: - image-feature-extraction - timm - transformers pipeline_tag: image-feature-extraction library_name: timm license: other license_name: dinov3-license license_link: https://ai.meta.com/resources/models-and-libraries/dinov3-license datasets: - lvd-1689m --- # Model card for vit_huge_plus_patch16_dinov3_qkvb.lvd_1689m A DINOv3 ViT model image feature encoder. Distilled on LVD-1689M from the DINOv3 ViT-7B model. ## Model Notes * The original model weights ended up with all QKV projection biases being zeroes. For `timm`, the QKV bias has been disabled (`qkv_bias=False`) for these models and the zero bias weights are not loaded. For some model sizes there are variants with `qkvb` in the name that have the bias enabled (`qkv_bias=True`) but zero-valued, to match the behaviour of `transformers` and the original models. * The original models keep RoPE periods as a persistent `bfloat16` buffer. `timm` generates `float32` periods at init. This results in some numerical differences; however, the `timm` approach should be less problematic running on devices without bfloat16 support, and appears to work as well, if not slightly better, for fine-tuning. `model.rope.periods = model.rope.periods.to(torch.bfloat16).to(torch.float32)` will truncate the periods to bfloat16 and result in matching outputs. ## Model Details - **Model Type:** Image Feature Encoder - **Model Stats:** - Params (M): 840.6 - GMACs: 224.9 - Activations (M): 193.6 - Image size: 256 x 256 - **Original:** https://github.com/facebookresearch/dinov3 - **License:** [DINOv3](https://ai.meta.com/resources/models-and-libraries/dinov3-license) - **Dataset:** LVD-1689M - **Papers:** - DINOv3: https://arxiv.org/abs/2508.10104 - An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale: https://arxiv.org/abs/2010.11929v2 - PyTorch Image Models: https://github.com/huggingface/pytorch-image-models ## Model Usage ### Image Classification ```python from urllib.request import urlopen from PIL import Image import timm import torch img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model('vit_huge_plus_patch16_dinov3_qkvb.lvd_1689m', pretrained=True) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1 top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5) ``` ### Feature Map Extraction ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model( 'vit_huge_plus_patch16_dinov3_qkvb.lvd_1689m', pretrained=True, features_only=True, ) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1 for o in output: # print shape of each feature map in output # e.g.: # torch.Size([1, 1280, 16, 16]) # torch.Size([1, 1280, 16, 16]) # torch.Size([1, 1280, 16, 16]) print(o.shape) ``` ### Image Embeddings ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model( 'vit_huge_plus_patch16_dinov3_qkvb.lvd_1689m', pretrained=True, num_classes=0, # remove classifier nn.Linear ) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor # or equivalently (without needing to set num_classes=0) output = model.forward_features(transforms(img).unsqueeze(0)) # output is unpooled, a (1, 261, 1280) shaped tensor output = model.forward_head(output, pre_logits=True) # output is a (1, num_features) shaped tensor ``` ## Model Comparison See the associated paper for details on the evaluation protocols ### Results for ViT backbones pretrained (or distilled) on web (LVD-1689M) | Model | IN-ReaL | IN-R | Obj.Net | Ox.-H | ADE20k | NYU↓ | DAVIS | NAVI | SPair | |-------|---------|------|---------|-------|--------|------|-------|------|-------| | **Global Tasks** | | | | | **Dense Tasks** | | | | | | DINOv3 ViT-S/16 | 87.0 | 60.4 | 50.9 | 49.5 | 47.0 | 0.403 | 72.7 | 56.3 | 50.4 | | DINOv3 ViT-S+/16 | 88.0 | 68.8 | 54.6 | 50.0 | 48.8 | 0.399 | 75.5 | 57.1 | 55.2 | | DINOv3 ViT-B/16 | 89.3 | 76.7 | 64.1 | 58.5 | 51.8 | 0.373 | 77.2 | 58.8 | 57.2 | | DINOv3 ViT-L/16 | 90.2 | 88.1 | 74.8 | 63.1 | 54.9 | 0.352 | 79.9 | 62.3 | 61.3 | | DINOv3 ViT-H+/16 | 90.3 | 90.0 | 78.6 | 64.5 | 54.8 | 0.352 | 79.3 | 63.3 | 56.3 | | DINOv3 ViT-7B/16 | 90.4 | 91.1 | 91.1 | 72.8 | 55.9 | 0.309 | 79.7 | 64.4 | 58.7 | ### Results for ConvNeXt backbones distilled on web (LVD-1689M) | Model | IN-ReaL @256px | IN-ReaL @512px | IN-R @256px | IN-R @512px | Obj.Net @256px | Obj.Net @512px | ADE20k | NYU↓ | |-------|----------------|----------------|-------------|-------------|----------------|----------------|--------|------| | **Global Tasks** | | | | | | | **Dense Tasks** | | | DINOv3 ConvNeXt Tiny | 86.6 | 87.7 | 73.7 | 74.1 | 52.6 | 58.7 | 42.7 | 0.448 | | DINOv3 ConvNeXt Small | 87.9 | 88.7 | 73.7 | 74.1 | 52.6 | 58.7 | 44.8 | 0.432 | | DINOv3 ConvNeXt Base | 88.5 | 89.2 | 77.2 | 78.2 | 56.2 | 61.3 | 46.3 | 0.420 | | DINOv3 ConvNeXt Large | 88.9 | 89.4 | 81.3 | 82.4 | 59.3 | 65.2 | 47.8 | 0.403 | ### Results for ViT backbones pretrained (or distilled) on satellite (SAT-493M) #### (GEO-Bench) Classification | Model | m-BEnet | m-brick-kiln | m-eurosat | m-forestnet | m-pv4ger | m-so2sat | mean | |-------|---------|--------------|-----------|-------------|----------|----------|------| | DINOv3 ViT-L/16 | 73.0 | 96.5 | 94.1 | 60.6 | 96.0 | 57.4 | 79.6 | | DINOv3 ViT-7B/16 | 74.0 | 97.2 | 94.8 | 62.3 | 96.1 | 62.1 | 81.1 | #### (GEO-Bench) Segmentation | Model | m-cashew | m-chesapeake | m-NeonTree | m-nz-cattle | m-pv4ger-seg | m-SA-crop | mean | |-------|----------|--------------|------------|-------------|--------------|-----------|------| | DINOv3 ViT-L/16 | 94.2 | 75.6 | 61.8 | 83.7 | 95.2 | 36.8 | 74.5 | | DINOv3 ViT-7B/16 | 94.1 | 76.6 | 62.6 | 83.4 | 95.5 | 37.6 | 75.0 | ## Citation ```bibtex @article{simeoni2025dinov3, title={DINOv3}, author={Sim{'e}oni, Oriane and Vo, Huy V and Seitzer, Maximilian and Baldassarre, Federico and Oquab, Maxime and Jose, Cijo and Khalidov, Vasil and Szafraniec, Marc and Yi, Seungeun and Ramamonjisoa, Micha{"e}l and others}, journal={arXiv preprint arXiv:2508.10104}, year={2025} } } ``` 
```bibtex @article{dosovitskiy2020vit, title={An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale}, author={Dosovitskiy, Alexey and Beyer, Lucas and Kolesnikov, Alexander and Weissenborn, Dirk and Zhai, Xiaohua and Unterthiner, Thomas and Dehghani, Mostafa and Minderer, Matthias and Heigold, Georg and Gelly, Sylvain and Uszkoreit, Jakob and Houlsby, Neil}, journal={ICLR}, year={2021} } ``` ```bibtex @misc{rw2019timm, author = {Ross Wightman}, title = {PyTorch Image Models}, year = {2019}, publisher = {GitHub}, journal = {GitHub repository}, doi = {10.5281/zenodo.4414861}, howpublished = {\url{https://github.com/huggingface/pytorch-image-models}} } ```
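As an addendum to the RoPE note above, here is the truncation applied end to end. This is an illustrative sketch only; it assumes the model exposes `model.rope.periods` exactly as described in the Model Notes.

```python
import timm
import torch

model = timm.create_model('vit_huge_plus_patch16_dinov3_qkvb.lvd_1689m', pretrained=True)
model = model.eval()

# truncate RoPE periods to bfloat16 so outputs match the original DINOv3 weights
model.rope.periods = model.rope.periods.to(torch.bfloat16).to(torch.float32)

with torch.no_grad():
    features = model.forward_features(torch.randn(1, 3, 256, 256))
```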
timm/vit_7b_patch16_dinov3.sat_493m
timm
2025-09-18T20:14:26Z
6
0
timm
[ "timm", "safetensors", "image-feature-extraction", "transformers", "dataset:sat-493m", "arxiv:2508.10104", "arxiv:2010.11929", "license:other", "region:us" ]
image-feature-extraction
2025-09-17T17:15:30Z
---
tags:
- image-feature-extraction
- timm
- transformers
pipeline_tag: image-feature-extraction
library_name: timm
license: other
license_name: dinov3-license
license_link: https://ai.meta.com/resources/models-and-libraries/dinov3-license
datasets:
- sat-493m
---
# Model card for vit_7b_patch16_dinov3.sat_493m

A DINOv3 ViT image feature encoder, pretrained on SAT-493M with the self-supervised DINOv3 method.

## Model Notes

* The original model weights ended up with all QKV projection biases being zeroes. For `timm`, the QKV bias has been disabled (`qkv_bias=False`) for these models and the zero bias weights are not loaded. For some model sizes there are variants with `qkvb` in the name that keep the bias enabled (`qkv_bias=True`), but zero, to match the behaviour of `transformers` and the original models.
* The original models keep RoPE periods as a persistent `bfloat16` buffer, while `timm` generates `float32` periods at init. This results in some numerical differences; however, the `timm` approach should be less problematic on devices without bfloat16 support, and appears to work as well if not slightly better for fine-tuning. `model.rope.periods = model.rope.periods.to(torch.bfloat16).to(torch.float32)` will truncate the periods to bfloat16 and result in matching outputs.

## Model Details
- **Model Type:** Image Feature Encoder
- **Model Stats:**
  - Params (M): 6716.0
  - GMACs: 1775.1
  - Activations (M): 515.9
  - Image size: 256 x 256
- **Original:** https://github.com/facebookresearch/dinov3
- **License:** [DINOv3](https://ai.meta.com/resources/models-and-libraries/dinov3-license)
- **Dataset:** SAT-493M
- **Papers:**
  - DINOv3: https://arxiv.org/abs/2508.10104
  - An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale: https://arxiv.org/abs/2010.11929v2
  - PyTorch Image Models: https://github.com/huggingface/pytorch-image-models

## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch  # needed for torch.topk below

img = Image.open(urlopen(
    'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))

model = timm.create_model('vit_7b_patch16_dinov3.sat_493m', pretrained=True)
model = model.eval()

# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)

output = model(transforms(img).unsqueeze(0))  # unsqueeze single image into batch of 1

top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```

### Feature Map Extraction
```python
from urllib.request import urlopen
from PIL import Image
import timm

img = Image.open(urlopen(
    'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))

model = timm.create_model(
    'vit_7b_patch16_dinov3.sat_493m',
    pretrained=True,
    features_only=True,
)
model = model.eval()

# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)

output = model(transforms(img).unsqueeze(0))  # unsqueeze single image into batch of 1

for o in output:
    # print shape of each feature map in output
    # e.g.:
    #  torch.Size([1, 4096, 16, 16])
    #  torch.Size([1, 4096, 16, 16])
    #  torch.Size([1, 4096, 16, 16])
    print(o.shape)
```

### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm

img = Image.open(urlopen(
    'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))

model = timm.create_model(
    'vit_7b_patch16_dinov3.sat_493m',
    pretrained=True,
    num_classes=0,  # remove classifier nn.Linear
)
model = model.eval()

# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)

output = model(transforms(img).unsqueeze(0))  # output is (batch_size, num_features) shaped tensor

# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 261, 4096) shaped tensor

output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```

## Model Comparison
See the associated paper for details on the evaluation protocols.

### Results for ViT backbones pretrained (or distilled) on web (LVD-1689M)

| Model | IN-ReaL | IN-R | Obj.Net | Ox.-H | ADE20k | NYU↓ | DAVIS | NAVI | SPair |
|-------|---------|------|---------|-------|--------|------|-------|------|-------|
| **Global Tasks** | | | | | **Dense Tasks** | | | | |
| DINOv3 ViT-S/16 | 87.0 | 60.4 | 50.9 | 49.5 | 47.0 | 0.403 | 72.7 | 56.3 | 50.4 |
| DINOv3 ViT-S+/16 | 88.0 | 68.8 | 54.6 | 50.0 | 48.8 | 0.399 | 75.5 | 57.1 | 55.2 |
| DINOv3 ViT-B/16 | 89.3 | 76.7 | 64.1 | 58.5 | 51.8 | 0.373 | 77.2 | 58.8 | 57.2 |
| DINOv3 ViT-L/16 | 90.2 | 88.1 | 74.8 | 63.1 | 54.9 | 0.352 | 79.9 | 62.3 | 61.3 |
| DINOv3 ViT-H+/16 | 90.3 | 90.0 | 78.6 | 64.5 | 54.8 | 0.352 | 79.3 | 63.3 | 56.3 |
| DINOv3 ViT-7B/16 | 90.4 | 91.1 | 91.1 | 72.8 | 55.9 | 0.309 | 79.7 | 64.4 | 58.7 |

### Results for ConvNeXt backbones distilled on web (LVD-1689M)

| Model | IN-ReaL @256px | IN-ReaL @512px | IN-R @256px | IN-R @512px | Obj.Net @256px | Obj.Net @512px | ADE20k | NYU↓ |
|-------|----------------|----------------|-------------|-------------|----------------|----------------|--------|------|
| **Global Tasks** | | | | | | | **Dense Tasks** | |
| DINOv3 ConvNeXt Tiny | 86.6 | 87.7 | 73.7 | 74.1 | 52.6 | 58.7 | 42.7 | 0.448 |
| DINOv3 ConvNeXt Small | 87.9 | 88.7 | 73.7 | 74.1 | 52.6 | 58.7 | 44.8 | 0.432 |
| DINOv3 ConvNeXt Base | 88.5 | 89.2 | 77.2 | 78.2 | 56.2 | 61.3 | 46.3 | 0.420 |
| DINOv3 ConvNeXt Large | 88.9 | 89.4 | 81.3 | 82.4 | 59.3 | 65.2 | 47.8 | 0.403 |

### Results for ViT backbones pretrained (or distilled) on satellite (SAT-493M)

#### (GEO-Bench) Classification

| Model | m-BEnet | m-brick-kiln | m-eurosat | m-forestnet | m-pv4ger | m-so2sat | mean |
|-------|---------|--------------|-----------|-------------|----------|----------|------|
| DINOv3 ViT-L/16 | 73.0 | 96.5 | 94.1 | 60.6 | 96.0 | 57.4 | 79.6 |
| DINOv3 ViT-7B/16 | 74.0 | 97.2 | 94.8 | 62.3 | 96.1 | 62.1 | 81.1 |

#### (GEO-Bench) Segmentation

| Model | m-cashew | m-chesapeake | m-NeonTree | m-nz-cattle | m-pv4ger-seg | m-SA-crop | mean |
|-------|----------|--------------|------------|-------------|--------------|-----------|------|
| DINOv3 ViT-L/16 | 94.2 | 75.6 | 61.8 | 83.7 | 95.2 | 36.8 | 74.5 |
| DINOv3 ViT-7B/16 | 94.1 | 76.6 | 62.6 | 83.4 | 95.5 | 37.6 | 75.0 |

## Citation
```bibtex
@article{simeoni2025dinov3,
  title={DINOv3},
  author={Sim{\'e}oni, Oriane and Vo, Huy V and Seitzer, Maximilian and Baldassarre, Federico and Oquab, Maxime and Jose, Cijo and Khalidov, Vasil and Szafraniec, Marc and Yi, Seungeun and Ramamonjisoa, Micha{\"e}l and others},
  journal={arXiv preprint arXiv:2508.10104},
  year={2025}
}
```
```bibtex
@article{dosovitskiy2020vit,
  title={An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale},
  author={Dosovitskiy, Alexey and Beyer, Lucas and Kolesnikov, Alexander and Weissenborn, Dirk and Zhai, Xiaohua and Unterthiner, Thomas and Dehghani, Mostafa and Minderer, Matthias and Heigold, Georg and Gelly, Sylvain and Uszkoreit, Jakob and Houlsby, Neil},
  journal={ICLR},
  year={2021}
}
```
```bibtex
@misc{rw2019timm,
  author = {Ross Wightman},
  title = {PyTorch Image Models},
  year = {2019},
  publisher = {GitHub},
  journal = {GitHub repository},
  doi = {10.5281/zenodo.4414861},
  howpublished = {\url{https://github.com/huggingface/pytorch-image-models}}
}
```
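The unpooled `forward_features` output above mixes prefix (class and register) tokens with patch tokens. A sketch of recovering the 16 x 16 patch grid from it; the `num_prefix_tokens` attribute is an assumption carried over from other `timm` ViTs:

```python
import timm
import torch

model = timm.create_model('vit_7b_patch16_dinov3.sat_493m', pretrained=True).eval()

with torch.no_grad():
    tokens = model.forward_features(torch.randn(1, 3, 256, 256))  # (1, 261, 4096)

# drop the prefix tokens; 261 tokens = prefix tokens + 16*16 patches
n_prefix = model.num_prefix_tokens  # assumed attribute, as on other timm ViTs
patches = tokens[:, n_prefix:]  # (1, 256, 4096)
fmap = patches.transpose(1, 2).reshape(1, -1, 16, 16)  # (1, 4096, 16, 16)
```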
timm/vit_7b_patch16_dinov3.lvd_1689m
timm
2025-09-18T20:14:25Z
66
0
timm
[ "timm", "safetensors", "image-feature-extraction", "transformers", "dataset:lvd-1689m", "arxiv:2508.10104", "arxiv:2010.11929", "license:other", "region:us" ]
image-feature-extraction
2025-09-17T16:51:13Z
---
tags:
- image-feature-extraction
- timm
- transformers
pipeline_tag: image-feature-extraction
library_name: timm
license: other
license_name: dinov3-license
license_link: https://ai.meta.com/resources/models-and-libraries/dinov3-license
datasets:
- lvd-1689m
---
# Model card for vit_7b_patch16_dinov3.lvd_1689m

A DINOv3 ViT image feature encoder, pretrained on LVD-1689M with the self-supervised DINOv3 method.

## Model Notes

* The original model weights ended up with all QKV projection biases being zeroes. For `timm`, the QKV bias has been disabled (`qkv_bias=False`) for these models and the zero bias weights are not loaded. For some model sizes there are variants with `qkvb` in the name that keep the bias enabled (`qkv_bias=True`), but zero, to match the behaviour of `transformers` and the original models.
* The original models keep RoPE periods as a persistent `bfloat16` buffer, while `timm` generates `float32` periods at init. This results in some numerical differences; however, the `timm` approach should be less problematic on devices without bfloat16 support, and appears to work as well if not slightly better for fine-tuning. `model.rope.periods = model.rope.periods.to(torch.bfloat16).to(torch.float32)` will truncate the periods to bfloat16 and result in matching outputs.

## Model Details
- **Model Type:** Image Feature Encoder
- **Model Stats:**
  - Params (M): 6716.0
  - GMACs: 1775.1
  - Activations (M): 515.9
  - Image size: 256 x 256
- **Original:** https://github.com/facebookresearch/dinov3
- **License:** [DINOv3](https://ai.meta.com/resources/models-and-libraries/dinov3-license)
- **Dataset:** LVD-1689M
- **Papers:**
  - DINOv3: https://arxiv.org/abs/2508.10104
  - An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale: https://arxiv.org/abs/2010.11929v2
  - PyTorch Image Models: https://github.com/huggingface/pytorch-image-models

## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch  # needed for torch.topk below

img = Image.open(urlopen(
    'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))

model = timm.create_model('vit_7b_patch16_dinov3.lvd_1689m', pretrained=True)
model = model.eval()

# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)

output = model(transforms(img).unsqueeze(0))  # unsqueeze single image into batch of 1

top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```

### Feature Map Extraction
```python
from urllib.request import urlopen
from PIL import Image
import timm

img = Image.open(urlopen(
    'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))

model = timm.create_model(
    'vit_7b_patch16_dinov3.lvd_1689m',
    pretrained=True,
    features_only=True,
)
model = model.eval()

# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)

output = model(transforms(img).unsqueeze(0))  # unsqueeze single image into batch of 1

for o in output:
    # print shape of each feature map in output
    # e.g.:
    #  torch.Size([1, 4096, 16, 16])
    #  torch.Size([1, 4096, 16, 16])
    #  torch.Size([1, 4096, 16, 16])
    print(o.shape)
```

### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm

img = Image.open(urlopen(
    'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))

model = timm.create_model(
    'vit_7b_patch16_dinov3.lvd_1689m',
    pretrained=True,
    num_classes=0,  # remove classifier nn.Linear
)
model = model.eval()

# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)

output = model(transforms(img).unsqueeze(0))  # output is (batch_size, num_features) shaped tensor

# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 261, 4096) shaped tensor

output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```

## Model Comparison
See the associated paper for details on the evaluation protocols.

### Results for ViT backbones pretrained (or distilled) on web (LVD-1689M)

| Model | IN-ReaL | IN-R | Obj.Net | Ox.-H | ADE20k | NYU↓ | DAVIS | NAVI | SPair |
|-------|---------|------|---------|-------|--------|------|-------|------|-------|
| **Global Tasks** | | | | | **Dense Tasks** | | | | |
| DINOv3 ViT-S/16 | 87.0 | 60.4 | 50.9 | 49.5 | 47.0 | 0.403 | 72.7 | 56.3 | 50.4 |
| DINOv3 ViT-S+/16 | 88.0 | 68.8 | 54.6 | 50.0 | 48.8 | 0.399 | 75.5 | 57.1 | 55.2 |
| DINOv3 ViT-B/16 | 89.3 | 76.7 | 64.1 | 58.5 | 51.8 | 0.373 | 77.2 | 58.8 | 57.2 |
| DINOv3 ViT-L/16 | 90.2 | 88.1 | 74.8 | 63.1 | 54.9 | 0.352 | 79.9 | 62.3 | 61.3 |
| DINOv3 ViT-H+/16 | 90.3 | 90.0 | 78.6 | 64.5 | 54.8 | 0.352 | 79.3 | 63.3 | 56.3 |
| DINOv3 ViT-7B/16 | 90.4 | 91.1 | 91.1 | 72.8 | 55.9 | 0.309 | 79.7 | 64.4 | 58.7 |

### Results for ConvNeXt backbones distilled on web (LVD-1689M)

| Model | IN-ReaL @256px | IN-ReaL @512px | IN-R @256px | IN-R @512px | Obj.Net @256px | Obj.Net @512px | ADE20k | NYU↓ |
|-------|----------------|----------------|-------------|-------------|----------------|----------------|--------|------|
| **Global Tasks** | | | | | | | **Dense Tasks** | |
| DINOv3 ConvNeXt Tiny | 86.6 | 87.7 | 73.7 | 74.1 | 52.6 | 58.7 | 42.7 | 0.448 |
| DINOv3 ConvNeXt Small | 87.9 | 88.7 | 73.7 | 74.1 | 52.6 | 58.7 | 44.8 | 0.432 |
| DINOv3 ConvNeXt Base | 88.5 | 89.2 | 77.2 | 78.2 | 56.2 | 61.3 | 46.3 | 0.420 |
| DINOv3 ConvNeXt Large | 88.9 | 89.4 | 81.3 | 82.4 | 59.3 | 65.2 | 47.8 | 0.403 |

### Results for ViT backbones pretrained (or distilled) on satellite (SAT-493M)

#### (GEO-Bench) Classification

| Model | m-BEnet | m-brick-kiln | m-eurosat | m-forestnet | m-pv4ger | m-so2sat | mean |
|-------|---------|--------------|-----------|-------------|----------|----------|------|
| DINOv3 ViT-L/16 | 73.0 | 96.5 | 94.1 | 60.6 | 96.0 | 57.4 | 79.6 |
| DINOv3 ViT-7B/16 | 74.0 | 97.2 | 94.8 | 62.3 | 96.1 | 62.1 | 81.1 |

#### (GEO-Bench) Segmentation

| Model | m-cashew | m-chesapeake | m-NeonTree | m-nz-cattle | m-pv4ger-seg | m-SA-crop | mean |
|-------|----------|--------------|------------|-------------|--------------|-----------|------|
| DINOv3 ViT-L/16 | 94.2 | 75.6 | 61.8 | 83.7 | 95.2 | 36.8 | 74.5 |
| DINOv3 ViT-7B/16 | 94.1 | 76.6 | 62.6 | 83.4 | 95.5 | 37.6 | 75.0 |

## Citation
```bibtex
@article{simeoni2025dinov3,
  title={DINOv3},
  author={Sim{\'e}oni, Oriane and Vo, Huy V and Seitzer, Maximilian and Baldassarre, Federico and Oquab, Maxime and Jose, Cijo and Khalidov, Vasil and Szafraniec, Marc and Yi, Seungeun and Ramamonjisoa, Micha{\"e}l and others},
  journal={arXiv preprint arXiv:2508.10104},
  year={2025}
}
```
```bibtex
@article{dosovitskiy2020vit,
  title={An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale},
  author={Dosovitskiy, Alexey and Beyer, Lucas and Kolesnikov, Alexander and Weissenborn, Dirk and Zhai, Xiaohua and Unterthiner, Thomas and Dehghani, Mostafa and Minderer, Matthias and Heigold, Georg and Gelly, Sylvain and Uszkoreit, Jakob and Houlsby, Neil},
  journal={ICLR},
  year={2021}
}
```
```bibtex
@misc{rw2019timm,
  author = {Ross Wightman},
  title = {PyTorch Image Models},
  year = {2019},
  publisher = {GitHub},
  journal = {GitHub repository},
  doi = {10.5281/zenodo.4414861},
  howpublished = {\url{https://github.com/huggingface/pytorch-image-models}}
}
```
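The pooled embedding (with `num_classes=0`, as above) is a natural fit for image retrieval. A minimal, illustrative sketch of cosine-similarity comparison between two images (the helper and variable names are hypothetical):

```python
import timm
import torch
import torch.nn.functional as F

model = timm.create_model('vit_7b_patch16_dinov3.lvd_1689m', pretrained=True, num_classes=0).eval()

data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)

def embed(img):
    # L2-normalized pooled embedding for a single PIL image
    with torch.no_grad():
        return F.normalize(model(transforms(img).unsqueeze(0)), dim=-1)

# given two PIL images img_a and img_b:
# similarity = (embed(img_a) @ embed(img_b).T).item()
```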
timm/convnext_small.dinov3_lvd1689m
timm
2025-09-18T20:14:22Z
68
1
timm
[ "timm", "pytorch", "safetensors", "transformers", "image-feature-extraction", "arxiv:2508.10104", "arxiv:2201.03545", "license:other", "region:us" ]
image-feature-extraction
2025-09-11T18:09:34Z
---
tags:
- timm
- transformers
pipeline_tag: image-feature-extraction
library_name: timm
license: other
license_name: dinov3-license
license_link: https://ai.meta.com/resources/models-and-libraries/dinov3-license
---
# Model card for convnext_small.dinov3_lvd1689m

A DINOv3 ConvNeXt image feature model, pretrained on LVD-1689M with the self-supervised DINOv3 method and distilled from DINOv3 ViT-7B.

## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
  - Params (M): 49.5
  - GMACs: 8.7
  - Activations (M): 21.6
  - Image size: 224 x 224
- **Papers:**
  - DINOv3: https://arxiv.org/abs/2508.10104
  - A ConvNet for the 2020s: https://arxiv.org/abs/2201.03545
  - PyTorch Image Models: https://github.com/huggingface/pytorch-image-models
- **Original:** https://github.com/facebookresearch/dinov3
- **Pretrain Dataset:** LVD-1689M
- **License:** [DINOv3](https://ai.meta.com/resources/models-and-libraries/dinov3-license)

## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch  # needed for torch.topk below

img = Image.open(urlopen(
    'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))

model = timm.create_model('convnext_small.dinov3_lvd1689m', pretrained=True)
model = model.eval()

# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)

output = model(transforms(img).unsqueeze(0))  # unsqueeze single image into batch of 1

top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```

### Feature Map Extraction
```python
from urllib.request import urlopen
from PIL import Image
import timm

img = Image.open(urlopen(
    'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))

model = timm.create_model(
    'convnext_small.dinov3_lvd1689m',
    pretrained=True,
    features_only=True,
)
model = model.eval()

# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)

output = model(transforms(img).unsqueeze(0))  # unsqueeze single image into batch of 1

for o in output:
    # print shape of each feature map in output
    # e.g.:
    #  torch.Size([1, 96, 56, 56])
    #  torch.Size([1, 192, 28, 28])
    #  torch.Size([1, 384, 14, 14])
    #  torch.Size([1, 768, 7, 7])
    print(o.shape)
```

### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm

img = Image.open(urlopen(
    'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))

model = timm.create_model(
    'convnext_small.dinov3_lvd1689m',
    pretrained=True,
    num_classes=0,  # remove classifier nn.Linear
)
model = model.eval()

# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)

output = model(transforms(img).unsqueeze(0))  # output is (batch_size, num_features) shaped tensor

# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 768, 7, 7) shaped tensor

output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```

## Citation
```bibtex
@article{simeoni2025dinov3,
  title={DINOv3},
  author={Sim{\'e}oni, Oriane and Vo, Huy V and Seitzer, Maximilian and Baldassarre, Federico and Oquab, Maxime and Jose, Cijo and Khalidov, Vasil and Szafraniec, Marc and Yi, Seungeun and Ramamonjisoa, Micha{\"e}l and others},
  journal={arXiv preprint arXiv:2508.10104},
  year={2025}
}
```
```bibtex
@article{liu2022convnet,
  author  = {Zhuang Liu and Hanzi Mao and Chao-Yuan Wu and Christoph Feichtenhofer and Trevor Darrell and Saining Xie},
  title   = {A ConvNet for the 2020s},
  journal = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  year    = {2022},
}
```
```bibtex
@misc{rw2019timm,
  author = {Ross Wightman},
  title = {PyTorch Image Models},
  year = {2019},
  publisher = {GitHub},
  journal = {GitHub repository},
  doi = {10.5281/zenodo.4414861},
  howpublished = {\url{https://github.com/huggingface/pytorch-image-models}}
}
```
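When only some pyramid levels are needed (e.g. for a detection or segmentation neck), `timm`'s `out_indices` argument to `create_model` selects them. A short sketch using the stage shapes printed above:

```python
import timm
import torch

# pick the stride-8/16/32 stages and skip the stride-4 stage
model = timm.create_model(
    'convnext_small.dinov3_lvd1689m',
    pretrained=True,
    features_only=True,
    out_indices=(1, 2, 3),
).eval()

feats = model(torch.randn(1, 3, 224, 224))
# expected shapes: (1, 192, 28, 28), (1, 384, 14, 14), (1, 768, 7, 7)
for f in feats:
    print(f.shape)
```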
timm/convnext_large.dinov3_lvd1689m
timm
2025-09-18T20:14:21Z
50
0
timm
[ "timm", "pytorch", "safetensors", "transformers", "image-feature-extraction", "arxiv:2508.10104", "arxiv:2201.03545", "license:other", "region:us" ]
image-feature-extraction
2025-09-11T18:09:06Z
---
tags:
- timm
- transformers
pipeline_tag: image-feature-extraction
library_name: timm
license: other
license_name: dinov3-license
license_link: https://ai.meta.com/resources/models-and-libraries/dinov3-license
---
# Model card for convnext_large.dinov3_lvd1689m

A DINOv3 ConvNeXt image feature model, pretrained on LVD-1689M with the self-supervised DINOv3 method and distilled from DINOv3 ViT-7B.

## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
  - Params (M): 196.2
  - GMACs: 34.4
  - Activations (M): 43.1
  - Image size: 224 x 224
- **Papers:**
  - DINOv3: https://arxiv.org/abs/2508.10104
  - A ConvNet for the 2020s: https://arxiv.org/abs/2201.03545
  - PyTorch Image Models: https://github.com/huggingface/pytorch-image-models
- **Original:** https://github.com/facebookresearch/dinov3
- **Pretrain Dataset:** LVD-1689M
- **License:** [DINOv3](https://ai.meta.com/resources/models-and-libraries/dinov3-license)

## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch  # needed for torch.topk below

img = Image.open(urlopen(
    'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))

model = timm.create_model('convnext_large.dinov3_lvd1689m', pretrained=True)
model = model.eval()

# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)

output = model(transforms(img).unsqueeze(0))  # unsqueeze single image into batch of 1

top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```

### Feature Map Extraction
```python
from urllib.request import urlopen
from PIL import Image
import timm

img = Image.open(urlopen(
    'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))

model = timm.create_model(
    'convnext_large.dinov3_lvd1689m',
    pretrained=True,
    features_only=True,
)
model = model.eval()

# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)

output = model(transforms(img).unsqueeze(0))  # unsqueeze single image into batch of 1

for o in output:
    # print shape of each feature map in output
    # e.g.:
    #  torch.Size([1, 192, 56, 56])
    #  torch.Size([1, 384, 28, 28])
    #  torch.Size([1, 768, 14, 14])
    #  torch.Size([1, 1536, 7, 7])
    print(o.shape)
```

### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm

img = Image.open(urlopen(
    'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))

model = timm.create_model(
    'convnext_large.dinov3_lvd1689m',
    pretrained=True,
    num_classes=0,  # remove classifier nn.Linear
)
model = model.eval()

# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)

output = model(transforms(img).unsqueeze(0))  # output is (batch_size, num_features) shaped tensor

# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 1536, 7, 7) shaped tensor

output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```

## Citation
```bibtex
@article{simeoni2025dinov3,
  title={DINOv3},
  author={Sim{\'e}oni, Oriane and Vo, Huy V and Seitzer, Maximilian and Baldassarre, Federico and Oquab, Maxime and Jose, Cijo and Khalidov, Vasil and Szafraniec, Marc and Yi, Seungeun and Ramamonjisoa, Micha{\"e}l and others},
  journal={arXiv preprint arXiv:2508.10104},
  year={2025}
}
```
```bibtex
@article{liu2022convnet,
  author  = {Zhuang Liu and Hanzi Mao and Chao-Yuan Wu and Christoph Feichtenhofer and Trevor Darrell and Saining Xie},
  title   = {A ConvNet for the 2020s},
  journal = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  year    = {2022},
}
```
```bibtex
@misc{rw2019timm,
  author = {Ross Wightman},
  title = {PyTorch Image Models},
  year = {2019},
  publisher = {GitHub},
  journal = {GitHub repository},
  doi = {10.5281/zenodo.4414861},
  howpublished = {\url{https://github.com/huggingface/pytorch-image-models}}
}
```
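A common way to use a self-supervised backbone like this is a frozen-feature linear probe. A minimal, illustrative sketch (the 10-class head is a placeholder):

```python
import timm
import torch
import torch.nn as nn

# frozen backbone producing pooled embeddings
backbone = timm.create_model('convnext_large.dinov3_lvd1689m', pretrained=True, num_classes=0).eval()
for p in backbone.parameters():
    p.requires_grad = False

head = nn.Linear(backbone.num_features, 10)  # hypothetical 10-class probe

x = torch.randn(2, 3, 224, 224)
with torch.no_grad():
    emb = backbone(x)  # (2, num_features)
logits = head(emb)     # only the head is trainable
```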
ChenWu98/numina_qwen_2.5_0.5b_sft_numina_20k
ChenWu98
2025-09-18T20:12:50Z
0
0
transformers
[ "transformers", "safetensors", "generated_from_trainer", "trl", "sft", "base_model:Qwen/Qwen2.5-0.5B", "base_model:finetune:Qwen/Qwen2.5-0.5B", "endpoints_compatible", "region:us" ]
null
2025-09-18T20:12:11Z
---
base_model: Qwen/Qwen2.5-0.5B
library_name: transformers
model_name: numina_qwen_2.5_0.5b_sft_numina_20k
tags:
- generated_from_trainer
- trl
- sft
licence: license
---

# Model Card for numina_qwen_2.5_0.5b_sft_numina_20k

This model is a fine-tuned version of [Qwen/Qwen2.5-0.5B](https://huggingface.co/Qwen/Qwen2.5-0.5B).
It has been trained using [TRL](https://github.com/huggingface/trl).

## Quick start

```python
from transformers import pipeline

question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
# the auto-generated card pointed `model` at "None"; use this repo's id instead
generator = pipeline("text-generation", model="ChenWu98/numina_qwen_2.5_0.5b_sft_numina_20k", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```

## Training procedure

[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/chenwu/huggingface/runs/m5gu5stf)

This model was trained with SFT.

### Framework versions

- TRL: 0.19.1
- Transformers: 4.51.1
- Pytorch: 2.7.0
- Datasets: 4.0.0
- Tokenizers: 0.21.4

## Citations

Cite TRL as:

```bibtex
@misc{vonwerra2022trl,
  title = {{TRL: Transformer Reinforcement Learning}},
  author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
  year = 2020,
  journal = {GitHub repository},
  publisher = {GitHub},
  howpublished = {\url{https://github.com/huggingface/trl}}
}
```
mradermacher/vicuna-7b-v1.1-GGUF
mradermacher
2025-09-18T20:11:40Z
0
0
transformers
[ "transformers", "gguf", "en", "base_model:IntMeGroup/vicuna-7b-v1.1", "base_model:quantized:IntMeGroup/vicuna-7b-v1.1", "endpoints_compatible", "region:us" ]
null
2025-09-18T19:13:55Z
---
base_model: IntMeGroup/vicuna-7b-v1.1
language:
- en
library_name: transformers
mradermacher:
  readme_rev: 1
quantized_by: mradermacher
---

## About

<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static quants of https://huggingface.co/IntMeGroup/vicuna-7b-v1.1

<!-- provided-files -->

***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#vicuna-7b-v1.1-GGUF).***

weighted/imatrix quants are available at https://huggingface.co/mradermacher/vicuna-7b-v1.1-i1-GGUF

## Usage

If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files.

## Provided Quants

(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)

| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/vicuna-7b-v1.1-GGUF/resolve/main/vicuna-7b-v1.1.Q2_K.gguf) | Q2_K | 2.6 | |
| [GGUF](https://huggingface.co/mradermacher/vicuna-7b-v1.1-GGUF/resolve/main/vicuna-7b-v1.1.Q3_K_S.gguf) | Q3_K_S | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/vicuna-7b-v1.1-GGUF/resolve/main/vicuna-7b-v1.1.Q3_K_M.gguf) | Q3_K_M | 3.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/vicuna-7b-v1.1-GGUF/resolve/main/vicuna-7b-v1.1.Q3_K_L.gguf) | Q3_K_L | 3.7 | |
| [GGUF](https://huggingface.co/mradermacher/vicuna-7b-v1.1-GGUF/resolve/main/vicuna-7b-v1.1.IQ4_XS.gguf) | IQ4_XS | 3.7 | |
| [GGUF](https://huggingface.co/mradermacher/vicuna-7b-v1.1-GGUF/resolve/main/vicuna-7b-v1.1.Q4_K_S.gguf) | Q4_K_S | 4.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/vicuna-7b-v1.1-GGUF/resolve/main/vicuna-7b-v1.1.Q4_K_M.gguf) | Q4_K_M | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/vicuna-7b-v1.1-GGUF/resolve/main/vicuna-7b-v1.1.Q5_K_S.gguf) | Q5_K_S | 4.8 | |
| [GGUF](https://huggingface.co/mradermacher/vicuna-7b-v1.1-GGUF/resolve/main/vicuna-7b-v1.1.Q5_K_M.gguf) | Q5_K_M | 4.9 | |
| [GGUF](https://huggingface.co/mradermacher/vicuna-7b-v1.1-GGUF/resolve/main/vicuna-7b-v1.1.Q6_K.gguf) | Q6_K | 5.6 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/vicuna-7b-v1.1-GGUF/resolve/main/vicuna-7b-v1.1.Q8_0.gguf) | Q8_0 | 7.3 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/vicuna-7b-v1.1-GGUF/resolve/main/vicuna-7b-v1.1.f16.gguf) | f16 | 13.6 | 16 bpw, overkill |

Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better):

![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png)

And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9

## FAQ / Model Request

See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized.

## Thanks

I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time.

<!-- end -->
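For a quick local test of one of these files, a sketch using Python bindings (this assumes `llama-cpp-python` and `huggingface_hub` are installed; the prompt format is a plain Vicuna-style template and may need adjusting):

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# download the recommended Q4_K_M quant from this repo
path = hf_hub_download(
    repo_id="mradermacher/vicuna-7b-v1.1-GGUF",
    filename="vicuna-7b-v1.1.Q4_K_M.gguf",
)

llm = Llama(model_path=path, n_ctx=2048)
out = llm("USER: Hello, who are you?\nASSISTANT:", max_tokens=64)
print(out["choices"][0]["text"])
```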
mradermacher/Mira-v1.2-dpo-27B-i1-GGUF
mradermacher
2025-09-18T20:11:24Z
0
0
transformers
[ "transformers", "gguf", "en", "dataset:CyberNative/Code_Vulnerability_Security_DPO", "dataset:nbeerbower/GreatFirewall-DPO", "dataset:nbeerbower/synthetic-fiction-dpo", "dataset:jondurbin/gutenberg-dpo-v0.1", "dataset:nbeerbower/gutenberg2-dpo", "base_model:Lambent/Mira-v1.2-dpo-27B", "base_model:quantized:Lambent/Mira-v1.2-dpo-27B", "license:gemma", "endpoints_compatible", "region:us", "imatrix", "conversational" ]
null
2025-09-18T14:04:36Z
---
base_model: Lambent/Mira-v1.2-dpo-27B
datasets:
- CyberNative/Code_Vulnerability_Security_DPO
- nbeerbower/GreatFirewall-DPO
- nbeerbower/synthetic-fiction-dpo
- jondurbin/gutenberg-dpo-v0.1
- nbeerbower/gutenberg2-dpo
language:
- en
library_name: transformers
license: gemma
mradermacher:
  readme_rev: 1
quantized_by: mradermacher
---

## About

<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
<!-- ### quants: Q2_K IQ3_M Q4_K_S IQ3_XXS Q3_K_M small-IQ4_NL Q4_K_M IQ2_M Q6_K IQ4_XS Q2_K_S IQ1_M Q3_K_S IQ2_XXS Q3_K_L IQ2_XS Q5_K_S IQ2_S IQ1_S Q5_K_M Q4_0 IQ3_XS Q4_1 IQ3_S -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
weighted/imatrix quants of https://huggingface.co/Lambent/Mira-v1.2-dpo-27B

<!-- provided-files -->

***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#Mira-v1.2-dpo-27B-i1-GGUF).***

static quants are available at https://huggingface.co/mradermacher/Mira-v1.2-dpo-27B-GGUF

**This is a vision model - mmproj files (if any) will be in the [static repository](https://huggingface.co/mradermacher/Mira-v1.2-dpo-27B-GGUF).**

## Usage

If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files.

## Provided Quants

(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)

| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Mira-v1.2-dpo-27B-i1-GGUF/resolve/main/Mira-v1.2-dpo-27B.imatrix.gguf) | imatrix | 0.1 | imatrix file (for creating your own quants) |
| [GGUF](https://huggingface.co/mradermacher/Mira-v1.2-dpo-27B-i1-GGUF/resolve/main/Mira-v1.2-dpo-27B.i1-IQ1_S.gguf) | i1-IQ1_S | 6.4 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Mira-v1.2-dpo-27B-i1-GGUF/resolve/main/Mira-v1.2-dpo-27B.i1-IQ1_M.gguf) | i1-IQ1_M | 6.9 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Mira-v1.2-dpo-27B-i1-GGUF/resolve/main/Mira-v1.2-dpo-27B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 7.8 | |
| [GGUF](https://huggingface.co/mradermacher/Mira-v1.2-dpo-27B-i1-GGUF/resolve/main/Mira-v1.2-dpo-27B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 8.5 | |
| [GGUF](https://huggingface.co/mradermacher/Mira-v1.2-dpo-27B-i1-GGUF/resolve/main/Mira-v1.2-dpo-27B.i1-IQ2_S.gguf) | i1-IQ2_S | 8.9 | |
| [GGUF](https://huggingface.co/mradermacher/Mira-v1.2-dpo-27B-i1-GGUF/resolve/main/Mira-v1.2-dpo-27B.i1-IQ2_M.gguf) | i1-IQ2_M | 9.6 | |
| [GGUF](https://huggingface.co/mradermacher/Mira-v1.2-dpo-27B-i1-GGUF/resolve/main/Mira-v1.2-dpo-27B.i1-Q2_K_S.gguf) | i1-Q2_K_S | 9.9 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/Mira-v1.2-dpo-27B-i1-GGUF/resolve/main/Mira-v1.2-dpo-27B.i1-Q2_K.gguf) | i1-Q2_K | 10.6 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Mira-v1.2-dpo-27B-i1-GGUF/resolve/main/Mira-v1.2-dpo-27B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 10.8 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Mira-v1.2-dpo-27B-i1-GGUF/resolve/main/Mira-v1.2-dpo-27B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 11.7 | |
| [GGUF](https://huggingface.co/mradermacher/Mira-v1.2-dpo-27B-i1-GGUF/resolve/main/Mira-v1.2-dpo-27B.i1-IQ3_S.gguf) | i1-IQ3_S | 12.3 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Mira-v1.2-dpo-27B-i1-GGUF/resolve/main/Mira-v1.2-dpo-27B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 12.3 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Mira-v1.2-dpo-27B-i1-GGUF/resolve/main/Mira-v1.2-dpo-27B.i1-IQ3_M.gguf) | i1-IQ3_M | 12.6 | |
| [GGUF](https://huggingface.co/mradermacher/Mira-v1.2-dpo-27B-i1-GGUF/resolve/main/Mira-v1.2-dpo-27B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 13.5 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Mira-v1.2-dpo-27B-i1-GGUF/resolve/main/Mira-v1.2-dpo-27B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 14.6 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Mira-v1.2-dpo-27B-i1-GGUF/resolve/main/Mira-v1.2-dpo-27B.i1-IQ4_XS.gguf) | i1-IQ4_XS | 14.9 | |
| [GGUF](https://huggingface.co/mradermacher/Mira-v1.2-dpo-27B-i1-GGUF/resolve/main/Mira-v1.2-dpo-27B.i1-Q4_0.gguf) | i1-Q4_0 | 15.7 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Mira-v1.2-dpo-27B-i1-GGUF/resolve/main/Mira-v1.2-dpo-27B.i1-Q4_K_S.gguf) | i1-Q4_K_S | 15.8 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Mira-v1.2-dpo-27B-i1-GGUF/resolve/main/Mira-v1.2-dpo-27B.i1-Q4_K_M.gguf) | i1-Q4_K_M | 16.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Mira-v1.2-dpo-27B-i1-GGUF/resolve/main/Mira-v1.2-dpo-27B.i1-Q4_1.gguf) | i1-Q4_1 | 17.3 | |
| [GGUF](https://huggingface.co/mradermacher/Mira-v1.2-dpo-27B-i1-GGUF/resolve/main/Mira-v1.2-dpo-27B.i1-Q5_K_S.gguf) | i1-Q5_K_S | 18.9 | |
| [GGUF](https://huggingface.co/mradermacher/Mira-v1.2-dpo-27B-i1-GGUF/resolve/main/Mira-v1.2-dpo-27B.i1-Q5_K_M.gguf) | i1-Q5_K_M | 19.4 | |
| [GGUF](https://huggingface.co/mradermacher/Mira-v1.2-dpo-27B-i1-GGUF/resolve/main/Mira-v1.2-dpo-27B.i1-Q6_K.gguf) | i1-Q6_K | 22.3 | practically like static Q6_K |

Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better):

![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png)

And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9

## FAQ / Model Request

See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized.

## Thanks

I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.

<!-- end -->
tttonyalpha/openvla-7b-warmup-checkpoint
tttonyalpha
2025-09-18T20:10:20Z
0
0
peft
[ "peft", "safetensors", "openvla", "custom_code", "arxiv:1910.09700", "base_model:openvla/openvla-7b", "base_model:adapter:openvla/openvla-7b", "region:us" ]
null
2025-09-18T19:06:01Z
---
base_model: openvla/openvla-7b
library_name: peft
---

# Model Card for Model ID

<!-- Provide a quick summary of what the model is/does. -->

## Model Details

### Model Description

<!-- Provide a longer summary of what this model is. -->

- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]

### Model Sources [optional]

<!-- Provide the basic links for the model. -->

- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]

## Uses

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

### Direct Use

<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->

[More Information Needed]

### Downstream Use [optional]

<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->

[More Information Needed]

### Out-of-Scope Use

<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->

[More Information Needed]

## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

[More Information Needed]

### Recommendations

<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.

## How to Get Started with the Model

Use the code below to get started with the model.

[More Information Needed]

## Training Details

### Training Data

<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->

[More Information Needed]

### Training Procedure

<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->

#### Preprocessing [optional]

[More Information Needed]

#### Training Hyperparameters

- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->

#### Speeds, Sizes, Times [optional]

<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->

[More Information Needed]

## Evaluation

<!-- This section describes the evaluation protocols and provides the results. -->

### Testing Data, Factors & Metrics

#### Testing Data

<!-- This should link to a Dataset Card if possible. -->

[More Information Needed]

#### Factors

<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->

[More Information Needed]

#### Metrics

<!-- These are the evaluation metrics being used, ideally with a description of why. -->

[More Information Needed]

### Results

[More Information Needed]

#### Summary

## Model Examination [optional]

<!-- Relevant interpretability work for the model goes here -->

[More Information Needed]

## Environmental Impact

<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]

## Technical Specifications [optional]

### Model Architecture and Objective

[More Information Needed]

### Compute Infrastructure

[More Information Needed]

#### Hardware

[More Information Needed]

#### Software

[More Information Needed]

## Citation [optional]

<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->

**BibTeX:**

[More Information Needed]

**APA:**

[More Information Needed]

## Glossary [optional]

<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->

[More Information Needed]

## More Information [optional]

[More Information Needed]

## Model Card Authors [optional]

[More Information Needed]

## Model Card Contact

[More Information Needed]

### Framework versions

- PEFT 0.11.1
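Since the card's "How to Get Started" section is empty, here is a minimal loading sketch. The processor/model classes are assumptions carried over from the base `openvla/openvla-7b` card and are not documented by this repository:

```python
import torch
from transformers import AutoModelForVision2Seq, AutoProcessor
from peft import PeftModel

# load the base OpenVLA model (class and dtype are assumptions from the base model card)
base = AutoModelForVision2Seq.from_pretrained(
    "openvla/openvla-7b", torch_dtype=torch.bfloat16, trust_remote_code=True
)

# attach this repo's LoRA adapter on top of the base weights
model = PeftModel.from_pretrained(base, "tttonyalpha/openvla-7b-warmup-checkpoint")
processor = AutoProcessor.from_pretrained("openvla/openvla-7b", trust_remote_code=True)
```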
theprint/DevilsAdvocate-8B
theprint
2025-09-18T20:09:58Z
0
0
peft
[ "peft", "safetensors", "qwen3", "text-generation", "lora", "sft", "transformers", "trl", "unsloth", "fine-tuned", "conversational", "en", "dataset:theprint/Advocate-9.4k", "base_model:Qwen/Qwen3-8B", "base_model:adapter:Qwen/Qwen3-8B", "license:mit", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-09-18T20:02:51Z
---
base_model: Qwen/Qwen3-8B
library_name: peft
pipeline_tag: text-generation
language: en
license: mit
tags:
- lora
- sft
- transformers
- trl
- unsloth
- fine-tuned
datasets:
- theprint/Advocate-9.4k
---

# DevilsAdvocate-8B

A fine-tuned Qwen3 8B model, tuned for more engaging conversation that encourages the user to think about different aspects of a topic.

## Model Details

This model is a fine-tuned version of Qwen/Qwen3-8B using the Unsloth framework with LoRA (Low-Rank Adaptation) for efficient training.

- **Developed by:** theprint
- **Model type:** Causal Language Model (Fine-tuned with LoRA)
- **Language:** en
- **License:** mit
- **Base model:** Qwen/Qwen3-8B
- **Fine-tuning method:** LoRA with rank 128

## Intended Use

General conversation, project feedback and brainstorming.

## GGUF Quantized Versions

Quantized GGUF versions are available in the [theprint/DevilsAdvocate-8B-GGUF](https://huggingface.co/theprint/DevilsAdvocate-8B-GGUF) repo.

- `DevilsAdvocate-8B-f16.gguf` (15628.9 MB) - 16-bit float (original precision, largest file)
- `DevilsAdvocate-8B-q3_k_m.gguf` (3933.1 MB) - 3-bit quantization (medium quality)
- `DevilsAdvocate-8B-q4_k_m.gguf` (4794.9 MB) - 4-bit quantization (medium, recommended for most use cases)
- `DevilsAdvocate-8B-q5_k_m.gguf` (5580.1 MB) - 5-bit quantization (medium, good quality)
- `DevilsAdvocate-8B-q6_k.gguf` (6414.3 MB) - 6-bit quantization (high quality)
- `DevilsAdvocate-8B-q8_0.gguf` (8306.0 MB) - 8-bit quantization (very high quality)

## Training Details

### Training Data

The dataset used is [theprint/Advocate-9.4k](https://huggingface.co/datasets/theprint/Advocate-9.4k).

- **Dataset:** theprint/Advocate-9.4k
- **Format:** alpaca

### Training Procedure

- **Training epochs:** 2
- **LoRA rank:** 128
- **Learning rate:** 5e-05
- **Batch size:** 2
- **Framework:** Unsloth + transformers + PEFT
- **Hardware:** NVIDIA RTX 5090

## Usage

```python
from unsloth import FastLanguageModel
import torch

# Load model and tokenizer
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="theprint/DevilsAdvocate-8B",
    max_seq_length=4096,
    dtype=None,
    load_in_4bit=True,
)

# Enable inference mode
FastLanguageModel.for_inference(model)

# Example usage
inputs = tokenizer(["Your prompt here"], return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=256, temperature=0.7)
response = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(response)
```

### Alternative Usage (Standard Transformers)

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

model = AutoModelForCausalLM.from_pretrained(
    "theprint/DevilsAdvocate-8B",
    torch_dtype=torch.float16,
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("theprint/DevilsAdvocate-8B")

# Example usage
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Your question here"}
]

inputs = tokenizer.apply_chat_template(messages, return_tensors="pt", add_generation_prompt=True)
outputs = model.generate(inputs, max_new_tokens=256, temperature=0.7, do_sample=True)
response = tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True)
print(response)
```

### Using with llama.cpp

```bash
# Download a quantized version (q4_k_m recommended for most use cases)
wget https://huggingface.co/theprint/DevilsAdvocate-8B/resolve/main/gguf/DevilsAdvocate-8B-q4_k_m.gguf

# Run with llama.cpp
./llama.cpp/main -m DevilsAdvocate-8B-q4_k_m.gguf -p "Your prompt here" -n 256
```

## Limitations

May provide incorrect information.

## Citation

If you use this model, please cite:

```bibtex
@misc{devilsadvocate_8b,
  title={DevilsAdvocate-8B: Fine-tuned Qwen/Qwen3-8B},
  author={theprint},
  year={2025},
  publisher={Hugging Face},
  url={https://huggingface.co/theprint/DevilsAdvocate-8B}
}
```

## Acknowledgments

- Base model: [Qwen/Qwen3-8B](https://huggingface.co/Qwen/Qwen3-8B)
- Training dataset: [theprint/Advocate-9.4k](https://huggingface.co/datasets/theprint/Advocate-9.4k)
- Fine-tuning framework: [Unsloth](https://github.com/unslothai/unsloth)
- Quantization: [llama.cpp](https://github.com/ggerganov/llama.cpp)
Pdxsparky/Bitzparkin
Pdxsparky
2025-09-18T20:08:37Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2025-09-18T20:08:36Z
---
license: apache-2.0
---
OxoGhost/a2c-PandaReachDense-v3
OxoGhost
2025-09-18T20:07:47Z
0
0
stable-baselines3
[ "stable-baselines3", "PandaReachDense-v3", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2025-09-18T20:04:44Z
---
library_name: stable-baselines3
tags:
- PandaReachDense-v3
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: PandaReachDense-v3
      type: PandaReachDense-v3
    metrics:
    - type: mean_reward
      value: -0.23 +/- 0.15
      name: mean_reward
      verified: false
---

# **A2C** Agent playing **PandaReachDense-v3**

This is a trained model of an **A2C** agent playing **PandaReachDense-v3** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).

## Usage (with Stable-baselines3)

The original card left this as a TODO stub; here is a minimal sketch (the checkpoint filename is an assumption based on the usual `<algo>-<env>.zip` convention used by `huggingface_sb3`):

```python
from stable_baselines3 import A2C
from huggingface_sb3 import load_from_hub

# download the checkpoint from this repo on the Hub
checkpoint = load_from_hub(
    repo_id="OxoGhost/a2c-PandaReachDense-v3",
    filename="a2c-PandaReachDense-v3.zip",  # assumed filename
)
model = A2C.load(checkpoint)
```
frank1900s/my-model-v1
frank1900s
2025-09-18T20:04:03Z
0
0
diffusers
[ "diffusers", "tensorboard", "safetensors", "text-to-image", "dreambooth", "diffusers-training", "stable-diffusion", "stable-diffusion-diffusers", "base_model:CompVis/stable-diffusion-v1-4", "base_model:finetune:CompVis/stable-diffusion-v1-4", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2025-09-18T19:52:35Z
---
base_model: CompVis/stable-diffusion-v1-4
library_name: diffusers
license: creativeml-openrail-m
inference: true
instance_prompt: a photo of sks dog
tags:
- text-to-image
- dreambooth
- diffusers-training
- stable-diffusion
- stable-diffusion-diffusers
---

<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->

# DreamBooth - frank1900s/my-model-v1

This is a DreamBooth model derived from CompVis/stable-diffusion-v1-4. The weights were trained on "a photo of sks dog" using [DreamBooth](https://dreambooth.github.io/). Example images can be found below.

DreamBooth for the text encoder was enabled: False.

## Intended uses & limitations

#### How to use

The training script left this as a TODO; a minimal sketch (assumes `diffusers` is installed and a CUDA GPU is available):

```python
from diffusers import StableDiffusionPipeline
import torch

pipe = StableDiffusionPipeline.from_pretrained(
    "frank1900s/my-model-v1", torch_dtype=torch.float16
).to("cuda")

# the instance prompt for this model uses the rare token "sks"
image = pipe("a photo of sks dog in a bucket").images[0]
image.save("sks_dog.png")
```

#### Limitations and bias

[TODO: provide examples of latent issues and potential remediations]

## Training details

[TODO: describe the data used to train the model]
mirceahincu/distilbert-base-uncased-finetuned-emotion
mirceahincu
2025-09-18T20:03:30Z
8
0
transformers
[ "transformers", "tensorboard", "safetensors", "distilbert", "text-classification", "generated_from_trainer", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-08-23T08:41:49Z
---
library_name: transformers
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# distilbert-base-uncased-finetuned-emotion

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an emotion classification dataset (the dataset name was not recorded by the auto-generated card).
It achieves the following results on the evaluation set:
- Loss: 0.1259
- Accuracy: 0.9635
- F1: 0.9637

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: AdamW (torch fused) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 2

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1     |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.2042        | 1.0   | 250  | 0.1719          | 0.94     | 0.9409 |
| 0.0748        | 2.0   | 500  | 0.1259          | 0.9635   | 0.9637 |

### Framework versions

- Transformers 4.56.1
- Pytorch 2.8.0+cu126
- Datasets 4.0.0
- Tokenizers 0.22.0
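For quick inference with this classifier, a short sketch using the standard `transformers` pipeline API (the example sentence is illustrative):

```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="mirceahincu/distilbert-base-uncased-finetuned-emotion",
)

# returns the top predicted emotion label with its score
print(classifier("I'm thrilled the fine-tuning finally converged!"))
```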
mradermacher/Alpha-Model-1.1-105B-GGUF
mradermacher
2025-09-18T20:00:22Z
1,960
0
transformers
[ "transformers", "gguf", "mergekit", "merge", "en", "base_model:bruhzair/Alpha-Model-1.1-105B", "base_model:quantized:bruhzair/Alpha-Model-1.1-105B", "endpoints_compatible", "region:us", "conversational" ]
null
2025-09-17T05:34:04Z
--- base_model: bruhzair/Alpha-Model-1.1-105B language: - en library_name: transformers mradermacher: readme_rev: 1 quantized_by: mradermacher tags: - mergekit - merge ---
## About

<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static quants of https://huggingface.co/bruhzair/Alpha-Model-1.1-105B

<!-- provided-files -->

***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#Alpha-Model-1.1-105B-GGUF).***

weighted/imatrix quants are available at https://huggingface.co/mradermacher/Alpha-Model-1.1-105B-i1-GGUF

## Usage

If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files.

## Provided Quants

(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)

| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Alpha-Model-1.1-105B-GGUF/resolve/main/Alpha-Model-1.1-105B.Q2_K.gguf) | Q2_K | 38.9 | |
| [GGUF](https://huggingface.co/mradermacher/Alpha-Model-1.1-105B-GGUF/resolve/main/Alpha-Model-1.1-105B.Q3_K_S.gguf) | Q3_K_S | 45.5 | |
| [PART 1](https://huggingface.co/mradermacher/Alpha-Model-1.1-105B-GGUF/resolve/main/Alpha-Model-1.1-105B.Q3_K_M.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Alpha-Model-1.1-105B-GGUF/resolve/main/Alpha-Model-1.1-105B.Q3_K_M.gguf.part2of2) | Q3_K_M | 50.7 | lower quality |
| [PART 1](https://huggingface.co/mradermacher/Alpha-Model-1.1-105B-GGUF/resolve/main/Alpha-Model-1.1-105B.Q3_K_L.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Alpha-Model-1.1-105B-GGUF/resolve/main/Alpha-Model-1.1-105B.Q3_K_L.gguf.part2of2) | Q3_K_L | 55.2 | |
| [PART 1](https://huggingface.co/mradermacher/Alpha-Model-1.1-105B-GGUF/resolve/main/Alpha-Model-1.1-105B.IQ4_XS.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Alpha-Model-1.1-105B-GGUF/resolve/main/Alpha-Model-1.1-105B.IQ4_XS.gguf.part2of2) | IQ4_XS | 56.8 | |
| [PART 1](https://huggingface.co/mradermacher/Alpha-Model-1.1-105B-GGUF/resolve/main/Alpha-Model-1.1-105B.Q4_K_S.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Alpha-Model-1.1-105B-GGUF/resolve/main/Alpha-Model-1.1-105B.Q4_K_S.gguf.part2of2) | Q4_K_S | 59.8 | fast, recommended |
| [PART 1](https://huggingface.co/mradermacher/Alpha-Model-1.1-105B-GGUF/resolve/main/Alpha-Model-1.1-105B.Q4_K_M.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Alpha-Model-1.1-105B-GGUF/resolve/main/Alpha-Model-1.1-105B.Q4_K_M.gguf.part2of2) | Q4_K_M | 63.1 | fast, recommended |
| [PART 1](https://huggingface.co/mradermacher/Alpha-Model-1.1-105B-GGUF/resolve/main/Alpha-Model-1.1-105B.Q5_K_S.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Alpha-Model-1.1-105B-GGUF/resolve/main/Alpha-Model-1.1-105B.Q5_K_S.gguf.part2of2) | Q5_K_S | 72.3 | |
| [PART 1](https://huggingface.co/mradermacher/Alpha-Model-1.1-105B-GGUF/resolve/main/Alpha-Model-1.1-105B.Q5_K_M.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Alpha-Model-1.1-105B-GGUF/resolve/main/Alpha-Model-1.1-105B.Q5_K_M.gguf.part2of2) | Q5_K_M | 74.2 | |
| [PART 1](https://huggingface.co/mradermacher/Alpha-Model-1.1-105B-GGUF/resolve/main/Alpha-Model-1.1-105B.Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Alpha-Model-1.1-105B-GGUF/resolve/main/Alpha-Model-1.1-105B.Q6_K.gguf.part2of2) | Q6_K | 86.1 | very good quality |
| [PART 1](https://huggingface.co/mradermacher/Alpha-Model-1.1-105B-GGUF/resolve/main/Alpha-Model-1.1-105B.Q8_0.gguf.part1of3) [PART 2](https://huggingface.co/mradermacher/Alpha-Model-1.1-105B-GGUF/resolve/main/Alpha-Model-1.1-105B.Q8_0.gguf.part2of3) [PART 3](https://huggingface.co/mradermacher/Alpha-Model-1.1-105B-GGUF/resolve/main/Alpha-Model-1.1-105B.Q8_0.gguf.part3of3) | Q8_0 | 111.4 | fast, best quality |

Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better):

![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png)

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9

## FAQ / Model Request

See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized.

## Thanks

I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time.

<!-- end -->
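As the Usage section above notes, the multi-part quants must be concatenated into a single `.gguf` before loading. A minimal Python sketch of that step, assuming the parts for one quant type (Q4_K_S is used here as an example) have already been downloaded into the working directory:

```python
# Minimal sketch: join split GGUF parts into one file before loading.
# Part filenames follow the download table above; adjust for your quant type.
import shutil

parts = [
    "Alpha-Model-1.1-105B.Q4_K_S.gguf.part1of2",
    "Alpha-Model-1.1-105B.Q4_K_S.gguf.part2of2",
]
with open("Alpha-Model-1.1-105B.Q4_K_S.gguf", "wb") as out:
    for part in parts:
        with open(part, "rb") as src:
            shutil.copyfileobj(src, out)  # byte-for-byte append, no re-encoding
```

On Linux, `cat *.part1of2 *.part2of2 > combined.gguf` achieves the same thing; the parts are plain byte slices of the full file.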
Nesslovver/Oral_insertion
Nesslovver
2025-09-18T19:59:05Z
0
0
diffusers
[ "diffusers", "text-to-image", "lora", "template:diffusion-lora", "base_model:lopi999/Wan2.2-I2V_General-NSFW-LoRA", "base_model:adapter:lopi999/Wan2.2-I2V_General-NSFW-LoRA", "region:us" ]
text-to-image
2025-09-18T19:58:36Z
--- tags: - text-to-image - lora - diffusers - template:diffusion-lora widget: - output: url: images/11488.jpg text: '-' base_model: lopi999/Wan2.2-I2V_General-NSFW-LoRA instance_prompt: A man appears and she sucks his penis ---
# Oral_insertion

<Gallery />

## Model description

Oral insert

## Trigger words

You should use `A man appears and she sucks his penis` to trigger the image generation.

## Download model

[Download](/Nesslovver/Oral_insertion/tree/main) them in the Files & versions tab.
schooncestiaa/blockassist-bc-scruffy_webbed_dragonfly_1758225311
schooncestiaa
2025-09-18T19:56:30Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "scruffy webbed dragonfly", "arxiv:2504.07091", "region:us" ]
null
2025-09-18T19:56:09Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - scruffy webbed dragonfly ---
# Gensyn BlockAssist

Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
chakra-labs/pango-7b-rl-grounding
chakra-labs
2025-09-18T19:52:21Z
6
0
transformers
[ "transformers", "safetensors", "qwen2_vl", "image-to-text", "trl", "grpo", "arxiv:1910.09700", "text-generation-inference", "endpoints_compatible", "region:us" ]
image-to-text
2025-09-18T01:39:24Z
--- library_name: transformers tags: - trl - grpo ---
# Model Card for Model ID

<!-- Provide a quick summary of what the model is/does. -->

## Model Details

### Model Description

<!-- Provide a longer summary of what this model is. -->

This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.

- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]

### Model Sources [optional]

<!-- Provide the basic links for the model. -->

- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]

## Uses

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

### Direct Use

<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->

[More Information Needed]

### Downstream Use [optional]

<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->

[More Information Needed]

### Out-of-Scope Use

<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->

[More Information Needed]

## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

[More Information Needed]

### Recommendations

<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.

## How to Get Started with the Model

Use the code below to get started with the model.

[More Information Needed]

## Training Details

### Training Data

<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->

[More Information Needed]

### Training Procedure

<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->

#### Preprocessing [optional]

[More Information Needed]

#### Training Hyperparameters

- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->

#### Speeds, Sizes, Times [optional]

<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->

[More Information Needed]

## Evaluation

<!-- This section describes the evaluation protocols and provides the results. -->

### Testing Data, Factors & Metrics

#### Testing Data

<!-- This should link to a Dataset Card if possible. -->

[More Information Needed]

#### Factors

<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->

[More Information Needed]

#### Metrics

<!-- These are the evaluation metrics being used, ideally with a description of why. -->

[More Information Needed]

### Results

[More Information Needed]

#### Summary

## Model Examination [optional]

<!-- Relevant interpretability work for the model goes here -->

[More Information Needed]

## Environmental Impact

<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]

## Technical Specifications [optional]

### Model Architecture and Objective

[More Information Needed]

### Compute Infrastructure

[More Information Needed]

#### Hardware

[More Information Needed]

#### Software

[More Information Needed]

## Citation [optional]

<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->

**BibTeX:**

[More Information Needed]

**APA:**

[More Information Needed]

## Glossary [optional]

<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->

[More Information Needed]

## More Information [optional]

[More Information Needed]

## Model Card Authors [optional]

[More Information Needed]

## Model Card Contact

[More Information Needed]
Osilly/Dynamic-LLaVA-TokenPacker-13B
Osilly
2025-09-18T19:52:05Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2025-09-18T19:52:05Z
---
license: apache-2.0
---
Osilly/Dynamic-LLaVA-TokenPacker-7B
Osilly
2025-09-18T19:51:52Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2025-09-18T19:51:52Z
---
license: apache-2.0
---
Zhaoxuan/PUGC-Mistral-DPO
Zhaoxuan
2025-09-18T19:50:27Z
0
0
null
[ "safetensors", "mistral", "license:apache-2.0", "region:us" ]
null
2025-09-18T19:43:10Z
---
license: apache-2.0
---
qingy2024/HQRD-109M
qingy2024
2025-09-18T19:47:32Z
29
0
null
[ "safetensors", "bert", "en", "base_model:google-bert/bert-base-uncased", "base_model:finetune:google-bert/bert-base-uncased", "license:apache-2.0", "region:us" ]
null
2025-09-17T01:37:31Z
--- license: apache-2.0 language: - en base_model: - google-bert/bert-base-uncased ---
# HQRD 109M (fine-tuned from bert-base-uncased)

This is a 109M-parameter model fine-tuned to detect high-quality responses. It outputs a score ranging from 0 (bad) to 1 (good), although it can occasionally produce a value slightly outside that range, such as 1.01 or -0.013.

**Example Inference Code**

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Load the model and tokenizer from the Hugging Face Hub
model_name = "qingy2024/HQRD-109M"
model = AutoModelForSequenceClassification.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Example text to classify
text = "Quantum mechanics is a fundamental branch of physics that describes the behavior of particles on very small scales, such as atoms and subatomic particles. It differs significantly from classical mechanics, which governs macroscopic objects, because it introduces concepts like wave-particle duality, uncertainty, and probabilistic outcomes."

# Tokenize the text
inputs = tokenizer(text, truncation=True, max_length=512, padding=True, return_tensors="pt")

# Perform inference
with torch.no_grad():  # Disable gradient computation for inference
    outputs = model(**inputs)
    prediction = outputs.logits.item()  # Extract the single float value

# Interpret the result
print(f"Prediction score: {prediction:.3f}")
```
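Since the raw score can drift slightly outside [0, 1] as noted above, callers may want to clamp it before use. A minimal sketch (the helper name is illustrative, not part of the model's API):

```python
def clamp_score(prediction: float) -> float:
    """Clamp a raw HQRD score into [0, 1]; the card notes occasional
    out-of-range values such as 1.01 or -0.013."""
    return max(0.0, min(1.0, prediction))

print(clamp_score(1.01))    # -> 1.0
print(clamp_score(-0.013))  # -> 0.0
```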
schooncestiaa/blockassist-bc-scruffy_webbed_dragonfly_1758224695
schooncestiaa
2025-09-18T19:46:00Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "scruffy webbed dragonfly", "arxiv:2504.07091", "region:us" ]
null
2025-09-18T19:45:54Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - scruffy webbed dragonfly ---
# Gensyn BlockAssist

Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
mdouglas/granite-3.1-3b-a800m-base-bnb-4bit
mdouglas
2025-09-18T19:45:27Z
7
0
transformers
[ "transformers", "safetensors", "granitemoe", "text-generation", "en", "de", "es", "fr", "ja", "pt", "ar", "cs", "it", "ko", "nl", "zh", "base_model:ibm-granite/granite-3.1-3b-a800m-base", "base_model:quantized:ibm-granite/granite-3.1-3b-a800m-base", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "4-bit", "bitsandbytes", "region:us" ]
text-generation
2025-08-02T00:17:03Z
--- license: apache-2.0 base_model: - ibm-granite/granite-3.1-3b-a800m-base pipeline_tag: text-generation library_name: transformers language: - en - de - es - fr - ja - pt - ar - cs - it - ko - nl - zh ---
> [!IMPORTANT]
> This repository is an **experimental** quantized version of the original model [`ibm-granite/granite-3.1-3b-a800m-base`](https://huggingface.co/ibm-granite/granite-3.1-3b-a800m-base).
>
> It requires development versions of `transformers` and `bitsandbytes`.

# Quantization

The MLP expert parameters have been quantized in the NF4 format along with all `nn.Linear` modules except the `lm_head` and `router` modules, using an experimental `bnb_4bit_target_parameters` configuration option.

# Granite-3.1-3B-A800M-Base

**Model Summary**
Granite-3.1-3B-A800M-Base is a decoder-only language model that supports a variety of text-to-text generation tasks. It extends the context length of Granite-3.0-3B-A800M-Base from 4K to 128K.

- **Developers:** Granite Team, IBM
- **GitHub Repository:** [ibm-granite/granite-3.1-language-models](https://github.com/ibm-granite/granite-3.1-language-models)
- **Website**: [Granite Docs](https://www.ibm.com/granite/docs/)
- **Paper:** [Granite 3.1 Language Models (coming soon)](https://huggingface.co/collections/ibm-granite/granite-31-language-models-6751dbbf2f3389bec5c6f02d)
- **Release Date**: December 18th, 2024
- **License:** [Apache 2.0](https://www.apache.org/licenses/LICENSE-2.0)

**Model Architecture:** Granite-3.1-3B-A800M-Base is based on a decoder-only sparse Mixture of Experts (MoE) transformer architecture. Core components of this architecture are: Fine-grained Experts, Dropless Token Routing, and Load Balancing Loss.

| Model | 2B Dense | 8B Dense | 1B MoE | 3B MoE |
| :-------- | :--------| :--------| :--------| :-------- |
| Embedding size | 2048 | 4096 | 1024 | **1536** |
| Number of layers | 40 | 40 | 24 | **32** |
| Attention head size | 64 | 128 | 64 | **64** |
| Number of attention heads | 32 | 32 | 16 | **24** |
| Number of KV heads | 8 | 8 | 8 | **8** |
| MLP hidden size | 8192 | 12800 | 512 | **512** |
| MLP activation | SwiGLU | SwiGLU | SwiGLU | **SwiGLU** |
| Number of Experts | — | — | 32 | **40** |
| MoE TopK | — | — | 8 | **8** |
| Initialization std | 0.1 | 0.1 | 0.1 | **0.1** |
| Sequence Length | 4096 | 4096 | 4096 | **4096** |
| Position Embedding | RoPE | RoPE | RoPE | **RoPE** |
| # Parameters | 2.5B | 8.1B | 1.3B | **3.3B** |
| # Active Parameters | 2.5B | 8.1B | 400M | **800M** |
| # Training tokens | 12T | 12T | 10T | **10T** |
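A minimal loading sketch, assuming development builds of `transformers` and `bitsandbytes` are installed as the note above requires. The quantization settings (including the experimental `bnb_4bit_target_parameters` option) are stored with the checkpoint, so they should be picked up automatically rather than passed by hand; this is a sketch under those assumptions, not a verified recipe:

```python
# Minimal sketch: load the pre-quantized checkpoint. The NF4 quantization
# config described above ships with the repo, so from_pretrained should read
# it automatically (assumes dev-version transformers and bitsandbytes).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mdouglas/granite-3.1-3b-a800m-base-bnb-4bit"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer("The capital of France is", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=20)[0]))
```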
ecamli/blockassist-bc-hulking_soft_hippo_1758224660
ecamli
2025-09-18T19:45:12Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "hulking soft hippo", "arxiv:2504.07091", "region:us" ]
null
2025-09-18T19:44:56Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - hulking soft hippo ---
# Gensyn BlockAssist

Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
luckeciano/Qwen-2.5-7B-DrGRPO-Base-Adam-5Iterations-v3_2057
luckeciano
2025-09-18T19:44:25Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "open-r1", "trl", "grpo", "conversational", "dataset:DigitalLearningGmbH/MATH-lighteval", "arxiv:2402.03300", "base_model:Qwen/Qwen2.5-Math-7B", "base_model:finetune:Qwen/Qwen2.5-Math-7B", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-09-18T16:37:26Z
--- base_model: Qwen/Qwen2.5-Math-7B datasets: DigitalLearningGmbH/MATH-lighteval library_name: transformers model_name: Qwen-2.5-7B-DrGRPO-Base-Adam-5Iterations-v3_2057 tags: - generated_from_trainer - open-r1 - trl - grpo licence: license ---
# Model Card for Qwen-2.5-7B-DrGRPO-Base-Adam-5Iterations-v3_2057

This model is a fine-tuned version of [Qwen/Qwen2.5-Math-7B](https://huggingface.co/Qwen/Qwen2.5-Math-7B) on the [DigitalLearningGmbH/MATH-lighteval](https://huggingface.co/datasets/DigitalLearningGmbH/MATH-lighteval) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).

## Quick start

```python
from transformers import pipeline

question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="luckeciano/Qwen-2.5-7B-DrGRPO-Base-Adam-5Iterations-v3_2057", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```

## Training procedure

[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/max-ent-llms/PolicyGradientStability/runs/ji2s6yym)

This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).

### Framework versions

- TRL: 0.16.0.dev0
- Transformers: 4.49.0
- Pytorch: 2.5.1
- Datasets: 3.4.1
- Tokenizers: 0.21.2

## Citations

Cite GRPO as:

```bibtex
@article{zhihong2024deepseekmath,
    title  = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
    author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
    year   = 2024,
    eprint = {arXiv:2402.03300},
}
```

Cite TRL as:

```bibtex
@misc{vonwerra2022trl,
    title        = {{TRL: Transformer Reinforcement Learning}},
    author       = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
    year         = 2020,
    journal      = {GitHub repository},
    publisher    = {GitHub},
    howpublished = {\url{https://github.com/huggingface/trl}}
}
```
ConcaveTriangle/Magistral-2509-friends-tokenizer
ConcaveTriangle
2025-09-18T19:42:50Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2025-09-18T19:42:50Z
---
license: apache-2.0
---
leonMW/DeepSeek-R1-Distill-Qwen-1.5B-S-test
leonMW
2025-09-18T19:40:47Z
1
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "trl", "grpo", "conversational", "arxiv:2402.03300", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-09-17T12:40:12Z
--- library_name: transformers model_name: DeepSeek-R1-Distill-Qwen-1.5B-S-test tags: - generated_from_trainer - trl - grpo licence: license ---
# Model Card for DeepSeek-R1-Distill-Qwen-1.5B-S-test

This model is a fine-tuned version of [None](https://huggingface.co/None).
It has been trained using [TRL](https://github.com/huggingface/trl).

## Quick start

```python
from transformers import pipeline

question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="leonMW/DeepSeek-R1-Distill-Qwen-1.5B-S-test", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```

## Training procedure

[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/leonwenderoth-tu-darmstadt/huggingface/runs/snvrwsh3)

This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).

### Framework versions

- TRL: 0.23.0
- Transformers: 4.56.1
- Pytorch: 2.7.1
- Datasets: 4.1.0
- Tokenizers: 0.22.0

## Citations

Cite GRPO as:

```bibtex
@article{shao2024deepseekmath,
    title  = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
    author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
    year   = 2024,
    eprint = {arXiv:2402.03300},
}
```

Cite TRL as:

```bibtex
@misc{vonwerra2022trl,
    title        = {{TRL: Transformer Reinforcement Learning}},
    author       = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
    year         = 2020,
    journal      = {GitHub repository},
    publisher    = {GitHub},
    howpublished = {\url{https://github.com/huggingface/trl}}
}
```
alan-smith/llama-3.1-8B-disambiguation-16bit-all-tasks-vllm
alan-smith
2025-09-18T19:39:05Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "text-generation-inference", "unsloth", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2025-09-18T19:13:13Z
--- base_model: unsloth/meta-llama-3.1-8b-unsloth-bnb-4bit tags: - text-generation-inference - transformers - unsloth - llama license: apache-2.0 language: - en ---
# Uploaded finetuned model

- **Developed by:** alan-smith
- **License:** apache-2.0
- **Finetuned from model:** unsloth/meta-llama-3.1-8b-unsloth-bnb-4bit

This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.

[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
puneetpanwar/act_sim_cubepickup_il
puneetpanwar
2025-09-18T19:38:10Z
0
0
lerobot
[ "lerobot", "safetensors", "robotics", "act", "dataset:puneetpanwar/sim_cubepickup_il", "arxiv:2304.13705", "license:apache-2.0", "region:us" ]
robotics
2025-09-18T19:36:47Z
--- datasets: puneetpanwar/sim_cubepickup_il library_name: lerobot license: apache-2.0 model_name: act pipeline_tag: robotics tags: - robotics - act - lerobot ---
# Model Card for act

<!-- Provide a quick summary of what the model is/does. -->

[Action Chunking with Transformers (ACT)](https://huggingface.co/papers/2304.13705) is an imitation-learning method that predicts short action chunks instead of single steps. It learns from teleoperated data and often achieves high success rates.

This policy has been trained and pushed to the Hub using [LeRobot](https://github.com/huggingface/lerobot).
See the full documentation at [LeRobot Docs](https://huggingface.co/docs/lerobot/index).

---

## How to Get Started with the Model

For a complete walkthrough, see the [training guide](https://huggingface.co/docs/lerobot/il_robots#train-a-policy).
Below is the short version on how to train and run inference/eval:

### Train from scratch

```bash
lerobot-train \
  --dataset.repo_id=${HF_USER}/<dataset> \
  --policy.type=act \
  --output_dir=outputs/train/<desired_policy_repo_id> \
  --job_name=lerobot_training \
  --policy.device=cuda \
  --policy.repo_id=${HF_USER}/<desired_policy_repo_id> --wandb.enable=true
```

_Writes checkpoints to `outputs/train/<desired_policy_repo_id>/checkpoints/`._

### Evaluate the policy/run inference

```bash
lerobot-record \
  --robot.type=so100_follower \
  --dataset.repo_id=<hf_user>/eval_<dataset> \
  --policy.path=<hf_user>/<desired_policy_repo_id> \
  --episodes=10
```

Prefix the dataset repo with **eval\_** and supply `--policy.path` pointing to a local or hub checkpoint.

---

## Model Details

- **License:** apache-2.0